Healthcare
Shikhar Srivastava, Dr. Ibrahim Almakky, Hanan Ghani
MBZUAI
Computer vision
Knowledge transfer between tasks has greatly benefited the computer vision community over the years by reducing the reliance on large annotated training sets. This has had a particular impact on medical imaging, where annotated data are scarce, and transfer learning in particular has proved effective in this regard. The multi-attribute nature of medical imaging presents a promising direction: the transfer relationships between images that differ in attributes such as domain, modality, organ, or pathology can be exploited for more robust and efficient transfer.
However, the limited work that leverages this potential either uses brute-force techniques to exhaustively gauge transfer relationships or relies on large-scale homogeneous datasets to learn cross-attribute transfer relationships. These methods are computationally expensive and fail to generalize to multi-attribute scenarios such as multi-domain, multi-organ, or multi-modality settings. There is a clear need for a more structured multi-modal approach that improves over classical ImageNet pre-training, generalizes across these kinds of attribute shifts, and leverages the many cross-attribute properties endemic to the medical imaging setting.
In this project, we introduce Meta-Contrastive Transfer Learning (MCTL), a meta-level transfer learning framework for medical imaging that learns a shared embedding space for datasets and models, agnostic to modality, domain, pathology, and other commonly limiting attributes. The goal is to capture cross-attribute properties that are informative of transfer relationships, and even of general taxonomic relationships between datasets.
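The proposal does not fix an architecture or objective, but one way to picture the core idea is a contrastive loss over dataset-level embeddings. The PyTorch sketch below is a minimal illustration under assumptions of our own: a hypothetical `DatasetEncoder` pools frozen backbone features into one vector per dataset, and a hypothetical `meta_contrastive_loss` pulls together datasets whose pairwise transfer affinity is known to be high. None of these names or design choices come from the project itself.

```python
# Illustrative sketch only: DatasetEncoder, the pooling scheme, and
# meta_contrastive_loss are hypothetical stand-ins for "learn a shared
# embedding space for datasets with a contrastive objective".
import torch
import torch.nn as nn
import torch.nn.functional as F


class DatasetEncoder(nn.Module):
    """Pools a bag of per-image features into a single dataset embedding."""

    def __init__(self, feat_dim: int = 512, embed_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (n_images, feat_dim), e.g. from a frozen backbone.
        # Mean-pool over images, project, and L2-normalize.
        return F.normalize(self.proj(feats.mean(dim=0)), dim=-1)


def meta_contrastive_loss(emb: torch.Tensor, pos_mask: torch.Tensor,
                          tau: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss over a batch of dataset embeddings.

    emb:      (B, D) L2-normalized dataset embeddings.
    pos_mask: (B, B) boolean; True where two datasets are known to transfer
              well to each other (the supervision signal, however obtained).
    """
    sim = emb @ emb.t() / tau                        # pairwise cosine similarities
    eye = torch.eye(emb.size(0), dtype=torch.bool, device=emb.device)
    sim = sim.masked_fill(eye, float('-inf'))        # exclude self-pairs
    log_prob = F.log_softmax(sim, dim=1)
    # Mean log-probability of each anchor's positive pairs.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts).mean()


if __name__ == "__main__":
    encoder = DatasetEncoder()
    # Toy run: four "datasets", each a bag of 100 backbone features.
    embs = torch.stack([encoder(torch.randn(100, 512)) for _ in range(4)])
    # Suppose datasets 0/1 share an organ and 2/3 share a modality.
    pos = torch.tensor([[0, 1, 0, 0], [1, 0, 0, 0],
                        [0, 0, 0, 1], [0, 0, 1, 0]], dtype=torch.bool)
    print(meta_contrastive_loss(embs, pos))
```

In this framing, the positive mask is where cross-attribute knowledge would enter: it could be derived from measured transfer gains or from shared attributes such as organ or modality. The sketch fixes only the contrastive machinery, not how that supervision is obtained.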