---
license: other
license_name: msrla
license_link: https://huggingface.co/microsoft/rad-dino-maira-2/blob/main/LICENSE
library_name: transformers
---

# Model card for RAD-DINO-MAIRA-2

## Model description

RAD-DINO-MAIRA-2 is a vision transformer model trained to encode chest X-rays using the self-supervised learning method [DINOv2](https://openreview.net/forum?id=a68SUt6zFt).

RAD-DINO-MAIRA-2 is a variant of [RAD-DINO](https://huggingface.co/microsoft/rad-dino), which is described in detail in [RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision (F. Pérez-García, H. Sharma, S. Bond-Taylor, et al., 2024)](https://arxiv.org/abs/2401.10815).

RAD-DINO-MAIRA-2 is the version of RAD-DINO used in [MAIRA-2: Grounded Radiology Report Generation (S. Bannur, K. Bouzid, et al., 2024)](https://arxiv.org/abs/2406.04449). Relative to [RAD-DINO](https://huggingface.co/microsoft/rad-dino), it was trained on more data.

- **Developed by:** Microsoft Health Futures
- **Model type:** Vision transformer
- **License:** [MSRLA](./LICENSE)
- **Finetuned from model:** [`dinov2-base`](https://huggingface.co/facebook/dinov2-base)

## Uses

RAD-DINO-MAIRA-2 is shared for research purposes only. It is **not meant to be used for clinical practice**.

The model is a vision backbone that can be plugged into other models for downstream tasks. Some potential uses are:

- Image classification, with a classifier trained on top of the `CLS` token
- Image segmentation, with a decoder trained using the patch tokens
- Clustering, using the image embeddings directly
- Image retrieval, using nearest neighbors of the `CLS` token
- Report generation, with a language model to decode text

Fine-tuning RAD-DINO-MAIRA-2 is typically not necessary to obtain good performance on downstream tasks.

## Biases, risks, and limitations

RAD-DINO-MAIRA-2 was trained with data from three countries, so it might be biased towards the populations represented in the training data. Underlying biases of the training datasets may not be well characterized.

## Getting started

```python
from transformers import pipeline

pipe = pipeline(task="image-feature-extraction", model="microsoft/rad-dino-maira-2", pool=False)
patch_features = pipe("https://www.bhf.org.uk/-/media/images/information-support/tests/chest-x-ray/normal-chest-x-ray-620x400.jpg")
```

Refer to [RAD-DINO](https://huggingface.co/microsoft/rad-dino) for a more detailed example.

## Training details

### Training data

We used images from five public chest X-ray datasets and one private deidentified dataset to train RAD-DINO-MAIRA-2.

| Dataset | Num. images |
| --------- | ----------: |
| [MIMIC-CXR](https://www.nature.com/articles/s41597-019-0322-0) | 368 960 |
| [CheXpert](https://ojs.aaai.org/index.php/AAAI/article/view/3834) | 223 648 |
| [NIH-CXR](https://openaccess.thecvf.com/content_cvpr_2017/html/Wang_ChestX-ray8_Hospital-Scale_Chest_CVPR_2017_paper.html) | 112 120 |
| [PadChest](https://www.sciencedirect.com/science/article/abs/pii/S1361841520301614) | 136 787 |
| [BRAX](https://www.nature.com/articles/s41597-022-01608-8) | 41 260 |
| USMix (Private) | 521 608 |
| **TOTAL** | 1 404 383 |

Images in the validation and test sets used to train [MAIRA-2](https://arxiv.org/abs/2406.04449) were excluded from the training set of RAD-DINO-MAIRA-2.

We used 8 nodes with 4 A100 GPUs each, and a batch size of 40 images per GPU. We share the last checkpoint, trained for 105 000 steps.
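For context, these numbers imply an effective batch size of 40 × 8 × 4 = 1 280 images per step, i.e. roughly 96 passes over the training set after 105 000 steps. The snippet below is a back-of-the-envelope check only; it assumes every step uses a full global batch and ignores any dataset sampling weights.

```python
# Back-of-the-envelope check of the training scale described above.
# Assumes every step uses a full global batch and ignores sampling weights.
images_per_gpu = 40
num_gpus = 8 * 4                  # 8 nodes x 4 GPUs per node
steps = 105_000
training_images = 1_404_383

effective_batch_size = images_per_gpu * num_gpus    # 1280 images per step
images_seen = effective_batch_size * steps          # 134.4 million images
approx_epochs = images_seen / training_images       # ~95.7 passes over the data

print(f"{effective_batch_size=} {images_seen=} approx_epochs={approx_epochs:.1f}")
```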
### Training procedure

We refer to the [manuscript](https://arxiv.org/abs/2401.10815) for a detailed description of the training procedure.

#### Preprocessing

All DICOM files were resized using B-spline interpolation so that their shorter side measured 518 pixels, min-max scaled to [0, 255], and stored as PNG files. An illustrative sketch of this preprocessing is included at the end of this card.

#### Training hyperparameters

- **Training regime:** fp16 using PyTorch-FSDP mixed-precision.

## Evaluation

Our evaluation is best described in the [manuscript](https://arxiv.org/abs/2401.10815).

## Environmental impact

- **Hardware type:** NVIDIA A100 GPUs
- **Hours used:** 41 hours/GPU × 8 nodes × 4 GPUs/node = 1 312 GPU-hours
- **Cloud provider:** Azure
- **Compute region:** West US 2
- **Carbon emitted:** 98.4 kg CO₂ eq.

### Compute infrastructure

RAD-DINO-MAIRA-2 was trained on [Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning).

#### Hardware

We used 8 `Standard_NC96ads_A100_v4` nodes with four NVIDIA A100 (80 GB) GPUs each.

#### Software

We leveraged the code in [DINOv2](https://openreview.net/forum?id=a68SUt6zFt) for training. We used [SimpleITK](https://simpleitk.org/) and [Pydicom](https://pydicom.github.io/) to process the DICOM files.

## Citation

**BibTeX:**

```bibtex
@misc{perezgarcia2024raddino,
      title={{RAD-DINO}: Exploring Scalable Medical Image Encoders Beyond Text Supervision},
      author={Fernando Pérez-García and Harshita Sharma and Sam Bond-Taylor and Kenza Bouzid and Valentina Salvatelli and Maximilian Ilse and Shruthi Bannur and Daniel C. Castro and Anton Schwaighofer and Matthew P. Lungren and Maria Wetscherek and Noel Codella and Stephanie L. Hyland and Javier Alvarez-Valle and Ozan Oktay},
      year={2024},
      eprint={2401.10815},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

**APA:**

> Pérez-García, F., Sharma, H., Bond-Taylor, S., Bouzid, K., Salvatelli, V., Ilse, M., Bannur, S., Castro, D. C., Schwaighofer, A., Lungren, M. P., Wetscherek, M. T., Codella, N., Hyland, S. L., Alvarez-Valle, J., & Oktay, O. (2024). *RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision*. ArXiv, abs/2401.10815.

## Model card contact

Fernando Pérez-García ([`fperezgarcia@microsoft.com`](mailto:fperezgarcia@microsoft.com)).
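## Appendix: preprocessing sketch

The following is a minimal sketch of the preprocessing described in the Preprocessing section (B-spline resampling so that the shorter side is 518 pixels, min-max scaling to [0, 255], PNG output). It is illustrative only and not the exact training pipeline: it uses `pydicom`, `scipy`, `numpy`, and `Pillow` rather than the SimpleITK-based code used internally, and it ignores DICOM windowing and photometric interpretation.

```python
# Illustrative sketch only; not the exact pipeline used to train RAD-DINO-MAIRA-2.
# Assumes a single-frame grayscale DICOM and the packages pydicom, scipy, numpy, Pillow.
import numpy as np
import pydicom
from PIL import Image
from scipy import ndimage


def dicom_to_png(dicom_path: str, png_path: str, target_short_side: int = 518) -> None:
    """Resize so the shorter side is 518 px (cubic B-spline), min-max scale to [0, 255], save as PNG."""
    pixels = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    zoom_factor = target_short_side / min(pixels.shape)
    resized = ndimage.zoom(pixels, zoom_factor, order=3)  # order=3 -> cubic B-spline interpolation
    scaled = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8) * 255
    Image.fromarray(scaled.astype(np.uint8)).save(png_path)


# Example usage (paths are placeholders):
# dicom_to_png("chest_xray.dcm", "chest_xray.png")
```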