---
base_model: 1aurent/vit_base_patch16_224.owkin_pancancer
tags:
  - image-classification
  - timm
  - owkin
  - biology
  - cancer
  - lung
library_name: timm
datasets:
  - 1aurent/LC25000
metrics:
  - accuracy
pipeline_tag: image-classification
model-index:
  - name: owkin_pancancer_ft_lc25000_lung
    results:
      - task:
          type: image-classification
          name: Image Classification
        dataset:
          name: 1aurent/LC25000
          type: 1aurent/LC25000
        metrics:
          - type: accuracy
            value: 0.999
            name: accuracy
            verified: false
widget:
  - src: >-
      https://datasets-server.huggingface.co/cached-assets/1aurent/LC25000/--/56a7c495692c27afd294a88b7aadaa7b79d8e270/--/default/train/5000/image/image.jpg
    example_title: benign
  - src: >-
      https://datasets-server.huggingface.co/assets/1aurent/LC25000/--/default/train/0/image/image.jpg
    example_title: adenocarcinomas
  - src: >-
      https://datasets-server.huggingface.co/cached-assets/1aurent/LC25000/--/56a7c495692c27afd294a88b7aadaa7b79d8e270/--/default/train/10000/image/image.jpg
    example_title: squamous carcinomas
license: other
license_name: owkin-non-commercial
license_link: https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt
---

# Model card for vit_base_patch16_224.owkin_pancancer_ft_lc25000_lung

A Vision Transformer (ViT) image classification model.
Trained by Owkin on 40 million pan-cancer histology tiles from TCGA using self-supervised masked image modeling, then fine-tuned on the lung subset of the LC25000 dataset.
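
The fine-tuning data is hosted on the Hub as `1aurent/LC25000`. A minimal sketch of loading it, assuming the dataset works with the `datasets` library and exposes the `image` feature shown in the dataset viewer:

```python
from datasets import load_dataset

# load the LC25000 dataset from the Hub (its lung subset was used for fine-tuning)
ds = load_dataset("1aurent/LC25000", split="train")

# inspect the first example; the "image" feature name is taken from the dataset viewer
print(ds)
img = ds[0]["image"]
```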

## Model Details

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
  urlopen(
    "https://datasets-server.huggingface.co/assets/1aurent/LC25000/--/default/train/0/image/image.jpg"
  )
)

# load model from the hub
model = timm.create_model(
  model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_lung",
  pretrained=True,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
```
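
The classifier returns raw logits. A minimal sketch of converting them into class probabilities and a top prediction (assuming `torch` is available; the class-index-to-label mapping is not asserted here):

```python
import torch

# convert logits to probabilities and take the most likely class
probabilities = output.softmax(dim=1)
top_prob, top_idx = probabilities.topk(k=1, dim=1)
print(f"predicted class index {top_idx.item()} with probability {top_prob.item():.3f}")
```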

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
  urlopen(
    "https://datasets-server.huggingface.co/assets/1aurent/LC25000/--/default/train/0/image/image.jpg"
  )
)

# load model from the hub
model = timm.create_model(
  model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_lung",
  pretrained=True,
  num_classes=0,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
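
The same embeddings can also be extracted step by step with timm's `forward_features` / `forward_head` API (this also works on a model created without `num_classes=0`). A minimal sketch, where the 768-dim embedding size is an assumption based on the ViT-Base configuration:

```python
# unpooled token features: (1, num_tokens, embed_dim), e.g. embed_dim=768 for ViT-Base
features = model.forward_features(transforms(img).unsqueeze(0))

# pooled, pre-logits embedding: (1, num_features)
embedding = model.forward_head(features, pre_logits=True)
```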

## Citation

```bibtex
@article {Filiot2023.07.21.23292757,
  author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
  title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
  elocation-id = {2023.07.21.23292757},
  year = {2023},
  doi = {10.1101/2023.07.21.23292757},
  publisher = {Cold Spring Harbor Laboratory Press},
  URL = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
  eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
  journal = {medRxiv}
}
```