---
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
model-index:
  - name: paligemma_race
    results: []
---

# FaceScanPaliGemma_Race

The snippet below loads the fine-tuned model together with the base PaliGemma processor and classifies the race of the person in an input image:

```python
from PIL import Image
import torch
from transformers import PaliGemmaProcessor, PaliGemmaForConditionalGeneration

# Load the fine-tuned model in bfloat16 and the base processor
model = PaliGemmaForConditionalGeneration.from_pretrained(
    'NYUAD-ComNets/FaceScanPaliGemma_Race', torch_dtype=torch.bfloat16
)
processor = PaliGemmaProcessor.from_pretrained("google/paligemma-3b-pt-224")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

input_text = "what is the race of the person in the image?"
input_image = Image.open('image_path')  # path to your input image

inputs = processor(
    text=input_text,
    images=input_image,
    padding="longest",
    do_convert_rgb=True,
    return_tensors="pt",
).to(device)
inputs = inputs.to(dtype=model.dtype)

with torch.no_grad():
    output = model.generate(**inputs, max_length=500)

# Decode the generation and strip the prompt from the beginning
result = processor.decode(output[0], skip_special_tokens=True)[len(input_text):].strip()
```
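Because the model generates free-form text, it can help to map the decoded answer onto the seven class labels. Below is a minimal, illustrative sketch that continues from the snippet above; the `RACE_LABELS` list and the `normalize_race` helper are assumptions for demonstration, not part of the released code.

```python
# Illustrative post-processing: map the generated text to one of the
# seven FairFace race labels (assumption: the model answers with one
# of these label strings, possibly with different casing or spacing).
RACE_LABELS = [
    "Black", "East Asian", "Indian", "Latino_Hispanic",
    "Middle Eastern", "Southeast Asian", "White",
]

def normalize_race(answer: str) -> str:
    answer_norm = answer.lower().replace("_", " ")
    for label in RACE_LABELS:
        if label.lower().replace("_", " ") in answer_norm:
            return label
    return "Unknown"  # fall back when no known label is recognized

print(normalize_race(result))
```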

## Loading in 4-bit / 8-bit


The model can also be loaded with bitsandbytes quantization to reduce memory usage:

```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration, BitsAndBytesConfig
from PIL import Image
import torch

# Load the fine-tuned model with 8-bit quantization
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = PaliGemmaForConditionalGeneration.from_pretrained(
    "NYUAD-ComNets/FaceScanPaliGemma_Race", quantization_config=quantization_config
).eval()
processor = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224")

prompt = "what is the race of the person in the image?"
image = Image.open('image_path')  # path to your input image

model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
    # Keep only the newly generated tokens, then decode
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)
    print(decoded)
```
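For 4-bit loading, the same pattern applies with a 4-bit `BitsAndBytesConfig`; the quantization type and compute dtype below are common choices, not settings confirmed for this model:

```python
import torch
from transformers import BitsAndBytesConfig, PaliGemmaForConditionalGeneration

# 4-bit NF4 quantization with bfloat16 compute (illustrative settings)
quantization_config_4bit = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_4bit = PaliGemmaForConditionalGeneration.from_pretrained(
    "NYUAD-ComNets/FaceScanPaliGemma_Race",
    quantization_config=quantization_config_4bit,
).eval()
```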

## Model description

This model is a fine-tuned version of google/paligemma-3b-pt-224 on the FairFace dataset. It classifies the race of a face image (or an image containing one person) into seven categories: Black, East Asian, Indian, Latino_Hispanic, Middle Eastern, Southeast Asian, and White.

## Model Performance

Accuracy: 81%, F1 score: 79%
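For reference, a hedged sketch of how accuracy and F1 could be computed from the model's normalized predictions; the label lists and the macro averaging below are illustrative assumptions, not the authors' evaluation code:

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder label lists; in practice these would hold the FairFace
# ground-truth labels and the normalized model outputs.
y_true = ["White", "Black", "Indian"]
y_pred = ["White", "Black", "East Asian"]

accuracy = accuracy_score(y_true, y_pred)
# Macro averaging treats all seven classes equally (an assumption about
# how the reported F1 score was aggregated).
f1 = f1_score(y_true, y_pred, average="macro")
print(f"accuracy={accuracy:.2f}, macro-F1={f1:.2f}")
```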

## Intended uses & limitations

This model is intended for research purposes.

## Training and evaluation data

The FairFace dataset was used for training and validating the model.
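Consistent with the `imagefolder` dataset tag in the metadata, the images can be loaded with the Hugging Face datasets image-folder loader. A minimal sketch, assuming a local FairFace copy arranged into class-labelled folders (the paths and split names are placeholders):

```python
from datasets import load_dataset

# Assumes a local FairFace copy arranged as imagefolder splits, e.g.
# fairface/train/<label>/*.jpg and fairface/validation/<label>/*.jpg
dataset = load_dataset("imagefolder", data_dir="fairface")

train_ds = dataset["train"]
val_ds = dataset["validation"]
print(train_ds[0]["image"], train_ds[0]["label"])
```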

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 5
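A sketch of how these hyperparameters map onto `TrainingArguments`; the output directory, the mixed-precision flag, and any setting not listed above are assumptions rather than the authors' exact configuration:

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="paligemma_race",      # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,    # effective batch size: 2 * 8 = 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=5,
    bf16=True,                        # assumption, matching the bfloat16 inference dtype
)
# The default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-08,
# matching the listed optimizer settings.
```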

### Training results

### Framework versions

- Transformers 4.42.4
- Pytorch 2.1.2+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1

## BibTeX entry and citation info


```bibtex
@article{aldahoul2024exploring,
  title={Exploring Vision Language Models for Facial Attribute Recognition: Emotion, Race, Gender, and Age},
  author={AlDahoul, Nouar and Tan, Myles Joshua Toledo and Kasireddy, Harishwar Reddy and Zaki, Yasir},
  journal={arXiv preprint arXiv:2410.24148},
  year={2024}
}

@misc{ComNets,
  url={https://huggingface.co/NYUAD-ComNets/FaceScanPaliGemma_Race},
  title={FaceScanPaliGemma_Race},
  author={AlDahoul, Nouar and Zaki, Yasir}
}
```