---
license: apache-2.0
datasets:
- lmms-lab/llava-critic-113k
base_model:
- lmms-lab/llava-onevision-qwen2-72b-ov-sft
tags:
- multimodal
---

# LLaVA-Critic-72B

## Model Summary

`llava-critic-72b` is the first open-source large multimodal model (LMM) designed as a generalist evaluator for assessing model performance across diverse multimodal scenarios. Built on the foundation of `llava-onevision-72b-ov`, it has been fine-tuned on the [LLaVA-Critic-113k](https://huggingface.co/datasets/lmms-lab/llava-critic-113k) dataset to develop its "critic" capabilities.

LLaVA-Critic excels in two primary scenarios:
- 1️⃣ LMM-as-a-Judge: It delivers judgments closely aligned with human evaluations and provides concrete, image-grounded rationales, serving as an open-source alternative to GPT-4V for evaluation.
- 2️⃣ Preference Learning: Its reliable reward signals power up visual chat, leading to LLaVA-OV-Chat [7B](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov-chat)/[72B](https://huggingface.co/lmms-lab/llava-onevision-qwen2-72b-ov-chat).

As shown in our paper, `llava-critic-72b` matches or even surpasses GPT-4V in providing human-aligned judgments across different evaluation scenarios.

For further details, please refer to the following resources:
- 📰 Paper: https://arxiv.org/abs/2410.02712
- 🪐 Project Page: https://llava-vl.github.io/blog/2024-10-03-llava-critic/
- 📦 Datasets: https://huggingface.co/datasets/lmms-lab/llava-critic-113k
- 🤗 Model Collections: https://huggingface.co/collections/lmms-lab/llava-critic-66fe3ef8c6e586d8435b4af8
- 👋 Point of Contact: [Tianyi Xiong](https://tyxiong23.github.io/)

## Use

### Intended Use

The model demonstrates general capability in providing quantitative judgments and qualitative justifications for evaluating LMM-generated responses. It mainly focuses on two evaluation settings, both illustrated in the prompt-template sketch after this list:
- *Pointwise scoring*, where it assigns a score to an individual candidate response.
- *Pairwise ranking*, where it compares two candidate responses to determine their relative quality.
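
The two settings differ only in the evaluation prompt handed to the model. As a minimal sketch, the helpers below assemble the prompt templates used in the Quick Start further down; the function names and signatures are illustrative conveniences, not part of the LLaVA-NeXT API.

~~~python
# Illustrative helpers (hypothetical names): they reproduce the critic prompt
# templates from the Quick Start below, splicing the question and candidate
# responses into the [bracketed] slots.

def build_pairwise_prompt(question: str, response_1: str, response_2: str) -> str:
    """Prompt the critic to decide which of two candidate responses is better."""
    return (
        "Given an image and a corresponding question, please serve as an unbiased "
        "and fair judge to evaluate the quality of the answers provided by a Large "
        "Multimodal Model (LMM). Determine which answer is better and explain your "
        "reasoning with specific details. Your task is provided as follows:\n"
        f"Question: [{question}]\n"
        f"The first response: [{response_1}]\n"
        f"The second response: [{response_2}]\n"
        "ASSISTANT:\n"
    )


def build_pointwise_prompt(question: str, response: str) -> str:
    """Prompt the critic to score a single candidate response out of 100."""
    return (
        "Given an image and a corresponding question, please serve as an unbiased "
        "and fair judge to evaluate the quality of the answer provided by a Large "
        "Multimodal Model (LMM). Score the response out of 100 and explain your "
        "reasoning with specific details. Your task is provided as follows:\n"
        f"Question: [{question}]\n"
        f"The LMM response: [{response}]\n"
        "ASSISTANT:\n"
    )
~~~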

### Quick Start

~~~python
# pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
from llava.model.builder import load_pretrained_model
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

from PIL import Image
import requests
import copy
import torch
import warnings

warnings.filterwarnings("ignore")
pretrained = "lmms-lab/llava-critic-72b"
model_name = "llava_qwen"
device = "cuda"
device_map = "auto"
# Pass any extra options through llava_model_args if needed
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map)

model.eval()

# Example image: the handwritten digit from the LLaVA-Critic blog post
url = "https://github.com/LLaVA-VL/blog/blob/main/2024-10-03-llava-critic/static/images/critic_img_seven.png?raw=True"
image = Image.open(requests.get(url, stream=True).raw)
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [_image.to(dtype=torch.float16, device=device) for _image in image_tensor]

conv_template = "qwen_1_5"  # Make sure you use the correct chat template for different models

# Pairwise ranking: ask the critic which of two candidate answers is better
critic_prompt = "Given an image and a corresponding question, please serve as an unbiased and fair judge to evaluate the quality of the answers provided by a Large Multimodal Model (LMM). Determine which answer is better and explain your reasoning with specific details. Your task is provided as follows:\nQuestion: [What this image presents?]\nThe first response: [The image is a black and white sketch of a line that appears to be in the shape of a cross. The line is a simple and straightforward representation of the cross shape, with two straight lines intersecting at a point.]\nThe second response: [This is a handwritten number seven.]\nASSISTANT:\n"

# Pointwise scoring: ask the critic to score a single candidate answer out of 100
# critic_prompt = "Given an image and a corresponding question, please serve as an unbiased and fair judge to evaluate the quality of the answer provided by a Large Multimodal Model (LMM). Score the response out of 100 and explain your reasoning with specific details. Your task is provided as follows:\nQuestion: [What this image presents?]\nThe LMM response: [This is a handwritten number seven.]\nASSISTANT:\n"

# Build the full conversation prompt with the image token prepended
question = DEFAULT_IMAGE_TOKEN + "\n" + critic_prompt
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()

input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
image_sizes = [image.size]

# Greedy decoding for a deterministic judgment
cont = model.generate(
    input_ids,
    images=image_tensor,
    image_sizes=image_sizes,
    do_sample=False,
    temperature=0,
    max_new_tokens=4096,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
print(text_outputs[0])
~~~
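
In pointwise mode the judgment comes back as free-form text that embeds a numeric score, so downstream use needs a parsing step. Below is a small heuristic sketch, assuming the score appears as the first integer in the 0-100 range; `extract_score` is a hypothetical helper, not part of LLaVA-NeXT, so verify it against your own raw outputs before relying on it.

~~~python
import re
from typing import Optional


def extract_score(critic_output: str) -> Optional[int]:
    """Heuristically pull a 0-100 score from the critic's free-form judgment.

    Assumption: the first integer in range is the score; inspect raw outputs
    to confirm this holds for your prompts before using it at scale.
    """
    match = re.search(r"\b(100|[1-9]?[0-9])\b", critic_output)
    return int(match.group(1)) if match else None


# Example: extract_score("The response is accurate and concise, so I would rate it 85.")  # -> 85
~~~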

## Citation

```bibtex
@article{xiong2024llavacritic,
  title={LLaVA-Critic: Learning to Evaluate Multimodal Models},
  author={Xiong, Tianyi and Wang, Xiyao and Guo, Dong and Ye, Qinghao and Fan, Haoqi and Gu, Quanquan and Huang, Heng and Li, Chunyuan},
  year={2024},
  eprint={2410.02712},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2410.02712},
}
```