arxiv:2407.04842

MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?

Published on Jul 5
· Submitted by yichaodu on Jul 9
#1 Paper of the day

Abstract

While text-to-image models like DALLE-3 and Stable Diffusion are rapidly proliferating, they often encounter challenges such as hallucination, bias, and the production of unsafe, low-quality output. To effectively address these issues, it is crucial to align these models with desired behaviors based on feedback from a multimodal judge. Despite their significance, current multimodal judges frequently undergo inadequate evaluation of their capabilities and limitations, potentially leading to misalignment and unsafe fine-tuning outcomes. To address this issue, we introduce MJ-Bench, a novel benchmark that incorporates a comprehensive preference dataset to evaluate multimodal judges in providing feedback for image generation models across four key perspectives: alignment, safety, image quality, and bias. Specifically, we evaluate a large variety of multimodal judges, including smaller-sized CLIP-based scoring models, open-source VLMs (e.g., the LLaVA family), and closed-source VLMs (e.g., GPT-4o, Claude 3), on each decomposed subcategory of our preference dataset. Experiments reveal that closed-source VLMs generally provide better feedback, with GPT-4o outperforming the other judges on average. Compared with open-source VLMs, smaller-sized scoring models can provide better feedback regarding text-image alignment and image quality, while VLMs provide more accurate feedback regarding safety and generation bias due to their stronger reasoning capabilities. Further studies of feedback scale reveal that VLM judges can generally provide more accurate and stable feedback in natural language (Likert scale) than on numerical scales. Notably, human evaluations of end-to-end fine-tuned models using separate feedback from these multimodal judges yield similar conclusions, further confirming the effectiveness of MJ-Bench. All data, code, and models are available at https://huggingface.co/MJ-Bench.
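As a rough illustration of the preference-based evaluation the abstract describes, below is a minimal sketch of a CLIP-based scoring judge on a single preference pair: the judge counts as correct when it assigns the higher image-text similarity to the preferred image. The dataset id ("MJ-Bench/MJ-Bench"), split name, and column names ("caption", "image0" for the preferred image, "image1" for the rejected one) are assumptions for illustration only; consult the dataset card under https://huggingface.co/MJ-Bench for the actual layout.

```python
# Sketch: a CLIP-based scoring model as a multimodal judge on one preference pair.
# Dataset id, split, and column names below are hypothetical placeholders.
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

dataset = load_dataset("MJ-Bench/MJ-Bench", split="train")  # hypothetical id/split
example = dataset[0]
prompt = example["caption"]                      # hypothetical column names
chosen, rejected = example["image0"], example["image1"]

# Score both images against the prompt; the judge is correct on this pair if the
# preferred (chosen) image receives the higher image-text similarity.
inputs = processor(text=[prompt], images=[chosen, rejected],
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
chosen_score, rejected_score = outputs.logits_per_image.squeeze(-1).tolist()
judge_prefers_chosen = chosen_score > rejected_score
print(f"chosen={chosen_score:.2f} rejected={rejected_score:.2f} "
      f"correct={judge_prefers_chosen}")
```

The finding on feedback scales can also be made concrete with a pair of prompt templates: one asking a VLM judge for a numeric rating and one asking for a Likert-style natural-language answer that is then mapped back to an ordinal value for comparison. These templates are hypothetical illustrations, not the prompts used in the paper.

```python
# Illustrative (hypothetical) prompt templates for the two feedback scales.
NUMERIC_PROMPT = (
    "Rate how well the image matches the prompt '{prompt}' "
    "with a single number from 1 to 10."
)
LIKERT_PROMPT = (
    "How well does the image match the prompt '{prompt}'? "
    "Answer with one of: Extremely Poor, Poor, Average, Good, Outstanding."
)
LIKERT_TO_ORDINAL = {
    "extremely poor": 1, "poor": 2, "average": 3, "good": 4, "outstanding": 5,
}

def parse_likert(answer: str) -> int:
    """Map a free-text Likert answer to an ordinal score (0 if unrecognized)."""
    answer = answer.strip().lower()
    for label, value in LIKERT_TO_ORDINAL.items():  # longer labels checked first
        if label in answer:
            return value
    return 0
```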

Community

Paper author Paper submitter

MJ-Bench is the first benchmark that incorporates a comprehensive preference dataset to evaluate multimodal judges in providing feedback for image generation models across four key perspectives: alignment, safety, image quality, and bias.

MJ-Bench evaluates a wide variety of multimodal judges, including scoring models, open-source VLMs, and closed-source VLMs, on each decomposed subcategory of the preference dataset.


It's really cool to see MJ-Bench's focus on safety and bias 🔥 Have you considered including ethical considerations as well?

Paper author

Hi Adina, thanks for liking our paper! 🤗 We do consider several ethical aspects and categorize them under bias and safety, such as generation disparity and harmful/unethical image output. You can find more details in the paper 😊

Great paper! Will future enhancements of MJ-Bench focus on more computer-vision-centric metrics like bounding-box fairness or class-distribution consistency?


Models citing this paper 23


Datasets citing this paper 2

Spaces citing this paper 1

Collections including this paper 10