JudgeLM: Fine-tuned Large Language Models are Scalable Judges Paper • 2310.17631 • Published Oct 26, 2023
Prometheus: Inducing Fine-grained Evaluation Capability in Language Models Paper • 2310.08491 • Published Oct 12, 2023
Judging LLM-as-a-judge with MT-Bench and Chatbot Arena Paper • 2306.05685 • Published Jun 9, 2023
Benchmarking Cognitive Biases in Large Language Models as Evaluators Paper • 2309.17012 • Published Sep 29, 2023
Evaluating Large Language Models: A Comprehensive Survey Paper • 2310.19736 • Published Oct 30, 2023
LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models Paper • 2305.13711 • Published May 23, 2023
G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment Paper • 2303.16634 • Published Mar 29, 2023
Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models Paper • 2405.01535 • Published May 2, 2024
JudgeBench: A Benchmark for Evaluating LLM-based Judges Paper • 2410.12784 • Published Oct 2024