Papers
arxiv:2410.05080

ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery

Published on Oct 7
· Submitted by ysu-nlp on Oct 8
Authors:
Yu Su, et al.
Abstract

Advances in large language models (LLMs) have piqued growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, which has sparked both excitement and skepticism about the true capabilities of such agents. In this work, we argue that for an agent to fully automate scientific discovery, it must be able to complete all essential tasks in the workflow. Thus, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims on end-to-end automation. To this end, we present ScienceAgentBench, a new benchmark for evaluating language agents for data-driven scientific discovery. To ensure the scientific authenticity and real-world relevance of our benchmark, we extract 102 tasks from 44 peer-reviewed publications in four disciplines and engage nine subject matter experts to validate them. We unify the target output for every task to a self-contained Python program file and employ an array of evaluation metrics to examine the generated programs, execution results, and costs. Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its annotation quality and scientific plausibility. We also propose two effective strategies to mitigate data contamination concerns. Using our benchmark, we evaluate five open-weight and proprietary LLMs, each with three frameworks: direct prompting, OpenHands, and self-debug. Given three attempts for each task, the best-performing agent can only solve 32.4% of the tasks independently and 34.3% with expert-provided knowledge. These results underscore the limited capacities of current language agents in generating code for data-driven discovery, let alone end-to-end automation for scientific research.
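Of the three frameworks, self-debug is the simplest to picture: the model drafts a program, the harness executes it, and any error output is fed back to the model for another attempt, up to three tries per task. The sketch below is a hypothetical illustration of that loop; `generate_program` stands in for an actual LLM call and is not the paper's implementation.

```python
# Hypothetical sketch of a self-debug loop: draft a program, run it, and
# feed execution errors back to the model for up to three attempts.
# `generate_program` is a placeholder for an LLM call, not the paper's code.
import subprocess
import sys
import tempfile
from typing import Optional, Tuple


def generate_program(task_description: str, feedback: Optional[str] = None) -> str:
    """Placeholder: prompt an LLM with the task (and any error feedback) and return Python source."""
    raise NotImplementedError("plug in an LLM client here")


def self_debug(task_description: str, max_attempts: int = 3) -> Tuple[str, bool]:
    feedback = None
    code = ""
    for _ in range(max_attempts):
        code = generate_program(task_description, feedback)
        # Write the candidate program to a temporary file and execute it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
        if proc.returncode == 0:
            return code, True       # the program ran end-to-end without errors
        feedback = proc.stderr      # hand the traceback back to the model
    return code, False
```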

Community

Paper author · Paper submitter

AI agents will not replace human scientists, but they will become a powerful automation tool to assist scientists. I am proud to introduce ScienceAgentBench, a new benchmark carefully co-designed with subject matter experts to drive and track the progress of coding agents that directly assist scientists in their existing workflows!

Several highlights:

🌟 Scientific authenticity through co-design with subject matter experts
We ensure the authenticity of tasks in our benchmark by directly extracting them from peer-reviewed publications and engaging nine subject matter experts (incl. senior Ph.D. students and professors) from the respective disciplines to validate them. This approach also minimizes the sim2real gap between agents developed on our benchmark and real-world scenarios.

🌟 Rigorous graded evaluation
Reliable evaluation of language agents is notably difficult due to the open-endedness and complexity of data-driven discovery tasks. We first unify the target output for every task as a self-contained Python program, and then employ an array of evaluation metrics that examine the generated programs, execution results (e.g., rendered figures or test set predictions), and costs. We also provide step-by-step rubrics specific to each task to enable graded evaluation (a minimal harness sketch follows these highlights).

🌟 Careful multi-stage quality control
Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its quality and scientific plausibility. We also propose two effective strategies to mitigate data contamination concerns due to LLM pre-training.
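As a concrete illustration of the graded evaluation described above, the sketch below shows how a harness might execute an agent-generated program and combine execution success, output checks, and rubric items into a result record. `TaskSpec`, `run_program`, and `grade` are assumed names for this sketch, not the benchmark's actual API.

```python
# Hypothetical evaluation-harness sketch: each task's target output is a
# self-contained Python program, which is executed in isolation and scored on
# (1) whether it runs, (2) whether its outputs pass task-specific checks, and
# (3) rubric items for graded evaluation. Names are illustrative only.
import subprocess
import sys
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TaskSpec:
    task_id: str
    program_path: str            # path to the agent-generated Python program
    expected_outputs: List[str]  # files the program should produce (e.g., figures, predictions)
    rubric: List[str]            # step-by-step criteria specific to this task


def run_program(task: TaskSpec, timeout: int = 600) -> dict:
    """Execute the generated program and record whether it runs to completion."""
    try:
        proc = subprocess.run(
            [sys.executable, task.program_path],
            capture_output=True, text=True, timeout=timeout,
        )
        executed = proc.returncode == 0
        stderr = proc.stderr
    except subprocess.TimeoutExpired:
        executed, stderr = False, "timeout"
    return {"task_id": task.task_id, "executed": executed, "stderr": stderr}


def grade(task: TaskSpec, result: dict, output_checker: Callable[[str], bool]) -> dict:
    """Combine execution success, output checks, and rubric items into one record."""
    outputs_ok = result["executed"] and all(
        output_checker(path) for path in task.expected_outputs
    )
    # Rubric items would normally be judged by a human or an LLM grader;
    # here they are simply attached for review.
    return {**result, "success": outputs_ok, "rubric_items": task.rubric}
```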


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2410.05080 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2410.05080 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2410.05080 in a Space README.md to link it from this page.

Collections including this paper 3