
Paper Bench

  • Related Project: Private
  • Category: Paper Review
  • Date: 2025-04-06

PaperBench: Evaluating AI’s Ability to Replicate AI Research

  • url: https://arxiv.org/abs/2504.01848
  • abstract: We introduce PaperBench, a benchmark evaluating the ability of AI agents to replicate state-of-the-art AI research. Agents must replicate 20 ICML 2024 Spotlight and Oral papers from scratch, including understanding paper contributions, developing a codebase, and successfully executing experiments. For objective evaluation, we develop rubrics that hierarchically decompose each replication task into smaller sub-tasks with clear grading criteria. In total, PaperBench contains 8,316 individually gradable tasks. Rubrics are co-developed with the author(s) of each ICML paper for accuracy and realism. To enable scalable evaluation, we also develop an LLM-based judge to automatically grade replication attempts against rubrics, and assess our judge’s performance by creating a separate benchmark for judges. We evaluate several frontier models on PaperBench, finding that the best-performing tested agent, Claude 3.5 Sonnet (New) with open-source scaffolding, achieves an average replication score of 21.0%. Finally, we recruit top ML PhDs to attempt a subset of PaperBench, finding that models do not yet outperform the human baseline. We open-source our code to facilitate future research in understanding the AI engineering capabilities of AI agents.
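
The hierarchical rubrics described in the abstract lend themselves to a tree representation. Below is a minimal sketch, not taken from the PaperBench codebase, assuming each rubric node carries a relative weight, leaf nodes receive a grade in [0, 1] from the judge, and a parent's score is the weight-normalized average of its children. The node names and weights here are purely illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RubricNode:
    """One requirement in a hierarchical replication rubric (hypothetical structure)."""
    name: str
    weight: float = 1.0                  # relative importance among siblings
    score: Optional[float] = None        # leaf grade in [0, 1], set by the judge
    children: List["RubricNode"] = field(default_factory=list)

    def aggregate(self) -> float:
        """Propagate leaf grades upward as a weight-normalized average."""
        if not self.children:
            return self.score if self.score is not None else 0.0
        total_weight = sum(c.weight for c in self.children)
        return sum(c.weight * c.aggregate() for c in self.children) / total_weight


# Toy example: one paper's replication decomposed into two graded sub-tasks.
rubric = RubricNode("replicate-paper", children=[
    RubricNode("implement-method", weight=2.0, score=1.0),       # judged satisfied
    RubricNode("reproduce-experiments", weight=1.0, score=0.0),  # judged unsatisfied
])
print(f"Replication score: {rubric.aggregate():.1%}")  # -> 66.7%
```

Under this reading, the 21.0% average replication score reported in the abstract would correspond to the mean of such root-level aggregates across the 20 papers.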
