abstract: In this white paper, we present AlphaEvolve, an evolutionary coding agent that substantially enhances the
capabilities of state-of-the-art LLMs on highly challenging tasks such as tackling open scientific problems
or optimizing critical pieces of computational infrastructure. AlphaEvolve orchestrates an autonomous
pipeline of LLMs, whose task is to improve an algorithm by making direct changes to the code. Using
an evolutionary approach that continuously receives feedback from one or more evaluators, AlphaEvolve
iteratively improves the algorithm, potentially leading to new scientific and practical discoveries. We
demonstrate the broad applicability of this approach on a number of important computational
problems. When applied to optimizing critical components of large-scale computational
stacks at Google, AlphaEvolve developed a more efficient scheduling algorithm for data centers, found
a functionally equivalent simplification in the circuit design of hardware accelerators, and accelerated the training of the LLM underpinning AlphaEvolve itself. Furthermore, AlphaEvolve discovered
novel, provably correct algorithms that surpass state-of-the-art solutions on a spectrum of problems
in mathematics and computer science, significantly expanding the scope of prior automated discovery
methods (Romera-Paredes et al., 2023). Notably, AlphaEvolve developed a search algorithm that found a
procedure to multiply two 4 × 4 complex-valued matrices using 48 scalar multiplications, offering the
first improvement, after 56 years, over Strassen’s algorithm in this setting. We believe AlphaEvolve and
coding agents like it can have a significant impact in improving solutions to problems across many areas
of science and computation.