Optimizing Pretraining Data Mixtures

MinWoo(Daniel) Park | Tech Blog

  • Related Project: Private
  • Category: Paper Review
  • Date: 2025-01-22

Optimizing Pretraining Data Mixtures with LLM-Estimated Utility

  • url: https://arxiv.org/abs/2501.11747
  • pdf: https://arxiv.org/pdf/2501.11747
  • abstract: Large Language Models improve with increasing amounts of high-quality training data. However, leveraging larger datasets requires balancing quality, quantity, and diversity across sources. After evaluating nine baseline methods under both compute- and data-constrained scenarios, we find token-count heuristics outperform manual and learned mixes, indicating that simple approaches accounting for dataset size and diversity are surprisingly effective. Building on this insight, we propose two complementary approaches: UtiliMax, which extends token-based heuristics by incorporating utility estimates from reduced-scale ablations, achieving up to a 10.6x speedup over manual baselines; and Model Estimated Data Utility (MEDU), which leverages LLMs to estimate data utility from small samples, matching ablation-based performance while reducing computational requirements by ∼200x. Together, these approaches establish a new framework for automated, compute-efficient data mixing that is robust across training regimes.
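The abstract's core idea, combining a token-count baseline with per-source utility estimates (from small-scale ablations or LLM judgments, as in MEDU), can be sketched as follows. This is a minimal illustration under my own assumptions: the function name, the multiplicative weighting rule, and the example numbers are hypothetical, not the paper's exact UtiliMax formulation.

```python
def mixture_weights(token_counts, utilities, alpha=1.0):
    """Blend dataset size and estimated utility into sampling weights.

    token_counts: tokens available per source (proxy for quantity/diversity)
    utilities:    estimated usefulness per source in [0, 1], e.g. from
                  reduced-scale ablations or LLM-based scoring
    alpha:        how strongly utility modulates the token-count baseline
                  (alpha=0 recovers a pure token-count heuristic)
    """
    scores = [t * (u ** alpha) for t, u in zip(token_counts, utilities)]
    total = sum(scores)
    return [s / total for s in scores]

# Three hypothetical sources: web, code, books
counts = [9e11, 1e11, 5e10]   # available tokens per source
utils = [0.6, 0.9, 0.8]       # estimated utility per source
weights = mixture_weights(counts, utils)
```

With `alpha=0` every source is weighted purely by its token count, matching the paper's observation that size-aware heuristics are a strong baseline; increasing `alpha` shifts mass toward sources the utility estimator favors.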

post contain ""

    No matching posts found containing ""