
VLM | Scaling VLM

MinWoo(Daniel) Park | Tech Blog



  • Related Project: Private
  • Category: Paper Review
  • Date: 2025-02-12

Scaling Pre-training to One Hundred Billion Data for Vision Language Models

  • url: https://arxiv.org/abs/2502.07617
  • pdf: https://arxiv.org/pdf/2502.07617
  • html: https://arxiv.org/html/2502.07617v1
  • abstract: We provide an empirical investigation of the potential of pre-training vision-language models on an unprecedented scale: 100 billion examples. We find that model performance tends to saturate at this scale on many common Western-centric classification and retrieval benchmarks, such as COCO Captions. Nevertheless, tasks of cultural diversity achieve more substantial gains from the 100-billion scale web data, thanks to its coverage of long-tail concepts. Furthermore, we analyze the model’s multilinguality and show gains in low-resource languages as well. In addition, we observe that reducing the size of the pretraining dataset via quality filters like using CLIP, typically used to enhance performance, may inadvertently reduce the cultural diversity represented even in large-scale datasets. Our results highlight that while traditional benchmarks may not benefit significantly from scaling noisy, raw web data to 100 billion examples, this data scale is vital for building truly inclusive VLM systems.
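The abstract notes that quality filtering with CLIP, while typically boosting benchmark scores, can strip long-tail cultural content from the dataset. A minimal sketch of the kind of score-threshold filtering being discussed is below; the similarity scores and the 0.3 cutoff are made-up stand-ins (real pipelines compute image-text similarity with a CLIP model), and the data items are purely illustrative.

```python
# Sketch of CLIP-score quality filtering, the step the paper argues can
# inadvertently reduce cultural diversity. Scores are hypothetical stand-ins
# for real CLIP image-text similarities; 0.3 is an assumed threshold.

def filter_by_clip_score(examples, threshold=0.3):
    """Keep only examples whose (precomputed) similarity clears the threshold."""
    return [ex for ex in examples if ex["score"] >= threshold]

web_data = [
    {"caption": "a dog on a beach", "score": 0.42},        # high-scoring: kept
    {"caption": "regional festival photo", "score": 0.21}, # long-tail: dropped
    {"caption": "stock photo of a laptop", "score": 0.35}, # kept
]

kept = filter_by_clip_score(web_data)
print([ex["caption"] for ex in kept])
```

The filter keeps only the two higher-scoring captions, illustrating how a single scalar cutoff can silently discard long-tail examples that a CLIP model scores poorly.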