
MultiModal | Meta AI - Chameleon

  • Related Project: Private
  • Category: Paper Review
  • Date: 2024-07-31

Chameleon: Mixed-Modal Early-Fusion Foundation Models

  • url: https://arxiv.org/abs/2405.09818
  • pdf: https://arxiv.org/pdf/2405.09818
  • html: https://arxiv.org/html/2405.09818v1
  • abstract: We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training approach from inception, an alignment recipe, and an architectural parameterization tailored for the early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range of tasks, including visual question answering, image captioning, text generation, image generation, and long-form mixed modal generation. Chameleon demonstrates broad and general capabilities, including state-of-the-art performance in image captioning tasks, outperforms Llama-2 in text-only tasks while being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image generation, all in a single model. It also matches or exceeds the performance of much larger models, including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal generation evaluation, where either the prompt or outputs contain mixed sequences of both images and text. Chameleon marks a significant step forward in a unified modeling of full multimodal documents.
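
The key architectural idea named in the abstract is early fusion: images are quantized into discrete tokens (e.g., by a VQ-style image tokenizer) and interleaved with text tokens in a single sequence over one shared vocabulary, so a single decoder-only transformer models both modalities from the start. Below is a minimal sketch of that interleaving, with both tokenizers mocked; the vocabulary sizes, special tokens, and model dimensions are illustrative assumptions, not Chameleon's actual values.

```python
# Minimal early-fusion sketch: text tokens and discrete image tokens share
# one vocabulary and one sequence. Tokenizers are mocked; all sizes and
# special-token IDs are illustrative, not Chameleon's real configuration.

import torch
import torch.nn as nn

TEXT_VOCAB = 32_000          # hypothetical BPE text vocabulary
IMAGE_CODEBOOK = 8_192       # hypothetical VQ codebook size
SPECIALS = {"<img_start>": 0, "<img_end>": 1}  # illustrative sentinel IDs

# One shared vocabulary: specials | text tokens | image tokens.
TEXT_OFFSET = len(SPECIALS)
IMAGE_OFFSET = TEXT_OFFSET + TEXT_VOCAB
VOCAB_SIZE = IMAGE_OFFSET + IMAGE_CODEBOOK

def mock_text_tokens(n: int) -> list[int]:
    """Stand-in for a BPE text tokenizer: returns n text-token IDs."""
    return (torch.randint(0, TEXT_VOCAB, (n,)) + TEXT_OFFSET).tolist()

def mock_image_tokens(n: int = 64) -> list[int]:
    """Stand-in for a VQ image tokenizer: n discrete codes per image,
    wrapped in sentinels so the model can delimit the modality switch."""
    codes = (torch.randint(0, IMAGE_CODEBOOK, (n,)) + IMAGE_OFFSET).tolist()
    return [SPECIALS["<img_start>"], *codes, SPECIALS["<img_end>"]]

# Early fusion: both modalities live in one interleaved token stream,
# modeled autoregressively by a single causal transformer stack.
sequence = mock_text_tokens(12) + mock_image_tokens() + mock_text_tokens(8)
ids = torch.tensor([sequence])

embed = nn.Embedding(VOCAB_SIZE, 256)  # one embedding table for everything
layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
causal = nn.Transformer.generate_square_subsequent_mask(ids.size(1))

hidden = layer(embed(ids), src_mask=causal, is_causal=True)
print(hidden.shape)  # (1, sequence_length, 256)
```

Because generation happens over the same shared vocabulary, the model can emit text tokens, image tokens, or any interleaving of the two, which is what enables the long-form mixed-modal generation the abstract describes.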
