MinWoo(Daniel) Park | Tech Blog


Flow to the Mode

  • Related Project: Private
  • Category: Paper Review
  • Date: 2025-03-18

Flow to the Mode: Mode-Seeking Diffusion Autoencoders for State-of-the-Art Image Tokenization

  • url: https://arxiv.org/abs/2503.11056
  • pdf: https://arxiv.org/pdf/2503.11056
  • html: https://arxiv.org/html/2503.11056v1
  • abstract: Since the advent of popular visual generation frameworks like VQGAN and latent diffusion models, state-of-the-art image generation systems have generally been two-stage systems that first tokenize or compress visual data into a lower-dimensional latent space before learning a generative model. Tokenizer training typically follows a standard recipe in which images are compressed and reconstructed subject to a combination of MSE, perceptual, and adversarial losses. Diffusion autoencoders have been proposed in prior work as a way to learn end-to-end perceptually-oriented image compression, but have not yet shown state-of-the-art performance on the competitive task of ImageNet-1K reconstruction. We propose FlowMo, a transformer-based diffusion autoencoder that achieves a new state-of-the-art for image tokenization at multiple compression rates without using convolutions, adversarial losses, spatially-aligned two-dimensional latent codes, or distilling from other tokenizers. Our key insight is that FlowMo training should be broken into a mode-matching pre-training stage and a mode-seeking post-training stage. In addition, we conduct extensive analyses and explore the training of generative models atop the FlowMo tokenizer. Our code and models will be available at this http URL.
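The abstract's key idea is splitting tokenizer training into a mode-matching pre-training stage (a standard diffusion/flow regression objective for the decoder) and a mode-seeking post-training stage. As a rough illustration of what the first stage's objective looks like, here is a minimal rectified-flow-style loss in numpy. This is a sketch under my own assumptions, not FlowMo's actual implementation: the `velocity_fn` decoder, shapes, and interpolation convention are all hypothetical stand-ins.

```python
import numpy as np

def flow_matching_loss(x0, velocity_fn, rng):
    """Illustrative mode-matching objective (rectified-flow style):
    regress the decoder's predicted velocity toward the straight-line
    velocity (noise - x0) at a random interpolation time t.
    Hypothetical sketch; not the paper's exact formulation."""
    noise = rng.standard_normal(x0.shape)
    # one random time per sample, broadcast over remaining dims
    t = rng.uniform(size=(x0.shape[0],) + (1,) * (x0.ndim - 1))
    x_t = (1.0 - t) * x0 + t * noise   # interpolate data -> noise
    target = noise - x0                # straight-line velocity field
    pred = velocity_fn(x_t, t)
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 16))     # dummy "latent-conditioned" data batch
# Dummy decoder that predicts zero velocity everywhere (placeholder)
loss = flow_matching_loss(x0, lambda x_t, t: np.zeros_like(x_t), rng)
```

The mode-seeking post-training stage described in the abstract would then adjust this decoder to concentrate on high-probability reconstructions rather than matching the full conditional distribution; the paper should be consulted for that objective's exact form.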