
Swan-GPT

  • Related Project: Private
  • Category: Paper Review
  • Date: 2025-04-14

SWAN-GPT: An Efficient and Scalable Approach for Long-Context Language Modeling

  • url: https://arxiv.org/abs/2504.08719
  • pdf: https://arxiv.org/pdf/2504.08719
  • html: https://arxiv.org/html/2504.08719v1
  • abstract: We present a decoder-only Transformer architecture that robustly generalizes to sequence lengths substantially longer than those seen during training. Our model, SWAN-GPT, interleaves layers without positional encodings (NoPE) and sliding-window attention layers equipped with rotary positional encodings (SWA-RoPE). Experiments demonstrate strong performance on sequence lengths significantly longer than the training length without the need for additional long-context training. This robust length extrapolation is achieved through our novel architecture, enhanced by a straightforward dynamic scaling of attention scores during inference. In addition, SWAN-GPT is more computationally efficient than standard GPT architectures, resulting in cheaper training and higher throughput. Further, we demonstrate that existing pre-trained decoder-only models can be efficiently converted to the SWAN architecture with minimal continued training, enabling longer contexts. Overall, our work presents an effective approach for scaling language models to longer contexts in a robust and efficient manner.
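To make the architecture described in the abstract more concrete, below is a minimal PyTorch sketch of interleaving global attention layers without positional encodings (NoPE) and sliding-window attention layers with rotary positional encodings (SWA-RoPE). The layer ratio, window size, model dimensions, and the exact form of the dynamic attention-score scaling are not specified in the abstract, so the values and the log-length scaling rule used here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of SWAN-style layer interleaving (NoPE global attention +
# SWA-RoPE windowed attention). Layer ratio, window size, dimensions, and the
# dynamic attention-score scaling rule are hypothetical placeholders.
import math
import torch
from torch import nn


def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply standard RoPE to x of shape [B, H, T, D]."""
    _, _, t, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(0, half, device=x.device) / half)
    angles = torch.arange(t, device=x.device)[:, None] * freqs[None, :]  # [T, half]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class AttentionLayer(nn.Module):
    """One attention layer: either global NoPE or sliding-window RoPE."""

    def __init__(self, dim: int, n_heads: int, window: int | None, use_rope: bool):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.window, self.use_rope = window, use_rope
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor, train_len: int = 2048) -> torch.Tensor:
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, t, self.n_heads, self.head_dim)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))

        if self.use_rope:  # SWA-RoPE layer: rotary positions on q and k
            q, k = rotary_embed(q), rotary_embed(k)

        # Causal mask; SWA layers additionally restrict attention to a local window.
        idx = torch.arange(t, device=x.device)
        mask = idx[None, :] <= idx[:, None]
        if self.window is not None:
            mask &= (idx[:, None] - idx[None, :]) < self.window

        # Hypothetical dynamic scaling of attention scores at inference time when
        # the context exceeds the training length (the paper's rule may differ).
        scale = 1.0 / math.sqrt(self.head_dim)
        if not self.training and t > train_len:
            scale *= math.log(t) / math.log(train_len)

        att = (q @ k.transpose(-2, -1)) * scale
        att = att.masked_fill(~mask, float("-inf")).softmax(dim=-1)
        out = (att @ v).transpose(1, 2).reshape(b, t, -1)
        return self.proj(out)


class SwanStack(nn.Module):
    """Alternate global NoPE layers and windowed SWA-RoPE layers."""

    def __init__(self, dim: int = 256, n_heads: int = 4, depth: int = 8, window: int = 512):
        super().__init__()
        self.layers = nn.ModuleList(
            AttentionLayer(
                dim, n_heads,
                window=None if i % 2 == 0 else window,  # even: NoPE, odd: SWA-RoPE
                use_rope=(i % 2 == 1),
            )
            for i in range(depth)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = x + layer(x)  # residual only; norms and MLP blocks omitted for brevity
        return x
```

The intuition, as the abstract frames it, is that the windowed RoPE layers handle local ordering at bounded cost, while the NoPE layers mix information globally without committing to absolute positions, which is what allows inference on sequences far longer than those seen during training.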
