
  • Related Project: Private
  • Category: Paper Review
  • Date: 2025-07-11

SAS: Simulated Attention Score

  • URL: https://arxiv.org/abs/2507.07694
  • PDF: https://arxiv.org/pdf/2507.07694
  • Abstract: The attention mechanism is a core component of the Transformer architecture. Various methods have been developed to compute attention scores, including multi-head attention (MHA), multi-query attention, group-query attention and so on. We further analyze the MHA and observe that its performance improves as the number of attention heads increases, provided the hidden size per head remains sufficiently large. Therefore, increasing both the head count and hidden size per head with minimal parameter overhead can lead to significant performance gains at a low cost. Motivated by this insight, we introduce Simulated Attention Score (SAS), which maintains a compact model size while simulating a larger number of attention heads and hidden feature dimension per head. This is achieved by projecting a low-dimensional head representation into a higher-dimensional space, effectively increasing attention capacity without increasing parameter count. Beyond the head representations, we further extend the simulation approach to feature dimension of the key and query embeddings, enhancing expressiveness by mimicking the behavior of a larger model while preserving the original model size. To control the parameter cost, we also propose Parameter-Efficient Attention Aggregation (PEAA). Comprehensive experiments on a variety of datasets and tasks demonstrate the effectiveness of the proposed SAS method, achieving significant improvements over different attention variants.
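
For intuition, below is a minimal PyTorch-style sketch of how the abstract's description could be realized: compact projections produce queries, keys, and values at the original model size, a cheap linear map lifts the head axis to a larger simulated head count, another lifts the query/key feature dimension, and a small shared map folds the simulated heads back before the output projection. The class and parameter names (SimulatedAttention, n_sim_heads, sim_head_dim, head_up, head_down) and the exact projection layout are my own guesses for illustration, not the paper's reference implementation, and the shared head_down map only stands in for what PEAA does.

```python
# Minimal sketch of the SAS idea, assuming a PyTorch-style layer.
# Hypothetical names/layout; not the paper's official code.
import math

import torch
import torch.nn as nn


class SimulatedAttention(nn.Module):
    """Compact MHA that simulates more heads and a larger per-head dimension."""

    def __init__(self, d_model: int, n_heads: int, n_sim_heads: int, sim_head_dim: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.n_sim_heads = n_sim_heads
        self.sim_head_dim = sim_head_dim

        # Compact projections, roughly the parameter cost of vanilla MHA.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

        # Cheap "simulation" maps: lift the head axis (n_heads -> n_sim_heads)
        # and the query/key feature axis (head_dim -> sim_head_dim).
        self.head_up = nn.Linear(n_heads, n_sim_heads, bias=False)
        self.feat_up_q = nn.Linear(self.head_dim, sim_head_dim, bias=False)
        self.feat_up_k = nn.Linear(self.head_dim, sim_head_dim, bias=False)

        # Fold simulated heads back to the original head count before the
        # output projection; one small shared map stands in for PEAA here.
        self.head_down = nn.Linear(n_sim_heads, n_heads, bias=False)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape

        def split(t):  # (B, T, d_model) -> (B, T, n_heads, head_dim)
            return t.view(B, T, self.n_heads, self.head_dim)

        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))

        def lift_heads(t):  # mix along the head axis: n_heads -> n_sim_heads
            return self.head_up(t.transpose(-1, -2)).transpose(-1, -2)

        q, k, v = lift_heads(q), lift_heads(k), lift_heads(v)  # (B, T, n_sim, head_dim)
        q = self.feat_up_q(q)                                  # (B, T, n_sim, sim_head_dim)
        k = self.feat_up_k(k)

        # Attention scores are computed in the simulated (wider) space.
        q = q.permute(0, 2, 1, 3)                              # (B, n_sim, T, sim_head_dim)
        k = k.permute(0, 2, 1, 3)
        v = v.permute(0, 2, 1, 3)                              # (B, n_sim, T, head_dim)
        scores = q @ k.transpose(-1, -2) / math.sqrt(self.sim_head_dim)
        out = scores.softmax(dim=-1) @ v                       # (B, n_sim, T, head_dim)

        # Aggregate simulated heads back to the compact layout.
        out = out.permute(0, 2, 3, 1)                          # (B, T, head_dim, n_sim)
        out = self.head_down(out)                              # (B, T, head_dim, n_heads)
        out = out.permute(0, 1, 3, 2).reshape(B, T, -1)        # (B, T, d_model)
        return self.out_proj(out)


# Quick shape check.
layer = SimulatedAttention(d_model=256, n_heads=4, n_sim_heads=16, sim_head_dim=128)
print(layer(torch.randn(2, 10, 256)).shape)  # torch.Size([2, 10, 256])
```

The point of the sketch is the cost structure: the only additions over vanilla MHA are the small lifting and folding maps (n_heads × n_sim_heads and head_dim × sim_head_dim matrices), which are tiny next to the d_model × d_model projections, so simulating more heads and wider query/key features stays close to the original parameter budget, matching the abstract's "minimal parameter overhead" claim.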