Efficient Sparse Attention needs Adaptive Token Release
Abstract
In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide array of text-centric tasks. However, their 'large' scale introduces significant computational and storage challenges, particularly in managing the key-value states of the transformer, which limits their wider applicability. Therefore, we propose to adaptively release resources from caches and rebuild the necessary key-value states. Specifically, we accomplish this with a lightweight controller module that approximates an ideal top-K sparse attention. This module retains the tokens with the highest top-K attention weights and simultaneously rebuilds the discarded but necessary tokens, which may become essential for future decoding. Comprehensive experiments on natural language generation and modeling show that our method is not only competitive with full attention in terms of performance but also achieves a significant throughput improvement of up to 221.8%. The code for replication is available at https://github.com/WHUIR/ADORE.
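Since the abstract only outlines the mechanism, the snippet below is a minimal PyTorch sketch of the general idea it describes: during decoding, keep only the cached key-value entries with the highest attention weights for the current query. It is not the ADORE implementation, it omits the controller that rebuilds discarded tokens, and the names `topk_evict`, `decode_step`, and the `budget` parameter are illustrative assumptions.

```python
# Minimal sketch of top-K KV-cache retention during decoding.
# Illustrative only: single head, no batching, and no controller that
# rebuilds evicted tokens (which the paper's method additionally provides).
import torch
import torch.nn.functional as F


def topk_evict(keys, values, attn_weights, budget):
    """Keep only the `budget` cached tokens with the largest attention weights.

    keys, values: (seq_len, head_dim) cached states for one head.
    attn_weights: (seq_len,) attention weights of the current query over the cache.
    """
    if keys.size(0) <= budget:
        return keys, values
    keep = torch.topk(attn_weights, budget).indices.sort().values  # keep original order
    return keys[keep], values[keep]


def decode_step(query, keys, values):
    """Single-head attention of one query over the cache; returns output and weights."""
    scores = (keys @ query) / keys.size(-1) ** 0.5  # (seq_len,)
    weights = F.softmax(scores, dim=-1)
    return weights @ values, weights


if __name__ == "__main__":
    torch.manual_seed(0)
    head_dim, budget = 64, 8
    keys = torch.empty(0, head_dim)
    values = torch.empty(0, head_dim)
    for step in range(32):
        # Append the new token's key/value to the cache, attend, then evict.
        keys = torch.cat([keys, torch.randn(1, head_dim)])
        values = torch.cat([values, torch.randn(1, head_dim)])
        query = torch.randn(head_dim)
        out, weights = decode_step(query, keys, values)
        keys, values = topk_evict(keys, values, weights, budget)
    print("cache size after decoding:", keys.size(0))  # bounded by `budget`
```

In this simplified form the cache never grows beyond `budget` entries, which is what yields the memory and throughput savings; the paper's contribution is the lightweight controller that additionally predicts and rebuilds evicted tokens that later become necessary.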
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- LoCoCo: Dropping In Convolutions for Long Context Compression (2024)
- Attention Score is not All You Need for Token Importance Indicator in KV Cache Reduction: Value Also Matters (2024)
- Neurocache: Efficient Vector Retrieval for Long-range Language Modeling (2024)
- Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (2024)
- LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference (2024)