GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding
Abstract
Speculative decoding is a relatively new decoding framework that leverages small, efficient draft models to reduce the latency of LLMs. In this study, we introduce GliDe and CaPE, two low-hassle modifications to vanilla speculative decoding that further improve the decoding speed of a frozen LLM. Specifically, GliDe is a modified draft-model architecture that reuses the cached keys and values from the target LLM, while CaPE is a proposal expansion method that uses the draft model's confidence scores to select additional candidate tokens for verification. Extensive experiments on different benchmarks demonstrate that our proposed GliDe draft model significantly reduces expected decoding latency. Wall-time measurements further show that GliDe accelerates Vicuna models by up to 2.17x, and adding CaPE extends the improvement to 2.61x. We will release our code, data, and the trained draft models.
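As a rough illustration of the idea behind CaPE (not the paper's actual implementation), confidence-guided proposal expansion can be sketched as follows. The function name, the fixed token `budget`, and the greedy allocation strategy are all assumptions for this sketch: the draft proposes one token per position, and the remaining verification budget is spent on the highest-probability alternative tokens, which naturally concentrates extra candidates at low-confidence positions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def expand_candidates(draft_logits, budget):
    """Hypothetical CaPE-style expansion.

    draft_logits: one logit vector per drafted position.
    budget: number of *extra* candidate tokens to add across all positions.
    Returns a list of candidate-token lists, one per position.
    """
    probs = [softmax(l) for l in draft_logits]
    # Start with the greedy (top-1) draft token at every position.
    candidates = [[max(range(len(p)), key=p.__getitem__)] for p in probs]
    # Collect all non-greedy alternatives, then spend the budget on the
    # most probable ones; low-confidence positions contribute the
    # highest-probability alternatives, so they receive the extra slots.
    extras = []
    for pos, p in enumerate(probs):
        order = sorted(range(len(p)), key=p.__getitem__, reverse=True)
        for tok in order[1:]:
            extras.append((p[tok], pos, tok))
    extras.sort(reverse=True)
    for _, pos, tok in extras[:budget]:
        candidates[pos].append(tok)
    return candidates
```

For example, with a confident first position and an uncertain second one, the extra budget flows to the second position, whose expanded candidate set is then checked in a single verification pass by the target LLM.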
Community
The following similar papers were recommended by the Semantic Scholar API:
- Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting (2024)
- Recurrent Drafter for Fast Speculative Decoding in Large Language Models (2024)
- Clover: Regressive Lightweight Speculative Decoding with Sequential Knowledge (2024)
- SDSAT: Accelerating LLM Inference through Speculative Decoding with Semantic Adaptive Tokens (2024)
- Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration (2024)