nGPT: Normalized Transformer with Representation Learning on the Hypersphere
Abstract
We propose a novel neural network architecture, the normalized Transformer (nGPT), with representation learning on the hypersphere. In nGPT, all vectors forming the embeddings, MLP, attention matrices and hidden states are normalized to unit norm. The input stream of tokens travels on the surface of a hypersphere, with each layer contributing a displacement towards the target output predictions. These displacements are defined by the MLP and attention blocks, whose vector components also reside on the same hypersphere. Experiments show that nGPT learns much faster, reducing the number of training steps required to achieve the same accuracy by a factor of 4 to 20, depending on the sequence length.
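For intuition, here is a minimal sketch of what such a normalized residual update could look like in PyTorch. This is an illustration under simplified assumptions, not the paper's exact parametrization: `block`, `alpha`, and the `normalize` helper are placeholders, and the real architecture also normalizes the embedding, MLP, and attention weight vectors.

```python
import torch
import torch.nn as nn


def normalize(x, dim=-1, eps=1e-8):
    # Project vectors back onto the unit hypersphere.
    return x / (x.norm(dim=dim, keepdim=True) + eps)


def ngpt_layer_update(h, block, alpha):
    """Hypothetical nGPT-style update (a sketch, not the official code).

    The block's normalized output is treated as a target point on the
    hypersphere; the hidden state moves toward it by a step size alpha
    and is then re-normalized, so it never leaves the sphere.
    """
    h_target = normalize(block(h))              # suggestion from attention or MLP block
    h = normalize(h + alpha * (h_target - h))   # displacement toward the target, then re-project
    return h


# Example: hidden states of shape (batch, seq_len, d_model), kept unit-norm throughout.
h = normalize(torch.randn(2, 16, 64))
mlp = nn.Sequential(nn.Linear(64, 256), nn.SiLU(), nn.Linear(256, 64))
h = ngpt_layer_update(h, mlp, alpha=0.05)
```

In this reading, each layer nudges the token representation along the sphere toward its output, which is how the abstract's "displacement towards the target output predictions" can be pictured.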
Community
Hi, this was a wonderful read. We summarised this paper and a few others in our biweekly blog.
- nGPT: Normalized Transformer with Representation Learning on the Hypersphere
- LAUREL: Learned Augmented Residual Layer
- TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
Please give it a read and share your thoughts/feedback.