Collections including paper arxiv:1706.03762
- Collection:
  - Addition is All You Need for Energy-efficient Language Models (Paper • 2410.00907 • Published • 143)
  - Emu3: Next-Token Prediction is All You Need (Paper • 2409.18869 • Published • 89)
  - An accurate detection is not all you need to combat label noise in web-noisy datasets (Paper • 2407.05528 • Published • 3)
  - Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP (Paper • 2407.00402 • Published • 22)
- Collection:
  - Attention Is All You Need (Paper • 1706.03762 • Published • 44)
  - Playing Atari with Deep Reinforcement Learning (Paper • 1312.5602 • Published)
  - BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper • 1810.04805 • Published • 14)
  - Language Models are Few-Shot Learners (Paper • 2005.14165 • Published • 11)
- Collection:
  - Self-Play Preference Optimization for Language Model Alignment (Paper • 2405.00675 • Published • 24)
  - FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness (Paper • 2205.14135 • Published • 11)
  - Attention Is All You Need (Paper • 1706.03762 • Published • 44)
  - FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning (Paper • 2307.08691 • Published • 8)
- Collection:
  - Attention Is All You Need (Paper • 1706.03762 • Published • 44)
  - Language Models are Few-Shot Learners (Paper • 2005.14165 • Published • 11)
  - GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints (Paper • 2305.13245 • Published • 5)
  - Llama 2: Open Foundation and Fine-Tuned Chat Models (Paper • 2307.09288 • Published • 242)
- Collection:
  - Recurrent Neural Network Regularization (Paper • 1409.2329 • Published)
  - Pointer Networks (Paper • 1506.03134 • Published)
  - Order Matters: Sequence to sequence for sets (Paper • 1511.06391 • Published)
  - GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism (Paper • 1811.06965 • Published)
- Collection:
  - Attention Is All You Need (Paper • 1706.03762 • Published • 44)
  - LLaMA: Open and Efficient Foundation Language Models (Paper • 2302.13971 • Published • 13)
  - Efficient Tool Use with Chain-of-Abstraction Reasoning (Paper • 2401.17464 • Published • 16)
  - MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts (Paper • 2407.21770 • Published • 22)