QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models • arXiv:2310.16795 • Published Oct 25, 2023 • 26 upvotes
Ensemble-Instruct: Generating Instruction-Tuning Data with a Heterogeneous Mixture of LMs • arXiv:2310.13961 • Published Oct 21, 2023 • 4 upvotes
The Consensus Game: Language Model Generation via Equilibrium Search • arXiv:2310.09139 • Published Oct 13, 2023 • 12 upvotes
Large Language Model Cascades with Mixture of Thoughts Representations for Cost-efficient Reasoning • arXiv:2310.03094 • Published Oct 4, 2023 • 12 upvotes
EcoAssistant: Using LLM Assistant More Affordably and Accurately • arXiv:2310.03046 • Published Oct 3, 2023 • 5 upvotes
Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration • arXiv:2310.00280 • Published Sep 30, 2023 • 3 upvotes
Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference • arXiv:2308.12066 • Published Aug 23, 2023 • 4 upvotes
Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference • arXiv:2303.06182 • Published Mar 10, 2023 • 1 upvote
SMILE: Scaling Mixture-of-Experts with Efficient Bi-level Routing • arXiv:2212.05191 • Published Dec 10, 2022 • 1 upvote
Cross-Domain Ensemble Distillation for Domain Generalization • arXiv:2211.14058 • Published Nov 25, 2022 • 1 upvote
Multi-Head Adapter Routing for Cross-Task Generalization • arXiv:2211.03831 • Published Nov 7, 2022 • 2 upvotes
MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts • arXiv:2401.04081 • Published Jan 8, 2024 • 70 upvotes