Efficient-Large-Model
Welcome to the Efficient Large Model Team! 👋
We are researchers from NVIDIA and MIT working on GPU-accelerated large models for generative AI.
🚀 Introduction
The Efficient Large Model Team is a collaboration between researchers from NVIDIA and MIT dedicated to developing and optimizing GPU-accelerated, efficient large models. We focus on pushing the boundaries of generative AI by designing models that are not only powerful but also efficient in their use of computational resources. We are committed to advancing the field of AI by making state-of-the-art models deployable, scalable, and accessible.
🌈 Contribution Guidelines
We welcome contributions from the community to help us further improve and expand our research efforts. Whether you're an experienced researcher, a student eager to learn, or a developer passionate about efficiency in AI, there are several ways to get involved:
- Contribute Code: Help us develop and optimize efficient large models by contributing code to our GitHub repositories.
- Report Issues: If you encounter any bugs or have suggestions for improvement, please open an issue on the respective repository.
- Provide Feedback: Share your insights and ideas through discussions on our GitHub repositories or join our community forums.
- Spread the Word: Let others know about our work and encourage them to join our community.
- Internship: We have openings at both MIT and NVIDIA for excellent contributors with a proven track record.
🍿 Fun Facts
Our team comprises researchers from diverse backgrounds, bringing together expertise from both industry and academia. We're passionate about optimizing AI models not just for performance but also for sustainability and accessibility. In our spare time, we love experimenting with new algorithms and techniques to enhance the efficiency of our models, and skiing at the speed of a GPU. Join us on this exciting journey of building the next generation of efficient large models! 🌟
👩‍💻 Useful Resources
- MIT HAN Lab: https://hanlab.mit.edu
- NVIDIA TensorRT-LLM: https://github.com/NVIDIA/TensorRT-LLM
Collections
- Efficient-Large-Model/Llama-3-LongVILA-8B-128Frames (Text Generation)
- Efficient-Large-Model/Llama-3-LongVILA-8B-256Frames (Text Generation)
- Efficient-Large-Model/Llama-3-LongVILA-8B-512Frames (Text Generation)
- Efficient-Large-Model/Llama-3-LongVILA-8B-1024Frames (Text Generation)
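
To try one of these checkpoints, the snippet below is a minimal sketch that downloads the 128-frame LongVILA model files from the Hub with `huggingface_hub`; how to run inference depends on each model's own model card and is not covered here.

```python
# Minimal sketch: fetch one of the LongVILA checkpoints listed in the collection above.
# Assumes `huggingface_hub` is installed; gated repos may also require `huggingface-cli login`.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Efficient-Large-Model/Llama-3-LongVILA-8B-128Frames",
)
print(f"Model files downloaded to: {local_dir}")
```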