svanlin-tencent committed
Commit: f5ca05e
1 Parent(s): 85ae140

change rdm

Files changed (1): README.md (+1 -1)

README.md CHANGED
@@ -1,6 +1,6 @@
  ### Model Introduction
 
- With the rapid development of artificial intelligence technology, large language models (LLMs) have made significant progress in fields such as natural language processing, computer vision, and scientific tasks. However, as the scale of these models increases, optimizing resource consumption while maintaining high performance has become a key challenge. To address this challenge, we have explored Mixture of Experts (MoE) models. <span style="background:#fff88f">The currently unveiled Hunyuan-Large (Hunyuan-MoE-A50B) model is the largest open-source Transformer-based MoE model </span>in the industry, featuring a total of 389 billion parameters and <span style="background:#fff88f">50</span> billion active parameters. This is currently the largest open-source Transformer-based MoE model in the industry, featuring a total of 389 billion parameters and 50 billion active parameters.
+ With the rapid development of artificial intelligence technology, large language models (LLMs) have made significant progress in fields such as natural language processing, computer vision, and scientific tasks. However, as the scale of these models increases, optimizing resource consumption while maintaining high performance has become a key challenge. To address this challenge, we have explored Mixture of Experts (MoE) models. The currently unveiled Hunyuan-Large (Hunyuan-MoE-A50B) model is the largest open-source Transformer-based MoE model in the industry, featuring a total of 389 billion parameters and 50 billion active parameters. This is currently the largest open-source Transformer-based MoE model in the industry, featuring a total of 389 billion parameters and 50 billion active parameters.
 
  By open-sourcing the Hunyuan-Large model and revealing related technical details, we hope to inspire more researchers with innovative ideas and collectively advance the progress and application of AI technology. We welcome you to join our open-source community to explore and optimize future AI models together!
 
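
The README paragraph above rests on the distinction between a MoE model's total and active parameters: each token is routed to only a few experts, so only a fraction of the weights participate in any single forward pass. The sketch below is a minimal, hypothetical top-k MoE layer in PyTorch, not Hunyuan-Large's actual architecture; the class name, sizes, and routing scheme are all illustrative assumptions meant only to make the total-versus-active accounting concrete.

```python
# Illustrative sketch only (not Hunyuan-Large code): a toy top-k MoE feed-forward layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    """Toy Mixture-of-Experts layer: each token uses only k of n_experts."""

    def __init__(self, d_model=64, d_ff=256, n_experts=16, k=2):
        super().__init__()
        self.k = k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):
        # x: (num_tokens, d_model)
        scores = self.router(x)                            # (num_tokens, n_experts)
        weights, idx = torch.topk(scores, self.k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                   # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out


moe = TopKMoE()
total_params = sum(p.numel() for p in moe.parameters())
# Per token, only the router plus k experts actually run.
active_params = (sum(p.numel() for p in moe.router.parameters())
                 + moe.k * sum(p.numel() for p in moe.experts[0].parameters()))
print(f"total: {total_params:,}  active per token: {active_params:,}")

tokens = torch.randn(8, 64)
print(moe(tokens).shape)  # torch.Size([8, 64])
```

With 16 experts and k=2, roughly an eighth of the expert weights are touched per token, which is the same kind of gap (scaled up enormously) as the 389B total versus 50B active parameters quoted in the README.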