shreyajn committed on
Commit
f777769
1 Parent(s): 5734701

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -15,11 +15,13 @@ tags:
 # Baichuan2-7B: Optimized for Mobile Deployment
 ## State-of-the-art large language model useful on a variety of language understanding and generation tasks


+
 Baichuan2-7B is a family of LLMs. It achieves state-of-the-art performance for its size on standard Chinese and English benchmarks (C-EVAL/MMLU). Its 4-bit weights and 16-bit activations make it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Baichuan2-PromptProcessor-Quantized's latency, and the average time per additional token is Baichuan2-TokenGenerator-Quantized's latency.

-This is based on the implementation of Baichuan2-7B found
-[here]({source_repo}). More details on model performance
-across various devices can be found [here](https://aihub.qualcomm.com/models/baichuan2_7b_quantized).
+This model is an implementation of Baichuan2-7B found [here](https://github.com/baichuan-inc/Baichuan-7B/).
+
+
+More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/baichuan2_7b_quantized).

 ### Model Details
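The README's split between prompt-processor latency (time to first token) and token-generator latency (time per additional token) implies a simple end-to-end latency estimate. A minimal sketch of that arithmetic — the function name and the example latency numbers are hypothetical placeholders, not measured values from this model:

```python
def estimate_generation_ms(ttft_ms: float, per_token_ms: float, output_tokens: int) -> float:
    """Estimate total generation latency in milliseconds.

    ttft_ms:       time to first token (prompt-processor latency)
    per_token_ms:  average time per additional token (token-generator latency)
    output_tokens: number of tokens to generate
    """
    if output_tokens < 1:
        return 0.0
    # The first token comes from the prompt processor; every
    # additional token adds one token-generator step.
    return ttft_ms + (output_tokens - 1) * per_token_ms

# Hypothetical example values (not measured benchmarks):
print(estimate_generation_ms(500.0, 50.0, 101))  # 500 + 100 * 50 = 5500.0
```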