Yaserrati committed
Commit
a8178d8
1 Parent(s): ef38235

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -9,19 +9,19 @@ tags:
  - code
  ---

- ## Model Summary
+ ## Model Summary:

  Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source consisting of various synthetic NLP texts and websites filtered for safety and educational value. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showed nearly state-of-the-art performance among models with fewer than 13 billion parameters.

  Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.

- ## How to Use
+ ## How to Use:

  Phi-2 has been integrated into `transformers` as of version 4.37.0; please ensure that you are using that version or higher.

  Phi-2 is known to have an attention overflow issue when run in FP16. If you hit this issue, enable or disable autocast around the [PhiAttention.forward()](https://github.com/huggingface/transformers/blob/main/src/transformers/models/phi/modeling_phi.py#L306) function.

- ## Intended Uses
+ ## Intended Uses:

  Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
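
For reference, the "How to Use" requirement above can be checked in code. The sketch below is illustrative and not part of the commit: it assumes the upstream `microsoft/phi-2` checkpoint on the Hugging Face Hub, the standard `transformers` auto classes, and `accelerate` for `device_map="auto"`.

```python
# Minimal sketch: verify the transformers version, then load Phi-2 and
# generate a short completion. Assumes the upstream "microsoft/phi-2"
# checkpoint; swap in your own model id if you are using a fork.
from packaging import version
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

# Phi-2 support landed in transformers 4.37.0, per the README above.
assert version.parse(transformers.__version__) >= version.parse("4.37.0")

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # needs `accelerate`; remove for plain CPU loading
)

prompt = "Instruct: Explain what a Transformer is.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```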
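
The attention overflow note is terse. One hedged reading, sketched below, is to keep `PhiAttention.forward()` out of any FP16 autocast region during generation; the class path matches the modeling file linked above, but the monkey-patch itself is an illustration rather than an official fix.

```python
# Hedged sketch of the README's enable/disable-autocast suggestion: run
# PhiAttention.forward outside any surrounding FP16 autocast region so its
# softmax/matmul accumulations are not downcast. Illustrative monkey-patch,
# not an official fix.
import torch
from transformers.models.phi import modeling_phi

_original_forward = modeling_phi.PhiAttention.forward

def _attention_forward_no_autocast(self, *args, **kwargs):
    # Temporarily disable CUDA autocast for the attention block only.
    with torch.autocast(device_type="cuda", enabled=False):
        return _original_forward(self, *args, **kwargs)

modeling_phi.PhiAttention.forward = _attention_forward_no_autocast
```

Note that this only changes behavior when generation actually runs inside a `torch.autocast` context; if the weights are simply loaded in FP16 with no autocast active, the patch is a no-op.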
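
"Intended Uses" names three prompt formats without showing them. The templates below follow the conventions in the upstream Phi-2 model card (an `Instruct:`/`Output:` pair for QA, named speaker turns for chat, and a partial function for code); treat the exact strings as illustrative, since the base model enforces no fixed format.

```python
# Illustrative prompt templates for the QA, chat, and code formats named in
# "Intended Uses". The exact strings follow the upstream model card's style
# and are conventions, not a hard requirement.

# QA format: a single instruction followed by an "Output:" cue.
qa_prompt = "Instruct: Why does ice float on water?\nOutput:"

# Chat format: named speakers; the model continues the last turn.
chat_prompt = (
    "Alice: I keep getting a KeyError when reading a dict in Python.\n"
    "Bob:"
)

# Code format: a signature plus docstring; the model completes the body.
code_prompt = (
    "def fibonacci(n: int) -> int:\n"
    '    """Return the n-th Fibonacci number."""\n'
)
```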