gugarosa and eltociear committed
Commit 834565c
1 Parent(s): e35b92d

Update README.md (#69)


- Update README.md (36b427216ebb497ea0091db5187760725c3164ac)


Co-authored-by: Ikko Eltociear Ashimine <[email protected]>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -127,7 +127,7 @@ Furthermore, in the forward pass of the model, we currently do not support outpu
 
 * Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
 
-* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring trainig data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
+* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
 
 * Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.