Fix context length in model card #2
by SaisExperiments - opened
README.md CHANGED
@@ -33,7 +33,7 @@ Our appreciation for the sponsors of Dolphin 2.9.4:
 
 This model is based on Google Gemma2 2b, and is governed by the Gemma license.
 
-The base model has
+The base model has 8K context, and our finetuning used 8192 sequence length.
 
 `ollama run CognitiveComputations/dolphin-gemma2:2b`
 
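With the corrected 8K figure, one practical note for the `ollama run` command shown in the diff: ollama's default context window is smaller than 8K, so the full window may need to be requested explicitly. A minimal sketch, assuming the stock ollama CLI; the 8192 value comes from the corrected card, and everything else is standard ollama usage rather than part of this change:

```
# Command from the model card:
ollama run CognitiveComputations/dolphin-gemma2:2b

# Inside the interactive session, raise the context window to the
# model's full 8K (ollama defaults to a smaller num_ctx):
/set parameter num_ctx 8192
```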