fbaldassarri committed • Commit efc51d3 • Parent(s): 3d357a1
Update README.md

README.md CHANGED
@@ -1,3 +1,9 @@
+---
+license: llama2
+language:
+- en
+pipeline_tag: text-generation
+---
 # Llama-2-70B-Chat-GGUF-tokenizer-legacy
 
 ## Tokenizer for llama-2-70b-chat
@@ -10,5 +16,4 @@ Note: converted using [convert_llama_weights_to_hf.py](https://github.com/huggin
 
 1. Download a .gguf file from [TheBloke/Llama-2-70B-Chat-GGUF](https://huggingface.co/TheBloke/Llama-2-70B-Chat-GGUF) based on your preferred quantization method;
 
-2. Place your .gguf in a subfolder of models/ along with these 4 files.
-
+2. Place your .gguf in a subfolder of models/ along with these 4 files.
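The two README steps above can be sketched as a shell layout. The folder name and all file names below are illustrative assumptions (the commit does not enumerate the "4 files"; the names used are the standard Hugging Face tokenizer artifacts, and the `.gguf` file name is a placeholder for whichever quantization you downloaded):

```shell
# Create a subfolder of models/ (folder name is an assumption).
mkdir -p models/llama-2-70b-chat

# Placeholder for the quantized model downloaded from
# TheBloke/Llama-2-70B-Chat-GGUF (file name assumed):
touch models/llama-2-70b-chat/llama-2-70b-chat.Q4_K_M.gguf

# Placeholders for the tokenizer files from this repo
# (assumed to be the standard four Hugging Face tokenizer files):
touch models/llama-2-70b-chat/tokenizer.model
touch models/llama-2-70b-chat/tokenizer.json
touch models/llama-2-70b-chat/tokenizer_config.json
touch models/llama-2-70b-chat/special_tokens_map.json

# Show the resulting layout:
ls models/llama-2-70b-chat
```

In a real setup you would replace each `touch` with the actual download or copy of the corresponding file.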