Update README.md

README.md
---
library_name: transformers
pipeline_tag: text-generation
---

# Summary

A 4-bit quantization of [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b); less than 8 GB of VRAM is required.
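
The VRAM figure can be sanity-checked with back-of-the-envelope arithmetic (the parameter count below is an approximation, not a measured value): 7B parameters at 4 bits each come to roughly 3.5 GB of weights, leaving headroom under 8 GB for activations, the KV cache, and CUDA overhead.

```python
# Back-of-the-envelope weight-memory estimate for a 4-bit 7B model.
params = 7_000_000_000          # approximate parameter count of typhoon-7b
bits_per_param = 4              # 4-bit quantization
weight_bytes = params * bits_per_param / 8
weight_gib = weight_bytes / 1024**3
print(f'{weight_gib:.2f} GiB of weights')  # -> 3.26 GiB of weights
```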

# Steps to reproduce

```python
# init parameters
model_name: str = 'scb10x/typhoon-7b'

# ... (elided in the diff)
sample: str = 'ความหมายของชีวิตคืออะไร?\n'  # Thai: 'What is the meaning of life?'
output = generator(sample, pad_token_id=tokenizer.eos_token_id)
print(output[0]['generated_text'])
```

# `requirement.txt`

```txt
torch==2.1.2
accelerate==0.25.0
bitsandbytes==0.41.3
#transformers==4.37.0.dev0
transformers @ git+https://github.com/huggingface/transformers
```