add example for text-generation-inference
#3 by nbroad (HF staff) - opened

README.md CHANGED
@@ -84,6 +84,15 @@ clean_output = output_text.split("### Response:")[1].strip()
 print(clean_output)
 ```
 
+It can also be used with text-generation-inference
+
+```sh
+model=Writer/InstructPalmyra-20b
+volume=$PWD/data
+
+docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference --model-id $model
+```
+
 
 ### Limitations and Biases
 
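Once the container started by the `docker run` line above is up, it serves text-generation-inference's REST API on the mapped port. A minimal sketch of a query, assuming TGI's documented `/generate` route and the `8080:80` port mapping from the diff; the instruction text in the payload is illustrative only:

```sh
# Build a JSON payload for TGI's /generate endpoint.
# The "### Instruction:" / "### Response:" framing mirrors the prompt
# format used earlier in this README; the prompt itself is a placeholder.
payload='{"inputs":"### Instruction:\nTell me about NLP.\n\n### Response:","parameters":{"max_new_tokens":64}}'
printf '%s\n' "$payload"

# Send it to the local server (requires the container above to be running):
# curl 127.0.0.1:8080/generate -X POST -H 'Content-Type: application/json' -d "$payload"
```

The generated text comes back as JSON with a `generated_text` field, which can then be post-processed the same way as the `clean_output` example above.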