update readme for card generation #128 · by ariG23498 (HF staff)
app.py CHANGED
````diff
@@ -174,6 +174,14 @@ def process_model(model_id, q_method, use_imatrix, imatrix_q_method, private_rep
 # {new_repo_id}
 This model was converted to GGUF format from [`{model_id}`](https://huggingface.co/{model_id}) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/{model_id}) for more details on the model.
+
+## Use with ollama
+Install ollama from the [official website](https://ollama.com/).
+
+Run the model on the CLI.
+```sh
+ollama run hf.co/{new_repo_id}
+```
 
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
````
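For context on the new section: ollama can pull models directly from any Hugging Face repository that contains GGUF files, and a tag suffix selects a specific quantization. A usage sketch of the command the generated card now documents; the repo names below are hypothetical examples of what `{new_repo_id}` resolves to:

```sh
# Pull and run the converted GGUF repo (ollama picks a default quantization)
ollama run hf.co/someuser/SmolLM-135M-Q4_K_M-GGUF

# Pin an explicit quantization by tag if the repo ships several
ollama run hf.co/someuser/SmolLM-135M-GGUF:Q4_K_M
```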
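The "Use with llama.cpp" section left unchanged below the hunk continues along these lines in generated cards (a sketch, assuming the current llama.cpp CLI flags; the repo and file names are hypothetical):

```sh
# Install llama.cpp (Homebrew works on macOS and Linux)
brew install llama.cpp

# llama-cli can fetch the GGUF straight from the Hub via --hf-repo/--hf-file
llama-cli --hf-repo someuser/SmolLM-135M-Q4_K_M-GGUF \
  --hf-file smollm-135m-q4_k_m.gguf \
  -p "The meaning to life and the universe is"
```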