:pencil: [Doc] Readme: New model and shields badge, and upgrade to v1.1.1
Files changed:
- README.md +6 −3
- configs/config.json +1 −1
README.md
CHANGED
@@ -8,19 +8,22 @@ app_port: 23333
 ---
 
 ## HF-LLM-API
+
+![](https://img.shields.io/github/v/release/hansimov/hf-llm-api?label=HF-LLM-API&color=blue&cacheSeconds=60)
+
 Huggingface LLM Inference API in OpenAI message format.
 
 Project link: https://github.com/Hansimov/hf-llm-api
 
 ## Features
 
-- Available Models (2024/
-  - `mistral-7b`, `mixtral-8x7b`, `nous-mixtral-8x7b`, `gemma-7b`
+- Available Models (2024/04/07):
+  - `mistral-7b`, `mixtral-8x7b`, `nous-mixtral-8x7b`, `gemma-7b`, `gpt-3.5.turbo`
 - Adaptive prompt templates for different models
 - Support OpenAI API format
 - Enable api endpoint via official `openai-python` package
 - Support both stream and no-stream response
-- Support API Key via both HTTP auth header and env
+- Support API Key via both HTTP auth header and env variable
 - Docker deployment
 
 ## Run API service
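The README above advertises an OpenAI-message-format endpoint. As a minimal sketch, this is what a request body for such a service could look like, built with only the standard library; the model name and the stream flag come from the README diff, while the endpoint path (`/chat/completions`) and the `Authorization: Bearer` header mentioned in the comment are assumptions based on the OpenAI API format the README claims compatibility with:

```python
import json

# OpenAI-style chat payload for the HF-LLM-API service.
# "mixtral-8x7b" is one of the models listed in the README diff above.
payload = {
    "model": "mixtral-8x7b",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,  # the README says both stream and no-stream are supported
}

body = json.dumps(payload)
# POST this body to the service (e.g. http://127.0.0.1:23333/chat/completions,
# with an "Authorization: Bearer <api-key>" header -- path and header are
# assumptions, not confirmed by this diff).
print(body)
```

The same payload also works through the official `openai-python` client the README mentions, by pointing the client's base URL at the local service instead of api.openai.com.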
configs/config.json
CHANGED
@@ -1,6 +1,6 @@
 {
     "app_name": "HuggingFace LLM API",
-    "version": "1.1",
+    "version": "1.1.1",
     "host": "0.0.0.0",
     "port": 23333
 }
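The bumped config is plain JSON, so the service can read it at startup with the standard library alone. A minimal sketch, where the loading code is illustrative and only the key/value content comes from the diff above:

```python
import json

# Contents of configs/config.json after this commit.
CONFIG_JSON = """
{
    "app_name": "HuggingFace LLM API",
    "version": "1.1.1",
    "host": "0.0.0.0",
    "port": 23333
}
"""

config = json.loads(CONFIG_JSON)
# A server using this config would bind to config["host"]:config["port"],
# i.e. 0.0.0.0:23333, matching the app_port in the Space's README frontmatter.
print(config["app_name"], config["version"])
```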