julien-c (HF staff) committed
Commit 932ee7e
1 Parent(s): 19db9db

Attempt to clarify how hosted API ≠ local endpoint (#373)

Files changed (1): README.md (+7 −3)
README.md CHANGED
@@ -55,11 +55,11 @@ docker run -d -p 27017:27017 --name mongo-chatui mongo:latest
 
 In which case the url of your DB will be `MONGODB_URL=mongodb://localhost:27017`.
 
-Alternatively, you can use a [free MongoDB Atlas](https://www.mongodb.com/pricing) instance for this, Chat UI should fit comfortably within the free tier. After which you can set the `MONGODB_URL` variable in `.env.local` to match your instance.
+Alternatively, you can use a [free MongoDB Atlas](https://www.mongodb.com/pricing) instance for this; Chat UI should fit comfortably within their free tier. You can then set the `MONGODB_URL` variable in `.env.local` to match your instance.
 
 ### Hugging Face Access Token
 
-You will need a Hugging Face access token to run Chat UI locally, using the remote inference endpoints. You can get one from [your Hugging Face profile](https://huggingface.co/settings/tokens).
+You will need a Hugging Face access token to run Chat UI locally if you use a remote inference endpoint. You can get one from [your Hugging Face profile](https://huggingface.co/settings/tokens).
 
 ## Launch
 
@@ -152,7 +152,11 @@ You can change things like the parameters, or customize the preprompt to better
 
 #### Running your own models using a custom endpoint
 
-If you want to, you can even run your own models locally, by having a look at our endpoint project, [text-generation-inference](https://github.com/huggingface/text-generation-inference). You can then add your own endpoints to the `MODELS` variable in `.env.local`, by adding an `"endpoints"` key for each model in `MODELS`.
+If you want to, instead of hitting models on the Hugging Face Inference API, you can run your own models locally.
+
+A good option is to hit a [text-generation-inference](https://github.com/huggingface/text-generation-inference) endpoint. This is what is done in the official [Chat UI Spaces Docker template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template), for instance: both this app and a text-generation-inference server run inside the same container.
+
+To do this, you can add your own endpoints to the `MODELS` variable in `.env.local`, by adding an `"endpoints"` key for each model in `MODELS`.
 
 ```
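
The database and token settings touched by the first hunk both live in `.env.local`. A minimal sketch of that file, with placeholder values; the `HF_ACCESS_TOKEN` variable name is an assumption and should be checked against the project's `.env`:

```env
# Local MongoDB (e.g. the docker container shown in the diff context)
MONGODB_URL=mongodb://localhost:27017
# ...or a MongoDB Atlas connection string instead (placeholder shown)
# MONGODB_URL=mongodb+srv://<user>:<password>@<cluster-url>/

# Hugging Face access token, needed when using a remote inference endpoint
# (variable name is an assumption -- check the project's .env for the exact key)
HF_ACCESS_TOKEN=hf_xxx
```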
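
For the custom-endpoint paragraph added in the second hunk, an `"endpoints"` entry in `MODELS` might look like the sketch below. This is illustrative only: the model name and URL are placeholders, and the exact key schema should be verified against the chat-ui README rather than taken from here:

```env
# `MODELS` is a JSON array; each model may carry an "endpoints" list
# pointing at a self-hosted server (e.g. text-generation-inference).
# Keys and structure below are assumptions, not a verified schema.
MODELS=`[
  {
    "name": "my-local-model",
    "endpoints": [{ "url": "http://127.0.0.1:8080" }]
  }
]`
```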