Commit 66adc5d
Tom committed
Parent(s): 0ad340e

feat: add support for endpoints requiring client authentication using PKI (#393)
* feat: add support for endpoints requiring client authentication using PKI

  This enables a global mTLS context for client requests using the native fetch
  API. It can be used when custom endpoints require client authentication via
  PKI.

* fix: Dockerfile syntax required for macOS

  Set the Dockerfile syntax explicitly to enable new features in the Dockerfile.
  The default syntax used by Docker Desktop for macOS does not include support
  for `--link` in the COPY command, for example.
- .env +7 -0
- Dockerfile +1 -0
- README.md +12 -2
- src/lib/server/modelEndpoint.ts +22 -1
- src/lib/utils/loadClientCerts.ts +50 -0
.env
CHANGED
@@ -19,6 +19,13 @@ OPENID_CLIENT_SECRET=
 OPENID_SCOPES="openid profile" # Add "email" for some providers like Google that do not provide preferred_username
 OPENID_PROVIDER_URL=https://huggingface.co # for Google, use https://accounts.google.com
 
+# Parameters to enable a global mTLS context for client fetch requests
+USE_CLIENT_CERTIFICATE=false
+CERT_PATH=#
+KEY_PATH=#
+CA_PATH=#
+CLIENT_KEY_PASSWORD=#
+REJECT_UNAUTHORIZED=true
 
 # 'name', 'userMessageToken', 'assistantMessageToken' are required
 MODELS=`[
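For reference (not part of the commit), a filled-in `.env.local` using the new parameters might look like the hypothetical sketch below; the paths and password are placeholders, the key is assumed to be encrypted, and `REJECT_UNAUTHORIZED` stays `true` on the assumption that the certificates are properly signed:

```
USE_CLIENT_CERTIFICATE=true
CERT_PATH=/etc/chat-ui/certs/client.crt
KEY_PATH=/etc/chat-ui/certs/client.key
CA_PATH=/etc/chat-ui/certs/ca.crt
CLIENT_KEY_PASSWORD=changeit
REJECT_UNAUTHORIZED=true
```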
Dockerfile
CHANGED
@@ -1,3 +1,4 @@
+# syntax=docker/dockerfile:1
 # read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
 # you will also find guides on how best to write your Dockerfile
 FROM node:19 as builder-production
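As the commit message explains, pinning `# syntax=docker/dockerfile:1` selects the current BuildKit Dockerfile frontend even where Docker Desktop for macOS defaults to an older one. A hedged sketch of the kind of instruction this unlocks; only the `FROM` line comes from the diff above, the workdir and file names are illustrative:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:19 as builder-production
WORKDIR /app
# --link needs the newer Dockerfile frontend selected by the syntax line above
COPY --link package.json package-lock.json /app/
RUN npm ci
```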
README.md
CHANGED
@@ -152,7 +152,7 @@ MODELS=`[
 
 You can change things like the parameters, or customize the preprompt to better suit your needs. You can also add more models by adding more objects to the array, with different preprompts for example.
 
-
+### Running your own models using a custom endpoint
 
 If you want to, instead of hitting models on the Hugging Face Inference API, you can run your own models locally.
 
@@ -171,7 +171,9 @@ To do this, you can add your own endpoints to the `MODELS` variable in `.env.local`
 
 If `endpoints` is left unspecified, ChatUI will look for the model on the hosted Hugging Face inference API using the model name.
 
-
+### Custom endpoint authorization
+
+#### Basic and Bearer
 
 Custom endpoints may require authorization, depending on how you configure them. Authentication will usually be set either with `Basic` or `Bearer`.
 
@@ -196,6 +198,14 @@ You can then add the generated information and the `authorization` parameter to
 
 ```
 
+#### Client Certificate Authentication (mTLS)
+
+Custom endpoints may require client certificate authentication, depending on how you configure them. To enable mTLS between Chat UI and your custom endpoint, you will need to set the `USE_CLIENT_CERTIFICATE` to `true`, and add the `CERT_PATH` and `KEY_PATH` parameters to your `.env.local`. These parameters should point to the location of the certificate and key files on your local machine. The certificate and key files should be in PEM format. The key file can be encrypted with a passphrase, in which case you will also need to add the `CLIENT_KEY_PASSWORD` parameter to your `.env.local`.
+
+If you're using a certificate signed by a private CA, you will also need to add the `CA_PATH` parameter to your `.env.local`. This parameter should point to the location of the CA certificate file on your local machine.
+
+If you're using a self-signed certificate, e.g. for testing or development purposes, you can set the `REJECT_UNAUTHORIZED` parameter to `false` in your `.env.local`. This will disable certificate validation, and allow Chat UI to connect to your custom endpoint.
+
 #### Models hosted on multiple custom endpoints
 
 If the model being hosted will be available on multiple servers/instances add the `weight` parameter to your `.env.local`. The `weight` will be used to determine the probability of requesting a particular endpoint.
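Not part of the commit, but to make the mTLS section concrete: a minimal Node HTTPS server in TypeScript that requires a client certificate can stand in for a custom endpoint when testing `USE_CLIENT_CERTIFICATE`. The file names and port below are placeholders.

```ts
import * as fs from "fs";
import * as https from "https";

// Minimal mTLS test endpoint (placeholder paths; any PEM server cert/key and CA will do).
const server = https.createServer(
	{
		key: fs.readFileSync("server.key"),
		cert: fs.readFileSync("server.crt"),
		ca: fs.readFileSync("ca.crt"), // CA used to verify the client certificate
		requestCert: true, // ask the connecting client for a certificate
		rejectUnauthorized: true, // refuse clients that do not present a valid one
	},
	(req, res) => {
		res.writeHead(200, { "Content-Type": "application/json" });
		res.end(JSON.stringify({ ok: true }));
	}
);

server.listen(8443, () => console.log("mTLS test endpoint on https://localhost:8443"));
```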
src/lib/server/modelEndpoint.ts
CHANGED
@@ -1,7 +1,28 @@
-import { HF_ACCESS_TOKEN, HF_API_ROOT } from "$env/static/private";
+import {
+	HF_ACCESS_TOKEN,
+	HF_API_ROOT,
+	USE_CLIENT_CERTIFICATE,
+	CERT_PATH,
+	KEY_PATH,
+	CA_PATH,
+	CLIENT_KEY_PASSWORD,
+	REJECT_UNAUTHORIZED,
+} from "$env/static/private";
 import { sum } from "$lib/utils/sum";
 import type { BackendModel } from "./models";
 
+import { loadClientCertificates } from "$lib/utils/loadClientCerts";
+
+if (USE_CLIENT_CERTIFICATE === "true") {
+	loadClientCertificates(
+		CERT_PATH,
+		KEY_PATH,
+		CA_PATH,
+		CLIENT_KEY_PASSWORD,
+		REJECT_UNAUTHORIZED === "true"
+	);
+}
+
 /**
  * Find a random load-balanced endpoint
  */
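A note on placement, inferred from the diff rather than stated in the commit: the `loadClientCertificates` call sits at module scope, so it runs once when `modelEndpoint.ts` is first imported, and the global undici dispatcher is already in place before any endpoint lookup issues a request.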
src/lib/utils/loadClientCerts.ts
ADDED
@@ -0,0 +1,50 @@
+import * as fs from "fs";
+import { setGlobalDispatcher, Agent } from "undici";
+
+/**
+ * Load client certificates for mutual TLS authentication. This function must be called before any HTTP requests are made.
+ * This is a global setting that affects all HTTP requests made by the application using the native fetch API.
+ *
+ * @param clientCertPath Path to client certificate
+ * @param clientKeyPath Path to client key
+ * @param caCertPath Path to CA certificate [optional]
+ * @param clientKeyPassword Password for client key [optional]
+ * @param rejectUnauthorized Reject unauthorized certificates.
+ *   Only use for testing/development, not recommended in production environments [optional]
+ *
+ * @returns void
+ *
+ * @example
+ * ```typescript
+ * loadClientCertificates("cert.pem", "key.pem", "ca.pem", "password", false);
+ * ```
+ *
+ * @see
+ * [Undici Agent](https://undici.nodejs.org/#/docs/api/Agent)
+ * @see
+ * [Undici Dispatcher](https://undici.nodejs.org/#/docs/api/Dispatcher)
+ * @see
+ * [NodeJS Native Fetch API](https://nodejs.org/docs/latest-v19.x/api/globals.html#fetch)
+ */
+export function loadClientCertificates(
+	clientCertPath: string,
+	clientKeyPath: string,
+	caCertPath?: string,
+	clientKeyPassword?: string,
+	rejectUnauthorized?: boolean
+): void {
+	const clientCert = fs.readFileSync(clientCertPath);
+	const clientKey = fs.readFileSync(clientKeyPath);
+	const caCert = caCertPath ? fs.readFileSync(caCertPath) : undefined;
+	const agent = new Agent({
+		connect: {
+			cert: clientCert,
+			key: clientKey,
+			ca: caCert,
+			passphrase: clientKeyPassword,
+			rejectUnauthorized: rejectUnauthorized,
+		},
+	});
+
+	setGlobalDispatcher(agent);
+}
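A hedged usage sketch (the certificate paths and endpoint URL are placeholders, not values from the commit): once `loadClientCertificates` has run, plain `fetch` calls in the same process negotiate mTLS through the global undici dispatcher.

```ts
import { loadClientCertificates } from "$lib/utils/loadClientCerts";

// Install the global mTLS dispatcher once, at startup, before any request is made.
loadClientCertificates("certs/client.crt", "certs/client.key", "certs/ca.crt");

// Ordinary native fetch now presents the client certificate to the endpoint.
const res = await fetch("https://my-endpoint.example/generate", {
	method: "POST",
	headers: { "Content-Type": "application/json" },
	body: JSON.stringify({ inputs: "Hello" }),
});
console.log(res.status);
```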