Latest commit: Improve prompts (41ac6cc)

Files (size and last commit message; most file names were not captured):

- gradio_cached_examples/ (folder): Show iteration count and time used
- 35 Bytes: Update the .gitignore file to include .jsonl and .json files
- 1.72 kB: Comment out the llama-cpp-python installation command in the Docker setup for the HuggingFace Space
- 1.07 kB: Initial commit
- 6.1 kB: Readme: note on the Mistral API used and the serverless backend for reliability
- 3.79 kB: Explain data use in the Gradio app
- 8.39 kB: Add a log_to_jsonl function to data.py and remove the duplicate function from utils.py
- 1.59 kB: Add a local logging option when the SKIP_NETWORK environment variable is set
- 940 Bytes: Add a data capture endpoint using Gradio's API, hosted on HF's dynamic Gradio hostname
- 233 Bytes: Expose a JSON-typed LLM interface for RunPod
- 5.47 kB: Remove unused helpers
- 4.02 kB: Improve prompts
- 1.17 kB: Rename the serverless test file; set the default test model to Phi 2; remove the jq install and the env vars already set in utils.py; git-ignore .cache
- 1.71 kB: Avoid unneeded imports; make serverless output more sensible; remove some debugging code and comments
- 4.11 kB: Document serverless motivation and testing instructions
- 1.16 kB: Rename the serverless test file; set the default test model to Phi 2; remove the jq install and the env vars already set in utils.py; git-ignore .cache
- 3.21 kB: Add a system map and worker architecture details
- 10.5 kB: Fix: when using the HTTP worker, only download if inference is on localhost
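The final fix above (download model weights only when inference actually runs on localhost) can be sketched as a small guard. This is a minimal illustration, not the repository's code: the function name `should_download_model` and the exact set of hosts treated as local are assumptions.

```python
from urllib.parse import urlparse

# Hosts treated as "local" for this sketch; the real set is an assumption.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def should_download_model(inference_url: str) -> bool:
    """Return True only when the HTTP worker's inference endpoint is local.

    When inference runs on a remote worker, the app never loads the model
    itself, so downloading the weights locally would waste time and disk.
    """
    host = urlparse(inference_url).hostname
    return host in LOCAL_HOSTS

# A local endpoint should fetch weights; a remote one should skip the download.
assert should_download_model("http://127.0.0.1:8080/v1/completions")
assert not should_download_model("https://api.example.com/v2/run")
```

The guard would typically run once at startup, before any call to the model-download routine.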
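Several commits in the listing concern local JSONL logging gated by a SKIP_NETWORK environment variable. A minimal sketch of how such a `log_to_jsonl` helper and its gate might fit together, assuming a signature and default path that are illustrative only (the repository's actual data.py may differ):

```python
import json
import os
from datetime import datetime, timezone

def log_to_jsonl(record: dict, path: str = "logs.jsonl") -> None:
    """Append one record per line as JSON, stamped with a UTC timestamp."""
    record = {**record, "ts": datetime.now(timezone.utc).isoformat()}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def log_event(record: dict) -> None:
    """Log locally when SKIP_NETWORK is set; otherwise a network logger
    (e.g. the app's hosted data-capture endpoint) would be used instead."""
    if os.environ.get("SKIP_NETWORK"):
        log_to_jsonl(record)
    else:
        ...  # send to the remote data-capture endpoint (not sketched here)
```

Appending one JSON object per line keeps writes atomic enough for a single-process app and makes the log trivially replayable line by line.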