LeroyDyer committed
Commit fc84c3f
1 Parent(s): e39a326

Upload README.md with huggingface_hub

Files changed (1): README.md +115 -0
---
base_model: LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- HuggingFaceFW/fineweb
- occiglot/occiglot-fineweb-v0.5
- omi-health/medical-dialogue-to-soap-summary
- keivalya/MedQuad-MedicalQnADataset
- ruslanmv/ai-medical-dataset
- Shekswess/medical_llama3_instruct_dataset_short
- ShenRuililin/MedicalQnA
- virattt/financial-qa-10K
- PatronusAI/financebench
- takala/financial_phrasebank
- Replete-AI/code_bagel
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- IlyaGusev/gpt_roleplay_realm
- rickRossie/bluemoon_roleplay_chat_data_300k_messages
- jtatman/hypnosis_dataset
- Hypersniper/philosophy_dialogue
- Locutusque/function-calling-chatml
- bible-nlp/biblenlp-corpus
- DatadudeDev/Bible
- Helsinki-NLP/bible_para
- HausaNLP/AfriSenti-Twitter
- aixsatoshi/Chat-with-cosmopedia
- xz56/react-llama
- BeIR/hotpotqa
- YBXL/medical_book_train_filtered
- SkunkworksAI/reasoning-0.01
- THUDM/LongWriter-6k
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/Code-Functions-Level-Cyber
- WhiteRabbitNeo/Code-Functions-Level-General
language:
- en
- sw
- ig
- so
- es
- ca
- xh
- zu
- ha
- tw
- af
- hi
- bm
- su
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---

# LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned-Q4_K_M-GGUF
This model was converted to GGUF format from [`LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned`](https://huggingface.co/LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatqa_react_project_ultrafinetuned-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatqa_react_project_ultrafinetuned-q4_k_m.gguf -c 2048
```

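Once `llama-server` is running, it exposes an HTTP `/completion` endpoint (on port 8080 by default). The Python sketch below builds a request for that endpoint; the prompt text, token count, and URL are illustrative assumptions:

```python
import json
import urllib.request


def build_completion_request(prompt, n_predict=64,
                             url="http://127.0.0.1:8080/completion"):
    """Build an HTTP POST request for llama-server's /completion endpoint."""
    body = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})


# With the server from the command above running locally:
# with urllib.request.urlopen(build_completion_request(
#         "The meaning to life and the universe is")) as resp:
#     print(json.loads(resp.read())["content"])
```
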
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatqa_react_project_ultrafinetuned-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned-Q4_K_M-GGUF --hf-file _spydaz_web_ai_chatqa_react_project_ultrafinetuned-q4_k_m.gguf -c 2048
```
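
GGUF files begin with the 4-byte ASCII magic `GGUF`, so a downloaded file can be sanity-checked before loading. A minimal sketch (the filename in the comment is a hypothetical local path; substitute your own):

```python
def looks_like_gguf(path):
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"


# Example, assuming the file was downloaded to the current directory:
# print(looks_like_gguf(
#     "_spydaz_web_ai_chatqa_react_project_ultrafinetuned-q4_k_m.gguf"))
```
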