---
base_model: CausalLM/7B-DPO-alpha
datasets:
- JosephusCheung/GuanacoDataset
- Open-Orca/OpenOrca
- stingning/ultrachat
- meta-math/MetaMathQA
- liuhaotian/LLaVA-Instruct-150K
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- fnlp/moss-003-sft-data
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- openbmb/llava_zh
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
inference: false
language:
- en
- zh
license: wtfpl
model_creator: CausalLM
model_name: CausalLM 7B-DPO-alpha
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
quantized_by: tastypear
tags:
- llama
- llama2
- qwen
---
<!-- header start -->
Following TheBloke's publishing format, and on the recommendation in TheBloke/CausalLM-7B-GGUF, I have made a quantized version of this model.

---

<!-- header end -->
<!-- markdownlint-disable MD041 -->

# CausalLM 7B-DPO-alpha - GGUF
- Model creator: [CausalLM](https://huggingface.co/CausalLM)
- Original model: [CausalLM 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha)

<!-- description start -->
## Description

This repo contains GGUF format model files for [CausalLM's 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
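
As a quick illustration, a file from this repo can be loaded with llama-cpp-python from the list above. This is a minimal sketch, assuming you have already downloaded one of the provided files; the model path and generation settings are illustrative, not prescriptive:

```python
# Minimal sketch: load a GGUF file from this repo with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a downloaded quantized file.
from llama_cpp import Llama

llm = Llama(
    model_path="./causallm_7b-dpo-alpha.Q4_K_M.gguf",  # any of the provided files
    n_ctx=2048,       # context window
    n_gpu_layers=0,   # raise to offload layers to the GPU, if built with GPU support
)

output = llm(
    "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=128,
    stop=["<|im_end|>"],  # end of a ChatML turn
)
print(output["choices"][0]["text"])
```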

<!-- prompt-template start -->
## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```

<!-- prompt-template end -->
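
To make the template concrete, here is a small helper that fills in the two placeholders; the function name and example strings are illustrative only:

```python
# Illustrative helper: assemble a ChatML prompt from the template above.
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "What is GGUF?"))
```

The resulting string can be passed directly to any of the clients above, for example as the prompt in the llama-cpp-python sketch, with `<|im_end|>` as the stop sequence.
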
<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `wtfpl`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing, but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [CausalLM's 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
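
As a rough sanity check on the sizes in the table below, bits-per-weight (bpw) maps to file size as roughly parameters × bpw / 8 bytes. A sketch, where the parameter count is an assumption for illustration; real files run larger because the K_M variants mix quant types and keep some tensors at higher precision:

```python
# Rough size estimate from bits-per-weight (bpw).
# N_PARAMS is an assumed parameter count, for illustration only.
def approx_size_gb(n_params: float, bpw: float) -> float:
    return n_params * bpw / 8 / 1e9  # bits -> bytes -> GB

N_PARAMS = 7.7e9  # assumption for this "7B" model

for name, bpw in [("Q4_K", 4.5), ("Q5_K", 5.5)]:
    print(f"{name}: ~{approx_size_gb(N_PARAMS, bpw):.2f} GB")
# Prints ~4.33 GB and ~5.29 GB; the provided files below are somewhat
# larger, since Q4_K_M / Q5_K_M mix quant types across tensors.
```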

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size |
| ---- | ---- | ---- | ---- |
| [causallm_7b-dpo-alpha.Q4_K_M.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q4_K_M.gguf) | Q4_K_M | 4 | 4.77 GB |
| [causallm_7b-dpo-alpha.Q5_K_S.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q5_K_S.gguf) | Q5_K_S | 5 | 5.40 GB |
| [causallm_7b-dpo-alpha.Q5_K_M.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q5_K_M.gguf) | Q5_K_M | 5 | 5.53 GB |

<!-- README_GGUF.md-provided-files end -->
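
A minimal sketch for fetching one of these files programmatically with the `huggingface_hub` library; the repo id is taken from the links above, and which filename you pick depends on your hardware:

```python
# Minimal sketch: download one of the provided files with huggingface_hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="tastypear/CausalLM-7B-DPO-alpha-GGUF",
    filename="causallm_7b-dpo-alpha.Q4_K_M.gguf",  # or a Q5_K_S / Q5_K_M file
)
print(model_path)  # local path, usable as model_path in the loading sketch above
```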

<!-- original-model-card start -->
# Original model card: CausalLM's 7B-DPO-alpha

For details, please refer to the version without DPO training: [CausalLM/7B](https://huggingface.co/CausalLM/7B).

| Model | MT-Bench |
| ------------------------- | ------------ |
| GPT-4 | 8.99 |
| GPT-3.5-Turbo | 7.94 |
| | |
| Zephyr-7b-β (Overfitting) | 7.34 |
| Zephyr-7b-α | 6.88 |
| | |
| **CausalLM/14B-DPO-α** | **7.618868** |
| **CausalLM/7B-DPO-α** | **7.038125** |

It should be noted that this is not a version trained further on top of CausalLM/14B & 7B, but an optimized version that underwent DPO training concurrently on a previous training branch; some detailed parameters may have changed. You will still need to download the full model.

The beta branch will be released soon, employing some aggressive approaches that might be detrimental to certain tasks, in order to achieve better alignment with human preferences and to meet or exceed the GPT-3.5 benchmarks. Stay tuned.

Disclaimer: Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. You will therefore still need to run your own checks on the model's safety and filter keywords in its output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.

<!-- original-model-card end -->