duyntnet committed
Commit b34527f
1 Parent(s): ab62012

Upload README.md with huggingface_hub

Files changed (1): README.md ADDED (+88, -0)

---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- OpenOrcaxOpenChat-Preview2-13B
---
Quantizations of https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
* [jan](https://github.com/janhq/jan)
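
For example, here is a minimal sketch of loading one of these quants with the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings; the quant filename is hypothetical (pick an actual file from this repo), and the prompt format is explained under "Prompt Template" below:

```
# Minimal sketch: run a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenOrcaxOpenChat-Preview2-13B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,  # Llama2's context window
)

out = llm(
    "You are OpenOrcaChat.<|end_of_turn|>User: Hello<|end_of_turn|>Assistant:",
    max_tokens=128,
    stop=["<|end_of_turn|>"],  # stop at the model's turn delimiter
)
print(out["choices"][0]["text"])
```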
---

# From original readme

We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune Llama2-13B using [OpenChat](https://huggingface.co/openchat) packing.
This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707).

This second preview release is trained on a curated, filtered subset of most of our GPT-4-augmented data.

This release demonstrates that our dataset and training methods have matched, and slightly surpassed, the performance reported in the Orca paper.
We measured this with BigBench-Hard and AGIEval, using the same methods as the Orca paper, and found **~103%** of the original Orca's performance on average.
Moreover, this was done with less than 1/10th the compute requirement and less than 20% of the dataset size of the original Orca paper.

We have run extensive evaluations internally and expect this model to **place number 1** on both the HuggingFaceH4 Open LLM Leaderboard and the GPT4ALL Leaderboard for 13B models.

"One" of [OpenChat](https://huggingface.co/openchat) has joined our team, and we'd like to give special thanks for their training of this model!
We utilized OpenChat's [MultiPack algorithm](https://github.com/imoneoi/multipack_sampler), which achieves 99.85% bin-packing efficiency on our dataset.
This significantly reduced training time, with an efficiency improvement of 3-10X over traditional methods.
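
To illustrate the bin-packing idea (this is not the actual MultiPack implementation, just a first-fit-decreasing sketch with hypothetical sequence lengths):

```
# Illustrative sketch only: greedy first-fit-decreasing sequence packing.
# The real MultiPack sampler (linked above) is more sophisticated; this shows
# why packing many sequences per context window reduces wasted tokens.

def pack_sequences(lengths, max_len):
    """Pack sequence lengths into bins of capacity max_len."""
    bins = []  # each bin is a list of sequence lengths
    for length in sorted(lengths, reverse=True):
        for b in bins:
            if sum(b) + length <= max_len:
                b.append(length)
                break
        else:  # no existing bin fits this sequence
            bins.append([length])
    return bins

lengths = [1800, 900, 700, 3100, 400, 1200]  # hypothetical token counts
bins = pack_sequences(lengths, max_len=4096)
efficiency = sum(lengths) / (len(bins) * 4096)
print(f"{len(bins)} bins, {efficiency:.1%} packing efficiency")
```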

<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 40%">

Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).

[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)

We are in the process of training more models, so keep an eye on our org for releases coming soon with exciting partners.

We will also give sneak-peek announcements on our Discord, which you can find here:

https://AlignmentLab.ai

# Prompt Template

We use our own prompt template, which we call "`OpenChat Llama2 V1`".

The model is heavily conditioned to work only with this format; if the format is not followed properly, it will likely exhibit issues such as run-on output that emulates a chat between a user and an assistant.

Examples:
```
# Single-turn `OpenChat Llama2 V1`
tokenize("You are OpenOrcaChat.<|end_of_turn|>User: Hello<|end_of_turn|>Assistant:")
# [1, 887, 526, 4673, 2816, 1113, 1451, 271, 29889, 32000, 4911, 29901, 15043, 32000, 4007, 22137, 29901]

# Multi-turn `OpenChat Llama2 V1`
tokenize("You are OpenOrcaChat.<|end_of_turn|>User: Hello<|end_of_turn|>Assistant: Hi<|end_of_turn|>User: How are you today?<|end_of_turn|>Assistant:")
# [1, 887, 526, 4673, 2816, 1113, 1451, 271, 29889, 32000, 4911, 29901, 15043, 32000, 4007, 22137, 29901, 6324, 32000, 4911, 29901, 1128, 526, 366, 9826, 29973, 32000, 4007, 22137, 29901]
```
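
For programmatic use, here is a minimal sketch of a helper that assembles this template; the helper name and message structure are our own invention, but the output string matches the examples above:

```
# Hypothetical helper: build an "OpenChat Llama2 V1" prompt string.
def build_prompt(system, turns):
    """turns: list of (user_text, assistant_text_or_None) pairs;
    pass None as the final assistant text to leave the turn open."""
    parts = [f"{system}<|end_of_turn|>"]
    for user, assistant in turns:
        parts.append(f"User: {user}<|end_of_turn|>")
        if assistant is None:
            parts.append("Assistant:")  # model completes from here
        else:
            parts.append(f"Assistant: {assistant}<|end_of_turn|>")
    return "".join(parts)

prompt = build_prompt("You are OpenOrcaChat.", [("Hello", "Hi"), ("How are you today?", None)])
# Matches the multi-turn example above.
```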

For UIs with Prefix and Suffix fields, these will likely work:

Prefix (include a space after the colon):
```
User:
```

Suffix (include a space after the colon):
```
<|end_of_turn|>\nAssistant:
```

**Oobabooga's text-generation-webui instructions can be found [further down the page](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B#serving-with-oobabooga--text-generation-webui).**