---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
base_model: ohwi/japanese-stablelm-instruct-gamma-7b-repro
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
- llm-jp/hh-rlhf-12k-ja
license: apache-2.0
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I allow Stability AI to contact me about information related to its models and research: checkbox
---

# Japanese Stable LM Instruct Gamma 7B + DPO

## Model Description

This is a 7B-parameter decoder-only Japanese language model fine-tuned on preference datasets with DPO, built on top of the SFT model [Japanese Stable LM Instruct Gamma 7B Reproduced](https://huggingface.co/ohwi/japanese-stablelm-instruct-gamma-7b-repro).

This model was trained with the [notus](https://github.com/argilla-io/notus) codebase.
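
The exact recipe lives in the notus repository; purely as an illustration of what a DPO step looks like, here is a minimal sketch using `trl`'s `DPOTrainer`. The trainer choice, hyperparameters, output directory, and the assumption that the preference data has already been mapped to `prompt`/`chosen`/`rejected` columns are all illustrative, not the actual configuration.

```python
# Illustrative DPO sketch with trl's DPOTrainer -- not the actual notus recipe.
# All hyperparameters and the dataset preprocessing are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "ohwi/japanese-stablelm-instruct-gamma-7b-repro"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumes the data has already been converted to DPOTrainer's expected
# "prompt" / "chosen" / "rejected" columns; that mapping is dataset-specific
# and omitted here.
train_dataset = load_dataset("llm-jp/hh-rlhf-12k-ja", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with None, trl clones a frozen copy of `model` as the reference
    args=TrainingArguments(
        output_dir="gamma-7b-dpo",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=5e-7,
        num_train_epochs=1,
        bf16=True,
    ),
    beta=0.1,  # strength of the KL penalty against the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```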

### Training Datasets

- Machine-translated [Ultrafeedback dataset](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
- [hh-rlhf-12k-ja](https://huggingface.co/datasets/llm-jp/hh-rlhf-12k-ja)

The training data consists of a machine-translated version of `Ultrafeedback` together with `hh-rlhf-12k-ja`. Some samples from the Ultrafeedback dataset are missing because of API request failures.


### Benchmarks

Results were evaluated with [Nejumi Leaderboard Neo](https://github.com/wandb/llm-leaderboard/tree/b2723944d4955768cb93c18ffe162a8ff4e88955).

- llm-jp-eval:

|AVG |EL |FA |MC |MR |NLI |QA |RC |chabsa|jamp |janli|jcommonsenseqa|jemhopqa|jnli |jsem |jsick|jsquad |mawps |niilc |wiki_coreference|wiki_dependency|wiki_ner|wiki_pas|wiki_reading|
|-------|----|-------|-----|-----|------|-------|-------|------|-----|-----|--------------|--------|-----|-----|-----|-------|------|------|----------------|---------------|--------|--------|------------|
|0.3207 |0.0 |0.1505 |0.81 |0.16 |0.268 |0.1823 |0.6744 |0.0 |0.09 |0.56 |0.81 |0.1546 |0.01 |0.57 |0.11 |0.6744 |0.16 |0.21 |0.0 |0.0 |0.0 |0.0 |0.7525 |


- Japanese MT-Bench:

|coding|extraction|humanities|math|reasoning|roleplay|stem|writing|
|------|----------|----------|----|---------|--------|----|-------|
|2.5   |3.7       |3.75      |1.65|3.45     |6.95    |5.3 |7.15   |


- Overall Average: 0.3756625
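
The overall score is consistent with averaging the llm-jp-eval AVG and the Japanese MT-Bench mean rescaled from its 10-point scale to [0, 1]. The reconstruction below is inferred from the numbers above, not an official formula:

```python
# Reconstructing the overall average from the two tables above.
# The formula (mean of llm-jp-eval AVG and MT-Bench mean / 10) is inferred
# from the numbers, not taken from the leaderboard documentation.
mt_bench = [2.5, 3.7, 3.75, 1.65, 3.45, 6.95, 5.3, 7.15]
mt_bench_avg = sum(mt_bench) / len(mt_bench)  # 4.30625 on a 1-10 scale
llm_jp_eval_avg = 0.3207                      # AVG column of llm-jp-eval

overall = (llm_jp_eval_avg + mt_bench_avg / 10) / 2
print(overall)  # 0.3756625
```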


---


(Below is the original README of `Japanese Stable LM Instruct Gamma 7B`.)


<br>


# Japanese Stable LM Instruct Gamma 7B

## Model Description

This is a 7B-parameter decoder-only Japanese language model fine-tuned on instruction-following datasets, built on top of the base model [Japanese Stable LM Base Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b).

*If you are in search of a smaller model, please check [Japanese StableLM-3B-4E1T Instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base/blob/main/README.md).*

## Usage

Ensure you are using Transformers 4.34.0 or newer.
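
If you want to verify this at runtime, a small guard like the following works (this check is illustrative, not part of the original instructions):

```python
# Optional guard: confirm the installed Transformers version meets the requirement.
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.34.0"), \
    "Japanese Stable LM Instruct Gamma 7B requires transformers>=4.34.0"
```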

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-instruct-gamma-7b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-instruct-gamma-7b",
    torch_dtype="auto",
)
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

def build_prompt(user_query, inputs="", sep="\n\n### "):
    # System message: "Below is a combination of an instruction describing a task
    # and input providing context. Write a response that appropriately satisfies the request."
    sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。"
    p = sys_msg
    roles = ["指示", "応答"]  # "instruction", "response"
    msgs = [": \n" + user_query, ": \n"]
    if inputs:
        roles.insert(1, "入力")  # "input"
        msgs.insert(1, ": \n" + inputs)
    for role, msg in zip(roles, msgs):
        p += sep + role + msg
    return p

# Infer with a prompt that pairs an instruction with additional input
user_inputs = {
    # "Explain the meaning of the given proverb so that even an
    # elementary school student can understand it."
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
    # proverb: kindness is not (only) for others' sake
    "inputs": "情けは人のためならず"
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=True,
    return_tensors="pt"
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
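
For reference, `build_prompt(**user_inputs)` above renders the following Alpaca-style Japanese prompt, with the final `応答` (response) section left empty for the model to complete:

```
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示: 
与えられたことわざの意味を小学生でも分かるように教えてください。

### 入力: 
情けは人のためならず

### 応答: 
```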

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese Stable LM Instruct Gamma 7B` is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements and information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.

### Model Architecture

For details, please see Mistral AI's [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
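
You can also read the architecture directly from the published checkpoint config; the expected values in the comments below are assumptions based on the Mistral backbone:

```python
# Inspect the architecture recorded in the checkpoint's config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("stabilityai/japanese-stablelm-instruct-gamma-7b")
print(config.model_type)     # expected: "mistral"
print(config.architectures)  # expected: ["MistralForCausalLM"]
```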


### Training Datasets

- [Japanese translation of the Databricks Dolly-15k dataset](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Japanese translation of the subset of the Anthropic HH dataset](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [Wikinews](https://ja.wikinews.org/wi) [subset](https://huggingface.co/datasets/fujiki/llm-japanese-dataset_wikinews) of the [izumi-lab/llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)

## Use and Limitations

### Intended Use

The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning, without strict limitations on commercial use.

### Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.

## Credits

The fine-tuning was carried out by [Fujiki Nakamura](https://huggingface.co/fujiki).
Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), [Naoki Orii](https://huggingface.co/mrorii), and [Takuya Akiba](https://huggingface.co/iwiwi).


## Acknowledgements

This model is based on Mistral-7B-v0.1 released by the Mistral AI team. We are grateful to the Mistral AI team for providing such an excellent base model.

We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.

We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.