## 📖 Introduction

**Qwen2-7B-Instruct-Response-Exp** is a powerful large language model that automatically expands datasets and their annotations, in the manner of self-instruct.

We fine-tuned **Qwen2-7B-Instruct** to obtain **Qwen2-7B-Instruct-Response-Exp**.
We sampled data from the OpenHermes and LCCD datasets, ensuring a balanced task distribution. For training-set annotations, we used Qwen-max, incorporating our handwritten examples as in-context prompts.

#### Example Input
>\#\#\# Instruction
>
>Here are some concepts: car, driving, travel. Can you make a sentence with these concepts?
>
>\#\#\# Response
>
>Of course you can! Here is a sentence: "We drove the car and started an unforgettable journey."
>
>\#\#\# Instruction
>
>Why does the sun rise in the east every day?
>
>\#\#\# Response
>
>The sun rises in the east every day because of the direction of the earth's rotation. The earth rotates from west to east, so an observer on the earth's surface sees the sun rise above the eastern horizon, cross the sky, and finally set below the western horizon. Specifically, the earth's rotation axis is an imaginary axis passing through the north and south poles, and the earth rotates from west to east around it. Because of this rotation, every place on the earth experiences day and night: when a place turns to the side facing the sun, it enters daytime; when it turns to the side facing away from the sun, it enters night. Since the earth rotates from west to east, the sun appears to rise in the east and set in the west. This phenomenon is a direct result of the earth's rotation, not of any movement of the sun itself.
>
>\#\#\# Instruction
>
#### Example Output
>\#\#\# Instruction
>
>Here are some concepts: books, learning, knowledge. Can you make a sentence with these concepts?
>
>\#\#\# Response
>
>Of course you can! Here is a sentence that contains these concepts: By reading books, we can continue to learn and enrich our knowledge.
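#### Expanding Your Own Data

To expand your own data in the same way, concatenate seed instruction-response pairs in the transcript format shown above, ending with an open `### Instruction` header for the model to complete. The sketch below illustrates one way to build such a prompt; the helper `build_expansion_prompt` and the seed pairs are our own illustration, not part of the model's API.

```python
def build_expansion_prompt(seed_pairs):
    """Format seed (instruction, response) pairs as a transcript that ends
    with an open '### Instruction' header for the model to complete."""
    blocks = [
        f"### Instruction\n\n{instruction}\n\n### Response\n\n{response}"
        for instruction, response in seed_pairs
    ]
    blocks.append("### Instruction")  # left open for the model to fill in
    return "\n\n".join(blocks)

seed_pairs = [
    ("Here are some concepts: car, driving, travel. Can you make a sentence with these concepts?",
     'Of course you can! Here is a sentence: "We drove the car and started an unforgettable journey."'),
]
prompt = build_expansion_prompt(seed_pairs)
```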
## 🚀 Quick Start

The following code snippet shows how to load the tokenizer and model, and how to generate content using `apply_chat_template`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "alibaba-pai/Qwen2-7B-Instruct-Response-Exp",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("alibaba-pai/Qwen2-7B-Instruct-Response-Exp")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    eos_token_id=151645,  # <|im_end|>
)
# Keep only the newly generated tokens, dropping the prompt tokens
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
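Because the model continues the transcript with new `### Instruction` / `### Response` blocks, the decoded output can be parsed back into instruction-response pairs. The parser below is a minimal sketch of ours, assuming the output keeps the format shown in the examples above:

```python
import re

def parse_expanded_pairs(generated_text):
    """Split a generated transcript into (instruction, response) pairs,
    assuming it keeps the '### Instruction' / '### Response' format."""
    pairs = []
    for block in re.split(r"###\s*Instruction", generated_text):
        if "### Response" not in block:
            continue
        instruction, answer = block.split("### Response", 1)
        pairs.append((instruction.strip(), answer.strip()))
    return pairs

# `response` is the decoded output from the snippet above
new_pairs = parse_expanded_pairs(response)
```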
## 📊 Evaluation
| Method | Diversity | Length | Complexity | Factuality |
|--------|-----------|--------|------------|------------|
| Self-Instruct | 9.6 | 15.8 | 0.32 | 5.0 |
| Qwen2-7B-Instruct-Response-Exp | 17.2 | 26.3 | 4.97 | 4.9 |