Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Qwen2.5-Math-7B-Instruct - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-Math-7B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q2_K.gguf) | Q2_K | 2.81GB |
| [Qwen2.5-Math-7B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [Qwen2.5-Math-7B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Qwen2.5-Math-7B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Qwen2.5-Math-7B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Qwen2.5-Math-7B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q3_K.gguf) | Q3_K | 3.55GB |
| [Qwen2.5-Math-7B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Qwen2.5-Math-7B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Qwen2.5-Math-7B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Qwen2.5-Math-7B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Qwen2.5-Math-7B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [Qwen2.5-Math-7B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Qwen2.5-Math-7B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q4_K.gguf) | Q4_K | 4.36GB |
| [Qwen2.5-Math-7B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Qwen2.5-Math-7B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Qwen2.5-Math-7B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Qwen2.5-Math-7B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Qwen2.5-Math-7B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q5_K.gguf) | Q5_K | 5.07GB |
| [Qwen2.5-Math-7B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Qwen2.5-Math-7B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Qwen2.5-Math-7B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q6_K.gguf) | Q6_K | 5.82GB |
| [Qwen2.5-Math-7B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf/blob/main/Qwen2.5-Math-7B-Instruct.Q8_0.gguf) | Q8_0 | 7.54GB |
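
The GGUF files above can be run with any llama.cpp-based runtime. As a minimal sketch with `llama-cpp-python` (assumes `pip install llama-cpp-python huggingface_hub`; the repo id and filename are taken from the table above, and the quant choice is illustrative):

```python
from llama_cpp import Llama

# Download one of the quants listed above from the Hub and load it.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/Qwen_-_Qwen2.5-Math-7B-Instruct-gguf",
    filename="Qwen2.5-Math-7B-Instruct.Q4_K_M.gguf",
    n_ctx=4096,  # Qwen2.5-Math uses a 4K context window
)

# CoT system prompt, as shown in the original model card below.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
        {"role": "user", "content": "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```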



Original model description:
---
base_model: Qwen/Qwen2.5-Math-7B
language:
- en
pipeline_tag: text-generation
tags:
- chat
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct/blob/main/LICENSE
---


# Qwen2.5-Math-7B-Instruct

> [!Warning]
> <div align="center">
> <b>
> 🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.
> </b>
> </div>

## Introduction

In August 2024, we released the first series of mathematical LLMs in our Qwen family - [Qwen2-Math](https://qwenlm.github.io/blog/qwen2-math/). A month later, we upgraded it and open-sourced the **Qwen2.5-Math** series, including the base models **Qwen2.5-Math-1.5B/7B/72B**, the instruction-tuned models **Qwen2.5-Math-1.5B/7B/72B-Instruct**, and the mathematical reward model **Qwen2.5-Math-RM-72B**.

Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support using both CoT and Tool-Integrated Reasoning (TIR) to solve math problems in both Chinese and English. With CoT, the Qwen2.5-Math models achieve significant performance improvements over the Qwen2-Math models on Chinese and English mathematics benchmarks.

![](http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/qwen2.5-math-pipeline.jpeg)

While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and handling complex mathematical or algorithmic reasoning tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic manipulation. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8, respectively, on the MATH benchmark using TIR.
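
As an illustration of the kind of step TIR delegates to a program instead of doing token-by-token arithmetic (a toy sketch, not part of the model's pipeline):

```python
import sympy as sp

# Exact root-finding: solve x^2 - 5x + 6 = 0 symbolically.
x = sp.symbols("x")
print(sp.solve(x**2 - 5*x + 6, x))  # [2, 3]
```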

## Model Details


For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2.5-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math).


## Requirements
* `transformers>=4.37.0` for Qwen2.5-Math models. The latest version is recommended.

> [!Warning]
> <div align="center">
> <b>
> 🚨 This is required because <code>transformers</code> integrated the Qwen2 code in version <code>4.37.0</code>.
> </b>
> </div>
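
A quick sanity check for the installed version (a minimal sketch; `packaging` ships as a `transformers` dependency):

```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; older versions will fail to load the model.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old for Qwen2 models; "
    "upgrade with `pip install -U transformers`"
)
```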

For benchmark results on GPU memory requirements and throughput, see the similar results for Qwen2 [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Quick Start

> [!Important]
>
> **Qwen2.5-Math-7B-Instruct** is an instruction model for chatting;
>
> **Qwen2.5-Math-7B** is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning.


### 🤗 Hugging Face Transformers

Qwen2.5-Math can be deployed and used for inference in the same way as [Qwen2.5](https://github.com/QwenLM/Qwen2.5). Here is a code snippet showing how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Math-7B-Instruct"
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."

# Pick ONE of the two system prompts below; as written, the TIR messages
# overwrite the CoT messages.

# CoT
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]

# TIR
messages = [
    {"role": "system", "content": "Please integrate natural language reasoning with programs to solve the problem above, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
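
With the TIR prompt, the model interleaves reasoning with Python code blocks that are meant to be executed. The official TIR pipeline in the [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math) handles execution and feedback; as a rough illustration only, extracting the first emitted code block from the `response` above could look like this (hypothetical helper; run untrusted model output only in a sandbox):

```python
import re

def extract_tir_code(response: str) -> str | None:
    """Return the body of the first ```python``` block in a TIR response, if any."""
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    return match.group(1) if match else None

code = extract_tir_code(response)
if code is not None:
    exec(code)  # illustrative only; sandbox untrusted model output
```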

## Citation

If you find our work helpful, feel free to give us a citation.

```bibtex
@article{yang2024qwen25mathtechnicalreportmathematical,
  title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement},
  author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang},
  journal={arXiv preprint arXiv:2409.12122},
  year={2024}
}
```