Commit cd4244d by Triangle104
Parent: a2248d8

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +158 -0

README.md ADDED
---
base_model: anthracite-org/magnum-v2-12b
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
model-index:
- name: magnum-v2-12b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 37.62
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 28.79
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 4.76
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.48
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.37
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-12b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v2-12b
      name: Open LLM Leaderboard
---

# Triangle104/magnum-v2-12b-Q5_0-GGUF
This model was converted to GGUF format from [`anthracite-org/magnum-v2-12b`](https://huggingface.co/anthracite-org/magnum-v2-12b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/anthracite-org/magnum-v2-12b) for more details on the model.
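
If you just want the quantized file itself, you can also fetch it directly from this repo; a minimal sketch using the `huggingface-cli` tool (assumes `huggingface_hub` is installed via pip, and the target directory is an arbitrary choice):

```bash
# Download only the Q5_0 GGUF file from this repo into the current directory
huggingface-cli download Triangle104/magnum-v2-12b-Q5_0-GGUF magnum-v2-12b-q5_0.gguf --local-dir .
```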

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
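
To sanity-check that the binaries landed on your PATH, you can print the build version (assuming a reasonably recent llama.cpp release):

```bash
# Should print the llama.cpp version/build that the brew formula installed
llama-cli --version
```
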
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/magnum-v2-12b-Q5_0-GGUF --hf-file magnum-v2-12b-q5_0.gguf -p "The meaning to life and the universe is"
```
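
By default the CLI keeps generating until the model emits an end-of-sequence token; for a bounded run, the `-n` flag caps the number of generated tokens, for example:

```bash
# Same one-shot prompt, but stop after at most 128 generated tokens
llama-cli --hf-repo Triangle104/magnum-v2-12b-Q5_0-GGUF --hf-file magnum-v2-12b-q5_0.gguf -p "The meaning to life and the universe is" -n 128
```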

### Server:
```bash
llama-server --hf-repo Triangle104/magnum-v2-12b-Q5_0-GGUF --hf-file magnum-v2-12b-q5_0.gguf -c 2048
```
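
Once running, the server exposes an OpenAI-compatible HTTP API; a minimal chat request sketch, assuming the default `127.0.0.1:8080` bind address:

```bash
# POST a chat completion to the local llama-server instance
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is the meaning of life?"}]}'
```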

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
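
As a concrete example, a CUDA-enabled build on a Linux box with an NVIDIA GPU would combine the two flags mentioned above (this sketch assumes the CUDA toolkit is already installed):

```bash
# CURL support for --hf-repo downloads, plus CUDA GPU offload, built in parallel
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```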

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/magnum-v2-12b-Q5_0-GGUF --hf-file magnum-v2-12b-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/magnum-v2-12b-Q5_0-GGUF --hf-file magnum-v2-12b-q5_0.gguf -c 2048
```
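
If you built with GPU support, the `-ngl` flag offloads model layers to the GPU; setting it higher than the model's layer count simply offloads everything, for example:

```bash
# Offload all layers to the GPU (only useful with a CUDA/Metal-enabled build)
./llama-server --hf-repo Triangle104/magnum-v2-12b-Q5_0-GGUF --hf-file magnum-v2-12b-q5_0.gguf -c 2048 -ngl 99
```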