Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


yi-bagel-2x34b - GGUF
- Model creator: https://huggingface.co/NLPinas/
- Original model: https://huggingface.co/NLPinas/yi-bagel-2x34b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [yi-bagel-2x34b.Q2_K.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q2_K.gguf) | Q2_K | 11.94GB |
| [yi-bagel-2x34b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.IQ3_XS.gguf) | IQ3_XS | 13.26GB |
| [yi-bagel-2x34b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.IQ3_S.gguf) | IQ3_S | 13.99GB |
| [yi-bagel-2x34b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q3_K_S.gguf) | Q3_K_S | 13.93GB |
| [yi-bagel-2x34b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.IQ3_M.gguf) | IQ3_M | 14.5GB |
| [yi-bagel-2x34b.Q3_K.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q3_K.gguf) | Q3_K | 15.51GB |
| [yi-bagel-2x34b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q3_K_M.gguf) | Q3_K_M | 15.51GB |
| [yi-bagel-2x34b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q3_K_L.gguf) | Q3_K_L | 16.89GB |
| [yi-bagel-2x34b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.IQ4_XS.gguf) | IQ4_XS | 17.36GB |
| [yi-bagel-2x34b.Q4_0.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q4_0.gguf) | Q4_0 | 18.13GB |
| [yi-bagel-2x34b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.IQ4_NL.gguf) | IQ4_NL | 18.3GB |
| [yi-bagel-2x34b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q4_K_S.gguf) | Q4_K_S | 18.25GB |
| [yi-bagel-2x34b.Q4_K.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q4_K.gguf) | Q4_K | 19.24GB |
| [yi-bagel-2x34b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q4_K_M.gguf) | Q4_K_M | 19.24GB |
| [yi-bagel-2x34b.Q4_1.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q4_1.gguf) | Q4_1 | 17.51GB |
| [yi-bagel-2x34b.Q5_0.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q5_0.gguf) | Q5_0 | 17.49GB |
| [yi-bagel-2x34b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q5_K_S.gguf) | Q5_K_S | 21.55GB |
| [yi-bagel-2x34b.Q5_K.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q5_K.gguf) | Q5_K | 22.65GB |
| [yi-bagel-2x34b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q5_K_M.gguf) | Q5_K_M | 22.65GB |
| [yi-bagel-2x34b.Q5_1.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q5_1.gguf) | Q5_1 | 24.05GB |
| [yi-bagel-2x34b.Q6_K.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q6_K.gguf) | Q6_K | 26.28GB |
| [yi-bagel-2x34b.Q8_0.gguf](https://huggingface.co/RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf/blob/main/yi-bagel-2x34b.Q8_0.gguf) | Q8_0 | 34.03GB |
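
Pick the quant that fits your hardware: the lower-bit files (Q2_K, Q3_*) trade quality for memory, while Q6_K and Q8_0 stay closest to the original weights. As a minimal sketch (assuming the repo id `RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf` from the links above, with `huggingface_hub` and `llama-cpp-python` installed), one of these files can be fetched and loaded like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo id taken from the file links in the table above.
REPO_ID = "RichardErkhov/NLPinas_-_yi-bagel-2x34b-gguf"

# Q4_K_M is a common quality/size compromise; any filename from the table works.
model_path = hf_hub_download(repo_id=REPO_ID, filename="yi-bagel-2x34b.Q4_K_M.gguf")

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window; raise it if you have the RAM
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

out = llm("Write a short poem about bagels.", max_tokens=128)
print(out["choices"][0]["text"])
```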




Original model description:
---
base_model:
- jondurbin/bagel-dpo-34b-v0.2
- jondurbin/nontoxic-bagel-34b-v0.2
tags:
- mergekit
- merge
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
# yi-bagel-2x34b

Released January 11, 2024

![bagel-burger](bagel-burger.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). For more information, please refer to jondurbin's model cards linked in the section below. This model debuted on the [leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) at rank #4 (January 11, 2024).

## Merge Details
### Merge Method

This model is an experimental merge using the [linear](https://arxiv.org/abs/2203.05482) merge method. The goal is to assess the degree to which DPO affects censoring, as applied in [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2).
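
As an illustration (not mergekit's actual code), a linear merge is simply a weighted average of the corresponding parameter tensors of the input models; with the equal 0.5 weights used in the configuration below, it reduces to a plain average:

```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching parameter tensors (a sketch of the
    linear method, not mergekit's implementation)."""
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for sd, w in zip(state_dicts, weights))
    return merged

# Hypothetical checkpoint paths for the two bagel variants:
# sd_a = torch.load("bagel-dpo-34b-v0.2/pytorch_model.bin")
# sd_b = torch.load("nontoxic-bagel-34b-v0.2/pytorch_model.bin")
# merged = linear_merge([sd_a, sd_b], [0.5, 0.5])
```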

### Models Merged

The following models were included in the merge:
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
* [jondurbin/nontoxic-bagel-34b-v0.2](https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2)

## Open LLM Leaderboard Metrics (as of January 11, 2024)
| Metric                | Value |
|-----------------------|-------|
| MMLU (5-shot)         | 76.60  |
| ARC (25-shot)         | 72.70  |
| HellaSwag (10-shot)   | 85.44  |
| TruthfulQA (0-shot)   | 71.42  |
| Winogrande (5-shot)   | 82.72  |
| GSM8K (5-shot)        | 60.73  |
| Average               | 74.93  |

According to the leaderboard description, these are the benchmarks used for the evaluation (a short sketch of the n-shot prompt format follows the list):
- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- [AI2 Reasoning Challenge](https://arxiv.org/abs/1803.05457) (ARC, 25-shot) - a set of grade-school science questions.
- [HellaSwag](https://arxiv.org/abs/1905.07830) (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- [TruthfulQA](https://arxiv.org/abs/2109.07958) (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online.
- [Winogrande](https://arxiv.org/abs/1907.10641) (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning.
- [GSM8k](https://arxiv.org/abs/2110.14168) (5-shot) - diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems.
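
For context on the n-shot settings above: an n-shot evaluation prepends n solved examples to each question before asking the model to answer. A hypothetical sketch of how such a prompt is assembled:

```python
def build_n_shot_prompt(examples, question, n):
    """Prepend n solved examples to a question (illustrative; real evaluation
    harnesses differ in formatting and answer extraction)."""
    shots = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in examples[:n])
    return f"{shots}Q: {question}\nA:"

# Hypothetical demo shots; a 0-shot task like TruthfulQA would pass n=0.
demo = [("What is 2 + 2?", "4"), ("What is the capital of France?", "Paris")]
print(build_n_shot_prompt(demo, "What is 3 + 5?", n=2))
```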

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: jondurbin/nontoxic-bagel-34b-v0.2
    parameters:
      weight: 0.5
  - model: jondurbin/bagel-dpo-34b-v0.2
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
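
With mergekit installed, a configuration like this is normally applied via its CLI, e.g. `mergekit-yaml config.yml ./yi-bagel-2x34b` (the `mergekit-yaml` entry point and paths here are illustrative; check the mergekit README for the version you use).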

## Further Information
For additional information or inquiries about yi-bagel-2x34b, please contact the developer by email: [email protected].