---
library_name: transformers
model_name: Vikhr-Gemma-2B-instruct
base_model:
- google/gemma-2-2b-it
language:
- ru
license: apache-2.0
datasets:
- Vikhrmodels/GrandMaster-PRO-MAX
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/Vikhr-Gemma-2B-instruct-GGUF

This is a quantized version of [Vikhrmodels/Vikhr-Gemma-2B-instruct](https://huggingface.co/Vikhrmodels/Vikhr-Gemma-2B-instruct), created using llama.cpp.

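The GGUF files in this repository can be run locally with llama.cpp or its Python bindings. A minimal sketch, assuming `llama-cpp-python` is installed; the `.gguf` filename below is a placeholder for whichever quant you actually download from this repo:

```python
from llama_cpp import Llama

# Placeholder filename: point this at the quant you downloaded (Q4_K_M, Q8_0, ...)
llm = Llama(model_path="Vikhr-Gemma-2B-instruct.Q4_K_M.gguf", n_ctx=4096)

# llama.cpp applies the chat template stored in the GGUF metadata, if one is present
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Напиши стихотворение о весне в России."}],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```
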
# Original Model Card

# 💨 Vikhr-Gemma-2B-instruct

#### RU

Мощная инструктивная модель на основе Gemma 2 2B, обученная на русскоязычном датасете GrandMaster-PRO-MAX.

#### EN

A powerful instruction-tuned model based on Gemma 2 2B, trained on the Russian-language GrandMaster-PRO-MAX dataset.

## GGUF

- [Vikhrmodels/Vikhr-Gemma-2B-instruct-GGUF](https://huggingface.co/Vikhrmodels/Vikhr-Gemma-2B-instruct-GGUF)

## Особенности / Features:

- 📚 Основа / Base: [gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it)
- 🇷🇺 Специализация / Specialization: **RU**
- 💾 Датасет / Dataset: [GrandMaster-PRO-MAX](https://huggingface.co/datasets/Vikhrmodels/GrandMaster-PRO-MAX) (see the loading sketch below)

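The training data is public, so it can be inspected directly. A minimal sketch, assuming only that the `datasets` library is installed; split and column names are whatever the dataset repository defines, so the code simply prints them:

```python
from datasets import load_dataset

# Download the SFT dataset from the Hub and inspect its structure
ds = load_dataset("Vikhrmodels/GrandMaster-PRO-MAX")
print(ds)                     # available splits and their features

first_split = next(iter(ds))  # name of the first split, whatever it is called
print(ds[first_split][0])     # one raw example
```
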
## Попробовать / Try now:

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1htw3x1OS73vIJrMYvdQfflGg4ASdGg9P)

## Описание / Description:

#### RU

Vikhr-Gemma-2B-instruct — это мощная и компактная языковая модель, обученная на датасете GrandMaster-PRO-MAX, специально доученная для обработки русского языка.

#### EN

Vikhr-Gemma-2B-instruct is a powerful and compact language model trained on the GrandMaster-PRO-MAX dataset and fine-tuned specifically for processing the Russian language.

## Пример кода для запуска / Sample code to run:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "Vikhrmodels/Vikhr-Gemma-2B-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare the input text
input_text = "Напиши стихотворение о весне в России."

# Tokenize the prompt and generate a continuation
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=200, num_return_sequences=1, no_repeat_ngram_size=2)

# Decode and print the result
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```

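Since this is an instruct model, wrapping the request in the tokenizer's chat template usually gives better results than passing raw text as above. A minimal sketch, assuming the repository's tokenizer ships a Gemma-2-style chat template; the dtype and sampling settings are illustrative, not the authors' recommendation:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Vikhrmodels/Vikhr-Gemma-2B-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Format the request with the chat template and append the assistant turn marker
messages = [{"role": "user", "content": "Напиши стихотворение о весне в России."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate and decode only the newly produced tokens
output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
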
#### Ответ модели / Model response:

> Весна в России – это время обновления природы, когда природа пробуждается от зимнего сна. Вот стихотворение, отражающее эту красоту:
>
> ---
>
> **Весна в России**
>
> Зимняя тишина утихла,
> Весна в России пришла.
> Солнце светит, словно в сказке,
> В небесах – птицы в полете.
>
> Снег пошел, ушел вдаль,
> И в каждом уголке – весна.
> Лед промерз, вода в реке –
> Ветры вьют, и листья поют.
>
> Цветы распустились, как будто
> В честь весны, в честь жизни.
> Зеленая трава, как полотно,
> Под ногами – мягкость.
>
> Весна в России – это чудо,
> Счастье, что в сердце живет.
> И каждый день – праздник,
> Когда природа в цвету.
>
> ---
>
> Надеюсь, это стихотворение передало дух и красоту весны в России.

## Метрики на ru_arena_general / Metrics on ru_arena_general

| Model | Score | 95% CI | Avg Tokens | Std Tokens | LC Score |
| ---------------------------------------------- | --------- | --------------- | ---------- | ---------- | --------- |
| suzume-llama-3-8B-multilingual-orpo-borda-half | 90.89 | +1.1 / -1.1 | 2495.38 | 1211.62 | 55.86 |
| mistral-nemo-instruct-2407 | 50.53 | +2.5 / -2.2 | 403.17 | 321.53 | 50.08 |
| sfr-iterative-dpo-llama-3-8b-r | 50.06 | +2.1 / -2.1 | 516.74 | 316.84 | 50.01 |
| gpt-3.5-turbo-0125 | 50.00 | +0.0 / -0.0 | 220.83 | 170.30 | 50.00 |
| glm-4-9b-chat | 49.75 | +1.9 / -2.3 | 568.81 | 448.76 | 49.96 |
| c4ai-command-r-v01 | 48.95 | +2.6 / -1.7 | 529.34 | 368.98 | 49.85 |
| llama-3-instruct-8b-sppo-iter3 | 47.45 | +2.0 / -2.2 | 502.27 | 304.27 | 49.63 |
| **Vikhrmodels-vikhr-gemma-2b-it** | **45.82** | **+2.4 / -2.0** | **722.83** | **710.71** | **49.40** |
| suzume-llama-3-8b-multilingual | 45.71 | +2.4 / -1.7 | 641.18 | 858.96 | 49.38 |
| yandex_gpt_pro | 45.11 | +2.2 / -2.5 | 345.30 | 277.64 | 49.30 |
| hermes-2-theta-llama-3-8b | 44.07 | +2.0 / -2.2 | 485.99 | 390.85 | 49.15 |
| gpt-3.5-turbo-1106 | 41.48 | +1.9 / -2.0 | 191.19 | 177.31 | 48.77 |
| llama-3-smaug-8b | 40.80 | +2.1 / -1.6 | 524.02 | 480.56 | 48.68 |
| llama-3-8b-saiga-suzume-ties | 39.94 | +2.0 / -1.7 | 763.27 | 699.39 | 48.55 |

```bibtex
@article{nikolich2024vikhr,
  title={Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian},
  author={Aleksandr Nikolich and Konstantin Korolev and Sergey Bratchikov and Nikolay Kompanets and Artem Shelmanov},
  journal={arXiv preprint arXiv:2405.13929},
  year={2024},
  url={https://arxiv.org/pdf/2405.13929}
}
```