---
library_name: peft
---
<a target="_blank" href="https://colab.research.google.com/github/szymonrucinski/finetune-llm/blob/main/pollama_Xb_inference.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

## Introduction
Krakowiak-7B is a fine-tuned version of Meta's [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). It was trained on a modified and updated version of the dataset originally created by [Chris Ociepa](https://huggingface.co/datasets/szymonindy/ociepa-raw-self-generated-instructions-pl),
containing ~50K instructions, making it one of the largest and strongest Polish LLMs available.
The name [krakowiak](https://www.youtube.com/watch?v=OeQ6jYzt6cM) refers to one of the most popular and characteristic Polish folk dances, known for its lively, even wild tempo and its long, easy strides, conveying spirited abandon and elegance at the same time.

## How to test it?
The model can be run using the Hugging Face libraries, or in the browser using this [Google Colab](https://colab.research.google.com/drive/1IM7j57g9ZHj-Pw2EXGyacNuKHjvK3pIc?usp=sharing).
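A minimal local inference sketch using `transformers` and `peft`. The adapter repo id below is an assumption (check this model card's actual id), and the prompt helper assumes the standard Llama-2 chat template used by the base model:

```python
# Hypothetical repo ids -- verify against the actual model card before use.
BASE_MODEL = "meta-llama/Llama-2-7b-chat-hf"
ADAPTER = "szymonrucinski/krakowiak-7b"  # assumed adapter repo id


def build_prompt(instruction: str, system: str = "Jesteś pomocnym asystentem.") -> str:
    """Format a single-turn prompt in the Llama-2 chat style."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST]"


if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Attach the LoRA adapter on top of the base model.
    model = PeftModel.from_pretrained(base, ADAPTER)

    inputs = tokenizer(build_prompt("Czym jest krakowiak?"), return_tensors="pt").to(base.device)
    output = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The Colab linked above remains the easiest way to try the model without a local GPU.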

## Training procedure
The model was trained for 3 epochs; feel free [to read the training report](https://api.wandb.ai/links/szymonindy/tkr343ad).

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16

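For reference, the settings listed above correspond roughly to the following `transformers` `BitsAndBytesConfig` (a sketch; the `llm_int8_*` values listed above are the library defaults and need not be passed explicitly):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config mirroring the list above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```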
### Framework versions

- PEFT 0.4.0