---
inference: false
language:
- bg
license: mit
tags:
- torch
---

# LLaMA-7B

This repo contains a low-rank adapter for LLaMA-7B trained on a Bulgarian dataset.

The low-rank adaptation (LoRA) method used to train this adapter was introduced in [this paper](https://arxiv.org/abs/2106.09685).
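
For context, LoRA keeps the pretrained weight matrix frozen and learns only a small low-rank update on top of it. In the notation of the paper (the rank $r$ and scaling $\alpha$ used for this particular adapter are not stated in this card):

$$
W' = W + \frac{\alpha}{r} B A, \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
$$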

## Model description

The training data is a private Bulgarian dataset.

## Intended uses & limitations

This is an instruction-following model, similar to ChatGPT, but in Bulgarian.

### How to use

Here is how to use this model from the command line with the [alpaca-lora](https://github.com/tloen/alpaca-lora) scripts:

```bash
git clone https://github.com/tloen/alpaca-lora.git
cd alpaca-lora
pip install -r requirements.txt

python generate.py \
    --load_8bit \
    --base_model 'yahma/llama-7b-hf' \
    --lora_weights 'rmihaylov/alpaca-lora-bg-7b' \
    --share_gradio
```

This will download both the base model and the adapter from Hugging Face, then launch a Gradio interface for chatting.
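
Alternatively, here is a minimal sketch of loading the adapter directly with the `transformers` and `peft` libraries. This is an assumption about equivalent usage rather than a script from this repo; the prompt text and generation settings are illustrative, and exact arguments may vary by library version:

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the base model in 8-bit (requires the bitsandbytes package).
base_model = LlamaForCausalLM.from_pretrained(
    "yahma/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the Bulgarian LoRA adapter from this repo.
model = PeftModel.from_pretrained(base_model, "rmihaylov/alpaca-lora-bg-7b")
tokenizer = LlamaTokenizer.from_pretrained("yahma/llama-7b-hf")

# Alpaca-style instruction prompt (the instruction below is only an example).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nКоя е столицата на България?\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```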

Example using this model: [Colab](https://colab.research.google.com/drive/1IPz8QqOa5ZUBz7ZyXE4hhh7XwMEH-D9S?usp=sharing). You need Colab Pro because the model requires a high-RAM runtime to load.

### Interface

![](example.jpg)