---
library_name: peft
base_model: AI-Sweden-Models/gpt-sw3-1.3b
datasets:
- barbaroo/Faroese_BLARK_small
- barbaroo/Books_Faroese
language:
- fo
- sv
- is
- da
- 'no'
- en
---

Licence: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/blob/main/LICENSE)

# Model Card for barbaroo/gptsw3_lora_fo_1.3b

## Model Details

### Model Description

- **Developed by:** Barbara Scalvini, Language Technology Center, University of the Faroe Islands
- **Model type:** LoRA adapter for GPT-Sw3, produced by continued pre-training on Faroese data (the BLARK corpus and a private repository of Faroese books). Training ran for 10 epochs (more checkpoints to come). A configuration sketch follows this list.
- **Language(s) (NLP):** Swedish, English, Norwegian, Danish, Icelandic, Faroese
- **Finetuned from model:** AI-Sweden-Models/gpt-sw3-1.3b
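
The exact LoRA hyperparameters are not listed in this card, so the rank, alpha, dropout, and target modules below are illustrative assumptions; 8-bit loading matches the quantization config reported under Training procedure. A minimal sketch of the training setup:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit, matching the quantization config reported below.
base = AutoModelForCausalLM.from_pretrained(
    "AI-Sweden-Models/gpt-sw3-1.3b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Hypothetical LoRA settings: r, lora_alpha, lora_dropout and target_modules are
# NOT documented in this card; "c_attn" matches GPT-2-style attention blocks.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```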

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the PEFT configuration, the base model, and the LoRA adapter
config = PeftConfig.from_pretrained("barbaroo/gptsw3_lora_fo_1.3b")
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, "barbaroo/gptsw3_lora_fo_1.3b")

# Load the tokenizer of the base model
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Define the prompt ("tell me a story:" in Faroese)
prompt = "fortel mær eina søgu:"

# Tokenize the input
inputs = tokenizer(prompt, return_tensors="pt")

# Generate text with sampling
output = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)

# Decode the generated text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
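
For deployment, the adapter weights can be folded into the base model so inference no longer needs `peft`; this is standard `PeftModel` behaviour rather than anything specific to this adapter, and the output path below is illustrative:

```python
# Merge the adapter into the base weights and drop the PEFT wrapper.
merged_model = model.merge_and_unload()

# Hypothetical local path for the merged checkpoint.
merged_model.save_pretrained("gptsw3-1.3b-faroese-merged")
tokenizer.save_pretrained("gptsw3-1.3b-faroese-merged")
```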

## Uses

Language generation tasks, such as translation, summarization, and conversational AI.
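
A minimal helper for such tasks, assuming the model and tokenizer from the previous section are already loaded; the card does not prescribe a prompt format, so the prompt is left to the caller:

```python
def generate_text(prompt: str, max_new_tokens: int = 100) -> str:
    """Sample a completion for an arbitrary task prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example: reuse the story prompt from above.
print(generate_text("fortel mær eina søgu:"))
```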

## Training Details

### Training Data

We trained our model on a corpus derived from the Basic Language Resource Kit for Faroese. For detailed information about the dataset, please see [BLARK_small](https://huggingface.co/datasets/barbaroo/Faroese_BLARK_small).
Extra training data was taken from a private corpus of Faroese books ([Faroese Books](https://huggingface.co/datasets/barbaroo/Books_Faroese)).
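
Both corpora are hosted on the Hugging Face Hub, so they can be pulled with the `datasets` library; the split names below are assumptions, so check the dataset cards for the actual configuration:

```python
from datasets import load_dataset

# Assumed split names; see the dataset cards for the real ones.
blark = load_dataset("barbaroo/Faroese_BLARK_small", split="train")
books = load_dataset("barbaroo/Books_Faroese", split="train")
print(blark)
print(books)
```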

### Testing Data, Factors & Metrics

#### Testing Data

Validation/testing was performed on the test split of the Faroese books corpus ([Faroese Books](https://huggingface.co/datasets/barbaroo/Books_Faroese)).
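
The card does not name an evaluation metric. A perplexity loop over that test split, reusing the model and tokenizer loaded above and assuming a `test` split with a `text` column, might look like this sketch:

```python
import math
import torch
from datasets import load_dataset

# Assumed split name and column name; adjust to the actual dataset card.
test_set = load_dataset("barbaroo/Books_Faroese", split="test")

losses = []
model.eval()
with torch.no_grad():
    for example in test_set.select(range(100)):  # small sample for illustration
        enc = tokenizer(example["text"], return_tensors="pt",
                        truncation=True, max_length=1024)
        out = model(**enc, labels=enc["input_ids"])
        losses.append(out.loss.item())

print("perplexity:", math.exp(sum(losses) / len(losses)))
```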

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
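
To reproduce this setup when loading the base model, the fields above map directly onto `transformers.BitsAndBytesConfig`; only the 8-bit flag is non-default here, but the full mapping would be:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# One-to-one mapping of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

model = AutoModelForCausalLM.from_pretrained(
    "AI-Sweden-Models/gpt-sw3-1.3b",
    quantization_config=bnb_config,
    device_map="auto",
)
```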

### Framework versions

- PEFT 0.6.2.dev0