---
language:
- fi
library_name: peft
base_model: HPLT/gpt-7b-nordic-prerelease
license: apache-2.0
datasets:
- pinzhenchen/alpaca-cleaned-fi
---

# Model Card for Alpaca-Finnish-V1-7B-LoRA

LoRA trained in 4-bit for 1 epoch using [HPLT/gpt-7b-nordic-prerelease](https://huggingface.co/HPLT/gpt-7b-nordic-prerelease/) as the base model (note: the settings below list `"epochs": 3.0`, but training ran for a single epoch). The training dataset is [pinzhenchen/alpaca-cleaned-fi](https://huggingface.co/datasets/pinzhenchen/alpaca-cleaned-fi/).

Prompts follow the Alpaca format, but with the standard preambles translated into Finnish (e.g. "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti." is the Finnish rendering of "Below is an instruction that describes a task. Write a response that appropriately completes the request."):

```
{
    "instruction,output": "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Response:\n%output%",
    "instruction,input,output": "Alla on ohje, jossa kuvataan tehtävä ja joka on yhdistetty kontekstia lisäävään syötteeseen. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%"
}
```

The LoRA was trained with the following settings:

```json
{
    "lora_name": "Alpaca-Finnish-v1",
    "always_override": false,
    "q_proj_en": true,
    "v_proj_en": true,
    "k_proj_en": false,
    "o_proj_en": false,
    "gate_proj_en": false,
    "down_proj_en": false,
    "up_proj_en": false,
    "save_steps": 250.0,
    "micro_batch_size": 4,
    "batch_size": 128,
    "epochs": 3.0,
    "learning_rate": "3e-4",
    "lr_scheduler_type": "linear",
    "lora_rank": 256,
    "lora_alpha": 512,
    "lora_dropout": 0.05,
    "cutoff_len": 384,
    "dataset": "alpaca_data_cleaned.fi",
    "eval_dataset": "None",
    "format": "alpaca-format-finnish",
    "eval_steps": 100.0,
    "raw_text_file": "None",
    "overlap_len": 128,
    "newline_favor_len": 128,
    "higher_rank_limit": false,
    "warmup_steps": 100.0,
    "optimizer": "adamw_torch",
    "hard_cut_string": "\\n\\n\\n",
    "train_only_after": "",
    "stop_at_loss": 0,
    "add_eos_token": false,
    "min_chars": 0.0,
    "report_to": "None"
}
```

### Framework versions

- PEFT 0.8.2
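
### Usage

A minimal loading sketch with `transformers` and `peft`, assuming 4-bit quantization via `bitsandbytes` to match how the adapter was trained. The adapter repo id below is a placeholder; substitute the actual Hugging Face repo id of this adapter.

```python
# Sketch: load the base model in 4-bit and attach the LoRA adapter.
# "your-username/Alpaca-Finnish-V1-7B-LoRA" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "HPLT/gpt-7b-nordic-prerelease"
adapter_id = "your-username/Alpaca-Finnish-V1-7B-LoRA"  # placeholder

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
```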
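Continuing from the loading sketch above, here is how a prompt can be assembled from the Finnish Alpaca template (the no-input variant) and used for generation. The example instruction and generation parameters are illustrative, not part of the training setup.

```python
# Sketch: build a Finnish Alpaca prompt and generate a response.
PROMPT_NO_INPUT = (
    "Alla on ohje, jossa kuvataan tehtävä. "
    "Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = PROMPT_NO_INPUT.format(instruction="Kerro lyhyesti Suomen historiasta.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```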
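For reference, the training settings above imply roughly the following `peft` `LoraConfig`: only the `q_proj` and `v_proj` projections are adapted, with rank 256, alpha 512, and dropout 0.05. This is a hedged reconstruction, not an exported config; the exact target module names depend on the base model's architecture.

```python
# Sketch: a LoraConfig approximating the settings in this card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,                                 # "lora_rank": 256
    lora_alpha=512,                        # "lora_alpha": 512
    lora_dropout=0.05,                     # "lora_dropout": 0.05
    target_modules=["q_proj", "v_proj"],   # only q_proj_en and v_proj_en are true
    task_type="CAUSAL_LM",
)
```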