ARIA Code is based on CodeLlama-34b-Instruct, fine-tuned on French-language data.
ARIA Code is a model built for educational coding purposes in French-speaking countries.
The model is designed to help students learn to code in their native language while retaining the base qualities of Code Llama, which was trained on over 500 billion tokens of coding content. We believe coding skills are very valuable for reducing youth unemployment rates around the world and for giving students more scientific skills. The Llama 2 base models include enough safeguards and content filtering to ensure safe use by kids and within academic environments.
GPU used for training: NVIDIA A100
Training time: less than 24 hours
Number of fine-tuning tokens: over 10,000 high-quality French-language tokens
Training procedure
The following bitsandbytes quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
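For reference, the configuration above can be reproduced in code roughly as follows. This is a sketch assuming the Hugging Face transformers and bitsandbytes libraries, not the exact training script:

```python
import torch
from transformers import BitsAndBytesConfig

# 8-bit quantization config mirroring the values listed above;
# the 4-bit fields are included for completeness but are inactive
# because load_in_4bit is False.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```

This config object would then be passed as `quantization_config` when loading the base model with `AutoModelForCausalLM.from_pretrained`.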
Framework versions
- PEFT 0.6.0.dev0
Model tree for Faradaylab/ARIA-CODE
- Base model: codellama/CodeLlama-34b-Instruct-hf
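Since the model card lists PEFT as a framework version, the fine-tuned weights are presumably a PEFT adapter on top of the base model. A minimal loading sketch, assuming the adapter is hosted at Faradaylab/ARIA-CODE and a GPU with enough memory for the 34B model in 8-bit is available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-34b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the base model in 8-bit, matching the training-time quantization
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,
    device_map="auto",
)

# Apply the ARIA Code adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "Faradaylab/ARIA-CODE")

prompt = "Écris une fonction Python qui calcule la factorielle d'un nombre."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The adapter type (e.g. LoRA) and exact loading arguments are assumptions based on the PEFT version listed above; consult the repository files for the actual adapter config.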