
AusLegalQA

AusLegalQA is a fine-tune of Mixtral-8x7B-Instruct-v0.1 using PEFT techniques, trained on the Open Australian Legal QA dataset.

The model achieved an eval loss of 1.1391 on a subset of 100 prompts and answers from the original dataset.

The model was trained for 3 epochs with the hyperparameters below. The checkpoint with the lowest eval loss was selected (coinciding with the end of epoch 2) and the resulting 4-bit QLoRA adapter was merged into the base model; a configuration sketch follows the table.

| Hyperparameter | Value |
| --- | --- |
| Sequence length | 1024 |
| Epochs | 2 |
| Optimiser | AdamW |
| Learning rate | 1e-4 |
| Learning rate scheduler | Cosine |
| Batch size | 1 |
| Weight decay | 0.01 |
| Warmup ratio | 0.05 |
| LoRA rank | 64 |
| LoRA alpha | 128 |
| LoRA dropout | 0.1 |
| LoRA target | q_proj, v_proj |
| NEFTune alpha | 5 |
| Flash Attention | on |
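
The configuration above maps roughly onto the Hugging Face transformers/peft/bitsandbytes stack. The sketch below is illustrative only and assumes that stack; the actual training script, dataset formatting, and output paths are not published with this card, so names such as `output_dir` are placeholders.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model

BASE = "mistralai/Mixtral-8x7B-Instruct-v0.1"

# Load the base model in 4-bit (QLoRA) with Flash Attention, per the table above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# LoRA adapter over the attention query/value projections.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Optimiser and scheduler settings from the table; truncation/packing to
# 1024 tokens is assumed to happen in data preparation (not shown).
training_args = TrainingArguments(
    output_dir="auslegalqa-qlora",   # placeholder path
    num_train_epochs=3,              # best (epoch-2) checkpoint was kept
    per_device_train_batch_size=1,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    weight_decay=0.01,
    optim="adamw_torch",
    bf16=True,
    neftune_noise_alpha=5,
)
```

The selected adapter would then be merged into the base weights (for example with peft's `merge_and_unload()`) to produce the standalone checkpoint published here.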

Strengths

The model is strong at summarisation and at short-form answers that capture the key details. It is more likely to produce responses that assume the user is located in Australia. The ideal use case is in a LlamaIndex/LangChain environment.
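
For direct use outside an orchestration framework, the merged model can be loaded with transformers. The sketch below is a minimal example; the prompt is illustrative and the generation settings are not prescribed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "adlumal/AusLegalQA-Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(REPO)
model = AutoModelForCausalLM.from_pretrained(
    REPO,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Use the Mixtral instruct chat template for a single-turn question.
messages = [
    {"role": "user", "content": "Summarise the key holding of Mabo v Queensland (No 2)."}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```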

Limitations

Like the base model, it does not have any moderation mechanisms.

