
Model Card for PIPPA ShareGPT Subset Variation Two Lora 7b

This is an experimental LoRA focused on roleplay, trained on a subset of PIPPA ShareGPT. It differs from the previous variant in that it was trained with different parameters: micro_batch_size = 1 and gradient_accumulation_steps = 1.

Usage

Custom

SYSTEM: Do thing
USER: {prompt}
CHARACTER:
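
A minimal sketch of generating with this prompt format using transformers and peft. The base model name and adapter path below are placeholders, since the card does not specify them.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholders: substitute the actual 7B base model and the path/repo of this LoRA.
base = AutoModelForCausalLM.from_pretrained("base-model-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("base-model-7b")
model = PeftModel.from_pretrained(base, "path/to/this-lora")

# Build the prompt in the format shown above.
prompt = "SYSTEM: Do thing\nUSER: Hello! Who are you?\nCHARACTER:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))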

Bias, Risks, and Limitations

This LoRA is not intended to supply factual information or advice in any form.

Training Details

Training Data

1k conversations from PIPPA ShareGPT

Training Procedure

The version of this LoRA uploaded to this repository was trained on an 8x RTX A6000 cluster in 8-bit with regular LoRA adapters and the 32-bit AdamW optimizer.
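
The card states training was done in 8-bit with regular LoRA adapters. Below is a minimal sketch of how a base model is typically prepared for 8-bit LoRA training with transformers and peft; the base model name is a placeholder.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

# Placeholder base model; the card does not name it.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # cast norms/output head for stable 8-bit training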

Training Hyperparameters

Trained using a fork of Axolotl with two patches applied: Patch 1, Patch 2. The main parameters were as follows (an illustrative PEFT/transformers equivalent is sketched after the list):

  • load_in_8bit: true
  • lora_r: 16
  • lora_alpha: 16
  • lora_dropout: 0.01
  • gradient_accumulation_steps: 1
  • micro_batch_size: 1
  • num_epochs: 3
  • learning_rate: 0.000065
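
The actual run used the Axolotl fork above, but as a rough illustration these values map onto a PEFT LoraConfig plus transformers TrainingArguments roughly as sketched below; target_modules is an assumption, since the card does not state which modules were adapted.

from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                                  # lora_r
    lora_alpha=16,
    lora_dropout=0.01,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],   # assumption: adapted modules are not stated
)

training_args = TrainingArguments(
    output_dir="pippa-sharegpt-lora",      # placeholder
    per_device_train_batch_size=1,         # micro_batch_size
    gradient_accumulation_steps=1,
    num_train_epochs=3,
    learning_rate=6.5e-5,
    optim="adamw_torch",                   # 32-bit AdamW
)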

Environmental Impact

Finetuning this model on 4x NVIDIA A6000 48GB in parallel takes about 45 minutes (7B).
