---
base_model: unsloth/Mistral-Nemo-Instruct-2407
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
  - rp
  - gguf
  - experimental
  - long-context
---

# Uploaded model

- **Developed by:** UsernameJustAnother
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407

Standard disclaimer: This is me teaching myself the basics of fine-tuning, with notes extensively borrowed from https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9

New for v6:

- Slightly different source mix: down to 8,000 records of mostly-human convos and stories, curated by me, trained in ChatML (the record format is sketched just after this list).
- The stories have been edited to remove author's notes, and the RP chats tweaked to remove many ministrations.
- A different learning rate, and back to Celeste's scaling-factor setup (though Celeste trained on -base and this is -instruct).
- Now with added eval! I worked out how to get eval stats (and wandb) set up, so now I can see my failures in graphical form.
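
For reference, this is roughly what a training record looks like once rendered in ChatML. The helper and the example turns below are purely illustrative stand-ins, not actual data from the curated set:

```python
# Illustrative only: one conversation rendered in ChatML. The helper and the
# example turns are hypothetical; the real records are the curated RP/story data.
def to_chatml(turns):
    """Render a list of {"role", "content"} dicts as a single ChatML string."""
    return "".join(
        f"<|im_start|>{t['role']}\n{t['content']}<|im_end|>\n" for t in turns
    )

print(to_chatml([
    {"role": "system",    "content": "You are narrating a long-form roleplay."},
    {"role": "user",      "content": "The ship drifts into the nebula."},
    {"role": "assistant", "content": "Static crackles over the comms as the hull lights dim."},
]))
```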

I pulled v7 because I honestly don't think it's as good as v6, and don't want folks to get the wrong idea that it's better just because the version number is higher.

And of course yay Unsloth for letting this all train on a single A100 with variable (wildly variable) context length.

Here's what the train/eval loss looked like (eval is orange, train is blue). I think that's not terrible, but :shrug:.


It was trained with the following settings:


```python
from unsloth import FastLanguageModel

model = FastLanguageModel.get_peft_model(
    model,
    r = 256,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 128,   # 128 / sqrt(256) gives a scaling factor of 8 with rsLoRA
    lora_dropout = 0.1, # supports any value, but 0 is optimized
    bias = "none",      # supports any value, but "none" is optimized
    # "unsloth" gradient checkpointing uses ~30% less VRAM and fits 2x larger batch sizes
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = True,   # rank-stabilized LoRA: adapter scale is lora_alpha / sqrt(r) instead of lora_alpha / r
    loftq_config = None, # LoftQ not used
)
```
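
As a quick sanity check on the scaling-factor comment above, here is the arithmetic rsLoRA uses (nothing model-specific, just the formula):

```python
# rsLoRA (use_rslora=True) scales the adapter by lora_alpha / sqrt(r)
# rather than plain LoRA's lora_alpha / r.
import math

r, lora_alpha = 256, 128
print(lora_alpha / math.sqrt(r))  # 8.0 -> the scaling factor used here
print(lora_alpha / r)             # 0.5 -> what plain LoRA scaling would give
```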

```python
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

lr_scheduler_kwargs = {
    'min_lr': 0.0000024  # floor for cosine_with_min_lr; adjust this value as needed
}

args = TrainingArguments(
    output_dir = "outputs",          # placeholder path, not part of the original settings list
    per_device_train_batch_size = 2,
    per_device_eval_batch_size = 2,  # defaults to 8!
    gradient_accumulation_steps = 4,
    eval_accumulation_steps = 4,
    prediction_loss_only = True,     # during evaluation, return only the loss, not full predictions
    warmup_steps = 50,
    num_train_epochs = 2,            # for longer training runs; roughly 12 hrs/epoch?
    learning_rate = 1e-5,            # 8e-5 used by Celeste, 1e-4 from the paper; tried halving that to 5e-5, now 1e-5
    fp16 = not is_bfloat16_supported(),
    bf16 = is_bfloat16_supported(),
    fp16_full_eval = True,           # stops eval from trying to use fp32
    eval_strategy = "steps",         # 'no', 'steps', or 'epoch'; needs an eval dataset
    eval_steps = 100,                # if eval_strategy is 'steps', evaluate every N steps
    logging_steps = 5,               # so eval and logging happen on the same schedule
    optim = "adamw_8bit",
    weight_decay = 0,
    lr_scheduler_type = "cosine_with_min_lr", # linear, cosine, cosine_with_min_lr; default is linear
    lr_scheduler_kwargs = lr_scheduler_kwargs, # needed for cosine_with_min_lr
    seed = 3407,
)
```
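
For completeness, here is a minimal sketch of how these pieces would typically be wired into TRL's SFTTrainer in an Unsloth run. The `tokenizer`, dataset variables, and `max_seq_length` are hypothetical placeholders, not taken from the actual training script, and wandb logging would usually be switched on via `report_to = "wandb"` in the TrainingArguments above:

```python
# Minimal sketch, not the actual training script. `model` and `args` are the
# objects defined above; `tokenizer`, the datasets, and max_seq_length are
# hypothetical placeholders for the curated ~8,000-record corpus and its
# held-out eval split.
from trl import SFTTrainer

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,            # returned by FastLanguageModel.from_pretrained
    train_dataset = train_dataset,    # hypothetical: ChatML-rendered training records
    eval_dataset = eval_dataset,      # hypothetical: held-out split that drives eval_steps
    dataset_text_field = "text",      # hypothetical column holding the rendered ChatML strings
    max_seq_length = max_seq_length,  # hypothetical: variable / long context
    args = args,                      # the TrainingArguments shown above
)

trainer.train()
```

With logging_steps = 5 and eval_steps = 100, the eval points land on the same schedule as the training logs, which is what lets the train and eval curves above be compared directly.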

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.