Luminia v3 is trained to reason about and expand a short summary description into a detailed Stable Diffusion prompt; it may output NSFW content.
The LoRA is included. Quants: exllamav2 2.4bpw-h6, 4.25bpw-h6, 8.0bpw-h8 | GGUF Q4_K_M, IQ4_NL |
Prompt template: Alpaca
Output example, tested in text-generation-webui:
| Input | base llama-2-chat | QLoRa |
|---|---|---|
| [question]: Create stable diffusion metadata based on the given english description. Luminia \n### Input:\n favorites and popular SFW | Answer: Luminia, a mystical world of wonder and magic 🧝♀️✨ A place where technology and nature seamlessly blend together ... | Answer! `<lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle` ... |
Output prompt from QLoRa to A1111/SD-WebUI:
Full Prompt:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Create stable diffusion metadata based on the given english description. Luminia

### Input:
favorites and popular SFW

### Response:
```
"Luminia" can be any short description, more info on my SD dataset here.
Training Details
Model Description
Trained by: Nekochu; Model type: Llama; Finetuned from: Llama-2-13b-chat
Continued from the LoRA Luminia-13B-v2-QLora.
Known issue: [issue]
Trainer: hiyouga/LLaMA-Efficient-Tuning
Hardware: QLoRA training on Windows with Python 3.10.8, CUDA 12.1, and 24 GB VRAM.
Training hyperparameters
The following hyperparameters were used during training:
- num_epochs: 1.0
- finetuning_type: lora
- quantization_bit: 4
- stage: sft
- learning_rate: 5e-05
- cutoff_len: 4096
- num_train_epochs: 3.0
- max_samples: 100000
- warmup_steps: 0
- train_batch_size: 1
- distributed_type: single-GPU
- num_devices: 1
- rope_scaling: linear
- lora_rank: 32
- lora_target: all
- lora_dropout: 0.15
- bnb_4bit_compute_dtype: bfloat16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
training_loss:
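For reference, the hyperparameters above can be collected into a single argument set for the trainer. A minimal sketch, assuming LLaMA-Efficient-Tuning's keyword names (dataset and output paths are omitted, and the conflicting `num_epochs`/`num_train_epochs` values from the list are resolved here to 3.0 as an assumption):

```python
# Sketch: the training configuration above as one dict, using the argument
# names listed in this card. Paths and dataset names are intentionally left out.
train_args = {
    "stage": "sft",
    "finetuning_type": "lora",
    "quantization_bit": 4,
    "learning_rate": 5e-05,
    "cutoff_len": 4096,
    "num_train_epochs": 3.0,
    "max_samples": 100000,
    "warmup_steps": 0,
    "per_device_train_batch_size": 1,
    "rope_scaling": "linear",
    "lora_rank": 32,
    "lora_target": "all",
    "lora_dropout": 0.15,
    "bnb_4bit_compute_dtype": "bfloat16",
    "lr_scheduler_type": "cosine",
    "seed": 42,
}
print(len(train_args), "arguments")
```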
Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.0
Model tree for Nekochu/Luminia-13B-v3
Base model: meta-llama/Llama-2-13b-chat-hf