
Built with Axolotl

See axolotl config

axolotl version: 0.4.0

```yaml
base_model: Alignment-Lab-AI/Alignment-Lab-AIlonger
load_in_8bit: false
load_in_4bit: false
strict: false
tokenizer_type: LlamaTokenizer

datasets:
  - path: PygmalionAI/spice
    type: sharegpt
    conversation: chatml

  - path: PygmalionAI/NYROS
    type: sharegpt
    conversation: chatml

chat_template: chatml

dataset_prepared_path: /workspace/disk2/2prepath2
val_set_size: 0.05
output_dir: /workspace/disk2/Eros2
eval_sample_packing: true
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
torch_compile: true
hf_use_auth_token: true
hub_strategy: all_checkpoints
hub_model_id: PygmalionAI/Eros-ALPHA
hub_private_repo: true
push_to_hub: true
wandb_project: Erosium
wandb_entity:
wandb_watch: all
overwrite_output_dir: false
wandb_name:
wandb_log_model:
save_safetensors: true
gradient_accumulation_steps: 6
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_bnb_8bit
amsgrad: true
max_grad_norm: 0.3
lr_scheduler: 'cosine'
lr_scheduler_kwargs:
  num_cycles: 3
learning_rate: 0.000005
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
train_on_inputs: false
group_by_length: true
neftune_noise_alpha: 5
bf16: auto
fp16:
tf32: false
seed: 314159
early_stopping_patience:
local_rank:
logging_steps: 1
log_level: debug
xformers_attention:
flash_attention: true
warmup_steps:
eval_per_epoch: 0.05
save_steps: 0.10
debug:
deepspeed: ./deepspeed_configs/zero2.json
weight_decay: 0.0020
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
tokens:
  - "<|im_start|>"
  - "<|im_end|>"
```

Eros-ALPHA

This model is a fine-tuned version of Alignment-Lab-AI/Alignment-Lab-AIlonger on the PygmalionAI/spice and PygmalionAI/NYROS datasets. It achieves the following results on the evaluation set:

  • Loss: 1.2012

Model description

Eros-ALPHA is a fine-tune of Alignment-Lab-AI/Alignment-Lab-AIlonger trained with Axolotl on ChatML-formatted conversational data at a 16,384-token sequence length; see the training config above for full details.

Intended uses & limitations

More information needed
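Detailed usage guidance has not been written, but the training config declares ChatML formatting (`chat_template: chatml` with `<|im_start|>`/`<|im_end|>` delimiters), so prompts are expected in that format. Below is a minimal inference sketch, assuming the published repo id and that the uploaded tokenizer carries the ChatML chat template; if it does not, build the prompt manually with the delimiters above.

```python
# Minimal inference sketch. Assumptions: the repo id, and that the tokenizer
# ships the ChatML chat template declared in the training config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tavtav/eros-7B-ALPHA"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```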

Training and evaluation data

The model was fine-tuned on the PygmalionAI/spice and PygmalionAI/NYROS datasets, both supplied in ShareGPT format and rendered with the ChatML conversation template. 5% of the combined data was held out as the evaluation set (val_set_size: 0.05 in the config above).
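If the two dataset repos named in the config are accessible to your account (they may be private or gated), they can be inspected with the datasets library. A sketch, with the "train" split name assumed:

```python
# Sketch for inspecting the training data named in the config; repo access and
# the "train" split name are assumptions, not facts from the card.
from datasets import load_dataset

spice = load_dataset("PygmalionAI/spice", split="train")
nyros = load_dataset("PygmalionAI/NYROS", split="train")

# `type: sharegpt` in the config implies ShareGPT-style conversation records
print(spice)
print(nyros)
```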

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 314159
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 6
  • total_train_batch_size: 48
  • total_eval_batch_size: 8
  • optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 1
  • num_epochs: 3
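The total train batch size follows directly from the values above; a quick sanity check, not part of the original card:

```python
# Effective batch size = per-device micro batch * gradient accumulation * devices
micro_batch_size = 1
gradient_accumulation_steps = 6
num_devices = 8

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 48, matching the value listed above
```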

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3214        | 1.02  | 149  | 1.3297          |
| 1.2704        | 2.02  | 299  | 1.2548          |
| 1.1581        | 2.95  | 438  | 1.2012          |
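For reference, the validation losses above can be converted to perplexities (the exponential of the mean cross-entropy loss); a small sketch, not part of the original card:

```python
# Convert reported validation losses to perplexities
import math

for loss in (1.3297, 1.2548, 1.2012):
    print(f"loss={loss:.4f}  ppl={math.exp(loss):.2f}")
```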

Framework versions

  • Transformers 4.39.0.dev0
  • Pytorch 2.1.2+cu118
  • Datasets 2.18.0
  • Tokenizers 0.15.0