Model Overview

Model license: cc-by-nc-4.0
This model is based on EleutherAI/pythia-1.4b-deduped, LoRA-finetuned on the vicgalle/alpaca-gpt4 dataset.

Prompt Template: Alpaca

<system_prompt>

### Instruction:
<user_message>

### Response:
<assistant_response>
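The template above can be assembled programmatically. The helper below is a minimal sketch (not part of the model card); the function name `build_alpaca_prompt` is hypothetical, but the output follows the Alpaca layout shown above, with the response section left open for the model to complete.

```python
def build_alpaca_prompt(instruction: str, system_prompt: str = "") -> str:
    """Format a user instruction into the Alpaca template this model expects.

    The returned string ends after '### Response:' so the model's
    generation continues as the assistant response.
    """
    parts = []
    if system_prompt:
        parts.append(system_prompt)
    parts.append(f"### Instruction:\n{instruction}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)


# Example: pass the result to any causal-LM generate() call as the prompt.
print(build_alpaca_prompt("List three colors.", "You are a helpful assistant."))
```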

Intended Use

This is a test model; it is not intended for real applications. However, a new model on the same topic is coming.
This model series is intended for small but demanding applications.

Training Details

This model took 2:31:23 to train with QLoRA on a single T4 GPU.

  • epochs: 1
  • train batch size: 12
  • eval batch size: 12
  • gradient accumulation steps: 1
  • maximum gradient norm: 0.3
  • learning rate: 2e-4
  • weight decay: 0.001
  • optimizer: paged_adamw_32bit
  • learning rate schedule: cosine
  • warmup ratio (linear): 0.03
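The learning-rate behavior implied by the last three hyperparameters (cosine schedule, 2e-4 peak, 0.03 linear warmup) can be sketched as a pure function of the step count. This is an illustrative helper, not code from the training run; it mirrors the shape of the common "cosine with linear warmup" schedule.

```python
import math

def lr_at_step(step: int, total_steps: int,
               peak_lr: float = 2e-4, warmup_ratio: float = 0.03) -> float:
    """Learning rate at a given step: linear warmup to peak_lr over the
    first warmup_ratio fraction of training, then cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear warmup: ramp from 0 up to peak_lr.
        return peak_lr * step / max(1, warmup_steps)
    # Cosine decay from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For a 1,000-step run, the rate climbs linearly to 2e-4 over the first 30 steps, then decays smoothly to zero by the final step.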
Model size: 1.41B params (safetensors; F32 and FP16 tensors)

Model: AtAndDev/ShortKing-1.4b-v0.1 · Quantizations: 1 model · Training dataset: vicgalle/alpaca-gpt4