Physics Llama 2 13B
This model, sr5434/PhysicsLlama-13B, is a LoRA adapter for meta-llama/Llama-2-13b-chat-hf, fine-tuned on ArtifactAI/arxiv-physics-instruct-tune-30k.
Model description
This is a physics chatbot: a LoRA fine-tune of Llama-2-13b-chat trained to answer physics questions in an instruction-following style.
Intended uses & limitations
You can use this model to assist with physics research or to answer physics questions.
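A minimal inference sketch is shown below. This is not the author's exact code; it assumes the LoRA weights are published under sr5434/PhysicsLlama-13B and that you have access to the gated meta-llama/Llama-2-13b-chat-hf base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"
adapter_id = "sr5434/PhysicsLlama-13B"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter on top of the chat model
model = PeftModel.from_pretrained(base_model, adapter_id)

# Llama-2-chat models generally expect the [INST] ... [/INST] prompt template
prompt = "[INST] What is the physical significance of the Chandrasekhar limit? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```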
Training and evaluation data
The model was trained on ArtifactAI/arxiv-physics-instruct-tune-30k, an instruction-tuning dataset of roughly 30k physics question–answer pairs.
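To inspect the training data yourself, the dataset can be pulled from the Hub with the `datasets` library (a quick sketch; column names are whatever the dataset exposes, not assumed here):

```python
from datasets import load_dataset

# Download the public instruction-tuning dataset used for this LoRA
ds = load_dataset("ArtifactAI/arxiv-physics-instruct-tune-30k", split="train")

print(ds)      # number of rows and column names
print(ds[0])   # one instruction/response example
```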
Training procedure
The training source code can be found here.
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 100
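The sketch below reconstructs the listed hyperparameters as a `transformers.TrainingArguments` object. The argument names, the output directory, and the `adamw_torch` optimizer choice are assumptions for illustration, not the author's exact training script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="physics-llama-13b-lora",  # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,        # effective train batch size of 4
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=100,
    optim="adamw_torch",                  # Adam with betas=(0.9, 0.999), epsilon=1e-8
)
```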
Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3