Overview
Llama-2 7B Chat fine-tuned with QLoRA on a philosopher-conversation dataset (originally from Hypersniper/philosophy_dialogue). Training ran for one epoch on a single 40GB NVIDIA A100 instance.
The weights hosted here are the fp16 Hugging Face model.
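A minimal sketch of this kind of QLoRA setup is shown below. The 4-bit NF4 quantization settings, LoRA rank/alpha, and target modules are illustrative assumptions; the exact hyperparameters of this run are not recorded here.

# QLoRA setup sketch; the LoRA hyperparameters below are assumptions, not the recorded run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "daryl149/llama-2-7b-chat-hf"

# Load the base model quantized to 4-bit NF4 so it fits comfortably on a 40GB A100.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Freeze the quantized weights and attach trainable LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed target modules
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# The adapters are then trained for one epoch on the philosophy_dialogue prompts
# (e.g. with a standard transformers/trl training loop) and merged back into the
# base weights to produce the fp16 checkpoint published here.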
Prompt style
The model was trained with the following prompt style:
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>
{{ user_message }} [/INST]
{{ response }} </s>
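As a usage sketch, the prompt can be assembled by hand in this format and passed to the fp16 checkpoint. The system prompt, question, and generation settings below are illustrative placeholders, not values from training.

# Inference sketch for the fp16 checkpoint; system prompt and sampling settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Teachh/llama-2-7b-chat-philosophy-qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system_prompt = "You are a thoughtful philosopher who answers questions carefully."  # placeholder
user_message = "What does Epictetus mean by the dichotomy of control?"

# Build the prompt following the template above; the tokenizer adds the leading <s> itself.
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens (the response after [/INST]).
response = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)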
Base model
daryl149/llama-2-7b-chat-hf