
Overview

Llama-2 7B fine-tuned on a philosopher conversation dataset (originally from Hypersniper/philosophy_dialogue) using QLoRA. Training ran for one epoch on a single 40GB NVIDIA A100 GPU instance.

This repository contains the fp16 Hugging Face version of the model.
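A minimal sketch of loading the fp16 checkpoint with the transformers library; the repo id Teachh/llama-2-7b-chat-philosophy-qa is taken from this card, and the exact dtype/device settings are assumptions you may need to adjust for your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Teachh/llama-2-7b-chat-philosophy-qa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card states the weights are stored in fp16
    device_map="auto",          # place layers on the available GPU(s)
)
```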

Prompt style

The model was trained with the following prompt style:

<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST]
{{ response }} </s>
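A minimal sketch of formatting a request in this prompt style and generating a reply, reusing the `tokenizer` and `model` from the loading example above. The system prompt and user message are illustrative placeholders; the literal `<s>` token is omitted from the string because the Llama tokenizer prepends the BOS token itself:

```python
system_prompt = "You are a philosopher answering questions thoughtfully."
user_message = "What is the difference between knowledge and belief?"

# Build the prompt following the training template (BOS is added by the tokenizer).
prompt = (
    f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    f"{user_message} [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens (the response portion of the template).
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```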