Fine-tuning model
Has anyone fine-tuned this model with their own dataset?
I already tried to merge a QLoRA fine-tune, but the .safetensors file conversion got me confused.
QLoRA works for me.
Thanks, I figured out the problem.
Hey, I was trying to fine-tune this model using QLoRA with this config:
config = LoraConfig(
r=16,
lora_alpha=32,
target_modules=["query_key_value"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, config)
print_trainable_parameters(model)
However, I ran into the following error:
ValueError: Target modules ['query_key_value'] not found in the base model. Please check the target modules and try
again.
I am a complete beginner; can someone please help me out? Thanks!
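That error usually means the base model simply doesn't have a submodule named query_key_value; attention projections are named differently across architectures (for example "query_key_value" in Falcon/GPT-NeoX-style models versus "q_proj"/"k_proj"/"v_proj" in LLaMA-style ones). A minimal sketch of how to list the Linear layer names in a model, so you can pick valid strings for target_modules (a toy torch module stands in for the real checkpoint here):

```python
import torch.nn as nn

# Hypothetical stand-in for one transformer block; a real model would be
# loaded with AutoModelForCausalLM.from_pretrained(...) instead.
model = nn.ModuleDict({
    "self_attn": nn.ModuleDict({
        "q_proj": nn.Linear(16, 16),
        "k_proj": nn.Linear(16, 16),
        "v_proj": nn.Linear(16, 16),
    })
})

# Qualified names of every Linear layer -- these are the strings that
# LoraConfig's target_modules must match (PEFT matches on the final
# component, e.g. "q_proj").
linear_names = [name for name, mod in model.named_modules()
                if isinstance(mod, nn.Linear)]
print(linear_names)
```

Running the same loop over your actual model will show which names to put in target_modules instead of "query_key_value".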
Perhaps consider checking out Maxime Labonne's tutorial. I found the quality of the writing superior to that of my own notebook. I tried training this model and it worked; just replace the model name with this one.
Hey, thanks for the tutorial. I was able to fine-tune my model. However, when I load the model saved on Hugging Face in text gen webui, it gets loaded into CPU RAM instead of GPU RAM. Can you please help?
This is the fine-tuned model: https://huggingface.co/arvind2626/Stable-Beluga-arvind
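One common cause is that the checkpoint is loaded without any device placement, so the weights default to CPU RAM. A minimal sketch of checking whether a GPU is even visible to torch in your environment (with transformers' from_pretrained you would typically pass device_map="auto" or call .to("cuda") to get the weights onto the GPU; exact webui loader settings vary):

```python
import torch

# If this prints "cpu", torch cannot see a GPU, and the model will land
# in CPU RAM regardless of how the UI is configured.
device = "cuda" if torch.cuda.is_available() else "cpu"
weights = torch.zeros(4, 4, device=device)
print(weights.device.type)
```

If the device resolves to "cuda" and the model still loads on CPU, check which loader and quantization options the webui is using for the merged checkpoint.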
Hi, I tested it on my Colab and I think it's fine. I'm also a beginner here. I'm curious about the dataset format you used. Does it look like this?
{"text": "### Human: ABABA### Assistant: ABABAB### Human: ABABA### Assistant: ABABAB"}
Because the quality of the model I fine-tuned isn't very good, I suspect the problem lies in the format. Anyway, here's the notebook I tested; I believe it works: notebook
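For what it's worth, a minimal sketch of building that single-string "text" field from a list of chat turns, assuming the Guanaco-style "### Human:" / "### Assistant:" markers shown above (the exact separator convention varies by tutorial, and a mismatch between training format and inference prompt is a common cause of poor output quality):

```python
# Hypothetical helper: join (role, message) turns into one "text" field,
# matching the {"text": "### Human: ...### Assistant: ..."} format above.
def format_example(turns):
    return {"text": "".join(f"### {role}: {msg}" for role, msg in turns)}

example = format_example([
    ("Human", "ABABA"),
    ("Assistant", "ABABAB"),
])
print(example["text"])
```

Whatever format you train on, make sure you prompt the fine-tuned model with the same markers at inference time.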