
PEFT is not applicable in this setting

#1
by taghizadeh - opened

PEFT is applicable only at the fine-tuning phase, not during pre-training.

I am not sure I understand what you mean. This model contains the PEFT weights for a base Llama-2-70B model, instruction fine-tuned on Alpaca data that has been translated to Persian. No "pre-training" has been done by me, at least for this specific case.
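For context on what "PEFT weights" means here, below is a minimal NumPy sketch of a LoRA-style adapter, the most common PEFT method. The shapes and values are hypothetical (real Llama-2-70B layers are far larger); the point is only that the adapter stores two small low-rank matrices while the base weight stays frozen.

```python
import numpy as np

# Hypothetical sizes for illustration -- not the actual Llama-2 dimensions.
rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4                # hidden size, LoRA rank, scaling alpha

W = rng.standard_normal((d, d))      # frozen base weight (from pre-training)
A = rng.standard_normal((r, d))      # trainable low-rank factor A
B = np.zeros((d, r))                 # trainable factor B, initialized to zero

# The PEFT checkpoint only stores A and B; the effective weight at
# inference time is the frozen base plus a scaled low-rank update.
W_eff = W + (alpha / r) * (B @ A)

# With B initialized to zero, the adapter starts as a no-op on the base model.
assert np.allclose(W_eff, W)

# After a (fake) training update, only the small A/B matrices have changed --
# that is the "parameter-efficient" part.
B += 0.01 * rng.standard_normal((d, r))
W_eff = W + (alpha / r) * (B @ A)
print("adapter params:", A.size + B.size, "vs full weight:", W.size)
```

Fine-tuning then only trains `A` and `B` (32 values here versus 64 in the full weight), which is why PEFT checkpoints like this one are small add-ons to an existing base model rather than standalone models.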

iamshnoo changed discussion status to closed

Ok, I see what you have done. It's not mentioned in the model card or title, so I got confused. But as you know, Llama-2 was mainly trained on English. For other languages, the base model needs full-parameter tuning before it is ready for fine-tuning, whether with PEFT or not. Without a proper base model, PEFT won't improve much.

taghizadeh changed discussion status to open

I agree that performance with PEFT probably won't match full-parameter tuning (a well-known fact, and not something this repository tries to contradict). I will update the model card at some point with more details about how this specific PEFT version was fine-tuned, to avoid any confusion about its capabilities. Thanks for the feedback!

iamshnoo changed discussion status to closed
