Suggestion #33 by Hassan883 - opened
Hi all,

I want to use Llama 3.2 3B Instruct. My goal: right now I have a small set of documents (say two pages), and I want to fine-tune the model on this data. The dataset will grow over time, possibly into millions of documents. After fine-tuning, whenever a user asks for specific information, the model should be able to respond with high accuracy. Compute resources are not an issue for me, so I would appreciate your serious suggestions on this.
Should I fine-tune Llama 3.2 3B (or another model in the future), or should I use a RAG approach? Which one is best?
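For context on what I mean by RAG: the idea is to retrieve the relevant document passage at query time and put it in the model's prompt, rather than baking the facts into the weights. A minimal, stdlib-only sketch of the retrieval step (using simple TF-IDF scoring; a real system would use dense embeddings and a vector store, and the documents here are made up):

```python
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,!?") for w in text.split()]

def tf_idf_vectors(docs):
    # Build inverse-document-frequency weights over the corpus,
    # then weight each document's term counts by them.
    tokenized = [tokenize(d) for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs, idf

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by cosine similarity to the query vector.
    vecs, idf = tf_idf_vectors(docs)
    qtf = Counter(tokenize(query))
    qvec = {t: qtf[t] * idf.get(t, 0.0) for t in qtf}
    order = sorted(range(len(docs)),
                   key=lambda i: cosine(qvec, vecs[i]), reverse=True)
    return [docs[i] for i in order[:k]]

# Hypothetical corpus; in practice these would be chunks of my documents.
docs = [
    "Invoices are processed within 30 days of receipt.",
    "Employees accrue 1.5 vacation days per month.",
]
question = "How many vacation days do employees get?"
top = retrieve(question, docs)
# The retrieved chunk goes into the prompt sent to Llama 3.2 3B Instruct:
prompt = f"Answer using only this context:\n{top[0]}\n\nQuestion: {question}"
```

The appeal for my use case is that new documents only require re-indexing, not re-training, which seems to matter once the corpus grows into the millions.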
Thanks, Hassan Javed