Magpie is a recent technique for creating synthetic instruction datasets.
It is based on a simple but ingenious idea: if you prompt an instruction-tuned model with just a pre-query template, you can make it generate a plausible user query/instruction.
Here's an example:
- model: Llama-3-8B-Instruct
- pre-query template: "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"
- generated user instruction: "What are some of the responsibilities of a commercial pilot?"
You can then feed this instruction back into the same model to get the assistant response.
By repeating this process, it's possible to generate large synthetic datasets with relatively little effort.
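To make this concrete, here is a minimal sketch of the two-step loop using Hugging Face transformers. It assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights; the sampling parameters are arbitrary and this is only an illustration of the idea, not the authors' actual pipeline.

```python
# Minimal sketch of a Magpie-style loop with Hugging Face transformers.
# Assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights;
# any Llama-3-style instruct model with the same chat template should work similarly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The Llama 3 chat template puts two newlines after the header,
# so they are appended to the pre-query template shown above.
pre_query = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]

# Step 1: the model "completes" the empty user turn with a plausible instruction.
inputs = tokenizer(pre_query, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True,
                     temperature=1.0, eos_token_id=terminators)
instruction = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:],
                               skip_special_tokens=True).strip()

# Step 2: feed the generated instruction back as a normal user turn
# to obtain the assistant response.
prompt_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": instruction}],
    add_generation_prompt=True, return_tensors="pt",
).to(model.device)
out = model.generate(prompt_ids, max_new_tokens=512, do_sample=True,
                     temperature=0.7, eos_token_id=terminators)
response = tokenizer.decode(out[0, prompt_ids.shape[1]:], skip_special_tokens=True).strip()

print({"instruction": instruction, "response": response})
```

Looping over this pair of generations gives you one (instruction, response) example per iteration.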
The authors demonstrate that using these datasets for Supervised Fine-Tuning (SFT) can yield strong performance, even competitive with the original instruct model.
Most language models are primarily trained on English text, so they tend to produce data in English.
How can we overcome this?
Earlier approaches were complex or costly.
Then @mrm8488 found a simple solution: add the target language to the pre-query template. For Spanish, the template becomes "<|begin_of_text|><|start_header_id|>user<|end_header_id|>spanish:".
This method works for Spanish and German!
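Reusing the model, tokenizer, and terminators from the sketch above, the trick is literally a one-line change to the pre-query template (shown here for Spanish):

```python
# Continuing the earlier sketch: the target language is appended right after
# the user header, exactly as in the template quoted above.
pre_query_es = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>spanish:"
inputs = tokenizer(pre_query_es, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True,
                     temperature=1.0, eos_token_id=terminators)
spanish_instruction = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:],
                                       skip_special_tokens=True).strip()
print(spanish_instruction)  # ideally, a user instruction written in Spanish
```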
Unfortunately, it does not work well for other languages (Italian, Dutch, ...)
I was excited to explore Llama 3.2, but as a simple EU guy, I don't have access to Meta's multimodal models.
So I thought: why not challenge the small 3B text model with Agentic RAG?
The plan:
- Build a system that tries to answer questions using a knowledge base.
- If the documents don't contain the answer, use web search for additional context (see the sketch below).
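Here's a rough sketch of that fallback logic, just to fix ideas. The helpers retrieve, web_search, and generate are hypothetical placeholders for a vector-store query, a web-search API call, and a generation call to Llama-3.2-3B-Instruct; they don't correspond to any specific framework's API.

```python
# Rough sketch of the agentic fallback loop. The three helpers below are
# hypothetical stand-ins (vector-store query, web-search API call, and a
# text-generation call to the 3B instruct model), not a real library API.
from typing import List

def retrieve(question: str) -> List[str]: ...    # query the knowledge base
def web_search(question: str) -> List[str]: ...  # call a web-search API
def generate(prompt: str) -> str: ...            # call Llama-3.2-3B-Instruct

def answer_question(question: str) -> str:
    docs = retrieve(question)
    context = "\n\n".join(docs)

    # Let the model itself judge whether the retrieved documents are enough.
    verdict = generate(
        "Reply YES or NO: can the question be answered using ONLY these documents?\n"
        f"Question: {question}\nDocuments:\n{context}"
    )
    if verdict.strip().upper().startswith("NO"):
        # Fall back to web search for additional context.
        context += "\n\n" + "\n\n".join(web_search(question))

    # Answer with whatever context we ended up with.
    return generate(
        "Answer the question using the documents below.\n"
        f"Question: {question}\nDocuments:\n{context}"
    )
```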