An Agentic RAG with Llama 3.2
I was excited to explore Llama 3.2, but as a simple EU guy, I don't have access to Meta's multimodal models.
So I thought: why not challenge the small 3B text model with Agentic RAG?
The plan:
- Build a system that tries to answer questions using a knowledge base.
- If the retrieved documents don't contain the answer, fall back to web search for additional context.
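The two steps above can be sketched as a minimal, framework-agnostic loop. Note that `retrieve`, `web_search`, and `generate` are hypothetical stand-ins for the retriever, web-search, and LLM components (in the notebook these roles are played by Haystack components), and the `no_answer` sentinel is an assumption about how the prompt asks the model to signal an unanswerable question:

```python
# Minimal sketch of the agentic RAG fallback loop.
# retrieve / web_search / generate are hypothetical callables standing in
# for the retriever, web-search, and LLM components of the pipeline.

NO_ANSWER = "no_answer"  # assumed sentinel the prompt asks the model to emit


def agentic_rag(question, retrieve, web_search, generate):
    """Answer from the knowledge base; fall back to web search if needed."""
    # Step 1: try to answer from the knowledge base.
    docs = retrieve(question)
    reply = generate(question, docs)

    # Step 2: if the model signals the docs were insufficient,
    # ground a second attempt in web-search results instead.
    if NO_ANSWER in reply:
        web_docs = web_search(question)
        reply = generate(question, web_docs)
    return reply
```

The key design choice is that the model itself decides whether the knowledge base sufficed, by emitting the sentinel instead of an answer; the surrounding code only routes on that signal.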
Check out my experimental notebook here: https://colab.research.google.com/github/deepset-ai/haystack-cookbook/blob/main/notebooks/llama32_agentic_rag.ipynb
My stack:
- Haystack (https://haystack.deepset.ai/): open-source LLM orchestration framework
- meta-llama/Llama-3.2-3B-Instruct
- free DuckDuckGo API, integrated with Haystack
The results? Encouraging: a few months ago, this level of performance from a small model would've been unthinkable!
This probably reflects the model's impressive IFEval score (comparable to Llama 3.1 8B).