RAG Application
#11
by vin56
Can somebody explain to me how to use this LLM in a RAG application built with LangChain?
Choose any vector database (Qdrant, Pinecone, Weaviate, etc.) and upload your content, together with its metadata, into it. Then query the vector DB to get the top-k most relevant chunks, as in the sketch below.
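A minimal sketch of that ingestion and retrieval step, assuming the `langchain-community` Qdrant integration with an in-memory collection and a sentence-transformers embedding model; the documents, collection name, and embedding model are illustrative placeholders:

```python
# Sketch: embed documents, upload them to Qdrant, and retrieve the top-k chunks.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Qdrant
from langchain_core.documents import Document

# Your content, each chunk wrapped as a Document with metadata.
docs = [
    Document(page_content="LangChain is a framework for building LLM apps.",
             metadata={"source": "intro.md"}),
    Document(page_content="Qdrant is an open-source vector database.",
             metadata={"source": "qdrant.md"}),
]

# Embed the chunks and upload them into an in-memory Qdrant collection.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectordb = Qdrant.from_documents(
    docs,
    embeddings,
    location=":memory:",          # swap for your Qdrant URL in production
    collection_name="my_content",
)

# Query the vector DB for the top-k most relevant chunks.
user_prompt = "What is Qdrant?"
top_k_docs = vectordb.similarity_search(user_prompt, k=3)
```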
Use the retrieved chunks as the "current context", the user's prompt as the "current prompt", summarise the past conversation and pass it as the "past context", and feed all of that to your LLM (sketched below). Voilà, it's done!
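And a sketch of the prompt-assembly step; `HuggingFacePipeline` stands in for whatever LangChain wrapper you use to load this model, and the model id, stub documents, and prompt wording are placeholders:

```python
# Sketch: build "past context" + "current context" + "current prompt" and call the LLM.
from langchain_community.llms import HuggingFacePipeline
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate

# Stand-ins for the outputs of the retrieval sketch above.
user_prompt = "What is Qdrant?"
top_k_docs = [Document(page_content="Qdrant is an open-source vector database.")]

llm = HuggingFacePipeline.from_model_id(
    model_id="your-org/your-model",           # placeholder: the model this thread is about
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 256},
)

# Summarise the past conversation into the "past context".
chat_history = ["User: What does this model do?", "Assistant: It is a chat LLM."]
past_context = llm.invoke(
    "Summarise this conversation in two sentences:\n" + "\n".join(chat_history)
)

# The retrieved chunks become the "current context".
current_context = "\n\n".join(doc.page_content for doc in top_k_docs)

# Combine everything into one prompt and call the LLM.
prompt = PromptTemplate.from_template(
    "Past context:\n{past}\n\n"
    "Current context:\n{current}\n\n"
    "Current prompt: {question}\n"
    "Answer using only the context above."
)
answer = llm.invoke(
    prompt.format(past=past_context, current=current_context, question=user_prompt)
)
print(answer)
```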