PLB posted an update 18 days ago
📈 Increase the quality of your RAG with a simple Linear Layer! No need to change your embedding model (keep that old OpenAI API).

Introducing EmbeddingAlign RAG, a novel approach to improve Retrieval-Augmented Generation (RAG) systems.

Key highlights:
- Uses a simple linear transformation on existing embeddings (see the sketch after this list)
- Boosts hit rate from 89% to 95% on real-world examples
- Minor increase in latency (less than 10 ms)
- Works on top of black-box embedding models (Mistral AI, OpenAI, Cohere, ...)
- No dataset needed (just your documents)
- Train easily on CPU
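
For intuition, here is a minimal sketch of what such a linear adapter could look like in PyTorch. It assumes you have already generated (query, chunk) training pairs from your documents and precomputed their embeddings with your existing model; the in-batch contrastive objective and hyperparameters here are illustrative assumptions, not taken from the article.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAdapter(nn.Module):
    """Single linear layer applied on top of frozen (black-box) embeddings."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # Start from the identity so the adapter initially preserves
        # the original embedding space.
        nn.init.eye_(self.linear.weight)
        nn.init.zeros_(self.linear.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

def info_nce_loss(q_emb, d_emb, temperature: float = 0.05):
    # In-batch contrastive loss: query i should rank document chunk i
    # above every other chunk in the batch.
    q = F.normalize(q_emb, dim=-1)
    d = F.normalize(d_emb, dim=-1)
    logits = q @ d.T / temperature
    labels = torch.arange(q.size(0))
    return F.cross_entropy(logits, labels)

def train_adapter(query_emb, doc_emb, epochs=20, batch_size=64, lr=1e-4):
    # query_emb / doc_emb: [N, dim] tensors of precomputed embeddings,
    # where query i is known to match document chunk i.
    adapter = LinearAdapter(query_emb.size(1))
    opt = torch.optim.AdamW(adapter.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(query_emb.size(0))
        for i in range(0, perm.numel(), batch_size):
            idx = perm[i:i + batch_size]
            loss = info_nce_loss(adapter(query_emb[idx]), doc_emb[idx])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return adapter
```

At query time you would apply the trained adapter to the query embedding before the usual nearest-neighbor search, which is where the small (<10 ms) latency overhead comes from.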

🤗 Read the full article here on HF: https://huggingface.co/blog/PLB/embedding-align-rag

Interesting, but how does this approach generalize to arbitrary user query/document domains? Would you need to train a separate network for each domain/dataset?

As always, there is a trade-off between generality and absolute performance. If you have multiple domains/datasets with a clear separation, I think it would make sense to train an adapter for each domain.
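
As a hypothetical illustration (the "legal"/"support" names and the training tensors are placeholders, and `LinearAdapter`/`train_adapter` refer to the sketch above), per-domain routing could be as simple as:

```python
# Hypothetical per-domain routing: one trained adapter per corpus,
# selected at query time before nearest-neighbor search.
adapters = {
    "legal": train_adapter(legal_queries, legal_chunks),
    "support": train_adapter(support_queries, support_chunks),
}

def embed_query(text: str, domain: str, embed_fn):
    raw = embed_fn(text)            # black-box embedding API call
    return adapters[domain](raw)    # domain-specific alignment
```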
