What is RAG?
RAG stands for Retrieval-Augmented Generation.
Before diving into the details, let’s first focus on the “Generation” part. Generation here refers to the process by which LLMs produce text in response to a user query, known as a prompt. The problem is that these models can confidently provide incorrect or outdated answers without citing a reliable source.
An Example
Consider someone asking, “What’s the world record for the highest mountain summit?” Imagine answering confidently that it’s Mount Everest at 8,848 meters. While this is close, more precise measurements put the height at 8,849 meters. The key issue is twofold: the information might be outdated, and there’s no reliable source cited.
LLMs face these same two challenges: they can provide unsourced or out-of-date information.
Image: Google and ChatGPT giving wrong answers confidently.
How RAG Solves These Issues
In the above scenario, had you first verified the information using an up-to-date source like a geographic database, you would have delivered an accurate and sourced answer. RAG applies this concept to AI models.
Here’s how it works:
- Instead of relying solely on what the model “knows” from its training, RAG integrates a content store. This store could be a curated collection of documents or even the entire internet.
- When a query is made, the model first retrieves relevant content from this store and then uses it to generate a more accurate response; a minimal sketch of this retrieval step follows below.
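To make the retrieval step concrete, here’s a minimal Python sketch. The documents and the keyword-overlap scoring are illustrative assumptions; real systems typically use embedding vectors and a vector database rather than word matching.

```python
import string

# Hypothetical mini content store; a real one might hold millions
# of documents behind a vector index.
CONTENT_STORE = [
    "Mount Everest is the highest mountain on Earth at 8,849 meters.",
    "K2, at 8,611 meters, is the second-highest peak in the world.",
    "The Mariana Trench is the deepest point in the ocean.",
]

def _words(text: str) -> set[str]:
    """Lowercase, strip punctuation, keep words longer than 3 characters."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return {w for w in cleaned.split() if len(w) > 3}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return up to k documents sharing the most words with the query."""
    q = _words(query)
    scored = [(len(q & _words(doc)), doc) for doc in CONTENT_STORE]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

print(retrieve("How tall is the highest mountain on Earth?"))
# -> ['Mount Everest is the highest mountain on Earth at 8,849 meters.']
```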
How Does RAG Work?
In a typical LLM setup, the model generates responses based only on its training data. But in a RAG framework, the process changes:
- The user prompts the LLM with a question.
- Before answering, the LLM is instructed to retrieve relevant content from the content store.
- The retrieved content, combined with the user’s question, is then used to generate a well-grounded and up-to-date response; an end-to-end sketch follows these steps.
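Here’s a sketch of that end-to-end flow, reusing retrieve() from the earlier sketch. The call_llm function is a hypothetical stand-in for whatever text-generation API you use; the point is the prompt-assembly pattern.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an actual LLM call (e.g., a hosted chat API)."""
    return f"[model response grounded in a {len(prompt)}-char prompt]"

def answer_with_rag(question: str) -> str:
    # 1. Retrieve content relevant to the user's question.
    passages = retrieve(question)
    context = "\n".join(f"- {p}" for p in passages)
    # 2. Combine the retrieved content with the question in one prompt.
    prompt = (
        "Answer using only the context below, and cite it.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    # 3. Generate the grounded response.
    return call_llm(prompt)

print(answer_with_rag("How tall is the highest mountain on Earth?"))
```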
Addressing LLM Challenges with RAG
- Outdated Information: Instead of retraining the model every time new data becomes available, the content store is updated. The model can then retrieve the latest information whenever a query is made.
- Lack of Sources: The retrieval step grounds the response in primary data, reducing the likelihood of hallucinations and incorrect claims. The model can also cite its evidence, making its response more reliable. A sketch of both ideas follows this list.
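Here’s a small sketch of both ideas, with toy data: each store entry carries a source field so answers can cite their evidence, and refreshing a fact is a store update rather than a retraining run.

```python
# Illustrative store where each entry carries its source, so a
# generated answer can cite evidence. All data here is a toy assumption.
store = {
    "everest": {
        "text": "Mount Everest is 8,848 meters tall.",
        "source": "1950s Survey of India",
    },
}

# A newer measurement arrives: update the record, no retraining needed.
store["everest"] = {
    "text": "Mount Everest is 8,849 meters tall.",
    "source": "2020 China-Nepal joint survey",
}

entry = store["everest"]
print(f"{entry['text']} (source: {entry['source']})")
```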
The Balance Between Retrieval and Generation
While RAG improves accuracy, it’s not without its challenges. If the retrieval system isn’t well-optimized, the model may miss critical information, leading to incomplete or less informative responses. Research is ongoing to refine both the retrieval and generation aspects, ensuring that users receive the most accurate and well-supported answers.
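One common mitigation, sketched below with the helpers defined earlier, is to abstain when retrieval looks weak. The overlap threshold here is an arbitrary assumption; real systems tune relevance cutoffs (or add rerankers) empirically.

```python
# Illustrative guard against weak retrieval: if the best match overlaps
# too little with the question, abstain instead of letting the model
# generate from irrelevant context. Reuses _words/retrieve/answer_with_rag.
def answer_or_abstain(question: str, min_overlap: int = 2) -> str:
    best = retrieve(question, k=1)
    overlap = len(_words(question) & _words(best[0])) if best else 0
    if overlap < min_overlap:
        return "I could not find a reliable source for that question."
    return answer_with_rag(question)

print(answer_or_abstain("Who won the World Cup?"))                      # abstains
print(answer_or_abstain("How tall is the highest mountain on Earth?"))  # answers
```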
TL;DR: RAG enhances LLMs by making them more accurate and current, bridging the gap between training limitations and real-world information needs.