Why enterprise RAG systems fail: Google study introduces ‘sufficient context’ solution
Summary
Google has published a study that introduces the concept of 'sufficient context' to help LLMs provide accurate answers to queries and avoid incorrect responses.
The approach works by analysing the information provided to the LLM, together with the query itself, to determine whether the model has enough context to answer accurately.
If the context is insufficient, the model should abstain from answering, ask for more information, or give a response that flags its uncertainty.
Context is insufficient when the query requires specialised knowledge that is not present in the retrieved information, or when that information is inconclusive or contradictory.
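As a rough illustration, a sufficiency check of this kind could be implemented as an LLM-based classifier over the query and retrieved context. The sketch below is an assumption-laden example: call_llm is a hypothetical stand-in for any chat-completion API, and the prompt wording is illustrative rather than taken from the study.

```python
# Minimal sketch of a sufficient-context check. call_llm() is a hypothetical
# helper standing in for any chat-completion API; the prompt wording is
# illustrative, not the study's actual autorater prompt.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM provider."""
    raise NotImplementedError("wire this to your LLM provider")

SUFFICIENCY_PROMPT = """\
Question: {query}

Context: {context}

Does the context contain all the information needed to answer the
question conclusively? Reply with exactly one word: SUFFICIENT or
INSUFFICIENT."""

def has_sufficient_context(query: str, context: str) -> bool:
    """Ask the model to judge whether the retrieved context is enough."""
    verdict = call_llm(SUFFICIENCY_PROMPT.format(query=query, context=context))
    return verdict.strip().upper().startswith("SUFFICIENT")
```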
Google used sufficient context as a signal to develop a method called selective generation, which in testing improved the accuracy of answered queries by 2-10% across various models and datasets.
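Selective generation can then be sketched as gating the answer on both the sufficiency check and a self-rated confidence score, reusing the call_llm and has_sufficient_context helpers from the sketch above. This is a minimal illustration, not the study's implementation; the confidence prompt and the 0.5 threshold are assumptions.

```python
# Sketch of selective generation under the same assumptions: answer only
# when the context is judged sufficient and the model's self-rated
# confidence clears a threshold; otherwise abstain. The threshold and
# both prompts are illustrative, not values from the study.

ABSTAIN = "I don't have enough information to answer that reliably."

def selective_generate(query: str, context: str,
                       threshold: float = 0.5) -> str:
    if not has_sufficient_context(query, context):
        return ABSTAIN
    answer = call_llm(f"Context: {context}\n\nQuestion: {query}\nAnswer:")
    score = call_llm(
        f"Question: {query}\nProposed answer: {answer}\n"
        "On a scale from 0.0 to 1.0, how confident are you that the "
        "answer is correct? Reply with the number only.")
    try:
        confident = float(score.strip()) >= threshold
    except ValueError:
        confident = False  # unparseable score: err on the side of abstaining
    return answer if confident else ABSTAIN
```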
The study also found that providing extra context to an LLM can reduce accuracy, because it increases the model's tendency to produce an answer, even a wrong one, rather than abstain.