AI lie detector: How HallOumi’s open-source approach to hallucination could unlock enterprise AI adoption
Summary
Companies using AI are frequently troubled by the fabricated responses the technology produces, with consequences ranging from legal sanctions to firms being required to honour fictional policies.
Oumi, an AI platform, has developed HallOumi, an open-source model for detecting these hallucinations.
HallOumi checks whether each sentence of an AI-generated response is backed by supporting evidence in the source material.
For every sentence it produces three outputs: a confidence score indicating the likelihood of hallucination, citations pointing to the relevant supporting evidence, and a human-readable explanation of the verdict.
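A minimal sketch of what that per-sentence output might look like in code; `SentenceVerdict` and `verify_response` are hypothetical names chosen for illustration, not Oumi's actual API, and the simple string-matching check stands in for the real model:

```python
from dataclasses import dataclass, field

@dataclass
class SentenceVerdict:
    sentence: str               # the AI-generated claim being checked
    hallucination_score: float  # likelihood the claim is unsupported (0 = grounded, 1 = fabricated)
    citations: list = field(default_factory=list)  # source passages backing the claim
    explanation: str = ""       # human-readable rationale for the verdict

def verify_response(context: str, response: str) -> list:
    """Hypothetical HallOumi-style check: score each sentence of `response`
    against `context`. A real deployment would invoke the HallOumi model;
    this stub only illustrates the three outputs produced per sentence."""
    verdicts = []
    for sentence in (s.strip() for s in response.split(".") if s.strip()):
        supported = sentence in context
        verdicts.append(SentenceVerdict(
            sentence=sentence,
            hallucination_score=0.05 if supported else 0.90,
            citations=[context] if supported else [],
            explanation="Verbatim match found in source."
                        if supported else "No supporting passage found in source.",
        ))
    return verdicts
```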
The model looks for nuanced indicators of unsupported claims, and can flag both unintentional hallucinations and deliberate misinformation.
Because the tool is open source, businesses can incorporate it directly into their workflows; an API is also in development to make integration more seamless.
HallOumi is not just a standalone checker: it works alongside RAG (retrieval-augmented generation) pipelines but can also operate independently of them, making it a flexible tool.
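To illustrate that flexibility, here is a hedged sketch of how such a checker might gate a RAG pipeline; `retrieve`, `generate`, and `verify` are placeholder callables rather than Oumi's real interfaces, and the 0.5 threshold is an assumed example value:

```python
def answer_with_verification(question, retrieve, generate, verify,
                             threshold: float = 0.5) -> str:
    """Hypothetical RAG pipeline with a HallOumi-style verification gate.
    `verify` is assumed to return one hallucination score per sentence
    of the draft answer; all three callables are illustrative stand-ins."""
    context = retrieve(question)          # fetch grounding documents
    draft = generate(question, context)   # candidate answer from the LLM
    scores = verify(context, draft)       # per-sentence hallucination scores
    if any(score > threshold for score in scores):
        # At least one claim lacks support in the retrieved context,
        # so fall back rather than surface a possible fabrication.
        return "Unable to answer reliably from the available sources."
    return draft
```

The same `verify` step could just as easily run on outputs produced without retrieval, checking them against whatever reference document a team supplies.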
Potential applications include ensuring that legal documents are factually accurate and reliable, and verifying that weather forecasts are reported as accurately as possible.