Neurosymbolic AI Could Be the Answer to Hallucination in Large Language Models
Summary
Neurosymbolic AI has the potential to resolve key reliability issues with large language models (LLMs) while reducing the enormous amounts of training data they require, argues this SingularityHub article.
The main problems with LLMs are hallucinations, such as ChatGPT falsely accusing US law professor Jonathan Turley of sexual harassment in 2023, and a lack of accountability, since it is hard to work out how an LLM reached a given conclusion.
The field of neurosymbolic AI combines the predictive learning of neural networks with formal rules of the kind humans learn, allowing more reliable deliberation.
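To make the pattern concrete, here is a minimal, hypothetical Python sketch (not from the article): a stand-in "neural" component proposes answers with confidence scores, and a symbolic layer accepts only answers an explicit knowledge base can verify, abstaining rather than hallucinating. All facts, names, and functions here are illustrative assumptions.

```python
# Hypothetical sketch of the neural-plus-symbolic pattern.
# The knowledge base and guesses are made up for illustration.

KNOWLEDGE_BASE = {
    ("capital_of", "France"): "Paris",
    ("capital_of", "Japan"): "Tokyo",
}

def neural_propose(question):
    """Stand-in for a neural model: returns (answer, confidence)
    guesses, including a confident but wrong one to mimic a
    hallucination."""
    guesses = {
        ("capital_of", "France"): [("Paris", 0.92), ("Lyon", 0.05)],
        ("capital_of", "Australia"): [("Sydney", 0.88)],  # plausible but wrong
    }
    return guesses.get(question, [])

def symbolic_verify(question, answer):
    """Symbolic layer: accept an answer only if the knowledge base
    explicitly supports it."""
    return KNOWLEDGE_BASE.get(question) == answer

def answer(question):
    for candidate, confidence in neural_propose(question):
        if symbolic_verify(question, candidate):
            return f"{candidate} (confidence {confidence:.2f}, verified)"
    return "I don't know"  # abstain instead of hallucinating

print(answer(("capital_of", "France")))     # verified answer returned
print(answer(("capital_of", "Australia")))  # confident guess rejected
```

The design point is that the symbolic layer acts as a deliberate check on the neural component's fluent guesses, which is how neurosymbolic systems aim to curb hallucination.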
While neurosymbolic principles are easier to apply in niche domains, more research is needed to refine these systems' ability to discern general rules and perform knowledge extraction before the approach becomes feasible for general-purpose models.