Major research into ‘hallucinating’ generative models advances reliability of artificial intelligence

October 20, 2024

The new method spots when LLMs are uncertain about the actual meaning of an answer, not just the phrasing. "Our method basically estimates probabilities in meaning-space, or 'semantic probabilities'," said study co-author Jannik Kossen (Department of Computer Science, University of Oxford).

"Currently, hallucinations are a critical factor holding back wider adoption of LLMs like ChatGPT or Gemini. There is still a lot of work to do," said Dr Sebastian Farquhar (Department of Computer Science, University of Oxford).

The study 'Detecting Hallucinations in Large Language Models Using Semantic Entropy' has been published in Nature.
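To illustrate the idea of 'semantic probabilities' described above, the minimal sketch below samples several answers, groups those that express the same meaning, and computes the entropy over the resulting meaning clusters. This is an illustrative assumption, not the authors' published implementation; in particular, the `same_meaning` predicate is a hypothetical stand-in for a real meaning-equivalence check (for example, one based on a natural-language-inference model).

```python
import math

def cluster_by_meaning(answers, same_meaning):
    """Greedily group answers that express the same meaning.

    `same_meaning(a, b)` is a hypothetical predicate supplied by the caller,
    e.g. a bidirectional-entailment check backed by an NLI model.
    """
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters

def semantic_entropy(answers, same_meaning):
    """Entropy over meaning clusters, estimated from sampled answers."""
    clusters = cluster_by_meaning(answers, same_meaning)
    n = len(answers)
    probs = [len(c) / n for c in clusters]  # the 'semantic probabilities'
    return -sum(p * math.log(p) for p in probs)

# Toy usage: three paraphrases of one answer plus one contradicting answer.
samples = ["Paris", "The capital is Paris", "It is Paris", "Lyon"]
naive_same = lambda a, b: ("Paris" in a) == ("Paris" in b)  # crude stand-in for an NLI check
print(semantic_entropy(samples, naive_same))
```

Under this sketch, low entropy over the clusters suggests the model is consistent about the meaning of its answer even when the phrasing varies, while high entropy flags a likely hallucination.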

Source: University of Oxford