You really can’t.
You can run checks and fence it in with traditional software (a rough sketch of that is below), you can train it more narrowly…
I haven’t seen anything that suggests AI hallucinations are actually a solvable problem, because they stem from the fact that these models don’t actually think or know anything.
They’re only useful when their output is vetted before use, because training a model that gets things 100% right 100% of the time is like capturing lightning in a bottle.
It’s the 90/90 problem: the first 90% of the work takes 90% of the effort, and the last 10% takes the other 90%. Except with AI it’s looking more and more like a 90/99.99999999 problem.
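To make the "run checks and fence it in" idea concrete, here is a minimal Python sketch of gating model output through deterministic validation before anything downstream trusts it. Everything here is hypothetical (the function names, the expected JSON shape, the allowed values); it isn't any particular library's API, just the general pattern of vetting output before use.

```python
# Hypothetical sketch: fence in model output with plain, deterministic checks.
import json
import re

ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}  # assumed whitelist for this example

def vet_model_output(raw: str) -> dict:
    """Accept the response only if it passes hard, checkable rules."""
    data = json.loads(raw)                        # must be valid JSON at all
    if set(data) != {"amount", "currency"}:       # must have exactly the expected keys
        raise ValueError(f"unexpected keys: {sorted(data)}")
    if not re.fullmatch(r"\d+(\.\d{1,2})?", str(data["amount"])):
        raise ValueError(f"amount is not a plain decimal: {data['amount']!r}")
    if data["currency"] not in ALLOWED_CURRENCIES:
        raise ValueError(f"unknown currency: {data['currency']!r}")
    return data

def get_model_answer(prompt: str) -> str:
    """Placeholder for whatever model call you actually make."""
    return '{"amount": "19.99", "currency": "USD"}'

try:
    order = vet_model_output(get_model_answer("Extract the price from this invoice…"))
except (ValueError, json.JSONDecodeError) as err:
    order = None  # fall back to a retry, a human, or a hard failure
```

The point of the sketch is that the fence only works for properties you can check mechanically; anything the checks can't express still has to be vetted by a person before it's used.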