This was an interesting awareness-raising project.
And the article says they didn't let the chatbot generate its own responses (and therefore produce LLM hallucinations), but instead used an LLM in the background to categorize the user's question and return a pre-written answer from that category.
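Roughly how I picture that classify-then-canned-answer setup (a minimal sketch, not the project's actual code; the categories, answers, and model name are made up, and I'm assuming something like the OpenAI chat completions API purely as an example):

    from openai import OpenAI

    # Hypothetical categories and pre-written answers; the real project's
    # categories and wording aren't shown in the article.
    CANNED_ANSWERS = {
        "opening_hours": "We are open Monday to Friday, 9am to 5pm.",
        "pricing": "Please see the pricing page for current rates.",
        "other": "Sorry, I can only answer questions about our services.",
    }

    client = OpenAI()

    def answer(user_question: str) -> str:
        # The LLM only picks a category; it never writes the reply itself,
        # so it can't hallucinate the answer shown to the user.
        prompt = (
            "Classify the question into exactly one of these categories: "
            + ", ".join(CANNED_ANSWERS) + ".\n"
            "Reply with the category name only.\n\nQuestion: " + user_question
        )
        category = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content.strip()
        return CANNED_ANSWERS.get(category, CANNED_ANSWERS["other"])

Worst case, a misclassification gets you an irrelevant but still human-written answer, which is a much safer failure mode than free-form generation.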