Thoughts? Ideas? How do we align these systems? Some food for thought: when we have these models do chain-of-thought reasoning, or other methods of logically working through a problem to a conclusion, we've found that they tell "lies" about their method. They follow no such logic, even when their stated reasoning is coherent and makes sense.
Here’s the study I’m poorly explaining, read that instead. https://arxiv.org/abs/2305.04388
Well, even if you have it explain in parallel with the answer, the explanation is still false (as in, it doesn't match the model's "internal reasoning"). From the perspective of a language model, the conversation isn't even separate moments in time: its responses and yours are all part of the same text dump, and it's just appending likely text to the end.
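To make that "one text dump" point concrete, here's a minimal sketch of how a chat history gets flattened into a single prompt string before each model call. The role labels and the `render_prompt` helper are made up for illustration; real chat APIs use their own templates, but the idea is the same:

```python
def render_prompt(turns):
    """Concatenate every turn into one string. The model only ever sees
    this single text dump and predicts what text likely comes next."""
    text = ""
    for role, content in turns:
        text += f"{role}: {content}\n"
    # The model "responds" by appending likely tokens after this tag.
    return text + "assistant:"

turns = [
    ("user", "Why did you answer B?"),
    ("assistant", "Because of reason X."),
    ("user", "Explain your reasoning step by step."),
]

print(render_prompt(turns))
```

So when it "explains its reasoning", it isn't looking back at an earlier mental state; it's generating more text that plausibly follows the dump above.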
It's more like a student trying to bullshit his way through an oral exam at school after not studying.
But what can be done about it? I'm sure the AI genuinely doesn't understand why it wrote what it did. You would need a secondary component that records and pieces together the neural activity of the main AI, or something that compares its output against the original dataset (but then it would just be a fancier search engine, like Bing's).
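That "secondary component" idea resembles what interpretability researchers call probing: train a small classifier on the model's hidden activations to test whether some feature is actually represented internally, rather than trusting the model's stated explanation. A toy sketch with synthetic "activations" (everything here is invented for illustration; a real probe would read activations out of an actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden activations: 200 vectors of width 16.
# Pretend some feature (e.g. "the answer was biased by the prompt") is
# linearly encoded along a hidden direction w_true.
d = 16
w_true = rng.normal(size=d)
X = rng.normal(size=(200, d))          # fake activation vectors
y = (X @ w_true > 0).astype(float)     # ground-truth feature labels

# Train a logistic-regression probe by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))     # probe's predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient step on log loss

acc = ((X @ w > 0) == (y == 1)).mean()
print(f"probe accuracy: {acc:.2f}")
```

If the probe's accuracy is high, the feature is linearly readable from the activations, whatever the model claims in its written explanation. That's the sense in which a secondary component could check the main AI from the outside.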