Was gonna say, the AI doesn't make up or admit bullshit, it's just a very advanced prediction algorithm. It responds with the combination of words that is most likely the expected answer.
Whether that is accurate or not is part of training it, but you'll never get 100% accuracy on any query.
If it has been trained on questionable sources, or if its training data includes sarcastic responses (without understanding that context), it isn't hard to imagine how confidently wrong some of the responses could be.
If it can name what the most likely combination is, couldn’t it also know how likely that combination of words is?
No, because that requires it to understand the words. It doesn’t.
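To make the distinction concrete: a model does assign a probability to each candidate next word, but that number measures how plausible the word sequence is given the training text, not whether the resulting claim is true. Here's a toy sketch (the logit values and example words are invented for illustration, not from any real model):

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits a model might assign to candidate next words
# after "The capital of Australia is" — values are made up.
logits = {"Sydney": 3.1, "Canberra": 2.8, "Melbourne": 1.0}
probs = softmax(logits)

# The model can report exactly how likely each word is as a continuation...
best = max(probs, key=probs.get)

# ...but that likelihood tracks how often the phrasing appeared in
# training text, not factual accuracy. In this invented example the
# highest-probability token is the factually wrong answer.
```

So yes, it "knows" how likely the word combination is, but that's a statement about text statistics, not about the world, which is why high confidence and being wrong can coexist.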