They’re not “smart enough to be tricked” lolololol. They’re too complicated to have precise guidelines. If something as simple and stupid as this can’t be prevented by the world’s leading experts idk. Maybe this whole idea was thrown together too quickly and it should be rebuilt from the ground up. we shouldn’t be trusting computer programs that handle sensitive stuff if experts are still only kinda guessing how it works.
And one property of actual, real-life human intelligence is “happenning in cells that operate in a wet environment” and yet it’s not logical to expect that a toilet bool with fresh poop (lots of fecal coliform cells) or a dropplet of swamp water (lots of amoeba cells) to be intelligent.
Same as we don’t expect the Sun to have life on its surface even though it, like the Earth, is “a body floating in space”.
Sharing a property with something else doesn’t make two things the same.
There is no logical reason for you to mention in this context that property of human intelligence if you do not meant to make a point that they’re related.
So there are only two logical readings for that statement of yours:
Those things are wholly unrelated in that statement which makes you a nutter, a troll or a complete total moron that goes around writting meaningless stuff because you’re irrational, taking the piss or too dumb to know better.
In the heat of the discussion you were trying to make the point that one implies the other to reinforce previous arguments you agree with, only it wasn’t quite as good a point as you expected.
I chose to believe the latter, but if you tell me it’s the former, who am I to to doubt your own self-assessment…
No, you leapt directly from what I said, which was relevant on its own, to an absurdly stronger claim.
I didn’t say that humans and AI are the same. I think the original comment, that modern AI is “smart enough to be tricked”, is essentially true: not in the sense that humans are conscious of being “tricked”, but in a similar way to how humans can be misled or can misunderstand a rule they’re supposed to be following. That’s certainly a property of the complexity of system, and the comment below it, to which I originally responded, seemed to imply that being “too complicated to have precise guidelines” somehow demonstrates that AI are not “smart”. But of course “smart” entities, such as humans, share that exact property of being “too complicated to have precise guidelines”, which was my point!
Not even close to similar. We can create rules and a human can understand if they are breaking them or not, and decide if they want to or not. The LLMs are given rules but they can be tricked into not considering them. They aren’t thinking about it and deciding it’s the right thing to do.
Have you heard of social engineering and phishing?
I consider those to be analogous to uploading new rules for ChatGPT, but since humans are still smarter, phishing and social engineering seems more advanced.
We can create rules and a human can understand if they are breaking them or not…
So I take it you are not a lawyer, nor any sort of compliance specialist?
They aren’t thinking about it and deciding it’s the right thing to do.
That’s almost certainly true; and I’m not trying to insinuate that AI is anywhere near true human-level intelligence yet. But it’s certainly got some surprisingly similar behaviors.
They’re not “smart enough to be tricked” lolololol. They’re too complicated to have precise guidelines. If something as simple and stupid as this can’t be prevented by the world’s leading experts idk. Maybe this whole idea was thrown together too quickly and it should be rebuilt from the ground up. we shouldn’t be trusting computer programs that handle sensitive stuff if experts are still only kinda guessing how it works.
Have you considered that one property of actual, real-life human intelligence is being “too complicated to have precise guidelines”?
And one property of actual, real-life human intelligence is “happenning in cells that operate in a wet environment” and yet it’s not logical to expect that a toilet bool with fresh poop (lots of fecal coliform cells) or a dropplet of swamp water (lots of amoeba cells) to be intelligent.
Same as we don’t expect the Sun to have life on its surface even though it, like the Earth, is “a body floating in space”.
Sharing a property with something else doesn’t make two things the same.
…I didn’t say that it does.
There is no logical reason for you to mention in this context that property of human intelligence if you do not meant to make a point that they’re related.
So there are only two logical readings for that statement of yours:
I chose to believe the latter, but if you tell me it’s the former, who am I to to doubt your own self-assessment…
No, you leapt directly from what I said, which was relevant on its own, to an absurdly stronger claim.
I didn’t say that humans and AI are the same. I think the original comment, that modern AI is “smart enough to be tricked”, is essentially true: not in the sense that humans are conscious of being “tricked”, but in a similar way to how humans can be misled or can misunderstand a rule they’re supposed to be following. That’s certainly a property of the complexity of system, and the comment below it, to which I originally responded, seemed to imply that being “too complicated to have precise guidelines” somehow demonstrates that AI are not “smart”. But of course “smart” entities, such as humans, share that exact property of being “too complicated to have precise guidelines”, which was my point!
Got it, makes sense.
Thanks for clarifying.
Absolutely fascinating point you make there!
Not even close to similar. We can create rules and a human can understand if they are breaking them or not, and decide if they want to or not. The LLMs are given rules but they can be tricked into not considering them. They aren’t thinking about it and deciding it’s the right thing to do.
Have you heard of social engineering and phishing? I consider those to be analogous to uploading new rules for ChatGPT, but since humans are still smarter, phishing and social engineering seems more advanced.
So I take it you are not a lawyer, nor any sort of compliance specialist?
That’s almost certainly true; and I’m not trying to insinuate that AI is anywhere near true human-level intelligence yet. But it’s certainly got some surprisingly similar behaviors.
They aren’t deciding anything. They are regurgitating. They do not have agency.