I don’t know if I just have really good eyes for a 38-year-old, but I can tell at first glance, within seconds, that this photo is AI generated. It’s all about the lack of humanity in the subject’s eyes.
Or, and this is just a long shot, maybe you viewed the photo knowing it was AI generated and then worked backwards to create your own internal justification as to why you’re uniquely gifted at detecting “humanity” in the eyes of webcam selfie photos.
There are still legitimate tells, but they’re not obvious if you’re not aware of the mistakes these models are prone to making. There are fewer mistakes left with each generation, though.
You could say the same about photos of the members of my government.
imagine if this person is real and reads your comment and just starts crying
Then we’ll see if that short-circuits everything.
If by lack of humanity you mean a single light source reflected in her left eye and a double source in her right, I agree.
That’s a bit harsh on Chad there.
I didn’t look that close at first but now I see the earrings blending into her face and the hoodie.
I would definitely be deceived by this picture. I would not be able to tell at first glance, but I see what’s meant about the lack of humanity around the eyes and ears.
It’s far more than her eyes: she is bilaterally asymmetrical. With real people you can generally take a reflection of one side and it will look fairly close to the other. This woman has so much asymmetry it is off-putting. Her eyes are different heights and shapes, her cheekbones are different, the outer parts of her nostrils are at different heights, the sides of her lips are shaped differently, her jawlines are different, and her suprasternal notch (the divot at the base of the neck) is WILDLY different. The easiest thing to spot is her different skin tones. At first, you’ll want to chalk it up to shading, but the light source isn’t to her side but in front and to the upper right, which doesn’t allow for such a radical change if you look at her forehead.
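The mirror-comparison trick described above is easy to try yourself. Here is a minimal sketch in Python, assuming a roughly centered, frontal face crop; the filename, the midline split, and the mean-absolute-difference metric are illustrative assumptions, not anything from the original comment or a rigorous forensic test.

```python
import numpy as np
from PIL import Image

def asymmetry_score(path: str) -> float:
    # Load as grayscale floats so the comparison isn't confused by color channels.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    w = img.shape[1]
    half = w // 2
    left = img[:, :half]           # left half of the crop
    right = img[:, w - half:]      # right half, same width as the left
    mirrored_left = left[:, ::-1]  # flip the left half horizontally
    # Mean absolute pixel difference between the mirrored left half and the
    # actual right half, normalized to the 0..1 range.
    return float(np.mean(np.abs(mirrored_left - right)) / 255.0)

# Higher scores mean more left/right disagreement. Real faces are never
# perfectly symmetric, so only unusually large values are interesting.
print(asymmetry_score("face_crop.png"))  # "face_crop.png" is a hypothetical file
```

This only works if the face is upright and centered in the crop; in practice you’d want to align on facial landmarks first, but the rough version is enough to see the kind of left/right mismatch being described.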
AI notes: make face and body images more symmetrical, but not 100%. Got it.
The only reason it hasn’t done that yet is that it’s not really AI, just a large probabilistic model shaped by training feedback, and so far that feedback has rewarded “close enough.” The next versions will cross the lines that still let us sense something isn’t quite right.