some_guy@lemmy.sdf.org to Technology@lemmy.world · 4 months ago
Major shifts at OpenAI spark skepticism about impending AGI timelines (arstechnica.com)
MentalEdge · 4 months ago
Because how could a piece of code that can do that not already be AGI? It would have to be able to understand EVERYTHING, and do so PERFECTLY.
Only AGI could comprehend and filter input data that well. Nothing less would be enough. How could it be?
Petter1@lemm.ee · 4 months ago
No, it just needs to categorise input into important / probably true and not important / probably nonsense, as a first step.
Here are Johnny Harris's words describing what I am talking about (he describes it in order to be able to talk about lies better):
https://youtu.be/yWgG3Mgn2Gc?si=bPcYhRAZNaY2qIJS
MentalEdge · 4 months ago (edited)
Right…
As if critical thinking is super easy, basic stuff that humans get right every time without even trying. You actually think getting a computer to do it would be easier than making the AGI?
You are VERY confused about how thinking works.
Petter1@lemm.ee · 3 months ago
You don't need AGI to categorise new info as probably true / probably wrong based on your base knowledge. This is a simple machine learning task.
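For context, the kind of "simple machine learning task" being described here is an ordinary binary text classifier. A minimal sketch, assuming scikit-learn is available; the training statements and labels below are invented purely for illustration, and a real system would need a large, carefully labelled corpus:

```python
# Minimal sketch of a binary "probably true / probably nonsense" text
# classifier. All example data and labels are hypothetical illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: statements labelled 1 (probably true)
# or 0 (probably nonsense).
statements = [
    "Water boils at 100 degrees Celsius at sea level.",
    "The moon is made of green cheese.",
    "Paris is the capital of France.",
    "Vaccines contain mind-control microchips.",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression: a standard, simple
# text-classification pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(statements, labels)

# Score a new statement; the output is a probability, not a verdict.
new_statement = ["The Earth orbits the Sun."]
print(model.predict_proba(new_statement))  # [[P(nonsense), P(probably true)]]
```

Whether such a classifier could ever generalise beyond its training labels well enough to do the job is, of course, exactly what the rest of the thread disputes.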
No it isn’t.
Petter1@lemm.ee · 3 months ago
OK