• Mikina@programming.dev (+164/-11) · 1 day ago

    Lol. We’re as far away from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

    If we ever get it, it won’t be through LLMs.

    I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

    • 7rokhym@lemmy.ca (+5) · 11 hours ago

      Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

      His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me, but for the people that disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

      All this AI of today is the AI of the 1980s, just with more transistors than we could fathom back then, but the ideas are the same. After the massive surge from our technology finally catching up with 40-60 year old concepts and algorithms, almost everything since has just been adding much more data, generalizing models, and other tweaks.

      What is a problem is the complete lack of scalability and the massive energy consumption. Are we supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air conditioning, and scorn Bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits about anything it doesn’t know, we need to build nuclear power plants everywhere? It’s sickening, really.

      So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth 1 billion or more, no matter what he has to say or do.

    • GamingChairModel@lemmy.world (+21/-2) · 17 hours ago

      I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

      They did! Here’s a paper that proves basically that:

      van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

      Basically, it formalizes a proof that the problem of producing any black-box algorithm that is trained on a finite universe of human outputs to prompts, and that can take in any finite input and produce an output that seems plausibly human-like, is NP-hard. And NP-hard problems of that scale are intractable: they can’t be solved using the resources available in the universe, even with perfect/idealized algorithms that haven’t yet been invented.
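
      As a rough illustration, here is a loose paraphrase of the kind of learning problem the paper analyzes (my own wording and symbols, not the authors’ exact formalization):

      ```latex
      % Loose paraphrase of the problem the paper formalizes (illustrative only;
      % see the paper for the precise definitions and the actual reduction).
      %
      % Given a finite sample of situation/behaviour pairs drawn from human behaviour,
      %   D = \{(s_1, b_1), \dots, (s_n, b_n)\},
      % produce an algorithm A such that, for new situations s,
      \[
        \Pr_{s \sim \mathcal{D}}\bigl[\, A(s) \text{ is judged human-like} \,\bigr] \;\ge\; 1 - \varepsilon .
      \]
      % The paper's result: producing such an A from finite data alone is NP-hard,
      % so at realistic scales it is intractable no matter how much hardware you use.
      ```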

      This isn’t a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.

    • rottingleaf@lemmy.world (+5) · edited · 17 hours ago

      I mean, human intelligence is ultimately “just” something too.

      And 10 years ago people would often refer to the “Turing test” and imitation games when discussing what is artificial intelligence and what is not.

      My complaint to what’s now called AI is that it’s as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck with its complexity. Or as a real-size toy building is similar to a real building.

      But I disagree that this technology will not be present in a real AGI if it’s achieved. I think that it will be.

        • BreadstickNinja@lemmy.world (+4) · 20 hours ago

          I remember that the keys for “good,” “gone,” and “home” were all the same, but I had the muscle memory to cycle through to the right one without even looking at the screen. Could type a text one-handed while driving without looking at the screen. Not possible on a smartphone!

    • suy@programming.dev (+10/-2) · 21 hours ago

      Lol. We’re as far away from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

      This is correct, and I don’t think many serious people disagree with it.

      If we ever get it, it won’t be through LLMs.

      Well… depends. LLMs alone, no; but the researchers working on solving the ARC-AGI challenge are using LLMs as a basis. The entry that won this year is open source (all entries are, if they want to be eligible for the prize, and they need to run on the private data set), and it was based on Mixtral. The “trick” is that they do more than that. All the attempts do extra compute at test time, so they can try to go beyond what their training data alone lets them do well. The key to generality is trying to learn after you’ve been trained, to try to solve something that you’ve not been prepared for.

      Even OpenAI’s o1 and o3 do that, and so does the model Google has released recently. They still rely heavily on an LLM, but they do more.

      I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

      I’m not sure if it’s already proven or provable, but I think this is generally agreed: deep learning alone will be able to fit a very complex curve/manifold/etc., but nothing more. It can’t go beyond what it was trained on. But the approaches aimed at generalizing all seem to do more than that: search, program synthesis, or whatever.
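
      To make the “extra compute at test time” idea concrete, here is a toy sketch of that search loop: propose candidate programs, keep only the ones consistent with the demonstration pairs, and apply a survivor to the test input. In real ARC entries the proposal step is an LLM sampling programs; the tiny hand-written candidate list below is invented purely for illustration.

      ```python
      # Toy sketch of test-time search for ARC-style tasks (illustrative only).
      # Real entries have an LLM propose candidate programs; here we enumerate a
      # tiny fixed space of grid transformations instead.
      from typing import Callable, List, Optional, Tuple

      Grid = List[List[int]]
      Program = Callable[[Grid], Grid]

      # Stand-in for the LLM proposal step: a handful of hand-written candidates.
      CANDIDATES: List[Program] = [
          lambda g: g,                               # identity
          lambda g: [row[::-1] for row in g],        # mirror horizontally
          lambda g: g[::-1],                         # mirror vertically
          lambda g: [list(col) for col in zip(*g)],  # transpose
      ]

      def solve(demos: List[Tuple[Grid, Grid]], test_input: Grid) -> Optional[Grid]:
          """Keep only programs that reproduce every demonstration pair,
          then apply the first survivor to the unseen test input."""
          survivors = [p for p in CANDIDATES if all(p(x) == y for x, y in demos)]
          return survivors[0](test_input) if survivors else None

      if __name__ == "__main__":
          demos = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]  # mirror horizontally
          print(solve(demos, [[5, 6], [7, 8]]))           # -> [[6, 5], [8, 7]]
      ```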

      • zerozaku@lemmy.world (+1) · 8 hours ago

        Gemini is really far behind. For me it’s ChatGPT > Llama >> Gemini. I haven’t tried Claude since they require a mobile number to use it.

    • bitjunkie@lemmy.world (+7/-1) · 23 hours ago

      I’m not sure that not bullshitting should be a strict criterion of AGI, if whether or not it’s been achieved is gauged by its capacity to mimic human thought.

      • finitebanjo@lemmy.world (+14/-2) · 22 hours ago

        The LLMs aren’t bullshitting. They can’t lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning at all.

        • 11111one11111@lemmy.world (+11/-2) · edited · 21 hours ago

          Just for the sake of playing a stoner-epiphany style of devil’s advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there. Isn’t there not a single thing in the universe that can’t be broken down into a mathematical equation for physics or chemistry? I’m curious how different the process is between a more advanced LLM or AGI model processing data and a severe-case savant memorizing libraries of books using their home-made mathematical algorithms. I know it’s a leap and I could be wrong, but I thought I’d heard that some of the Rain Man tier of savants actually process every experience in a mathematical language.

          Like I said in the beginning, this is straight-up bong-rips philosophy and I haven’t looked up any of the shit I brought up.

          I will say tho, I genuinely think the whole LLM shit is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that it has a niche it will be useful within. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they are not useful for, and we won’t see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can’t perform any more independently than a 3-year-old.

          • finitebanjo@lemmy.world (+4/-2) · edited · 20 hours ago

            First of all, I’m about to give the extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually using keywords like AI “emergent behavior” and “overfitting”. More specifically, about how emergent behavior doesn’t really exist in certain model archetypes, and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.

            Anyways, humans don’t assign numerical values to words and phrases for the purpose of making a statistical model of a response to a statistical model input.

            Humans suck at math.

            Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize the data altogether. The values are incredibly diverse and have many attributes to them. Humans do not hallucinate entire documentation or describe company policies that don’t exist to customers, because we understand the branching complexity and nuance of each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit such as entertainment, unlike an LLM, which meant that shit it said but just doesn’t know any better. Just doesn’t know, period.

            Maybe an LLM could approach that at some scale if each word had its own model with massively more data, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on earth. In fact, that aligns pretty well with OpenAI’s statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They’re probably underestimating the costs by magnitudes.)
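
            For what it’s worth, here is a toy illustration of what those diminishing returns look like if loss follows a power law in compute; the constants are invented for illustration and not fitted to any real model:

            ```python
            # Toy "scaling law": loss shrinks as compute grows, but ever more slowly.
            # The constants are invented for illustration, not fitted to any real model.

            def toy_loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
                """L(C) = a * C^(-b): a power-law curve with diminishing returns."""
                return a * compute ** (-b)

            for c in (1e3, 1e6, 1e9, 1e12):
                print(f"compute {c:.0e} -> loss {toy_loss(c):.2f}")
            # Each 1000x jump in compute buys a smaller absolute improvement than the last.
            ```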

            • 11111one11111@lemmy.world (+4/-1) · 20 hours ago

              So that doesn’t really address the concept I’m questioning. You’re leaning hard into the fact that the computer is using numbers in place of words, but I’m saying: why is that any different from assigning your native language to a book written in a foreign language? The vernacular, language, formula, or code that is being used to formulate a thought shouldn’t determine whether something was a legitimate thought.

              I think the gap between our reasoning is a perfect example of why I think FUTURE models might be different (wanna be real clear, this is an entirely hypothetical assumption that LLMs will continue improving).

              What I mean is, you can give 100 people the same problem and come out with 100 different cognitive pathways being used to come to a right or wrong solution.

              When I was learning to play the trumpet in middle school, and later learned the guitar and drums, I was told I did not play instruments like most musicians. Use that term super fuckin’ loosely, I am very bad lol, but the reason was that I do not have an ear for music. I can’t listen and tell you something is in tune or out of tune by hearing a song played, but I could tune the instrument just fine if an in-tune note is played for me to match. My instructor explained that I was someone who read music the way others read words, but instead of words I read the notes as numbers. Especially when I got older and learned the guitar. I knew how to read music at that point, but to this day I can’t learn a new song unless I read the guitar tabs, which are literal numbers on a guitar fretboard instead of an actual scale.

              I know I’m making huge leaps here and I’m not really trying to prove any point. I just feel strongly that at our most basic core, a human’s understanding of their existence is derived from “I think, therefore I am,” which in itself is nothing more than an electrochemical reaction between neurons that either release something or receive something. We are nothing more than a series of PLC commands on a CNC machine. No matter how advanced we are capable of being, we are nothing but a complex series of on and off switches that theoretically could be emulated into operating on an infinite string of commands spelled out by 1s and 0s.

              Im sorry, my brother prolly got me way too much weed for Xmas.

              • finitebanjo@lemmy.world (+1/-2) · edited · 20 hours ago

                98% and 98% are identical values, but the machine can use them to describe the accuracy of two separate words.

                It doesn’t have languages. It’s not emulating concepts. It’s emulating statistical averages.

                “pie” to us is a delicious dessert with a variety of possible fillings.

                “pie” to an LLM is 32%. “cake” is also 32%. An LLM might say “cake” when it should be “pie”, because it doesn’t know what either of those things are aside from their placement next to terms like flour, sugar, and butter.
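
                To put a toy number on that: all the model works with is a score per token and a softmax over those scores. The five-word vocabulary and the logits below are invented so the percentages land near that 32%; a real model has tens of thousands of tokens.

                ```python
                # Toy next-token distribution: to the model, "pie" and "cake" are just scores.
                # Vocabulary and logits are invented for illustration.
                import math

                logits = {"pie": 1.0, "cake": 1.0, "flour": 0.02, "sugar": 0.02, "butter": 0.02}

                z = sum(math.exp(v) for v in logits.values())
                probs = {tok: math.exp(v) / z for tok, v in logits.items()}

                for tok, p in probs.items():
                    print(f"{tok:>6}: {p:.0%}")   # pie and cake both come out around 32%
                # Nothing in these numbers encodes what a pie or a cake actually *is*, so a
                # sampler can pick either one next to "flour", "sugar", and "butter".
                ```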

                • 11111one11111@lemmy.world (+2/-1) · 19 hours ago

                  So by your logic, a child locked in a room with no understanding of language is not capable of thought? All of your reasoning for why computers aren’t generating thoughts comes from actual psychological case studies taught in the abnormal psychology course I took in high school back in 2005. You don’t even have to go that far into the abnormal portion of it either. I’ve never sat in on my buddy’s daughter’s “classes,” but she is 4 years old now and on the autism spectrum. She is doing wonderfully since she started with the special-ed preschool program she’s in, but at 4 years old she still cannot speak and she is still in diapers. I’m not saying this to say she’s really bad or far along the spectrum; I’m using this example because it’s exactly what you are outlining. She isn’t a dumb kid by any means. She’s 100x more athletic and coordinated than any other kid I’ve seen her age. What my buddy was told, and once he told me I noticed it immediately, is that autistic babies often don’t have the ability to mimic what other humans around them are doing. I’m talking not even the littlest thing, like learning how to smile or laugh by seeing a parent smiling at them. It was so tough on my dude, watching him work like it meant life or death trying to get his daughter to wave back when she was a baby, cuz it was the first test they told him they would do to try and diagnose why his daughter wasn’t developing like other kids.

                  Fuck, my bad, I went full tailspin tangent there. But what I mean to say is: who are we to determine what defines a generated independent thought, when the industry of doctors, educators and philosophers hasn’t done all that much to understand our own cognizant existence past “I think, therefore I am”?

                  People like my buddy’s daughter could go their entire life as a burden of the state, incapable of caring for themselves, and some will never learn to talk well enough to give any insight into the thoughts being processed behind their curtains. So why is the argument always pointing toward the need for language to prove thought and existence?

                  Like I said in my other comment, I’m not trying to prove or argue any specific point. This shit is just wildly interesting to me. I worked for years in a low-income nursing home that catered to residents who were considered burdens of the state after NY closed the doors on psychiatric institutions everywhere, which pushed anyone under 45 y/o to the streets and anyone over 45 into nursing homes. So there were so many, excuse the crass term but it’s what they were, brain-dead former drug addicts or brain-dead Alzheimer’s residents. All of whom spent the last decades of their life mumbling, incoherent, and staring off into space with no one home. Were they still humans capable of generative intelligence cuz every 12 days they’d reach a hand up and scratch their nose?

                  • finitebanjo@lemmy.world (+1) · edited · 19 hours ago

                    IDK what you dudes aren’t understanding, tbh. To the LLM every word is a fungible statistic. To the human every word is unique. It’s not a child; its hardware and programming are worlds apart.

          • lad@programming.dev (+2/-1) · 20 hours ago

            I’d say the difference between nature boiling down to maths and LLMs boiling down to maths is that in LLMs it’s not the knowledge itself that is abstracted, it’s language. This makes it both more believable to us humans, because we’re wired to use language, and less suitable for actually achieving something, because it’s just language all the way down.

            Would be nice if it gets us something in the long run, but I wouldn’t get my hopes up.

            • 11111one11111@lemmy.world (+4) · 20 hours ago

              I’m super stoked now to follow this, and to also follow the progress being made mapping the neurological pathways of the human brain. I wanna say I saw an article on Lemmy recently where they mapped the entire network of neurons in either an insect or a mouse, I can’t remember. So I’m gonna assume like 3-5 years until we can map out human brains and know exactly what is firing off which brain cells as someone is doing puzzles in real time.

              I think it would be so crazy cool if we get to a point where the understanding of our cognitive processes is so detailed that scientists are left with nothing but faith as their only way of defining the difference between a computer processing information and a person. Obviously the subsequent dark ages that follow will suck after all people of science snap and revert into becoming idiot priests. But that’s a risk I’m willing to take. 🤣🤣🍻

              • lad@programming.dev (+1) · 18 hours ago

                Maybe a rat brain project? I think the mapping of a human brain may take longer, but yeah, once it happens interesting times are on the horizon.

    • daniskarma@lemmy.dbzer0.com (+21/-65) · edited · 1 day ago

      What is your brain doing if not statistical text prediction?

      The show Westworld portrayed it pretty well. The idea of jumping from text prediction to consciousness doesn’t seem that unlikely. It’s basically text prediction on a loop, with some exterior inputs to interact with.
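
      For what that loop could even look like, here is a bare-bones sketch: a predictor whose own output gets appended to its context together with outside input. The predict_next function is a placeholder standing in for any language model, not a real API.

      ```python
      # Bare-bones sketch of "text prediction on a loop with some exterior inputs".
      # predict_next is a placeholder for any language model; it just echoes a
      # canned continuation so the loop structure itself is runnable.

      def predict_next(context: str) -> str:
          """Stand-in for an LLM call: return a continuation given the context."""
          return f"(next thought, given: ...{context[-40:]})"

      def run_loop(steps: int) -> None:
          context = ""
          for step in range(steps):
              exterior = input(f"[{step}] outside input (may be blank): ")
              context += " " + exterior        # exterior input enters the loop
              thought = predict_next(context)  # prediction step
              context += " " + thought         # the output is fed back in as input
              print(thought)

      if __name__ == "__main__":
          run_loop(3)
      ```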

        • daniskarma@lemmy.dbzer0.com (+15/-17) · edited · 1 day ago

          Why be so rude?

          Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion, to use as a weapon against a person you don’t even know?

          I will actually read it. Probably the only one of the two of us who will.

          If it’s convincing I may change my mind. I’m not a radical, like many other people are, and my opinions are subject to change.

          • Ageroth@reddthat.com (+20/-6) · edited · 24 hours ago

            Funny to me how defensive you got so quickly, accusing others of not reading the linked paper before even reading it yourself.

            The reason OP was so rude is that your very premise of “what is the brain doing if not statistical text prediction” is completely wrong, and you don’t even consider that it could be. You cite a TV show as a source of how it might be. Your concept of what artificial intelligence is comes from media and not science, and is not founded in reality.

            The brain uses words to describe thoughts, the words are not actually the thoughts themselves.

            https://advances.massgeneral.org/neuro/journal.aspx?id=1096

            Think about small children who haven’t learned language yet: do those brains still do “statistical text prediction” despite not having words to predict?

            What about dogs and cats and other “less intelligent” creatures? They don’t use any words, but we can still teach them to understand ideas. You don’t need to utter a single word, not even a sound, to train a dog to sit. Are they doing “statistical text prediction”?

            • daniskarma@lemmy.dbzer0.com (+5/-16) · edited · 23 hours ago

              Read the other replies I gave on this same subject. I don’t want to repeat myself.

              But words DO define thoughts, and I gave several examples, some of them with kids. Precisely in kids you can see how language precedes actual thoughts. I will repeat myself a little here, but you can clearly see how kids repeat a lot of phrases that they just don’t understand, simply because their beautifully plastic brains heard the same phrase in the same context.

              Dogs and cats are not proven to be conscious the way a human being is, precisely due to the lack of an articulate language. Or maybe not just language, but articulated thoughts. I think there may be a trend to humanize animals, mostly to give them more rights (even though I think a dog doesn’t need to have an intelligent consciousness for it to be bad to hit a dog), but I’m highly doubtful that dogs could develop a chain of thoughts that affects itself without external inputs, and that seems a pretty important part of the consciousness experience.

              The article you link is highly irrelevant (did you read it? Because I am also accusing you of not reading it, of it just being the result of a quick google to try to prove your point with an appeal to authority). The fact that spoken words are created by the brain (duh! Obviously. I don’t even know why how the brain creates an articulated spoken word is even relevant here) does not imply that the brain does not also take form due to the words that it learns.

              To give an easier-to-understand example: for a classical printing press to print books, the words of those books needed to be loaded into the press beforehand, and the press will only be able to print the letters that have been loaded into it.

              The other user I replied to not only had read the article but kindly summarized it for me. I will still read it. But its arguments on the impossibility of current LLM architectures creating consciousness are actually pretty good, and they have actually put me on the way to being convinced of that, at least by the limitations the article describes.

              • Ageroth@reddthat.com (+12/-3) · 23 hours ago

                Your analogy to mechanical systems is exactly where the comparison with the human brain breaks down. Our brains are not like that; we don’t only have blocks of text loaded into us. Sure, we only learn what we get exposed to, but that doesn’t mean we can’t think of things we haven’t learned about.
                The article I linked talks about the separation between the formation of thoughts and those thoughts being translated into words for linguistics.

                The fact that you “don’t even know why how the brain creates an articulated spoken word is even relevant here” speaks volumes about how much you understand the human brain, particularly in the context of artificial intelligence actually understanding the words it generates and the thoughts implied behind the words, rather than just guessing which word comes next based on other words whose meanings are irrelevant to it.

                I can listen to a song long enough to learn the words, that doesn’t mean I know what the song is about.

                • daniskarma@lemmy.dbzer0.com (+4/-4) · 22 hours ago

                  but that doesn’t mean we can’t think of things we haven’t learned about.

                  Can you think of a colour you have never seen? Could you imagine the colour green if you had never seen it?

                  The creative process is more modification than creation: taking some inputs, mixing them with other inputs, and producing an output that has parts of all those inputs. Does it sound familiar? But without those inputs it seems impossible to create an output.

                  And thus the importance of language in an actually intelligent consciousness. Without language the brain could only do direct modifications of the natural, external inputs. But with language the brain can take an external input, transform it into a “language output”, immediately take that “language output” back in as an input, process it, and go on. I think that’s the core concept that makes humans different from any other species: this middle thing that we can use to dialogue with ourselves and push our minds further. Not every human may have a constant inner monologue, but every human is capable of talking to themselves, and will probably do so when making a decision. Without language (language could take many forms, not just spoken language, but the more complex it is, the better it seems to work) I don’t know how this self-influencing process could take place.

          • barsoap@lemm.ee (+12/-1) · 24 hours ago

            It’s a basic argument of generative complexity. I found the article some years ago while trying to find an earlier one (I don’t think by the same author) that argued along the same complexity lines, essentially saying that if we worked like AI folks think we do, we’d need so-and-so many trillion parameters and our brains would be the size of planets. That article talked about the need for context switching in generating (we don’t have access to our cooking skills while playing sportsball); this article talks about the necessity of being able to learn how to learn. Not just at the “adjust learning rate” level, but mechanisms that change the resulting coding, thereby creating different such contexts, or at least that’s where I see the connection between the two. In essence: to get to AGI we need AIs which can develop their own topology.

            As to “rudeness”: make sure to never visit the Netherlands. Usually how this goes is that I link the article and the AI faithful I pointed it out to go on a denial spree… because if they a) are actually into the topic, not just bystanders, and b) did not have some psychological need to believe (including “my retirement savings are in AI stock”), they c) would’ve come across the general argument themselves during their technological research. Or come up with it themselves; I’ve also seen examples of that: if you have a good intuition about complexity (and many programmers do), it’s not an unlikely shower thought to have. Not as fleshed out as in the article, of course.

            • daniskarma@lemmy.dbzer0.com (+3/-7) · edited · 24 hours ago

              That seems a very reasonable argument for the impossibility of achieving AGI with current models…

              The first concept I was already kind of thinking about. Current LLMs are incredibly inefficient, and there seems to be some theoretical barrier in efficiency that no model has been able to surpass, giving the same answer: with the current models they would probably need trillions of parameters just to stop hallucinating. Not to mention giving them the ability to do more things than just answering questions. A supposed AGI, even if it only worked with words, would need to be able to handle more “types of conversations” than just being the answerer in a question-answer dialog.

              But I had not thought of the need to repurpose the same area of the brain (biological or artificial) for doing different tasks on the go, if I have understood correctly. And it seems pretty clear that current models are unable to do that.

              Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.

              Getting a little poetical. I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.

              • barsoap@lemm.ee (+5/-1) · 23 hours ago

                Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.

                Does a dog have the Buddha nature?

                …meaning to say: Just because you happen to have the habit of identifying your consciousness with language (that’s TBH where the “stuck in your head” thing came from) doesn’t mean that language is necessary, or even a component of, consciousness, instead of merely an object of consciousness. And neither is consciousness necessary to do many things, e.g. I’m perfectly able to stop at a pedestrian light while lost in thought.

                I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.

                What Descartes actually was getting at is “I can’t doubt that I doubt, therefore, at least my doubt exists”. He had a bit of an existential crisis. Unsolicited Advice has a video about it.

                • daniskarma@lemmy.dbzer0.com (+2/-5) · 23 hours ago

                  It may be because of the habit.

                  But when I think of how to define consciousness and distinguish it from instinct or reactiveness (like stopping at a red light), I think that something that makes a consciousness a consciousness must be that it is able to modify itself without external influence.

                  A dog may be able to fully react to and learn how to react to the exterior. But can it modify itself the way the human brain can?

                  A human being can sit alone in a room and start processing information by itself in a loop, completely changing that flux of information into something different, even changing the brain in the process.

                  For this to happen I think some form of language, some form of “talking to yourself”, is needed. Some way for the brain to generate an output that can immediately be taken as input.

                  At this point, of course, this is far more philosophical than technical. And maybe even semantics of “what is consciousness”.

                  • lad@programming.dev (+1) · 20 hours ago

                    A dog may be able to fully react to and learn how to react to the exterior. But can it modify itself the way the human brain can?

                    As per current psychology’s view, yes, even if to a smaller extent. There are problems with how we define consciousness, and right now with LLMs most of the arguments usually relate to the Chinese room and philosophical zombie thought experiments, imo.

      • aesthelete@lemmy.world (+13/-4) · 22 hours ago

        What is your brain doing if not statistical text prediction?

        Um, something wrong with your brain buddy? Because that’s definitely not at all how mine works.

        • daniskarma@lemmy.dbzer0.com (+10/-12) · edited · 22 hours ago

          Then why did you just express yourself in such a statistically predictable manner?

          You saw other people using that kind of language while being derogatory to someone they don’t like on the internet. You saw yourself in the same context, and your brain statistically chose to use the set of words that has been seen the most in this particular context. ChatGPT could literally have given me your exact same answer if it had been trained in your same echo chamber.

          Have you ever debated with someone from the polar opposite end of the political spectrum and complained that “they just repeat the same propaganda”? Doesn’t that sound like statistical prediction to you? Those are very simple cases, and there can be more complex ones, but our simplest ways are the ones that reveal the basics of what we are made of.

          If you had at least given me a more complex expression you might have had an argument (as humans, our process can be far more complex and hide a little of what we actually seem to be doing). But in instances like this one, when one person (you) responds with such an obvious statistical prediction of what needs to be said in a particular context, you just made my case. Thanks.

          • mynameisigglepiggle@lemmy.world (+2) · 21 hours ago

            But people who agree with my political ideology are considerate and intelligent. People who disagree with me are stupider than ChatGPT 3.5, just say the same shit, and can’t be reasoned with.

      • SlopppyEngineer@lemmy.world (+15/-3) · 1 day ago

        Human brains also do processing of audio and video, self-learning, feelings, and many more things that are definitely not statistical text prediction. There are even people without an “inner monologue” who function just fine.

        Some research does use LLMs in combination with other AI to get better results overall, but a pure LLM isn’t going to work.

        • daniskarma@lemmy.dbzer0.com (+4/-16) · edited · 1 day ago

          Yep, of course. We do more things.

          But language is a big thing in the human intelligence and consciousness.

          I don’t know, and I would assume no one really knows. But as for people without an internal monologue, I have a feeling they have it but are not aware of it. Or maybe they talk so much that all the monologue is external.

          • pufferfischerpulver@feddit.org (+14/-1) · 1 day ago

            Interesting that you focus on language, because that’s exactly what LLMs cannot understand. There’s no LLM that actually has a concept of the meaning of words. Here’s an excellent essay illustrating my point.

            The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.

            The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but “cannot in principle lead to the learning of meaning” (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.

            • barsoap@lemm.ee (+1) · 15 hours ago

              One of the most successful applications of LLMs might actually be quite enlightening in that respect: language translation. B2 level seems to be little issue for LLMs, large cracks can be seen at C1, and forget everything about C2: things that require cultural context. Another area where they break down is spotting the need to reformulate, which is actually a B-level skill. Source: open a random page on deepl.com that’s not in English.

              Like, this:

              Durch weniger Zeitaufwand beim Übersetzen und Lektorieren können Wissensarbeitende ihre Produktivität steigern, sodass sich Teams besser auf andere wichtige Aufgaben konzentrieren können.

              (Roughly: “By spending less time on translating and proofreading, knowledge workers can increase their productivity, so that teams can focus better on other important tasks.”)

              “Because less time required” cannot be a cause in idiomatic German; you’d say “by translating faster”. “Knowledge workers”… why are we doing job descriptions, and an abstract category on top of that? Someone is a translator when they translate things, not when that’s their job description. How about plain and simple “employees” or “workers”? Then, “knowledge workers can increase their productivity”? That’s an S-tier Americanism; why should knowledge workers care? Why bring people into it in the first place? In German, thought work becoming easier is the sales pitch, not how much more employees can self-identify as a well-lubricated cog. “So that teams can better focus on other important tasks”? Why only teams? Do the improvements not apply if you’re working on your own? What the fuck have teams to do with anything you’re saying, American PR guy who wrote this?

              …I’ll believe that deepl understands stuff once I can’t tell, at a fucking glance, that the original was written in English, in particular, US English.

            • daniskarma@lemmy.dbzer0.com (+5/-7) · edited · 24 hours ago

              But these “concepts” of things are built on the relation and iteration of those concepts with our brain.

              A baby isn’t born knowing that a table is a table. But they see a table, their parents say the word “table”, and they end up imprinting that what they have to say when they see that thing is the word “table”, which they can then relate to other things they know. I’ve watched some kids grow and learn how to talk lately, and it’s pretty evident how repetition precedes understanding. Many kids will just repeat words that their parents said in a certain situation when they happen to be in the same situation. It’s pretty obvious with small kids. But it’s a behavior you can also see a lot in adults: just repeating something they heard once they see that those particular words fit the context.

              Also, it’s interesting that language can actually influence the way concepts are constructed in the brain. For instance, the ancient Greeks saw blue and green as the same colour, because they only had one word for both colours.

              • pufferfischerpulver@feddit.org (+2/-1) · 21 hours ago

                I’m not sure if you’re disagreeing with the essay or not? But in any case, what you’re describing is in the same vein: simply repeating a word without knowing what it actually means in context is exactly what LLMs do. They can get pretty good at getting it right most of the time, but without actually being able to learn the concept and context of “table” they will never be able to use it correctly 100% of the time. Or, even more importantly for AGI, apply reason and critical thinking. Much like a child repeating a word without much clue what it actually means.

                Just for fun, this is what Gemini has to say:

                Here’s a breakdown of why this “parrot-like” behavior hinders true AI:

                • Lack of Conceptual Grounding: LLMs excel at statistical associations. They learn to predict the next word in a sequence based on massive amounts of text data. However, this doesn’t translate to understanding the underlying meaning or implications of those words.
                • Limited Generalization: A child learning “table” can apply that knowledge to various scenarios – a dining table, a coffee table, a work table. LLMs struggle to generalize, often getting tripped up by subtle shifts in context or nuanced language.
                • Inability for Reasoning and Critical Thinking: True intelligence involves not just recognizing patterns but also applying logic, identifying cause and effect, and drawing inferences. LLMs, while impressive in their own right, fall short in these areas.
                • daniskarma@lemmy.dbzer0.com (+2/-2) · 21 hours ago

                  I mostly agree with it. What I’m saying is that the understanding of the words comes from the self-dialogue made of those same words. How many times does a baby have to repeat the word “mom” before they understand what a mother is? I think that without that previous repetition the more complex “understanding” is impossible. Human understanding of concepts, especially the more complex concepts that make us human, comes from our being able to have a dialogue with ourselves and with other humans. But this dialogue starts as a parrot; non-intelligent animals with brains that are very similar to ours are parrots. Small children are parrots (as are even some adults). But it seems that after being a parrot for some time, the ability to become a human emerges. That parrot is needed, and it also keeps itself in our consciousness. If you don’t put a lot of effort into your thoughts and speech, you’ll see that the parrot is there, that you just express the most appropriate answer for that situation given what you know.

                  The “understanding” of concepts seems like just a big, complex interconnection of neural-network-like outputs of different things (words, images, smells, sounds…). But language keeps feeling like the most important of those things for intelligent consciousness.

                  I have yet to read the other article another user posted that explains why the jump from parrot to human is impossible in current AI architectures. At a glance it seems valid. But that does not invalidate the idea of parrots being the genesis of humans, just that a different architecture is needed, and not in the statistical-answer department; the article I was linked was more about the size and topology of the “brain”.

                  • richmondez@lemdro.id (+1) · 14 hours ago

                    A baby doesn’t learn concepts by repeating words over and over, and certainly knows what a mother is before it has any label or language to articulate the concept. The label gets associated with the concept later, and not purely by parroting; indeed, excessive parroting normally indicates speech development issues.

          • Knock_Knock_Lemmy_In@lemmy.world (+9/-4) · 1 day ago

            language is a big thing in the human intelligence and consciousness.

            But an LLM isn’t actually language. It’s numbers that represent tokens that build words. It doesn’t have the concept of a table, just the numerical weighting of other tokens related to “tab” and “le”.
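
            As a toy illustration of that point (the vocabulary, IDs, and greedy matching below are invented; real subword tokenizers such as BPE differ in the details):

            ```python
            # Toy illustration: the model never sees the word "table", only integer token
            # IDs (and, downstream, their learned vectors). Vocabulary is invented.

            VOCAB = {"tab": 101, "le": 102, "the": 7, " ": 3}

            def toy_tokenize(text: str, vocab: dict) -> list:
                """Greedy longest-match split of the text into known sub-pieces."""
                ids, i = [], 0
                pieces = sorted(vocab, key=len, reverse=True)
                while i < len(text):
                    for piece in pieces:
                        if text.startswith(piece, i):
                            ids.append(vocab[piece])
                            i += len(piece)
                            break
                    else:
                        i += 1  # skip characters the toy vocabulary cannot cover
                return ids

            print(toy_tokenize("the table", VOCAB))  # -> [7, 3, 101, 102]
            ```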

            • daniskarma@lemmy.dbzer0.com (+4/-9) · edited · 1 day ago

              I don’t know how to tell you this. But your brain does not have words imprinted in it…

              This concept, funnily enough, is something being studied that derives from language. For instance, the ancient Greeks did not distinguish between green and blue, as both colours had the same word.

              • Knock_Knock_Lemmy_In@lemmy.world (+1) · 18 hours ago

                You said

                your brain does not have words imprinted in it…

                You also said

                language is a big thing in the human intelligence and consciousness.

                You need to pick an argument and stick to it.

                • daniskarma@lemmy.dbzer0.com (+1/-1) · edited · 18 hours ago

                  What do you not understand?

                  Words are not imprinted; they are a series of electrical impulses that we learn over time. That was a reference to the complaint that an LLM does not have words, just tokens that represent values within the network.

                  And those impulses, and how we generate them while we think, are of great importance to our consciousness.

        • daniskarma@lemmy.dbzer0.com (+1/-1) · edited · 19 hours ago

          Church?

          Free will vs determinism doesn’t have to do with religion.

          I do think that the universe is deterministic and that humans (or any other beings) do not have free will per se, in the sense that given the same state of the universe at some point, the next states are determined, and if it were to be repeated, the evolution of the state of the universe would be the same.

          Nothing to do with religion. Just with things not happening out of nothing: every action is a consequence of another action, and that includes all our brain impulses. I don’t think there are “souls” outside the state of the matter that could make decisions by themselves without being determined.

          But this is mostly a philosophical question of what “free will” means. Is it free will as long as you don’t know that the decision was already made from the very beginning?