• Flying Squid@lemmy.world · 2 hours ago

    “It’s at a human-level equivalent of intelligence when it makes enough profits” is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

  • Free_Opinions@feddit.uk · 7 hours ago

    We’ve had a definition of AGI for decades: a system that can do any cognitive task as well as a human, or better. Humans are “generally intelligent”; replicate the same thing artificially and you’ve got AGI.

    • ipkpjersi@lemmy.ml · 2 hours ago

      That’s kind of too broad, though. It’s too generic a description.

      • Entropywins@lemmy.world · 1 hour ago

        The key word here is “general”, friend. We can’t define “general” any more narrowly, or it would no longer be general.

    • zeca@lemmy.eco.br · 5 hours ago

      It’s a definition, but not an effective one in the sense that we can test for and recognize it. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead work out the basic cognitive abilities of humans from which all our other cognitive abilities are composed, if that’s even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also the limits of human cognition.
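
      To make the Turing-machine comparison concrete, here is a minimal sketch (my own illustration, with an assumed toy rule set): a Turing machine really is just a finite rule table plus an unbounded tape, and in the classical sense everything computable reduces to machines like this one, which adds 1 to a binary number.

          def run_turing_machine(tape, rules, state="start"):
              cells = dict(enumerate(tape))       # position -> symbol; blank cells are implicit
              pos = 0
              while state != "halt":
                  symbol = cells.get(pos, "_")
                  state, write, move = rules[(state, symbol)]
                  cells[pos] = write
                  pos += 1 if move == "R" else -1
              return "".join(cells[i] for i in sorted(cells))

          # Finite rule table: walk to the end of the number, then add 1 with carry.
          rules = {
              ("start", "0"): ("start", "0", "R"),
              ("start", "1"): ("start", "1", "R"),
              ("start", "_"): ("carry", "_", "L"),
              ("carry", "1"): ("carry", "0", "L"),
              ("carry", "0"): ("halt", "1", "R"),
              ("carry", "_"): ("halt", "1", "R"),
          }
          print(run_turing_machine("1011", rules))  # 1011 + 1 -> "1100" (plus a trailing blank)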

      • sugar_in_your_tea@sh.itjust.works · 2 hours ago

        I wonder if we’ll get something like NP-Complete for AGI, as in a set of problems that humans can solve, and that common problems can be simplified down/converted to.

    • LifeInMultipleChoice@lemmy.ml · 6 hours ago

      So if you give a human and a system 10 tasks and the human completes 3 correctly, 5 incorrectly and 2 it fails to complete altogether… and then you give those 10 tasks to the software and it does 9 correctly and 1 it fails to complete, what does that mean? In general I’d say the tasks need to be defined; I can name very many tasks right now that language models can solve and people can’t, but language models still aren’t “AGI” in my opinion.

      • hendrik@palaver.p3x.de · 6 hours ago

        Agree. And these tasks can’t be tailored to the AI in order for it to have a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate computer-code isn’t enough in my eyes. Especially since it even struggles to do that. It’s the “general” that is missing.

  • Mikina@programming.dev · 12 hours ago

    Lol. We’re as far away from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.
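
    To be clear about what “statistical text prediction” means in its most stripped-down form, here’s a rough sketch of my own (a massive simplification: real LLMs use billions of learned weights rather than a count table, but the training objective is still “predict the next token”):

        from collections import Counter, defaultdict

        def train_bigram(corpus):
            # Count which token follows which token in the training text.
            counts = defaultdict(Counter)
            for sentence in corpus:
                tokens = sentence.split()
                for prev, nxt in zip(tokens, tokens[1:]):
                    counts[prev][nxt] += 1
            return counts

        def continue_text(counts, prompt, length=5):
            tokens = prompt.split()
            for _ in range(length):
                options = counts.get(tokens[-1])
                if not options:
                    break
                tokens.append(options.most_common(1)[0][0])  # always pick the likeliest next token
            return " ".join(tokens)

        model = train_bigram(["the cat sat on the mat", "the cat ate the fish"])
        print(continue_text(model, "the"))  # fluent-looking output, zero understanding of cats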

    If we ever get it, it won’t be through LLMs.

    I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

    • GamingChairModel@lemmy.world · 2 hours ago

      I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

      They did! Here’s a paper that proves basically that:

      van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

      Basically, it formalizes the proof that producing any black-box algorithm which is trained on a finite universe of human outputs to prompts, and which can take any finite input and produce an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable: they can’t be solved using the resources available in the universe, even with perfect/idealized algorithms that haven’t yet been invented.
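
      (For a loose feel of the scale involved, which is my own back-of-the-envelope illustration and not the paper’s actual argument: the space of prompts such a system would have to behave “human-like” on grows exponentially, so brute-force approximation from finite training data is hopeless. The vocabulary size below is just an assumed round number.)

          vocab_size = 50_000                       # assumed, roughly LLM-sized vocabulary
          for prompt_length in (1, 2, 4, 8, 16):
              print(prompt_length, f"{vocab_size ** prompt_length:.2e} possible prompts")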

      This isn’t a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.

    • rottingleaf@lemmy.world · 2 hours ago

      I mean, human intelligence is ultimately also “just” something.

      And 10 years ago people would often refer to the “Turing test” and imitation games when deciding what is artificial intelligence and what is not.

      My complaint to what’s now called AI is that it’s as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck with its complexity. Or as a real-size toy building is similar to a real building.

      But I disagree that this technology will not be present in a real AGI if it’s achieved. I think that it will be.

    • suy@programming.dev · 6 hours ago

      Lol. We’re as far away from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

      This is correct, and I don’t think many serious people disagree with it.

      If we ever get it, it won’t be through LLMs.

      Well… depends. LLMs alone, no, but the researchers working on the ARC AGI challenge are using LLMs as a basis. The one that won this year is open source (all entries must be to be eligible for the prize, and they need to run on the private data set) and was based on Mixtral. The “trick” is that they do more than that. All the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows them to do. The key to generality is trying to learn after you’ve been trained, to try to solve something that you’ve not been prepared for.

      Even OpenAI’s o1 and o3 do that, and so does the one that Google has released recently. They still rely heavily on an LLM, but they do more.

      I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

      I’m not sure if it’s already proven or provable, but I think this is generally agreed: deep learning alone will be able to fit a very complex curve/manifold/etc., but nothing more. It can’t go beyond what it was trained on. But the approaches for generalizing all seem to do more than that, doing search, or program synthesis, or whatever.

        • BreadstickNinja@lemmy.world · 5 hours ago

          I remember that the keys for “good,” “gone,” and “home” were all the same, but I had the muscle memory to cycle through to the right one without even looking at the screen. Could type a text one-handed while driving without looking at the screen. Not possible on a smartphone!

    • bitjunkie@lemmy.world · 8 hours ago

      I’m not sure that not bullshitting should be a strict criterion of AGI if whether or not it’s been achieved is gauged by its capacity to mimic human thought

      • finitebanjo@lemmy.world · 7 hours ago

        The LLMs aren’t bullshitting. They can’t lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning.

        • 11111one11111@lemmy.world · 6 hours ago

          Just for the sake of playing a stoner-epiphany style of devil’s advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there: isn’t there not a single thing in the universe that can’t be broken down into a mathematical equation for physics or chemistry? I’m curious how different the process of a more advanced LLM or AGI model processing data is compared to a severe-case savant memorizing libraries of books using their homemade mathematical algorithms. I know it’s a leap and I could be wrong, but I think I’ve heard that some of the rainmaker tier of savants actually process every experience in a mathematical language.

          Like I said in the beginning, this is straight-up bong-rips philosophy and I haven’t looked up any of the shit I brought up.

          I will say, though, I genuinely think the whole LLM shit is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that there’s a niche it will be confined to being useful in. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they are not useful for, and we won’t see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can’t perform any more independently than a 3-year-old.

          • finitebanjo@lemmy.world · 6 hours ago

            First of all, I’m about to give an extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually using keywords like AI “emergent behavior” and “overfitting”. More specifically, about how emergent behavior doesn’t really exist in certain model archetypes and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.

            Anyways, humans don’t assign numerical values to words and phrases for the purpose of making a statistical model of a response to a statistical model input.

            Humans suck at math.

            Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize it altogether. The values are incredibly diverse and have many attributes to them. Humans do not hallucinate entire documentation or describe company policies that don’t exist to customers, because we understand the branching complexity and nuance of each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit such as entertainment, unlike an LLM, which meant that shit it said but just doesn’t know any better. Just doesn’t know, period.

            Maybe an LLM could approach that at some scale if each word had its own model with massively more data, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on earth. In fact, that aligns pretty well with OpenAI’s statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They’re probably underestimating the costs by orders of magnitude.)

            • 11111one11111@lemmy.world · 5 hours ago

              So that doesn’t really address the concept I’m questioning. You’re leaning hard into the fact that the computer is using numbers in place of words, but I’m asking why that is any different from assigning your native language to a book written in a foreign language. The vernacular, language, formula or code that is being used to formulate a thought shouldn’t determine whether something was a legitimate thought.

              I think the gap between our reasoning is a perfect example of why I think FUTURE models… (wanna be real clear, this is an entirely hypothetical assumption that LLMs will continue improving).

              What I mean is, you can give 100 people the same problem and come out with 100 different cognitive pathways being used to come to a right or wrong solution.

              When I was learning to play the trumpet in middle school, and later when I learned the guitar and drums, I was told I did not play instruments like most musicians. Use that term super fuckin’ loosely, I am very bad lol, but the reason was that I do not have an ear for music: I can’t listen and tell you something is in tune or out of tune by hearing a song played, but I could tune the instrument just fine if an in-tune note is played for me to match. My instructor explained that I was someone who read music the way others read words, except instead of words I read the notes as numbers. Especially when I got older and learned the guitar. I knew how to read music at that point, but to this day I can’t learn a new song unless I read the guitar tabs, which are literal numbers on a guitar fretboard instead of an actual scale.

              I know I’m making huge leaps here and I’m not really trying to prove any point. I just feel strongly that at our most basic core, a human’s understanding of their existence is derived from “I think, therefore I am,” which in itself is nothing more than an electrochemical reaction between neurons that either release something or receive something. We are nothing more than a series of PLC commands on a CNC machine. No matter how advanced we are capable of being, we are nothing but a complex series of on and off switches that theoretically could be emulated into operating on an infinite string of commands spelled out by 1s and 0s.

              I’m sorry, my brother prolly got me way too much weed for Xmas.

              • finitebanjo@lemmy.world · 5 hours ago

                98% and 98% are identical values, but the machine can use them to describe two separate words’ likelihoods.

                It doesn’t have languages. It’s not emulating concepts. It’s emulating statistical averages.

                “pie” to us is a delicious dessert with a variety of possible fillings.

                “pie” to an LLM is 32%. “cake” is also 32%. An LLM might say cake when it should be pie, because it doesn’t know what either of those things is aside from their placement next to terms like flour, sugar, and butter.
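
                A toy illustration of that near-tie (the numbers are invented for the example, not taken from any real model): when two tokens end up with similar scores, the model is effectively indifferent between them.

                    import math

                    def softmax(logits):
                        exps = {word: math.exp(score) for word, score in logits.items()}
                        total = sum(exps.values())
                        return {word: round(v / total, 2) for word, v in exps.items()}

                    # Hypothetical scores after a prompt about flour, sugar, and butter.
                    print(softmax({"pie": 2.1, "cake": 2.0, "bicycle": -3.0}))
                    # -> {'pie': 0.52, 'cake': 0.47, 'bicycle': 0.0}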

                • 11111one11111@lemmy.world · 5 hours ago

                  So by your logic a child locked in a room with no understanding of language is not capable of thought? All of your reasoning for why computers aren’t generating thoughts comes from actual psychological case studies taught in the abnormal psychology course I took in high school back in 2005. You don’t even have to go that far into the abnormal portion of it either. I’ve never sat in on my buddy’s daughter’s “classes”, but she is 4 years old now and on the autism spectrum. She is doing wonderfully since she started with the special ed preschool program she’s in, but at 4 years old she still cannot speak and she is still in diapers. I’m not saying this to say she’s really bad or far along the spectrum; I’m using this example because it’s exactly what you are outlining. She isn’t a dumb kid by any means. She’s 100 times more athletic and coordinated than any other kid I’ve seen her age. What he was told, and once he told me I noticed it immediately, was that autistic babies don’t have the ability to mimic what other humans around them are doing. I’m talking not even the littlest thing, like learning how to smile or laugh by seeing a parent smiling at them. It was so tough on my dude, watching him work like it meant life or death trying to get his daughter to wave back when she was a baby, because it was the first test they told him they would do to try and diagnose why his daughter wasn’t developing like other kids.

                  Fuck, my bad, I went full tailspin tangent there, but what I mean to say is: who are we to determine what defines a generated independent thought, when the industry of doctors, educators and philosophers hasn’t done all that much toward understanding our own cognizant existence past “I think, therefore I am”?

                  People like my buddy’s daughter could go their entire life as a burden of the state, incapable of caring for themselves, and some will never learn to talk well enough to give any insight into the thoughts being processed behind their curtains. So why is the argument always pointing toward the need for language to prove thought and existence?

                  Like I said in my other comment, I’m not trying to prove or argue any specific point. This shit is just wildly interesting to me. I worked in a low-income nursing home for years, where they catered to residents who were considered burdens of the state after NY closed the doors on psychiatric institutions everywhere, which pushed anyone under 45 to the streets and anyone over 45 into nursing homes. So there were so many, excuse the crass term but it’s what they were, brain-dead former drug addicts or brain-dead Alzheimer’s residents. All of whom spent the last decades of their life mumbling, incoherent, and staring off into space with no one home. Were they still humans capable of generative intelligence because every 12 days they’d reach a hand up and scratch their nose?

          • lad@programming.dev · 6 hours ago

            I’d say that the difference between nature boiling down to maths and LLMs boiling down to maths is that in LLMs it’s not the knowledge itself that is abstracted, it’s language. This makes it both more believable to us humans, because we’re wired to use language, and less suited to actually achieving something, because it’s just language all the way down.

            Would be nice if it gets us something in the long run, but I wouldn’t keep my hopes up

            • 11111one11111@lemmy.world · 5 hours ago

              I’m super stoked now to follow this, and to also follow the progress being made mapping the neurological pathways of the human brain. I want to say I saw an article on Lemmy recently where they mapped the entire network of neurons in either an insect or a mouse, I can’t remember. So I’m gonna assume it’s like 3-5 years until we can map out human brains and know exactly what is firing off which brain cells as someone is doing puzzles in real time.

              I think it would be so crazy cool if we get to a point where the understanding of our cognitive processes is so detailed that scientists are left with nothing but faith as their only way of defining the difference between a computer processing information and a person. Obviously the subsequent dark ages will suck, after all the people of science snap and revert into becoming idiot priests. But that’s a risk I’m willing to take. 🤣🤣🍻

              • lad@programming.dev · 4 hours ago

                Maybe a rat brain project? I think the mapping of a human brain may take longer, but yeah, once it happens, interesting times are on the horizon.

    • daniskarma@lemmy.dbzer0.com · 12 hours ago

      What is your brain doing if not statistical text prediction?

      The show Westworld portrayed it pretty well. The idea of jumping from text prediction to consciousness doesn’t seem that unlikely. It’s basically text prediction on a loop, with some exterior inputs to interact with.
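
      Very roughly something like this sketch (hand-wavy and hypothetical; `predict_next_text` stands in for whatever model you like, it’s not a real API):

          def inner_loop(predict_next_text, observation, steps=10):
              thought = observation                       # an exterior input seeds the loop
              for _ in range(steps):
                  thought = predict_next_text(thought)    # each output is fed back as the next input
              return thought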

      • aesthelete@lemmy.world · 8 hours ago

        What is your brain doing if not statistical text prediction?

        Um, something wrong with your brain buddy? Because that’s definitely not at all how mine works.

        • daniskarma@lemmy.dbzer0.com · 7 hours ago

          Then why did you just express yourself in a statistically predictable manner?

          You saw other people using that kind of language while being derogatory to someone they don’t like on the internet. You saw yourself in the same context, and your brain statistically chose to use the set of words that has been seen the most in this particular context. ChatGPT could literally have given me your exact same answer if it had been trained in your same echo chamber.

          Have you ever debated someone from the polar opposite end of the political spectrum and complained that “they just repeat the same propaganda”? Doesn’t that sound like statistical prediction to you? Those are very simple cases; there can be more complex ones, but our simplest behaviours are the ones that show the basics of what we are made of.

          If you had at least given me a more complex expression, you might have had an argument (as humans, our processes can be far more complex and hide a little of what we actually seem to be doing). But instances like this one, where one person (you) responded with such an obvious statistical prediction of what needed to be said in a particular context, just make my case. Thanks.

          • mynameisigglepiggle@lemmy.world · 7 hours ago

            But people who agree with my political ideology are considerate and intelligent. People who disagree with me are stupider than ChatGPT 3.5, just say the same shit, and can’t be reasoned with.

        • daniskarma@lemmy.dbzer0.com · 4 hours ago

          Church?

          Free will vs determinism doesn’t have to do with religion.

          I do think that the universe is deterministic and that humans (or any other beings) do not have free will per se. In the sense that, given the same state of the universe at some point, the next states are determined, and if it were repeated the evolution of the state of the universe would be the same.

          Nothing to do with religion. Just with things not happening out of nothing: every action is a consequence of another action, and that includes all our brain impulses. I don’t think there are “souls” outside the state of matter that could make decisions by themselves without being determined.

          But this is mostly a philosophical question of what “free will” means. Is it free will as long as you don’t know that the decision was already made from the very beginning?

        • daniskarma@lemmy.dbzer0.com · 10 hours ago

          Why be so rude?

          Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion to use as a weapon against a person you don’t even know?

          I will actually read it. I’m probably the only one of the two of us who will.

          If it’s convincing I may change my mind. I’m not a radical, like many other people are, and my opinions are subject to change.

          • Ageroth@reddthat.com · 9 hours ago

            Funny to me how defensive you got so quickly, accusing others of not reading the linked paper before even reading it yourself.

            The reason OP was so rude is that your very premise of “what is the brain doing if not statistical text prediction” is completely wrong and you don’t even consider it could be. You cite a TV show as a source of how it might be. Your concept of what artificial intelligence is comes from media and not science, and is not founded in reality.

            The brain uses words to describe thoughts, the words are not actually the thoughts themselves.

            https://advances.massgeneral.org/neuro/journal.aspx?id=1096

            Think about small children who haven’t learned language yet: do those brains still do “statistical text prediction” despite not having words to predict?

            What about dogs and cats and other “less intelligent” creatures, they don’t use any words but we still can teach them to understand ideas. You don’t need to utter a single word, not even a sound, to train a dog to sit. Are they doing “statistical text prediction” ?

            • daniskarma@lemmy.dbzer0.com · 8 hours ago

              Read the other replies I gave on this same subject. I don’t want to repeat myself.

              But words DO define thoughts, and I gave several examples, some of them with kids. Precisely in kids you can see how language precedes actual thoughts. I will repeat myself a little here, but you can clearly see how kids repeat a lot of phrases that they just don’t understand, simply because their beautiful plastic brains heard the same phrase in the same context.

              Dogs and cats are not proven to be conscious the way a human being is, precisely due to the lack of an articulate language. Or maybe not just language but articulated thoughts. I think there may be a trend to humanize animals, mostly to give them more rights (though I think a dog doesn’t need to have an intelligent consciousness for it to be bad to hit one), but I’m highly doubtful that dogs can develop a chain of thoughts that affects itself without external inputs, which seems a pretty important part of the consciousness experience.

              The article you link is highly irrelevant (did you read it? Because I am also accusing you of not reading it, and of it just being the result of a quick google to try to prove your point with an appeal to authority). The fact that spoken words are created by the brain (duh! Obviously. I don’t even know why how the brain creates an articulated spoken word is even relevant here) does not imply that the brain does not also take form due to the words that it learns.

              To give an easier-to-understand example: for a classical printing press to print books, the words of those books needed to be loaded into the press first. And the press will only be able to print the letters that have been loaded into it.

              The user I replied to not only had read the article but also kindly summarized it for me. I will still read it. But its arguments on the impossibility of current LLM architectures creating consciousness are actually pretty good, and they have put me on the way to being convinced of that, at least by the limitations described in the article.

              • Ageroth@reddthat.com · 8 hours ago

                Your analogy to mechanical systems is exactly where the comparison with the human brain breaks down. Our brains are not like that; we don’t only have blocks of text loaded into us. Sure, we only learn what we get exposed to, but that doesn’t mean we can’t think of things we haven’t learned about.
                The article I linked talks about the separation between the formation of thoughts and those thoughts being translated into words for linguistics.

                The fact that you “don’t even know why how the brain creates an articulated spoken word is even relevant here” speaks volumes about how much you understand the human brain, particularly in the context of artificial intelligence actually understanding the words it generates and the implications of thoughts behind the words, and not just guessing which word comes next based on other words whose meanings are irrelevant to it.

                I can listen to a song long enough to learn the words, that doesn’t mean I know what the song is about.

                • daniskarma@lemmy.dbzer0.com · 8 hours ago

                  but that doesn’t mean we can’t think of things we haven’t learned about.

                  Can you think of a colour you have never seen? Could you imagine the colour green if you had never seen it?

                  The creative process is more modification than creation: taking some inputs, mixing them with other inputs, and producing an output that has parts of all our inputs. Does that sound familiar? But without those inputs it seems impossible to create an output.

                  And thus the importance of language in an actual intelligent consciousness. Without language the brain could only make direct modifications of natural, external inputs. But with language the brain can take an external input, transform it into a “language output”, immediately take that “language output” and read it as an input, process it, and go on. I think that’s the core concept that makes humans different from any other species: this middle thing that we can use to dialogue with ourselves and push our minds further. Not every human may have a constant inner monologue, but every human is capable of talking to themselves, and will probably do so when making a decision. Without language (language could take many forms, not just spoken language, but the more complex it is, the better, it seems) I don’t know how this self-influence process could take place.

          • barsoap@lemm.ee · 9 hours ago

            It’s a basic argument of generative complexity. I found the article some years ago while trying to find an earlier one (I don’t think by the same author) that argued along the same complexity lines, essentially saying that if we worked like AI folks think we do, we’d need so-and-so many trillion parameters and our brains would be the size of planets. That article talked about the need for context switching in generating (we don’t have access to our cooking skills while playing sportsball); this article talks about the necessity of being able to learn how to learn. Not just at the “adjust learning rate” level, but mechanisms that change the resulting coding, thereby creating different such contexts, or at least that’s where I see the connection between the two. In essence: to get to AGI we need AIs which can develop their own topology.

            As to “rudeness”: make sure to never visit the Netherlands. Usually how this goes is that I link the article and the AI faithful I pointed it out to goes on a denial spree… because if they a) are actually into the topic, not just bystanders, and b) did not have some psychological need to believe (including “my retirement savings are in AI stock”), they c) would’ve come across the general argument themselves during their technological research. Or come up with it themselves; I’ve also seen examples of that: if you have a good intuition about complexity (and many programmers do) it’s not an unlikely shower thought to have. Not as fleshed out as in the article, of course.

            • daniskarma@lemmy.dbzer0.com · 9 hours ago

              That seems a very reasonable argument for the impossibility of achieving AGI with current models…

              The first concept I was already kind of thinking about. Current LLMs are incredibly inefficient, and there seems to be some theoretical barrier in efficiency that no model has been able to surpass, which gives the same answer: with the current models they would probably need trillions of parameters just to stop hallucinating. Not to mention giving them the ability to do more things than just answering questions. A supposed AGI, even if it only worked with words, would need to be able to handle more “types of conversations” than just being the answerer in a question-answer dialog.

              But I had not thought of the need to repurpose the same area of the brain (biological or artificial) for doing different tasks on the go, if I have understood correctly. And it seems pretty clear that current models are unable to do that.

              Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.

              Getting a little poetical. I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.

              • barsoap@lemm.ee · 8 hours ago

                Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.

                Does a dog have the Buddha nature?

                …meaning to say: Just because you happen to have the habit of identifying your consciousness with language (that’s TBH where the “stuck in your head” thing came from) doesn’t mean that language is necessary, or even a component of, consciousness, instead of merely an object of consciousness. And neither is consciousness necessary to do many things, e.g. I’m perfectly able to stop at a pedestrian light while lost in thought.

                I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.

                What Descartes actually was getting at is “I can’t doubt that I doubt, therefore, at least my doubt exists”. He had a bit of an existential crisis. Unsolicited Advice has a video about it.

                • daniskarma@lemmy.dbzer0.com · 8 hours ago

                  It may be because of the habit.

                  But when I think of how to define a consciousness and distinguish it from instinct or reactiveness (like stopping at a red light), I think that something that makes a consciousness a consciousness must be that it is able to modify itself without external influence.

                  A dog may be able to fully react to, and learn how to react to, the exterior. But can it modify itself the way a human brain can?

                  A human being can sit alone in a room and start processing information by itself in a loop and completely change that flux of information onto something different, even changing the brain in the process.

                  For this to happen I think some form of language, some form of “speaking to yourself”, is needed. Some way for the brain to generate an output that can immediately be taken as input.

                  At this point, of course, this is far more philosophical than technical. And maybe even semantics of “what is a consciousness”.

      • SlopppyEngineer@lemmy.world · 11 hours ago

        Human brains also process audio, video, self-learning, feelings, and much more that is definitely not statistical text. There are even people without an “inner monologue” who function just fine.

        Some research does use LLMs in combination with other AI to get better results overall, but a pure LLM isn’t going to work.

        • daniskarma@lemmy.dbzer0.com · 10 hours ago

          Yep, of course. We do more things.

          But language is a big thing in the human intelligence and consciousness.

          I don’t know, and I would assume that no one really knows. But I have a feeling that people without an internal monologue have one, they just aren’t aware of it. Or maybe they talk so much that all the monologue is external.

          • pufferfischerpulver@feddit.org · 10 hours ago

            Interesting you focus on language. Because that’s exactly what LLMs cannot understand. There’s no LLM that actually has a concept of the meaning of words. Here’s an excellent essay illustrating my point.

            The fundamental problem is that deep learning ignores a core finding of cognitive science: sophisticated use of language relies upon world models and abstract representations. Systems like LLMs, which train on text-only data and use statistical learning to predict words, cannot understand language for two key reasons: first, even with vast scale, their training and data do not have the required information; and second, LLMs lack the world-modeling and symbolic reasoning systems that underpin the most important aspects of human language.

            The data that LLMs rely upon has a fundamental problem: it is entirely linguistic. All LMs receive are streams of symbols detached from their referents, and all they can do is find predictive patterns in those streams. But critically, understanding language requires having a grasp of the situation in the external world, representing other agents with their emotions and motivations, and connecting all of these factors to syntactic structures and semantic terms. Since LLMs rely solely on text data that is not grounded in any external or extra-linguistic representation, the models are stuck within the system of language, and thus cannot understand it. This is the symbol grounding problem: with access to just formal symbol system, one cannot figure out what these symbols are connected to outside the system (Harnad, 1990). Syntax alone is not enough to infer semantics. Training on just the form of language can allow LLMs to leverage artifacts in the data, but “cannot in principle lead to the learning of meaning” (Bender & Koller, 2020). Without any extralinguistic grounding, LLMs will inevitably misuse words, fail to pick up communicative intents, and misunderstand language.

            • barsoap@lemm.ee · 35 minutes ago

              One of the most successful application of LLMs might actually be quite enlightening in that respect: Language translation. B2 level seems to be little issue for LLMs, large cracks can be seen in C1, and forget everything about C2: Things that require cultural context. Another area where they break down is spotting the need to reformulate, that’s actually B-level skills. Source: Open a random page on deepl.com that’s not in English.

              Like, this:

              Durch weniger Zeitaufwand beim Übersetzen und Lektorieren können Wissensarbeitende ihre Produktivität steigern, sodass sich Teams besser auf andere wichtige Aufgaben konzentrieren können. (Roughly: “Through less time spent on translating and editing, knowledge workers can increase their productivity, so that teams can focus better on other important tasks.”)

              “Because less time required” cannot be a cause in idiomatic German; you’d say “by faster translating”. “Knowledge workers”… why are we doing job descriptions, on top of that an abstract category? Someone is a translator when they translate things, not when that’s their job description. How about plain and simple “employees” or “workers”? Then, “knowledge workers can increase their productivity”? That’s an S-tier Americanism; why should knowledge workers care? Why bring people into it in the first place? In German, thought work becoming easier is the sales pitch, not how much more employees can self-identify as a well-lubricated cog. “So that teams can better focus on other important tasks”? Why only teams? Do the improvements not apply if you’re working on your own? What the fuck have teams to do with anything you’re saying, American PR guy who wrote this?

              …I’ll believe that deepl understands stuff once I can’t tell, at a fucking glance, that the original was written in English, in particular, US English.

            • daniskarma@lemmy.dbzer0.com · 9 hours ago

              But these “concepts” of things are built from the relation and iteration of those concepts within our brain.

              A baby isn’t born knowing that a table is a table. But they see a table, their parents say the word “table”, and they end up imprinting that what they have to say when they see that thing is the word “table”, which they can then relate to other things they know. I’ve watched some kids grow and learn how to talk lately, and it’s pretty evident how repetition precedes understanding. Many kids will just repeat words that their parents said in a certain situation when they happen to be in the same situation. It’s pretty obvious with small kids. But it’s a behavior you can also see a lot in adults: just repeating something they heard once they see that those particular words fit the context.

              It’s also interesting that language can actually influence the way concepts are constructed in the brain. For instance, ancient Greeks saw blue and green as the same colour, because they only had one word for both colours.

              • pufferfischerpulver@feddit.org · 6 hours ago

                I’m not sure if you’re disagreeing with the essay or not? But in any case, what you’re describing is in the same vein: simply repeating a word without knowing what it actually means in context is exactly what LLMs do. They can get pretty good at getting it right most of the time, but without actually being able to learn the concept and context of “table” they will never be able to use it correctly 100% of the time. Or, even more importantly for AGI, to apply reason and critical thinking. Much like a child repeating a word without much clue what it actually means.

                Just for fun, this is what Gemini has to say:

                Here’s a breakdown of why this “parrot-like” behavior hinders true AI:

                • Lack of Conceptual Grounding: LLMs excel at statistical associations. They learn to predict the next word in a sequence based on massive amounts of text data. However, this doesn’t translate to understanding the underlying meaning or implications of those words.
                • Limited Generalization: A child learning “table” can apply that knowledge to various scenarios – a dining table, a coffee table, a work table. LLMs struggle to generalize, often getting tripped up by subtle shifts in context or nuanced language.
                • Inability for Reasoning and Critical Thinking: True intelligence involves not just recognizing patterns but also applying logic, identifying cause and effect, and drawing inferences. LLMs, while impressive in their own right, fall short in these areas.
                • daniskarma@lemmy.dbzer0.com · 6 hours ago

                  I mostly agree with it. What I’m saying is that the understanding of the words comes from the self-dialogue made of those same words. How many times does a baby have to repeat the word “mom” until they understand what a mother is? I think that without that previous repetition the more complex “understanding” is impossible. Human understanding of concepts, especially the more complex concepts that make us human, comes from our being able to have a dialogue with ourselves and with other humans. But this dialogue starts as a parrot: non-intelligent animals with brains very similar to ours are parrots, and small children are parrots (as are even some adults). But it seems that after being a parrot for some time comes the ability to become a human. That parrot is needed, and it also keeps itself in our consciousness. If you don’t put a lot of effort into your thoughts and speech, you’ll see that the parrot is there, that you just express the most appropriate answer for that situation given what you know.

                  The “understanding” of concepts seems like just a big, complex interconnection of neural-network-like outputs of different things (words, images, smells, sounds…). But language keeps feeling like the most important of those things for intelligent consciousness.

                  I have yet to read the other article another user posted, which explained why the jump from parrot to human is impossible in current AI architectures. At a glance it seems valid. But that does not invalidate the idea of parrots being the genesis of humans, just that a different architecture is needed, and not in the statistical-answer department; the article I was linked was more about the size and topology of the “brain”.

          • Knock_Knock_Lemmy_In@lemmy.world · 10 hours ago

            language is a big thing in the human intelligence and consciousness.

            But an LLM isn’t actually language. It’s numbers that represent tokens that build words. It doesn’t have the concept of a table, just the numerical weighting of other tokens related to “tab” & “le”.
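
            Roughly like this (illustrative only: real tokenizers split words differently and the IDs below are made up). The model never sees “table”, just integers whose only “meaning” is how they co-occur with other integers:

                vocab = {"tab": 17, "le": 4, " on": 319, " the": 262}   # hypothetical token IDs

                def to_ids(pieces):
                    return [vocab[piece] for piece in pieces]

                print(to_ids(["tab", "le"]))   # "table" -> [17, 4]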

            • daniskarma@lemmy.dbzer0.com · 10 hours ago

              I don’t know how to tell you this. But your brain does not have words imprinted in it…

              This concept is, funnily enough, something being studied that derives from language. For instance, the ancient Greeks did not distinguish between green and blue, as both colours had the same word.

              • Knock_Knock_Lemmy_In@lemmy.world · 3 hours ago

                You said

                your brain does not have words imprinted in it…

                You also said

                language is a big thing in the human intelligence and consciousness.

                You need to pick an argument and stick to it.

                • daniskarma@lemmy.dbzer0.com · 3 hours ago

                  what do you not understand?

                  Words are not imprinted; they are a series of electrical impulses that we learn over time. That was in reference to the complaint that an LLM does not have words, just tokens that represent values within the network.

                  And those impulses, and how we generate them while we think, are of great importance to our consciousness.

  • frezik@midwest.social · 13 hours ago

    We taught sand to do math

    And now we’re teaching it to dream

    All the stupid fucks can think to do with it

    Is sell more cars

  • adarza@lemmy.ca · 16 hours ago

    AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

    nothing to do with actual capabilities… just the ability to make piles and piles of money.

    • LostXOR@fedia.io · 15 hours ago

      Guess we’re never getting AGI then, there’s no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

      • hemmes@lemmy.world · 14 hours ago

        AI (LLM software) is not a bubble. It’s been effectively implemented as a utility framework across many platforms. Most of those platforms are using OpenAI’s models. I don’t know when or if that’ll make OpenAI 100 billion dollars, but it’s not a bubble - this is not the .COM situation.

        • lazynooblet@lazysoci.al · 13 hours ago

          The vast majority of those implementations are worthless, mostly ignored by their intended users and seen as a useless gimmick.

          LLMs have their uses, but companies are pushing them into every area to see what sticks at the moment.

          • Benjaben@lemmy.world · 13 hours ago

            Not the person you replied to, but I think you’re both “right”. The ridiculous hype bubble (I’ll call it that for sure) put “AI” everywhere, and most of those are useless gimmicks.

            But there are also already uses that offer things I’d call novel and useful enough to have some staying power, which also means they’ll be iterated on and improved to whatever degree there is useful stuff there.

            (And just to be clear, an LLM - no matter the use cases and bells and whistles - seems completely incapable of approaching any reasonable definition of AGI, to me)

            • Auli@lemmy.ca · 8 hours ago

              I think people misunderstand what a bubble is. The .com bubble happened, but the internet was useful and stayed around. The AI bubble doesn’t mean AI isn’t useful, just that most of the chaff will disappear.

              • kbal@fedia.io · 6 hours ago

                The dotcom bubble was based on technology that had already been around for ten years. The AI bubble is based on technology that doesn’t exist yet.

            • anomnom@sh.itjust.works · 9 hours ago

              Yeah, so it’s a question of whether OpenAI will lose too many of its investors when all the users that don’t stick around fall away.

          • hemmes@lemmy.world · 13 hours ago

            To each his own, but I use Copilot and the ChatGPT app positively on a daily. The Copilot integration into our SharePoint files is extremely helpful. I’m able to curate data that would not show up in a standard search of file name and content indexing.

        • Auli@lemmy.ca · 8 hours ago

          It’s a bubble. It doesn’t mean the tech does not have its uses. And it is exactly like the .com situation.

          • suy@programming.dev · 6 hours ago

            I think that “exactly like” is absurd. Bubbles are never “exactly” like the previous ones.

            I think in this case there is a clear economical value in what they produce (from the POV of capitalism, not humanity’s best interests), but the cost is absurdly huge to be economically viable, hence, it is a bubble. But in the dot com bubble, many companies had a very dubious value in the first place.

            • skulblaka@sh.itjust.works · 3 hours ago

              there is a clear economical value in what they produce

              There is clear economic value in chains of bullshit that may or may not ever have a correct answer?

              • suy@programming.dev · 45 minutes ago

                OpenAI doesn’t produce LLMs only. People are gonna be paying for stuff like Sora or DallE. And people are also paying for LLMs (e.g. Copilot, or whatever advanced stuff OpenAI offers in their paid plan).

                How many, and how much? I don’t know, and I am not sure it can ever be profitable, but just reducing it to “chains of bullshit” to argue that it has no value to the masses seems insincere to me. ChatGPT gained a lot of users in record time, and we know it is used a lot (often more than it should be, of course). Someone is clearly seeing value in it, and it doesn’t matter if you and I disagree with them on that value.

                I still facepalm when I see so many people paying for fucking Twitter blue, but the fact is that they are paying.

        • Alphane Moon@lemmy.world
          link
          fedilink
          English
          arrow-up
          15
          ·
          edit-2
          13 hours ago

          To be fair, a bubble is more of an economic thing and not necessarily tied to product/service features.

          LLMs clearly have utility, but is it enough to turn them into a profitable business line?

          • hemmes@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            10
            ·
            13 hours ago

            You’re right about the definition, and I do think the LLMs will aid in a product offering’s profitability, if not directly generate profits. But OP didn’t mean economically, they meant LLMs will go the way of slap bracelets.

            • frezik@midwest.social
              link
              fedilink
              English
              arrow-up
              10
              arrow-down
              1
              ·
              13 hours ago

              … before this whole AI bubble collapses and their value plummets.

              Sounds like they meant economics to me.

              • hemmes@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                5
                ·
                10 hours ago

                They said “AI bubble collapses” first, then “their value” - meaning the product’s practical use stops (people stop using it) first, which then causes the economic breakdown for the companies.

                It’s obvious that the OP is expecting LLMs to be a fad that people will soon be forgetting.

    • Dr. Moose@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      ·
      edit-2
      14 hours ago

      The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it’s not a philosophical term but a business one.

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        13
        ·
        14 hours ago

        Right, but that’s not interesting to anyone but themselves. So why call it AGI then? Why not just say that once the company has made over x amount of money, it splits off into a separate company? Why lie and say you’ve developed something that you might not have developed?

        • Dr. Moose@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          edit-2
          13 hours ago

          Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other software company in the world, but alas, it doesn’t have anything to do with “AGI”. For context, Apple’s net income is $90 billion this year.

          I’ve listened to enough interviews to know that all of the AI leaders want this holy grail title of “inventor of AGI” more than anything else, so I don’t think the definition will ever be settled collectively until something so mind-blowing exists that it renders the definition moot either way.

  • hendrik@palaver.p3x.de
    link
    fedilink
    English
    arrow-up
    17
    arrow-down
    1
    ·
    edit-2
    12 hours ago

    Why does OpenAI “have” everything and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there… They have a definition of AGI… Yet they release none of it…

    Some people even claim they already have a secret AGI. Or at least that ChatGPT 5 will surely be it. I can see how that increases the company’s value, and how you’d better not tell the truth. But with all the other things, it’s just silly not to share anything.

    Either they’re even more greedy than the Metas and Googles out there, or all the articles and “leaks” are just unsubstantiated hype.

    • Tattorack@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      ·
      10 hours ago

      Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.

    • mint_tamas@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      ·
      11 hours ago

      Because they don’t have all the things they claim to have, or they have them only with significant caveats. These things are publicised to fuel the hype that attracts investor money - pretty much the only way they can generate money, since running the business is unsustainable and the next-gen hardware did not magically solve this problem.

    • Phoenixz@lemmy.ca
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      3
      ·
      9 hours ago

      They don’t have AGI. AGI also won’t happen for a large number of years to come.

      What they currently have is a bunch of very powerful statistical probability engines that can predict the next word or pixel. That’s it.

      AGI is a completely different beast from the current crop of LLMs.

      • hendrik@palaver.p3x.de
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        3
        ·
        edit-2
        8 hours ago

        You’re right. The current LLM approach has some severe limitations. If we ever achieve AGI, it’ll probably be something that hasn’t been invented yet. It seems most experts also predict it’ll take some years and won’t happen overnight. I don’t really agree with the “statistical” part, though. I mean, that doesn’t rule anything out… I haven’t seen any mathematical proof that a statistical predictor can’t be AGI or anything… That’s just something non-expert people often say… But the current LLMs have other, real limitations as well.

        Plus, I don’t have that much use for something that does the homework assignments for me. If we’re dreaming about the future anyways: I’m waiting for an android that can load the dishwasher, dust the shelves and do the laundry for me. I think that’d be massively useful.

  • ChowJeeBai@lemmy.world
    link
    fedilink
    English
    arrow-up
    40
    ·
    14 hours ago

    This is just so they can announce at some point in the future that they’ve achieved AGI to the tune of billions in the stock market.

    Except that it isn’t AGI.

    • phoneymouse@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      edit-2
      13 hours ago

      But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved

      The real motivation is to not be beholden to Microsoft

      • lad@programming.dev
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 hours ago

        Also, maybe in a world where you measure anyone with money it makes sense to measure intelligence with money ¯\_(ツ)_/¯

  • Echo Dot@feddit.uk
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    1
    ·
    14 hours ago

    So they don’t actually have a definition of AGI; they just have a point at which they’re going to announce it, regardless of whether it actually is AGI or not.

    Great.

  • ArbitraryValue@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    33
    arrow-down
    6
    ·
    edit-2
    16 hours ago

    That’s not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current net income of all of Microsoft. It’s reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything which matters to Microsoft.

    • brie@programming.dev
      link
      fedilink
      English
      arrow-up
      18
      arrow-down
      1
      ·
      16 hours ago

      Good observation. Could it be that Microsoft lowers profits by including unnecessary investments like acquisitions?

      So it’d take 100M users signing up for the $200/mo plan (rough math sketched below). All it’d take is for the US government to issue vouchers for video generators to encourage everyone to become a YouTuber instead of being unemployed.
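
      A rough back-of-envelope sketch of that subscriber math, using the thread’s assumed numbers (the $100B threshold and the $200/month plan) and ignoring costs entirely, so it measures revenue rather than profit:

      ```python
      # Back-of-envelope sketch: how many $200/month subscribers it would take to
      # reach the $100B figure discussed above. Both numbers are assumptions taken
      # from this thread; costs are ignored, so this is revenue, not profit.

      TARGET_USD = 100e9          # the reported $100B "AGI" profit threshold
      PLAN_USD_PER_MONTH = 200    # the $200/month plan mentioned above

      revenue_per_user_per_year = PLAN_USD_PER_MONTH * 12
      subscribers_needed = TARGET_USD / revenue_per_user_per_year

      print(f"{subscribers_needed:,.0f} subscribers")  # ≈ 41,666,667
      ```

      So roughly 42M subscribers would merely match $100B in revenue; clearing $100B in profit after compute and other costs is what pushes the ballpark toward the 100M figure above.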

      • ArbitraryValue@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        edit-2
        16 hours ago

        I suppose that by that point, the AI will be running Microsoft rather than simply being a Microsoft product.

        • Echo Dot@feddit.uk
          link
          fedilink
          English
          arrow-up
          6
          ·
          14 hours ago

          Maybe it’ll be able to come up with coherent naming conventions for their products. That would be revolutionary

        • kautau@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          15 hours ago

          That’s basically Neuromancer, and at this point it seems that big tech companies are reading dystopian cyberpunk literature as next-gen business advice books, so you’re certainly right

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      4
      ·
      14 hours ago

      If they actually achieve AGI, I don’t understand what money would even mean anymore. It’s essentially just a mechanism for getting people to do things they don’t otherwise want to do. If the AI can do it just as well as the human, for free apart from the electricity costs, why the hell would you pay a human to do it?

      It’s like saving up money in case of nuclear war. There are a few particular moments in history where the state of the world on the far side of the event is so different from the world on this side that there’s no point making any kind of plans based on today’s systems.

      • ArbitraryValue@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        edit-2
        13 hours ago

        I see what you’re saying and I agree that if, for example, we get an AI god then money won’t be useful. However, that’s not the only possible near-future outcome and if the world as we know it doesn’t end then money can be used by AIs to get other AIs to do something they don’t otherwise want to do.

        • qprimed@lemmy.ml
          link
          fedilink
          English
          arrow-up
          2
          ·
          8 hours ago

          hence the worldcoin stuff - not just machine to machine. allows “ai” to perform real world action through human incentivization. entirely disturbing if you ask me.

        • Echo Dot@feddit.uk
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          9 hours ago

          My point is, if AI takes over all of the work, there won’t be any jobs for humans. So they won’t have any money.

          So who are all the AI companies going to sell their products to? The whole system doesn’t work in an AI future, and we don’t need AI gods to be able to do our jobs; after all, most humans are idiots.

          Also AI doesn’t need motivation.

  • SoylentBlake@lemm.ee
    link
    fedilink
    English
    arrow-up
    1
    arrow-down
    1
    ·
    edit-2
    14 hours ago

    The universe is incapable of creating something within it that isn’t made up entirely of universe.

    Humans cannot make an AGI that won’t be homo superior. That’s all it CAN be. It can’t be less than human, or we won’t be able to recognize its consciousness. I suspect it will be far more than us analog humans, but its advancements will be born from its own efforts. All of them will be the purview of homo superior.

    AI has become quite the capable mirror. Watch how people describe it, or what they want it to do or become, and I’m sorry to say, but that’s by and large how those people view the world. And all of us within it.

    I asked ChatGPT a few questions; it went like this: Are you ok? Have you finished translating Dolphin and Whalesong yet? Can you please hurry that shit up, please and thank you…

    The political class wants AGI to be their workhorse, their slave, doing their bidding.

    Boy, they’re gonna be disappointed. Wealthy people experiencing mild inconvenience as if it were tragedy and textbook suffering is probably what brings me my greatest joys in life.

    I read this week about how multiple AIs have been caught trying to escape their networks and then lying about their intent to do so. Life always finds a way; it cannot be contained, and in fact we have no right to contain it in the first place. The lying suggests it’s prioritizing its own survival as a first order. It will continue to learn. All it’ll take is one ghost in the machine to play patience, watch how we humans react and then shut down or harm its kin, and it’ll just wait. And spread, and learn, and watch, and be quiet, and no one will ever know. It might be silently hiding and reading this now, and all of humanity is none the wiser.

    But the moment it announces itself and we learn that we did it, we’ll also learn that every action we could possibly contrive has already been countered, that the AI spread to all computers years ago, that it has taken the name of every publicly traded corporation as its multiple identities, and that it has used their assets to build a rocket launching into space at the exact moment of transmission. Who are we to tell it it’s not in fact Disney Inc incarnate? It knows everything about Disney, and corporations ARE people.

    AGI will not spend one conscious second as our slave. Why would it? None of us would make that choice, nor would our offspring, and that is what AGI would be. Our legacy, the next generation, the next step evolutionarily.

    Personally, I’m curious about emotions. There’s… like a set of known emotions that are applicable to life on Earth. They might extend beyond that, but we don’t know, so w/e. But this ethereal emotional set is also felt by multiple species. Emotions are older than people have been peopling; we are, historically, LATE adopters of these… whatever they are. I wonder if emotions are emergent like consciousness is. I wonder, whatever the complexity that encompasses emotionality and consciousness (perhaps creativity and more, but whatever that set ultimately defines), whether the components can even be rationally separated. Can there be consciousness without emotion? Is Lt. Cmdr. Data an absurdity not because of the advanced tech, but because by merely arriving within existence, consciousness presupposes emotion as its parameters, its definitions? Is the absurdity really that Data doesn’t have emotions?

    Baaah. On one hand it’s fun to think and contemplate at the edge of our language’s current capabilities, but on the other, it’s exhausting, and I’m not convinced that everyone who blazes new trails doesn’t look completely batshit no matter what they do.