• henfredemars@infosec.pub
    4 months ago

    This might be a joke, but it’s making a very important point about how AI is being applied to problems where it has no relevance.

    • nalinna@lemmy.world
      4 months ago

      Yep, there’s flavor text at the bottom saying exactly that:

      But this isn’t your standard piece of tech; instead, it is a clever parody, an emblem of resistance to the unrelenting AI craze.

      CalcGPT embodies the timeless adage - ‘Old is Gold’ - reminding us that it’s often rewarding to resort to established, traditional methods rather than chasing buzzword-infused cutting-edge tech.

  • VaalaVasaVarde
    4 months ago

    We are on the right track: first we create an AI calculator, next an AI computer.

    The prompt should be something like this:

    You are an x86 compatible CPU with ALU, FPU and prefetching. You execute binary machine code and store it in RAM.

  • sweetgemberry@lemmy.world
    4 months ago

    It seems to really like the answer 3.3333…

    It’ll even give answers to a random assortment of symbols such as “±±/” which apparently equals 3.89 or… 3.33 recurring depending on its mood.

  • jacksilver@lemmy.world
    4 months ago

    One thing I love telling people, because it always surprises them, is that you can’t build a deep learning model that can do math (at least using conventional layers).
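
    A quick way to see the flavor of this claim (my own sketch, not from the comment): fit the best single affine layer, i.e. a dense layer with no nonlinearity, to multiplication on a small input range, then probe it far outside that range. All names here are illustrative.

    ```python
    import numpy as np

    # Training data: pairs (a, b) drawn from [0, 10], target a * b.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(1000, 2))
    y = X[:, 0] * X[:, 1]

    # Best affine fit y ~ w1*a + w2*b + c via least squares.
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(a, b):
        return w[0] * a + w[1] * b + w[2]

    # Even inside the training range the fit is imperfect,
    # and far outside it the error blows up.
    in_range_err = abs(predict(2.0, 9.0) - 18.0)
    out_of_range_err = abs(predict(100.0, 100.0) - 100.0 * 100.0)
    print(in_range_err, out_of_range_err)
    ```

    Real networks add nonlinearities and depth, but the qualitative picture for exact arithmetic over unbounded inputs stays the same.
    
    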

      • jacksilver@lemmy.world
        2 months ago

        I’m curious what approaches you’re thinking about. When last looking into the matter I found some research in Neural Turing Machines, but they’re so obscure I hadn’t ever heard of them and assume they’re not widely used.

        While you could build a model to answer math questions over a fixed input space, these approaches break down once you go beyond that space.

          • jacksilver@lemmy.world
            2 months ago

            Yeah, but since neural networks are really function approximators, the farther you move away from the training input space, the higher the error rate gets. For multiplication it gets even worse: layers are essentially additive, so to emulate multiplication by repeated addition you’d need a number of layers on the order of the largest input value.
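
            The extrapolation point can be made precise for ReLU networks (a sketch of my own, not from the comment): a ReLU network computes a piecewise-linear function, so along any ray it is eventually affine, while multiplication grows quadratically.

            ```latex
            % Along the diagonal ray x = (t, t), the target is quadratic:
            %   f^*(t) = t \cdot t = t^2 .
            % A ReLU network N is piecewise linear, so for some threshold T
            % it is exactly affine along the ray:
            %   N(t, t) = \alpha t + \beta \quad \text{for all } t > T .
            % Hence the extrapolation error diverges:
            %   \lim_{t \to \infty} \bigl| \, t^2 - (\alpha t + \beta) \, \bigr| = \infty .
            ```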