• abhibeckert@lemmy.world · 8 months ago

    Apple is working on models, but they seem to be focusing on ones that use tens of gigabytes of RAM, compared to the tens of terabytes the big cloud models use.

    I wouldn’t be surprised if Apple ships an “iPhone Pro” with 32GB of RAM dedicated to AI models. You can do a lot of really useful stuff with a model that size, but it can’t compete with GPT-4 or Gemini today, and those are moving targets: OpenAI and Google will have even better models (likely using even more RAM) by the time Apple enters this space.
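
    (Back-of-the-envelope, with my own numbers: weights dominate a model’s memory footprint, so a RAM budget divided by bytes per parameter roughly bounds the model you can run.)

    ```python
    # Rough sizing: how many parameters fit in a given RAM budget?
    # Assumes weights dominate; KV cache and activations add overhead.

    def max_params_billions(ram_gb: float, bits_per_param: int) -> float:
        bytes_available = ram_gb * 1024**3
        bytes_per_param = bits_per_param / 8
        return bytes_available / bytes_per_param / 1e9

    print(f"{max_params_billions(32, 4):.0f}B")  # 32 GB at 4-bit: ~69B params
    print(f"{max_params_billions(16, 4):.0f}B")  # 16 GB at 4-bit: ~34B params
    ```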

    A split system, where some processing happens on-device and some in the cloud, could work really well. For example, analyse every email/message/call a user has ever sent or received with the local model, but if the user asks how many teeth a crocodile has… send that one to the cloud. A rough sketch of that routing is below.
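
    (Everything here is a placeholder, not a real API: `local_model`, `cloud_model`, and the keyword heuristic are all made up to show the shape of the idea.)

    ```python
    # Sketch of the split system described above: queries that touch
    # personal data stay on-device; general-knowledge queries go to
    # the cloud. The heuristic is deliberately crude; a real system
    # would use a classifier model for the routing decision.

    PERSONAL_TOPICS = {"email", "message", "call", "calendar", "contact"}

    def needs_private_context(query: str) -> bool:
        """True if the query seems to involve on-device personal data."""
        return bool(PERSONAL_TOPICS & set(query.lower().split()))

    def answer(query: str, local_model, cloud_model) -> str:
        if needs_private_context(query):
            # e.g. "summarise my emails from last week" -- the data
            # never leaves the device.
            return local_model.generate(query)
        # e.g. "how many teeth does a crocodile have?" -- no personal
        # context needed, so the bigger cloud model handles it.
        return cloud_model.generate(query)
    ```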

    • Fubarberry · 8 months ago

      Tbf, Google has versions of Gemini (Gemini Nano) that run locally on phones too, and their open-source Gemma models run on around 16GB of RAM.
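
      (If anyone wants to try: a minimal sketch of loading Gemma locally with Hugging Face `transformers`. Assumes you’ve installed `transformers` and `accelerate`, accepted the model licence on the Hub, and have the RAM/VRAM for it.)

      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "google/gemma-7b"  # gated: requires accepting the licence
      tok = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          torch_dtype="auto",   # use the checkpoint's native precision
          device_map="auto",    # spread across GPU/CPU as available
      )

      inputs = tok("How many teeth does a crocodile have?",
                   return_tensors="pt").to(model.device)
      out = model.generate(**inputs, max_new_tokens=64)
      print(tok.decode(out[0], skip_special_tokens=True))
      ```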