• MudMan@fedia.io
    4 months ago

    So an interesting thing about this is that the reasons Gemini sucks are… kind of entirely unrelated to LLM stuff. It’s just a terrible assistant.

    And I get the overlap there, it’s probably hard to keep an LLM reined in enough to give it access to a bunch of the stuff Assistant could do, maybe. But still, why Gemini is unable to take notes seems entirely unrelated to any AI crap; that’s probably the top thing a chatbot should be great at. In fact, for things like that, which just integrate a set of actions in an app, the LLM should only be the text parser. Assistant was already doing enough machine learning to handle text commands, nothing there is fundamentally different.

    So yeah, I’m confused by how much Gemini sucks at things that have nothing to do with its chatbotty stuff, and if Google is going to start phasing out Assistant I sure hope they fix those parts at least. I use Assistant for note taking almost exclusively (because frankly, who cares about interacting with your phone using voice for anything else, barring perhaps a quick search). Gemini has one job and zero reasons why it can’t do it. And it still really can’t do it.
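    To make the “LLM should just be the text parser” idea concrete: the model’s only job would be to map an utterance to a structured intent, and the app layer executes it deterministically. A minimal sketch, assuming a hypothetical `parse_intent` (shown here as a rule-based stand-in for a constrained LLM call) and a hypothetical `execute` step; none of these names come from Assistant or Gemini:

```python
# The parser only emits structured intents; it never performs actions itself.
def parse_intent(utterance: str) -> dict:
    """Map free-form text to a structured action the app layer can run.
    In practice this would be a constrained LLM call returning JSON;
    here it is a toy rule-based stand-in."""
    text = utterance.lower().strip()
    if text.startswith("take a note:"):
        return {"action": "create_note", "text": utterance.split(":", 1)[1].strip()}
    if text.startswith("set a timer for"):
        return {"action": "set_timer", "duration": text.removeprefix("set a timer for").strip()}
    return {"action": "fallback_chat", "text": utterance}

NOTES: list[str] = []  # stand-in for the app's note store

def execute(intent: dict) -> str:
    """Deterministic app layer: runs the intent, returns a confirmation."""
    if intent["action"] == "create_note":
        NOTES.append(intent["text"])
        return f"Saved note: {intent['text']}"
    if intent["action"] == "set_timer":
        return f"Timer set for {intent['duration']}"
    return "Handing off to chat model."
```

    The point of the split is that note taking can’t silently fail: either the parser produces `create_note` or it doesn’t, and the execution step is plain app code.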

    • miskOP
      4 months ago

      LLMs on their own are not a viable replacement for assistants, because you need a working assistant core to integrate with other services. An LLM layer on top of the assistant, for better handling of natural-language prompts, is what I imagined would happen. What Gemini is doing seems ridiculous, but I guess that’s Google developing multiple competing products again.

      • conciselyverbose@sh.itjust.works
        4 months ago
        1. Convert voice to text.
        2. Pre-parse the text against a library of known voice commands. If any match, execute them, pass the confirmation forward, and jump to 6.
        3. If there are no valid commands, pass the text to the LLM.
        4. Have the LLM, heavily trained on the commands, emit API output for them; if none apply, generate a normal response.
        5. Check the response for API outputs, handle them appropriately and send the confirmation forward; otherwise pass the output on.
        6. Convert the text back to voice.

        The LLM part obviously also needs all kinds of sanitization on both sides, like they do now, but exact commands should preempt the LLM entirely, if you’re insisting on using one.
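        The six steps above can be sketched end to end. This is only an illustrative skeleton under my own assumptions: the command table, `query_llm`, and the `[api:...]` tag convention are all made up, and speech-to-text / text-to-speech are stand-ins:

```python
# Step 2's exact-command library: matched commands never reach the LLM.
COMMANDS = {
    "turn on the lights": lambda: "Lights on.",
    "what time is it": lambda: "It is 10:00.",
}

def query_llm(text: str) -> str:
    """Step 4 stand-in: a fake LLM trained to emit tagged API output."""
    return f"[api:search] {text}"

def handle_api_outputs(response: str) -> str:
    """Step 5: scan the LLM response for API outputs and handle them,
    forwarding a confirmation; plain text passes through unchanged."""
    if response.startswith("[api:"):
        tag, _, rest = response.partition("] ")
        return f"Ran {tag.strip('[')}; confirmation for: {rest}"
    return response

def assistant(audio: str) -> str:
    text = audio                                 # 1. voice -> text (stand-in)
    action = COMMANDS.get(text.lower().strip())  # 2. exact-command pre-parse
    if action:
        reply = action()                         #    execute, jump to 6
    else:
        raw = query_llm(text)                    # 3-4. fall through to the LLM
        reply = handle_api_outputs(raw)          # 5. handle API outputs
    return reply                                 # 6. text -> voice (stand-in)
```

        The key property is in step 2: a dictionary lookup short-circuits the whole LLM path, so known commands stay deterministic no matter what the model would have said.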

      • MudMan@fedia.io
        4 months ago

        It is a replacement for a specific portion of a very complicated ecosystem-wide integration involving a ton of interoperability sandwiched between the natural language bits. Why this is a new product and not an Assistant overhaul is anybody’s guess. Some blend of complicated technical issues and corporate politics, I bet.