Silicon Valley is bullish on AI agents. OpenAI CEO Sam Altman said agents will “join the workforce” this year. Microsoft CEO Satya Nadella predicted that agents will replace certain knowledge work. Salesforce CEO Marc Benioff said that Salesforce’s goal is to be “the number one provider of digital labor in the world” via the company’s various “agentic” services.

But no one can seem to agree on what an AI agent is, exactly.

In the last few years, the tech industry has boldly proclaimed that AI “agents” — the latest buzzword — are going to change everything. In the same way that AI chatbots like OpenAI’s ChatGPT gave us new ways to surface information, agents will fundamentally change how we approach work, claim CEOs like Altman and Nadella.

That may be true. But it also depends on how one defines “agents,” which is no easy task. Much like other AI-related jargon (e.g. “multimodal,” “AGI,” and “AI” itself), the terms “agent” and “agentic” are becoming diluted to the point of meaninglessness.

  • Baldur Nil@programming.dev
    …so might as well say that “agent” is simply the next buzzword, since people aren’t so excited about the concept of artificial intelligence any more

    This is exactly the reason for the emphasis on it.

    The reality is that LLMs are impressive and nice to play with. But investors want to know where the big money will come from, and for companies, LLMs aren’t that useful in their current state. I think one of their biggest uses is extracting information from documents with lots of text.

    So “agents” are supposed to be LLMs that execute actions (such as calling APIs) instead of just outputting text. Which doesn’t seem like the best idea, considering they’re not great at making decisions—despite these companies trying to paint them as capable of it.
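
    To make that concrete, an “agent” in this sense is roughly a loop like the sketch below. All of it is illustrative: call_llm, get_weather, and the JSON action format are made-up placeholders, not any vendor’s actual API.

        import json

        def call_llm(conversation):
            """Stand-in for a real model call. A real agent would send the whole
            conversation to an LLM and get back either a tool call or an answer."""
            if any(m["role"] == "tool" for m in conversation):
                return "Final answer: " + conversation[-1]["content"]
            return json.dumps({"tool": "get_weather", "args": {"city": "Berlin"}})

        def get_weather(city):
            return f"Sunny in {city}"  # canned data instead of a real API call

        TOOLS = {"get_weather": get_weather}

        def run_agent(task, max_steps=3):
            conversation = [{"role": "user", "content": task}]
            for _ in range(max_steps):
                reply = call_llm(conversation)
                try:
                    action = json.loads(reply)   # the model "decides" what to do
                except json.JSONDecodeError:
                    return reply                 # non-JSON is treated as the final answer
                tool = TOOLS.get(action.get("tool"))
                if tool is None:
                    return reply
                result = tool(**action.get("args", {}))  # the risky part: actually executing it
                conversation.append({"role": "tool", "content": result})
            return "No answer within the step limit"

        print(run_agent("What's the weather in Berlin?"))

    The point of the sketch is the line that executes the tool: whatever the model emits gets run, so the whole scheme is only as reliable as the model’s decision-making.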