Buzzwords are nothing new in tech. But few have become as muddled as “AI agents.” The term is now everywhere—from VC pitch decks to startup websites—but what it actually means is still up for debate, even among the most seasoned investors.
Take Andreessen Horowitz (a16z), one of the most aggressive backers of AI startups. They’ve poured money into major players like OpenAI and Anysphere, and are reportedly raising a $20 billion mega-fund to double down on AI. But even inside the firm, there’s no clear agreement on what an AI agent actually is.
In a recent podcast episode titled “What Is an AI Agent?”, three of a16z’s infrastructure investment partners—Guido Appenzeller, Matt Bornstein, and Yoko Li—tried to pin down a definition. They pointed out that startups today are slapping the “agent” label on a wide range of tools, from simple prompt-based bots to ambitious systems meant to replace human workers.
Appenzeller joked that some so-called agents are just a clever prompt layered over a knowledge base, offering canned responses to support queries. Hardly revolutionary. But other companies are going further, marketing their agents as full-on human substitutes—claims that the a16z team called premature at best.
For an AI tool to truly function like a human employee, Appenzeller argued, it would need to behave more like artificial general intelligence (AGI): staying active over time, remembering past interactions, and working independently. As of now, he and Li both agreed, that level of functionality simply doesn’t exist.
Why the Hype Around AI Agents Is Outpacing Reality
Even founders building AI agents are running into harsh technical limits. Jaspar Carmichael-Jack, CEO of Artisan (a company building sales agents), admitted that despite a viral “stop hiring humans” campaign, his team is still very much hiring humans. That’s because real-world AI agents still struggle with long-term memory and tend to hallucinate—two deal-breakers when reliability and continuity matter.
So what can AI agents do right now? Yoko Li offered a grounded definition: an AI agent is a reasoning, multi-step large language model (LLM) with a dynamic decision tree. In simple terms, it’s not just a chatbot that follows instructions—it makes its own choices. It can pull data, decide what matters, act on it, and even generate follow-up actions like drafting emails or writing code.
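Li's definition can be made concrete with a toy sketch. The loop below is purely illustrative, not any company's actual product: every function and step name is hypothetical, and a real agent would call an LLM to make each decision rather than follow hard-coded rules. The point is the shape of the thing—the agent repeatedly decides its next step from the current state instead of executing a fixed script.

```python
# Toy sketch of a multi-step agent with a dynamic decision tree.
# All names (pull_data, draft_emails, etc.) are hypothetical stand-ins;
# in a real agent, decide_next_step would be an LLM call.

def decide_next_step(state):
    """Dynamic decision tree: pick the next action from what's known so far."""
    if "leads" not in state:
        return "pull_data"
    if "qualified" not in state:
        return "filter"
    if state["qualified"] and "emails" not in state:
        return "draft_emails"
    return "done"

def run_agent():
    state = {}
    trace = []  # record of the steps the agent chose
    while True:
        step = decide_next_step(state)
        trace.append(step)
        if step == "pull_data":
            # Stand-in for fetching real data from a CRM or API.
            state["leads"] = ["alice@example.com", "bob@example.com"]
        elif step == "filter":
            # Stand-in for the agent deciding which leads matter.
            state["qualified"] = [l for l in state["leads"] if l.startswith("a")]
        elif step == "draft_emails":
            # Follow-up action the agent generates on its own.
            state["emails"] = [f"Hi {l.split('@')[0]}, ..." for l in state["qualified"]]
        else:
            return state, trace

state, trace = run_agent()
print(trace)  # the agent chose its own sequence of steps
```

The contrast with a plain chatbot is that nothing here prescribes the order of operations: the sequence in `trace` emerges from the agent's own decisions about the evolving state.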
This is a far cry from replacing human jobs. While automation might take over some repetitive tasks, Bornstein explained, we’re nowhere near building bots that can match human creativity or critical thinking. In fact, as agents boost productivity, some businesses may end up hiring more people, not fewer.
Bornstein emphasized that the industry often forgets most people have jobs that require flexibility, emotion, and nuance—things AI still can’t replicate. As for a future where bots replace all white-collar workers? “I’m not sure that’s even theoretically possible,” he said.
Much of today’s confusion, the group concluded, stems from companies pushing exaggerated narratives to market their products or justify higher prices. The result is an overhyped landscape where expectations outpace capabilities.
If even Silicon Valley insiders are urging caution, the rest of us would be wise to stay skeptical too.