AI agents “will come of age in 2025,” writes AI futurist Steve Brown. But another AI expert, Tobias Zwingmann, argues that this year “the growing adoption of AI agents won’t deliver the productivity gains everyone expects.” These views, from two of the top experts in the field, actually complement each other more than they conflict.

Business leaders want to know if their companies can improve their operations at lower cost. We economists want to know if this will happen at sufficient scale that productivity statistics will reflect the improvement. AI agents are the embodiment of my past prediction: “Most business functions will not be improved by simply using a chatbot connected to a large language model. Instead, the big gains will come from specialized applications that solve a very specific business problem.”

Brown offers ideas for AI agents, some of which now exist. An easy one to understand is the Customer Support Agent: “An agent that fields customer support calls, books appointments, answers questions, sells spare parts, and performs any needed follow-up, such as sending an email with a PDF of instructions.” He also lists customer help agents for travel and shopping, enterprise agents including buying agents and scientific researchers, and “ambient agents,” which might monitor all aspects of a house, public space, or cybersecurity. The key element is that each agent is good in one particular area. The customer service agent, for example, would be specialized for a specific company, understanding its products, their technical details, the company’s service call booking system, and so on. The framework for an agent could be used by different companies, but each would have to customize it for its particular operation.

Zwingmann points out that agents are anything but plug-and-play. “They work great in demos and controlled environments, but when you put them into production, things get really messy, really fast.” He offers three concrete suggestions for implementation in 2025. First, start small, gradually increasing the agent’s level of independence and authority.
The second suggestion is to prioritize implementations, starting with “specific, well-defined tasks where the consequences of failure are manageable. Document processing? Great start. High-stakes financial decisions? Maybe not yet.”

Finally, Zwingmann recommends planning for failures. The agents will make mistakes, especially with unusual cases and when conditions change. The implementation plan should limit downside risks and have humans ready to intervene.
Brown, I believe, would concur with these suggestions. As a futurist, he lays out the vast potential of AI agents. He writes, “The near future of work will be about human, digital, and robotic employees working closely together to accomplish goals. Each will have unique strengths and weaknesses. Companies that figure out the right balance of employees in their organization, build a culture of trust between their human and machine employees, and use digital employees and robots to amplify and elevate the efforts of their human employees will win in the marketplace.”
Many great companies were built by two people with different personalities: a grand visionary paired with a nuts-and-bolts technician. One sees a glorious future coming from a radical change; the other makes sure the bills are paid on time and tries to shave a quarter percent off the cost of production. The two articles about AI agents and the future of productivity reflect these personalities. One lays out a view of the future economy in which people become far more powerful because they have very useful tools. And the other describes a practical, affordable path to achieving that future.