The AI Automation Trap: What Manufacturing Can Teach Us About Agents, Token Costs, and Layoffs
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
— Roy Amara
Roy Amara’s observation, often called Amara’s Law, perfectly captures the current moment in generative AI. We are almost certainly overestimating what AI can safely and economically do in the short term. At the same time, we may be underestimating how profoundly it will reshape work, software, organizations, and human productivity over the long term.
That tension is important. The problem is not that AI is useless. Quite the opposite. Generative AI is already a remarkably useful assistant. It can help people write, summarize, brainstorm, research, code, analyze, and learn faster. Used well, it can expand what an individual is capable of doing.
The problem is that we are confusing a powerful assistant with an autonomous worker.
In a short explainer video, I describe generative AI as a “smart guesser.” That phrase is intentionally simple. These systems are powerful because they are very good at predicting plausible answers from patterns in data. But the same property that makes them useful also makes them risky: they can be confidently wrong.
A smart guesser can be an excellent assistant. A smart guesser should not be given unchecked authority over mission-critical decisions.
AI Is Most Valuable When It Augments Human Judgment
The best uses of generative AI today tend to have a few things in common: the cost of being wrong is manageable, the output can be reviewed, and a human remains accountable.
AI is useful when it helps draft a document, summarize a meeting, generate test cases, classify support tickets, brainstorm product ideas, or explain a complex topic. It is useful when it accelerates a human’s work without replacing the human’s responsibility.
In these cases, AI does not need to be perfect. It needs to be helpful. The human user provides context, judgment, taste, ethics, and final approval.
That is a very different proposition from asking an AI agent to act on behalf of a company.
An AI agent is not just answering questions. It may be sending emails, updating records, issuing refunds, placing orders, writing code, approving claims, modifying accounts, or interacting directly with customers. Once an AI system is allowed to take action, its mistakes are no longer just bad answers. They become operational failures.
The Agent Hype Is Where Things Get Dangerous
The current hype around AI agents is concerning because it encourages companies to move too quickly from assistance to autonomy.
The difference matters. A chatbot that drafts a response for a human is one thing. A chatbot that gives binding customer-service answers is another. Air Canada learned this the hard way when a tribunal found the airline liable after its chatbot gave a customer incorrect information about bereavement fares. The issue was not whether the chatbot was “intelligent.” The issue was that the company deployed it as part of its customer experience, and the customer relied on it.
We have seen similar warnings in other domains. In the legal field, lawyers in Mata v. Avianca were sanctioned after submitting fake legal citations generated by ChatGPT. That episode became a memorable example of how plausible AI output can become professional misconduct when humans fail to verify it. McDonald’s also ended its IBM AI drive-thru pilot after testing the technology at more than 100 locations; reporting described customer complaints and order-accuracy problems, even though the company remains interested in future voice-ordering technology.
These stories are not arguments against AI. They are arguments against magical thinking.
The lesson is simple: once an AI system acts on behalf of an organization, its mistakes become the organization’s mistakes. The liability, brand damage, customer frustration, and cleanup costs do not disappear because the error came from a machine.
The Hidden Cost of AI Automation
Another misunderstanding in the current AI hype cycle is economic. Many people assume AI labor is nearly free. After all, a single prompt can cost pennies or fractions of a penny. But that is the wrong unit of analysis.
The relevant cost is not the cost of one prompt. The relevant cost is the cost of completing a workflow reliably.
Agentic systems can consume a surprising number of tokens. They may need to read long context, reason through several steps, call tools, retry failed actions, check their own work, produce logs, escalate exceptions, and interact with multiple systems. The cost can increase quickly, especially when agents are deployed at scale.
A single chatbot answer may be cheap. A production-grade AI workflow with orchestration, monitoring, evaluation, human fallback, compliance controls, and failure handling may not be.
This is why some companies are beginning to discover that AI automation is not automatically cheaper than human labor. Axios recently reported that some companies are spending more on AI than on the employee salaries those tools were meant to replace. A Forbes analysis described this as a “token paradox”: even as the price per token falls, total spending can rise as agentic systems use far more tokens across multi-step workflows.
The true cost of AI automation is not just:
model tokens
It is closer to:
model tokens + integration + orchestration + monitoring + evaluation + human escalation + security + compliance + failure recovery + liability risk
In some workflows, AI will be dramatically cheaper than human labor. In others, especially where the work is ambiguous, exception-heavy, emotionally sensitive, or mission-critical, the economics may be far less obvious.
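To make the contrast concrete, here is a back-of-the-envelope cost model in Python. Every number in it, including token prices, step counts, retry rates, and review costs, is an invented illustration rather than a measured figure; the point is only that the per-workflow arithmetic differs from the per-prompt arithmetic.

```python
# Back-of-the-envelope comparison: one chatbot answer vs. one
# production agent workflow. All constants below are illustrative
# assumptions, not measured or quoted prices.

PRICE_PER_1K_TOKENS = 0.01  # assumed blended input+output price, USD


def chat_answer_cost(tokens: int = 1_500) -> float:
    """Cost of a single prompt/response exchange."""
    return tokens / 1_000 * PRICE_PER_1K_TOKENS


def agent_workflow_cost(
    steps: int = 8,                   # reasoning + tool-call steps per task
    tokens_per_step: int = 4_000,     # long context re-read at each step
    retry_rate: float = 0.25,         # fraction of steps retried on failure
    human_review_rate: float = 0.10,  # tasks escalated to a person
    review_cost: float = 2.50,        # assumed loaded cost of one review
    overhead_per_task: float = 0.05,  # orchestration, logging, evaluation
) -> float:
    """Cost of completing one multi-step agent task end to end."""
    token_cost = (
        steps * (1 + retry_rate) * tokens_per_step / 1_000 * PRICE_PER_1K_TOKENS
    )
    return token_cost + human_review_rate * review_cost + overhead_per_task


print(f"one chat answer:    ${chat_answer_cost():.4f}")
print(f"one agent workflow: ${agent_workflow_cost():.4f}")
```

With these made-up inputs, the agent workflow costs dozens of times more than the single answer, even before integration and compliance work is counted; the direction of that gap, not the specific ratio, is the point.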
Manufacturing Already Learned This Lesson
This is where manufacturing offers a useful analogy.
People often assume that factory automation is always better than manual labor. But manufacturing leaders know the real question is not “Can we automate this?” The question is “Does automation make the total system better?”
Many mass-produced electronics still rely heavily on human labor. The iPhone is a good example. Foxconn’s Zhengzhou complex in China, often called “iPhone City,” has employed hundreds of thousands of people during peak production periods. This is because iPhone assembly involves complex, repetitive, fine-motor work at extraordinary scale.
Apple has been pushing to automate more of final assembly, but this is easier said than done. Reports indicate that Apple’s goal is to reduce labor on iPhone final assembly lines by up to 50%, yet those efforts have run into practical limits, including high defect rates when attempting to automate certain component-installation steps.
The lesson is straightforward: even in one of the world’s most advanced supply chains, automation is not automatic. Human labor remains important when tasks are intricate, products change frequently, quality standards are high, and retooling is expensive.
This is not because manufacturers are unaware of robots. It is because automation is an economic and operational decision. Robots are best suited to consistent, high-volume production, and the total cost of integration, including infrastructure, safety, installation, and maintenance, can make them prohibitive for many manufacturers. They are less compelling when products change frequently, when reprogramming and retooling are expensive, when the work requires flexibility or dexterity, or when lower-cost labor remains more economical.
That sounds a lot like AI agents.
The point worth carrying into the AI debate is this: the question is not merely whether a task can be automated. The question is whether automation improves the total system.
The AI Version of the Manufacturing Decision
Manufacturing automation asks questions like:
- Is the task repetitive?
- Is the product stable?
- Is volume high enough to justify the investment?
- Is quality improved by automation?
- How expensive is retooling?
- What happens when something goes wrong?
AI automation should ask the same kinds of questions:
- Is the workflow predictable?
- Are the rules stable?
- Is the data reliable?
- Is the action reversible?
- Can the output be audited?
- How costly is a mistake?
- How often will humans need to handle exceptions?
- Does automation improve the customer experience, or merely reduce headcount?
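For illustration only, the checklist above can be encoded as a rough go/no-go screen. The criteria mirror the questions in the list, but the hard-requirement rules and the 20% exception threshold are arbitrary assumptions for demonstration, not a validated decision model.

```python
# Illustrative automation screen based on the checklist above.
# The rules and thresholds are assumptions for demonstration only.

from dataclasses import dataclass


@dataclass
class Workflow:
    predictable: bool      # Is the workflow predictable?
    stable_rules: bool     # Are the rules stable?
    reliable_data: bool    # Is the data reliable?
    reversible: bool       # Is the action reversible?
    auditable: bool        # Can the output be audited?
    mistake_cost: str      # "low" | "medium" | "high"
    exception_rate: float  # fraction of cases needing a human


def automation_verdict(w: Workflow) -> str:
    # Hard requirements: irreversible, unauditable, or high-stakes
    # actions keep a human in the loop regardless of other factors.
    if not (w.reversible and w.auditable) or w.mistake_cost == "high":
        return "keep human-in-the-loop"
    # Soft signals: frequent exceptions or unstable inputs erode
    # the economics, so augment the human rather than replace them.
    if w.exception_rate > 0.20 or not (
        w.predictable and w.stable_rules and w.reliable_data
    ):
        return "augment, don't automate"
    return "candidate for automation"


# Hypothetical example: small, reversible, easily audited refunds.
refunds = Workflow(
    predictable=True, stable_rules=True, reliable_data=True,
    reversible=True, auditable=True,
    mistake_cost="low", exception_rate=0.05,
)
print(automation_verdict(refunds))  # candidate for automation
```

A real screen would need domain-specific criteria and evidence behind each threshold; the sketch only shows that the questions compose into an explicit decision rather than a yes/no about capability.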
The companies that win with AI will not be the ones that automate the most jobs the fastest. They will be the ones that understand which parts of the workflow should be automated, which should be augmented, and which should remain human.
Exposure Is Not the Same as Economic Automation
This distinction is often missing from discussions about AI and jobs. A task may be technically exposed to AI, but that does not mean it is economically attractive to automate.
MIT researchers made this point in a study on computer-vision automation. They found that, at then-current costs, businesses would choose not to automate most vision tasks that had AI exposure, and that only 23% of worker wages paid for vision tasks would be attractive to automate.
That finding is important because it challenges the simplistic version of the AI jobs story. The fact that AI can perform part of a task does not mean replacing the worker is the right business decision. The full economics matter: system cost, error rates, supervision, deployment time, maintenance, and whether the task exists inside a messy real-world workflow.
The same applies to generative AI. A demo can be impressive. A production system is different.
Layoffs in the Name of AI May Be Bad Business
This is why the current wave of AI-driven layoffs is so troubling.
The human cost is obvious. Losing a job is painful. It affects families, communities, mental health, and people’s sense of dignity. But even from a cold business perspective, replacing people before understanding the real operating model may be reckless.
More companies are now citing AI when announcing job cuts. Challenger, Gray & Christmas reported that in April 2026, AI was the leading stated reason for job cuts for the second month in a row, with 21,490 announced cuts, or 26% of that month’s total. AP has also reported that companies including Cisco, Block, Dow, Pinterest, and Lufthansa have pointed to AI or automation when announcing job reductions, while noting that AI is often one factor among several.
Some of these reductions may prove strategically sound. But many may not.
When companies cut too deeply in the name of AI, they risk eliminating the very expertise needed to make AI useful. Experienced employees understand customer edge cases, institutional history, informal workflows, regulatory nuance, and the difference between what the process says and how the work actually gets done.
That knowledge is hard to encode in a prompt.
Klarna is a useful cautionary example. The company previously promoted its AI customer-service chatbot as doing the work of hundreds of agents. Later reporting said Klarna was turning back toward human customer-service hiring after concerns about quality.
That does not mean Klarna was wrong to use AI. It means the right model may not be “replace humans.” The right model may be “redesign the work so humans and AI each do what they are best at.”
The Better Model: Human-in-the-Loop, Not Human-out-of-the-Loop
The most durable AI systems will likely be human-in-the-loop by design.
That does not mean every AI action needs manual approval. It means companies should be thoughtful about where human judgment is required. The more irreversible, sensitive, or high-stakes the action, the more oversight is needed.
AI agents are best suited for workflows that are high-volume, low-risk, reversible, auditable, and governed by clear rules. They are poorly suited for workflows that are mission-critical, legally sensitive, emotionally complex, ambiguous, or difficult to verify after the fact.
A good AI deployment should make the system more capable, not merely less staffed.
That means asking better questions:
- Where does AI reduce drudgery?
- Where does it improve quality?
- Where does it help employees serve customers better?
- Where does it create hidden risk?
- Where does it merely shift work from employees to customers?
- Where does it save money only on paper?
- Where does it require human judgment, empathy, or accountability?
The goal should not be to remove humans from the system. The goal should be to design better systems.
The Long-Term Impact May Still Be Enormous
None of this means AI is a fad. Amara’s Law cuts both ways.
We may be overestimating the short-term ability of AI agents to replace workers safely and economically. But we may also be underestimating the long-term transformation that will come as the technology improves, costs change, organizations redesign workflows, and people learn how to use AI effectively.
The internet did not transform everything overnight. Smartphones did not either. Cloud computing, SaaS, and manufacturing robotics all took time to mature. The lasting impact came not from the first wave of hype, but from years of infrastructure, process redesign, standards, business-model innovation, and cultural adaptation.
Generative AI will likely follow a similar path.
The near-term risk is that companies act as if the future has already arrived. The long-term opportunity is that we learn how to use these tools wisely.
Conclusion: Automate Where It Makes the System Better
Manufacturing learned an important lesson over decades: automation is not a religion. It is a design and economics decision.
Robots make sense in some factories, on some lines, for some products, at some volumes, with some payback periods. In other cases, human labor remains more flexible, more economical, or better suited to the work.
AI deserves the same discipline.
Generative AI can be an extraordinary assistant. It can increase productivity, improve creativity, reduce repetitive work, and help people do more with less. But when we mistake a smart guesser for an accountable worker, we create brittle systems, frustrated customers, hidden costs, and unnecessary harm.
The right lesson is not “AI will replace everyone.”
The right lesson is also not “AI is overhyped and useless.”
The right lesson is this:
Use AI where it makes the system better. Keep humans where judgment, accountability, empathy, and adaptability matter. And do not confuse replacing people with creating value.
That may be the real opportunity in this moment: not mass replacement, but better collaboration between humans and machines.
I first published this article on LinkedIn.
