Why does partial AI automation create new bottlenecks instead of saving time?
Because a workflow runs at the speed of its slowest step, and partial automation rarely touches that step. Speeding up the middle of a chain while leaving the beginning and end untouched relocates the backlog. It rarely shrinks it. The work you expected to eliminate shows up somewhere else in the process, usually somewhere your ROI analysis did not model.
The Math That Does Not Play Out
Teams green-light automation projects with a clean arithmetic story. If AI handles three of five steps in a workflow, the team expects roughly 60% less effort and a proportional reduction in cycle time. Six months after shipping, the numbers rarely line up. Leadership calls the project a disappointment, and few people can fully explain why. Meanwhile, the team that built it is confused, because each automated step really is faster.
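The gap between the expected number and the real one can be sketched in a few lines. The step durations below are assumed for illustration, not taken from any study; the point is only that a share of steps is not a share of time.

```python
# Naive expectation vs. actual savings, with assumed step durations:
# automating 3 of 5 steps removes 60% of the steps, not 60% of the time.

step_hours = [1, 2, 1, 10, 2]        # step 4 is the human bottleneck
automated = {0, 1, 2}                # the three steps AI takes over

naive_savings = len(automated) / len(step_hours)
actual_savings = sum(step_hours[i] for i in automated) / sum(step_hours)

print(naive_savings)    # 0.6 -- the arithmetic story
print(actual_savings)   # 0.25 -- what the durations actually allow
```

The more the remaining human steps dominate total duration, the wider this gap gets.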
The 2026 INSEAD and Harvard Business School field experiment on AI adoption surfaced a version of this directly. Companies that automated the entire chain of a process pulled ahead on revenue and conversion. Companies that automated two or three steps in the middle stayed flat, even though their individual automated steps were performing as designed. The study called out an eight-step accounts receivable process one company automated end to end as an example of what full-chain adoption looks like in practice. Partial adopters running the same general workload did not see the same lift.
The study measured the outcomes. The mechanism behind those outcomes is worth pulling apart.
How the Bottleneck Moves
In any workflow with multiple steps, one step is slower than the rest. That step sets the throughput of the entire chain. Speed up any other step and total cycle time barely changes, because work still piles up at the bottleneck.
Consider an eight-step accounts receivable process. A team automates steps three through five: data extraction from invoices, routing to the right account owner, and draft generation of follow-up emails. Those three steps used to take two days combined. Now they take minutes. Leadership expects the whole cycle to compress by two days.
It compresses by hours.
The bottleneck was always step six, the internal approval before anything goes out to a customer. That step already ran at the speed of whoever was free to review. Now the approver receives three times as many drafts per day, still has the same amount of review time available, and the queue that used to sit at step four now sits at step six. Same pile of work. New location.
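A minimal queueing sketch makes the relocation visible. The capacities below are invented for illustration: the approver clears ten items a day in both scenarios, and automation only changes how fast drafts arrive at the approver's desk.

```python
# Toy queue at the approval step (step 6). Automation triples the
# arrival rate of drafts; the approver's daily capacity is unchanged.

def simulate(days, arrivals_per_day, approver_capacity=10):
    """Return how many items are stuck waiting for sign-off after `days`."""
    queue = 0
    for _ in range(days):
        queue += arrivals_per_day              # drafts land in the queue
        queue -= min(queue, approver_capacity)  # approver clears what they can
    return queue

print(simulate(5, arrivals_per_day=10))  # before automation: 0 waiting
print(simulate(5, arrivals_per_day=30))  # after automation: 100 waiting
```

Nothing about the automated steps is wrong in this picture. The queue simply re-forms at the first step whose capacity did not change.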
Why the Handoffs Get Worse
The second effect is quieter and harder to measure. Partial automation changes the shape of the handoffs between steps.
When humans did every step of the chain, coordination happened in the background. The person handling step three knew the person handling step four. They adjusted for each other. Exceptions got flagged in conversation. Context traveled with the work.
When AI takes over the middle steps, the handoff back to a human at step six becomes a formal boundary. The human now has to validate AI output before acting on it: reading the draft, spot-checking the data it pulled, catching hallucinations, rewriting the edge cases the model did not handle cleanly. That validation work is invisible in most ROI models because it did not exist in the old process. The old process did not need anyone to check whether the invoice got read correctly. A person read it.
That hidden validation cost often eats the time savings from the automation itself.
Three Bottleneck Shifts to Watch For
Three specific shifts show up often enough to plan for.
The approval bottleneck. Upstream steps produce output faster than any human can review. The queue moves from waiting for a draft to waiting for sign off. Total cycle time is unchanged because the approver was always the constraint and nothing touched the approver's capacity.
The exception bottleneck. AI handles the routine cases cleanly. The remaining cases, the messy 15% or 20%, now consume disproportionate human attention. The humans who used to handle all the work got a mix of routine and difficult cases; now they handle almost nothing but exceptions.
The context bottleneck. Downstream humans no longer build context while doing the work, because the AI did the work. They have to rebuild context from artifacts. Reading what the model produced, tracing why it made the choices it made, verifying the inputs. Rebuilding context from scratch in the middle or at the end often takes longer than just doing the work.
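The exception bottleneck in particular yields to quick arithmetic. The case counts and handling times below are assumed for illustration, none of them from the study; they show why automating 85% of cases saves far less than 85% of human time.

```python
# Assumed workload: per 100 incoming cases, 85 are routine and 15 are
# exceptions, and exceptions take much longer per case than routine work.

routine_cases, exception_cases = 85, 15   # per 100 incoming cases
routine_min, exception_min = 5, 40        # assumed minutes per case

before = routine_cases * routine_min + exception_cases * exception_min
after = exception_cases * exception_min   # humans keep only the exceptions

print(before, after)          # 1025 vs 600 human-minutes per 100 cases
print(round(1 - after / before, 2))  # 0.41: 85% automation, 41% time saved
```

The headline automation rate counts cases; the payroll counts minutes. The two diverge exactly as much as exceptions are harder than routine work.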
Any team evaluating an automation opportunity should ask whether it is about to create one of these situations.
When Partial Automation Actually Works
Partial automation is not broken as a strategy. It is often the right move. It works reliably in three situations.
The automated steps were the actual bottleneck. If step four was the constraint and the AI makes step four five times faster, total cycle time moves. The gains match the expectations.
The humans upstream and downstream of the automated steps gain real slack and redirect it to higher value work. If the team still had headroom on approvals and the automation frees them to spend more time on strategic account recovery, the savings hold up. If the team was already at capacity on every step, the automation just shuffles the queue.
The partial automation is a staging step toward full-chain automation, and the team has a real plan to close the remaining steps. Large projects take budget and attention; make sure the initiative does not stall when either runs out.
Any of those three cases justifies partial automation. Teams that skip the analysis and assume the savings will appear tend to find out they did not.
How to Avoid Moving the Backlog
A few practices help teams avoid shifting the bottleneck rather than eliminating it.
Map the full chain before picking what to automate. Include every approval step, every exception handler, every handoff. Do not fall into the trap of a high-level automation plan that misses critical details.
Measure cycle time end to end before and after shipping. Not step-level time. Total time from the start of the workflow to delivery. Step-level savings mean little if the end-to-end number does not move.
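One way to keep that measurement honest is to compute the end-to-end number directly from entry and delivery timestamps, never by summing per-step times. A minimal sketch, with hypothetical field names and dates:

```python
# End-to-end cycle time from workflow entry to delivery. The 'started'
# and 'delivered' field names and the sample dates are illustrative.

from datetime import datetime
from statistics import median

def end_to_end_hours(items):
    """Median hours from entry to delivery across a batch of items."""
    durations = [(i["delivered"] - i["started"]).total_seconds() / 3600
                 for i in items]
    return median(durations)

items = [
    {"started": datetime(2026, 3, 2, 9), "delivered": datetime(2026, 3, 4, 9)},
    {"started": datetime(2026, 3, 3, 9), "delivered": datetime(2026, 3, 5, 15)},
]
print(end_to_end_hours(items))  # 51.0 hours for this sample batch
```

Comparing this one number before and after shipping is what reveals a relocated bottleneck; per-step dashboards will look great either way.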
Push approval logic into the AI step itself where you can. Confidence thresholds, auto-approve bands for routine cases, escalation rules for exceptions. If a human sits at the end of every run, the approver tends to become the new constraint.
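That escalation logic can be as simple as a routing function. The thresholds below are placeholders for illustration, not recommendations; every team will tune its own bands.

```python
# Route each AI output based on model confidence and case type.
# Thresholds are assumed values, not prescriptive.

def route(confidence, is_routine, auto_threshold=0.95, reject_threshold=0.5):
    """Decide what happens to one piece of AI output."""
    if confidence < reject_threshold:
        return "rerun_or_manual"   # output not trustworthy enough to review
    if is_routine and confidence >= auto_threshold:
        return "auto_approve"      # no human in the loop for this run
    return "human_review"          # exceptions and middling confidence

print(route(0.98, is_routine=True))   # auto_approve
print(route(0.98, is_routine=False))  # human_review
print(route(0.30, is_routine=True))   # rerun_or_manual
```

The design goal is that the approver sees only the runs where review adds information, so their fixed capacity stops gating the routine majority.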
If the bottleneck is a step you cannot automate, do not automate around it. Redesign the workflow so the bottleneck goes away or stops gating the rest of the process. Automating in front of an immovable constraint is how teams end up with faster machines feeding the same slow human.
The Takeaway Worth Quoting
Partial AI automation does not save work. It moves it. The companies in the INSEAD and Harvard study that pulled away on revenue were the ones that automated the full chain and eliminated the handoffs between steps. The ones that automated the convenient middle and left the hard ends alone stayed flat.
That is why teams keep shipping working AI and calling it a failure.
By the Numbers
Firms that automated the full production chain pulled ahead on revenue and conversion, while firms that automated two or three steps in the middle of the chain stayed flat
Kim, Kim, and Koning, Mapping AI into Production: A Field Experiment on Firm Performance, INSEAD Working Paper 2026/20, March 2026
95% of enterprise GenAI pilots delivered no measurable profit and loss impact despite $30 to $40 billion in enterprise spending, a gap attributed to tools that do not learn from or adapt to the workflows they are dropped into
MIT NANDA, The GenAI Divide: State of AI in Business 2025, August 2025