The Attention Cost of Agentic Development
Running multiple AI agents in parallel feels productive. It might be costing you the sustained attention that makes hard work good.
The first time you run an AI agent and just let it go, it feels like a superpower. You describe the task, hit enter, and watch it work. You review what it did. You move on. Simple.
Then you harden the setup a little. The agent can run longer, test its own work, catch its own mistakes. You do not need to babysit it. You let it go for thirty minutes instead of five.
Then you run two at once. Then four.
And this is where it gets interesting, because now you have a system problem. Four agents producing output means four things that need your attention. So you build a notification system, or you use someone else’s. You get a ping when something finishes. You build dashboards. You get visibility into what is running, what is stuck, what needs review.
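To make the pattern concrete, here is a minimal sketch of that setup. Everything in it is a stand-in: run_agent_task and notify are hypothetical, not any real tool's API, and the durations are seconds so the sketch actually finishes. The only part that matters is the loop at the bottom, which pings the moment each agent is done.

```python
# A hypothetical sketch of "run several agents, ping when one finishes".
# run_agent_task and notify are stand-ins for whatever agent runner and
# notification channel you actually use; nothing here is a real tool's API.
import asyncio


async def run_agent_task(name: str, duration: int) -> str:
    """Stand-in for a long-running agent; it just sleeps for `duration` seconds."""
    await asyncio.sleep(duration)
    return f"{name}: finished, ready for review"


async def notify(message: str) -> None:
    """Stand-in for a ping: a Slack webhook, a desktop notification, an email."""
    print(message)


async def main() -> None:
    # Four agents running in parallel, each on a task of a different length.
    tasks = [
        asyncio.create_task(run_agent_task("refactor-auth", 4)),
        asyncio.create_task(run_agent_task("fix-flaky-test", 2)),
        asyncio.create_task(run_agent_task("migrate-schema", 6)),
        asyncio.create_task(run_agent_task("update-docs", 1)),
    ]
    # Ping the moment each one finishes. This loop is the rhythm the rest
    # of the piece is about: a pull on your attention every few minutes.
    for finished in asyncio.as_completed(tasks):
        await notify(await finished)


if __name__ == "__main__":
    asyncio.run(main())
```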
And for a while, this feels like the goal. You have organized the chaos. You can see everything.
The problem with seeing everything
Here is what actually happens.
You get pinged. You switch context. You review something. You send feedback. You get pinged again. You switch again. You are moving at a pace that feels productive — the output is there, the queue is moving — but you are never really in one place long enough to think.
The tools that manage all this work tend to optimize for throughput. Dashboards, queues, status updates, notifications. They are designed to help you process more. And they do. But processing is not the same as thinking.
For small, well-defined tasks this is fine. Review a diff, approve it, next. You do not need to sit with it. The risk is low, the context is narrow.
For anything bigger, it is a problem.
Big work needs time
Big work — a hard architectural decision, a feature with subtle edges, anything that requires genuine understanding of what you already have — does not respond well to interruption. You need to build a mental model, hold it, stress-test it. That takes time and it takes staying in one place.
Context switching breaks that. Not just a little. Switching between four half-finished things means none of them get your real attention. You are always arriving, never settled.
This is not a new problem. Every senior engineer has felt it. But agentic development makes it worse because the incentive is pointed the wrong way. The whole pitch is throughput: ship more, faster, with fewer bottlenecks. The agent does the work. You review and move. That rhythm works until the work requires more than a quick review.
What is missing
Most tools I have seen in this space lead with visibility. Boards, timelines, notifications, diffs, progress indicators. They are excellent at showing you what is happening. They are not very good at helping you focus on one thing until it is done well.
I do not think there is a tool right now that handles this part of the problem: the moment when you need to slow down, read the whole thing, sit with it, and actually decide if it is right.
Some of that is hard to build. It requires the tool to get out of your way at the right moment instead of pulling you toward the next item in the queue. It means sometimes the right move is fewer notifications, not more.
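As a sketch of what "fewer notifications" could look like, here is one hypothetical policy: instead of pinging on every completion, collect finished work into a digest that surfaces on an interval you chose. Nothing here is a real tool's behavior; collect_into_digest and review_queue are assumptions made up for illustration.

```python
# A hypothetical sketch of a digest policy: batch completions and surface
# them on a schedule, rather than pinging per agent. The names here
# (collect_into_digest, review_queue) are assumptions, not a real API.
import asyncio


async def collect_into_digest(review_queue: asyncio.Queue, digest_interval: float) -> None:
    """Flush everything that finished during the interval as a single summary."""
    while True:
        await asyncio.sleep(digest_interval)
        batch = []
        while not review_queue.empty():
            batch.append(review_queue.get_nowait())
        if batch:
            # One message, on your schedule, instead of one ping per agent.
            print(f"{len(batch)} items waiting for review:")
            for item in batch:
                print(f"  - {item}")
```

The queue is not the point. The point is that the interval belongs to you rather than to the agents, which is a different default than most of these tools ship with.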
The human part
There is a version of this where we just build better agents. More autonomous, more capable of catching their own mistakes and making their own judgment calls. The agent handles the subtleties. You barely have to look.
I think that is coming, and I think it will handle a lot of cases. But I suspect some work will always need what you can only give it by sitting with it for a while. Not because the AI cannot produce the output, but because “is this right” is sometimes not a question you can answer in thirty seconds at the end of a notification queue.
The quality of your attention shapes the quality of the result. That part is still human. The tools we build should probably make room for it.