When AI Orchestrates, Not Just Writes
Getting good results from AI coding tools is a workflow problem. The next step is AI that manages that workflow.
AI coding tools can write code. The harder part is everything around it: context switching, review cycles, keeping track of what still needs attention, and making sure nothing falls through the cracks. These are not side concerns. They are what determines whether AI-assisted development actually works.
Right now most AI tools sit in one corner of your workflow. You ask them to generate something, they generate it, and then you are on your own to integrate, test, review, and track it.
This is already starting to shift. Claude Code introduced an auto mode where a classifier decides per-action whether to ask for permission or proceed. Cursor launched automations that trigger agents from Slack messages, Linear tickets, or PagerDuty incidents. Devin lets you assign a Jira ticket and get a PR back. The pattern is the same: the unit of interaction is moving from keystrokes to tasks.
I think the next step is AI that does not just write. It orchestrates.
What orchestration looks like
Imagine the AI finishes writing a component and, instead of dumping the diff into a chat window, it does something more useful.
It creates a review task. Opens the diff in a pane next to the code. Pre-fills a comment about what changed and why. Leaves you with one action: approve or request changes.
Or it spots a potential security issue while modifying an auth module. Instead of mentioning it in passing, it creates a security review ticket in your backlog with the relevant context already attached.
Or it finishes a feature branch and realizes the tests are missing. It opens the test file, writes a skeleton, and creates a task: “Fill in the test cases for the new payment flow.”
The point is not that the AI is making decisions for you. It is that the AI is doing the work of structuring your work: creating tasks, opening the right context, reducing the number of times you have to switch between tools to figure out what needs your attention.
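To make the flow above concrete, here is a minimal sketch of that orchestration step: when the agent finishes a change, it packages the result as structured follow-up work instead of a wall of diff in a chat window. Everything here — the `Task` dataclass, the `orchestrate` function, its fields — is hypothetical illustration, not the API of any real tool.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    kind: str          # "review", "security", "tests", ...
    title: str
    context: dict      # diff, summary of what changed and why
    actions: list = field(default_factory=lambda: ["approve", "request_changes"])

def orchestrate(diff: str, summary: str, touched_auth: bool) -> list[Task]:
    """Turn a finished change into structured follow-up work."""
    tasks = [Task(
        kind="review",
        title="Review: new component",
        context={"diff": diff, "summary": summary},
    )]
    # A side observation becomes its own backlog item instead of a
    # passing remark buried in the transcript.
    if touched_auth:
        tasks.append(Task(
            kind="security",
            title="Security review: auth module changes",
            context={"diff": diff, "note": "auth paths modified"},
            actions=["triage"],
        ))
    return tasks

tasks = orchestrate("--- a/auth.py\n+++ b/auth.py", "Refactored token check", True)
print([t.kind for t in tasks])  # → ['review', 'security']
```

The design choice worth noticing: each task carries its own context, so reviewing it never requires reconstructing what the agent was doing at the time.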
Assistants versus agents
There is a meaningful difference between an AI assistant and an AI agent, and it shows up in who holds the initiative.
An assistant waits for you. You ask, it answers. You prompt, it generates. The flow is always driven by the human, and the AI’s role is reactive.
An agent has some ability to act on its own, within boundaries you define. It can identify that something needs attention, create the task, and prepare the context. You still decide what to approve, but you are not the one doing the setup work.
The difference sounds small, but it changes what the human actually does all day. When the AI is an assistant, you spend your time prompting and reviewing output. When it is an agent, you spend your time reviewing decisions and setting direction. The bottleneck shifts from giving instructions to giving good judgment.
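"Initiative within boundaries" can be sketched as a per-action gate, in the spirit of the auto mode mentioned earlier: the agent checks each intended action against a user-defined policy and either proceeds or queues it for human approval. The policy rules and function names below are illustrative assumptions, not how any particular product implements this.

```python
# User-defined boundaries: which actions the agent may take on its own.
POLICY = {
    "read_file": "proceed",
    "run_tests": "proceed",
    "edit_file": "proceed",
    "delete_file": "ask",
    "push_branch": "ask",
}

def dispatch(action: str, done: list, pending: list) -> str:
    # Default to asking for anything the policy does not cover.
    decision = POLICY.get(action, "ask")
    (done if decision == "proceed" else pending).append(action)
    return decision

done, pending = [], []
for action in ["run_tests", "edit_file", "push_branch"]:
    dispatch(action, done, pending)
print(pending)  # → ['push_branch']
```

The human still holds the final say on everything in `pending`; what has changed is that the setup work — noticing, classifying, queueing — happened without them.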
This is the version I have been building toward. Open Forge, the tool I have been working on, is designed around this model. The AI agent working on a task can create follow-up tasks when it spots things that need separate attention: code cleanup, missing tests, inconsistencies it found while working. Those tasks land in the backlog with a description and context. I review them when I am ready, not when the AI happens to mention them.
Where this goes
There is an idea that keeps coming up in this space: the ticket is the new prompt. Instead of writing a detailed prompt for an AI to generate code, you write a clear task description and an agent picks it up, does the work, and comes back with something reviewable. The skill shifts from writing good prompts to writing good tasks, and that is arguably a more durable skill anyway.
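One way to picture "the ticket is the new prompt": the unit of work becomes a task with enough structure for an agent to pick it up and come back with something reviewable. The fields and the readiness check below are illustrative assumptions, not a standard format.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    title: str
    description: str
    acceptance_criteria: list

    def ready_for_agent(self) -> bool:
        # A task an agent can act on needs a goal and a testable "done".
        return bool(self.description.strip()) and len(self.acceptance_criteria) > 0

ticket = Ticket(
    title="Add retry to payment webhook",
    description="Retry failed webhook deliveries with exponential backoff.",
    acceptance_criteria=["3 retries max", "backoff doubles each attempt",
                         "failures after retries land in a dead-letter queue"],
)
print(ticket.ready_for_agent())  # → True
```

Writing good acceptance criteria is exactly the "writing good tasks" skill the paragraph above describes: it is what lets the agent know when it is finished and what lets you review the result quickly.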
From there, a few directions seem like natural extensions. One is tighter feedback loops with running applications. The AI could navigate to the right page, trigger the right state, and show you the result of a change before you even ask. Not just “I think this works” but “here is the app, here is the change, see for yourself.”
Another is integration with existing tools: Git, CI, issue trackers, documentation, so that the AI’s orchestration is not confined to one app but spans the whole development environment.
The human role
None of this removes the need for human judgment. The AI is not deciding what to build or whether something is good enough to ship. It is handling the mechanical parts of managing work so that humans spend more of their time on the parts that actually require thinking.
The shift is from “AI helps me code” to “AI helps me work.” And that second version is a lot more valuable, because writing code was never the bottleneck.