Agent teams are not sub-agents

Anthropic shipped agent teams in Claude Code last week, alongside Opus 4.6. At first glance they look like sub-agents. They work differently.

Sub-agents are workers. You give them a task, they disappear into their own context window, and come back with a result. If you spin up three sub-agents, each works blind to what the others are doing. They report to you. You synthesize.

Agent teams talk to each other. They share a task list, and when one agent discovers something that affects another's work, it sends a message directly. The lead coordinates, but the teammates adjust based on what others have found.

I tested this on Andelo's codebase. Andelo has around 90 database migrations and modules that depend on each other in ways that aren't obvious from the schema alone. I set up a team: one agent on the assessment flow for external valuers, one on improvement tracking, one on row-level security across both.

With sub-agents, each would have finished its own analysis and reported back. I'd then have reconciled their findings myself, catching the places where one agent's assumptions conflicted with another's work.

With teams, the RLS agent flagged that assessment assignments used a different access pattern than the improvement module. It messaged the assessment agent, which adjusted its analysis on the fly. I didn't broker that exchange. The agents did.

The practical line: sub-agents save time on independent tasks. Agent teams save time on interdependent ones. If the work doesn't overlap, sub-agents are simpler and cheaper. When modules affect each other, having agents that also talk to each other catches connections you'd otherwise find by hand.

Setup is one environment variable: CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 in your Claude Code settings. It's still a research preview, and the token usage adds up. But for codebases where modules are coupled, this is the first time I've had AI catch cross-cutting concerns without me pointing them out.
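If you'd rather try it in a single session before committing it to your settings file, exporting the variable in the shell you launch Claude Code from should work too. A minimal sketch (the variable name is from above; everything else is standard shell):

```shell
# Enable the agent teams research preview for this shell session only.
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1

# Confirm it's set before launching Claude Code from this same shell.
echo "$CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"
```

Putting it in your settings instead makes it persistent across sessions; the export keeps the experiment scoped to one terminal, which is handy given the token usage.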