From Big Crews to Fast Ships: Org Design After AI
Why your scale-up plan is outdated, and what replaces it.

Ivan Peralta
Engineering
Mar 10, 2026

For most of the last decade, scaling engineering followed a familiar playbook: hire more people, add layers of leadership, split into teams, and create stronger handoffs between product, design, and engineering. I experienced that journey at a few organizations, and it worked—because the main constraint was execution capacity, and coordination was a price worth paying.
Lately, I’ve been questioning whether that playbook still matters.
AI didn’t remove the need for good product thinking, good engineering, or good teams. But it changed a constraint we’ve all been optimizing around for years: latency. The loop is the same: understand the problem, choose a solution, build it, ship it, learn. What changed is how fast you can move through that loop when parts of execution are delegated to or accelerated by AI and agents.
That shift makes some classic structures feel heavier than they need to be. In some contexts, you can ship a thin slice, learn from reality, and iterate faster than you can “perfect” your way through weeks of artifacts, handoffs, and alignment. This doesn’t mean every company should shrink or abandon rigor. Context is still king. But it does mean we need to be honest about what’s actually slowing us down and whether we’re about to scale the wrong thing.
And this is where it gets personal. I still get approached by leaders who want to double or triple their engineering teams, and they want to talk about my scale-up learnings. I understand why. But I also feel the tension: what if the best help I can offer is not “how to hire more,” but “why are you not getting the best out of the team you already have?” Because if the operating model is the problem, scaling headcount is not neutral. It can be a multiplier for dysfunction.
What still holds from scale-ups: coordination debt shows up faster now

When I think back to scaling in my career, the hardest part wasn’t building features. It was keeping the system coherent as headcount and scope grew: more dependencies, more surfaces, more production load, more people who needed context to make good decisions.
The real enemy was coordination debt, and it compounds. Every new layer, interface, meeting, or handoff adds latency. Some of that is necessary at scale. But a lot of it becomes institutional habit.
AI doesn’t eliminate coordination debt. It makes it more visible. When execution gets faster, the bottleneck moves to everything around execution: decisions, reviews, approvals, sequencing, and ownership. That’s why “just hire more” is increasingly risky as a default response. Adding people can add output, yes—but it also adds parallel work, collisions, and negotiation overhead. You can end up with more movement and the same speed.
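The “more movement, same speed” effect has a simple structural cause. A toy model (my numbers, not from the article): raw capacity grows linearly with headcount, but the potential coordination edges between people grow quadratically, so negotiation overhead outpaces output as you hire.

```python
# Toy model: capacity grows linearly with headcount,
# but pairwise coordination paths grow quadratically.
def communication_paths(n: int) -> int:
    """Potential pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

for size in (5, 10, 20, 40):
    print(f"{size:>3} people -> {communication_paths(size):>4} paths")
```

Doubling a team from 20 to 40 people roughly quadruples the coordination paths (190 to 780), which is why adding headcount to a high-friction operating model tends to amplify the friction.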
This is the uncomfortable point: many teams don’t need more capacity first. They need a different operating model: one that treats speed as cheap, and trust as expensive.
Staff+ in the AI era: when your best builders become the safety mechanism
AI changes the distribution of power inside engineering teams. More people can ship more, faster. The surface area of change expands. And in many scale-ups, a predictable dynamic kicks in: Staff+ engineers get pulled into being the human safety mechanism. Not because they want to block. Because someone has to hold the risk.
They become the people who:
slow things down “for good reasons”
say no when systems feel fragile
carry context and consequences in their head
absorb the blame when something breaks
get framed as gatekeepers by teams under delivery pressure
This is where the conversation gets uncomfortable, because it’s not a productivity problem. It’s a role and identity problem.
If your most senior engineers spend their time preventing mistakes other people can now produce at speed, you’re wasting the highest leverage talent you have. You’re also turning leadership into an adversarial function: the people responsible for progress start seeing the people responsible for safety as the enemy.
In the old world, this tension was often hidden by slower execution. In the new world, it becomes explicit—and it’s going to define which organizations scale and which ones stall.
The most important question isn’t “how do we move faster?” It’s: what are we turning our best people into, just to keep the system stable?
And here’s the part many leaders miss: if your organization is already in this stage—where senior engineers are acting as safety mechanisms—hiring more people usually makes it worse. You increase the volume of change, the number of edges, the number of misunderstandings, the number of “small” decisions that can break trust. The load doesn’t distribute evenly. It concentrates on the same few people who can see the failure modes. That’s when you get the regrettable outcomes: senior burnout, good talent leaving, and an organization that becomes slower because it got bigger.
Discovery vs delivery after AI: when the “artifact phase” stops being the learning engine
I’ve used classic discovery frameworks for years. Double Diamond was one of my favorites because it pulls engineering closer to product thinking: not just implementing solutions, but shaping them.
That still matters. What changed is the cost of turning an idea into something real.
When execution was expensive, discovery relied heavily on artifacts: docs, prototypes, handoffs, staged approvals. Those artifacts weren’t just communication tools—they were how teams reduced risk before committing real build capacity.
AI compresses that cost. In many contexts, the highest-fidelity way to learn is no longer debating a prototype. It’s putting a thin version of the thing in front of reality and observing what happens.
This creates a new tension: the structures we built to protect discovery can turn into the thing that delays it. Product and design can accidentally become guardians of an “artifact phase” that once made sense, but now adds latency without adding proportional learning.
None of this removes the need for judgment, customer context, or craft. And there are contexts where you must preserve stronger pre-production validation. But across a large part of software, the competitive edge is shifting from “how well we design the plan” to how quickly we can converge on truth.
The question is no longer “did we follow the right process?” It’s: what did we learn this week that we couldn’t have known without shipping something real?
The new shape of teams: fewer handoffs, wider surfaces, higher judgment
If latency is the constraint, the most valuable “unit” isn’t a function or a department. It’s a small group that can move from problem → decision → build → learning with minimal negotiation overhead. That doesn’t mean specialization disappears. It means the center of gravity shifts:
Engineers: from contributors to drivers
The modern expectation is less “deliver what’s defined” and more:
spot problems worth solving
frame trade-offs clearly
execute end-to-end with AI support
own outcomes, not tasks
In other words: problem-finders, not just problem-solvers.
Product: from traffic controller to judgment amplifier
Product’s leverage moves away from routing every decision and toward:
clarifying priorities and trade-offs
tightening decision-making
keeping teams oriented toward impact, not output
Less gatekeeping. Sharper direction.
Design: from artifact factory to quality multiplier
Design becomes less about producing phase artifacts and more about:
increasing user trust and coherence
shaping the feel of the product as it becomes real
accelerating iteration by improving what exists
The real trade-off
This way of working increases “gray area” on purpose. Roles overlap more. Responsibility is shared more. That can feel messy—until you realize the alternative is queues, handoffs, and slower learning.
The question becomes: can your organization handle higher autonomy without turning your seniors into the safety mechanism?
Closing
AI is not a tool upgrade. It’s a constraint change. And constraint changes always reshape organizations.
For years, “scaling” meant adding people and adding structure. That model was built for a world where execution was scarce. In a world where execution is cheaper, the danger is scaling the wrong thing: more handoffs, more coordination debt, and more senior talent turned into safety mechanisms instead of builders.
The hard part now is not speed. It’s what speed does to truth, trust, and coherence.
We’ve seen this pattern before outside software. When manufacturing and other industries adopted automation at scale, the need for people didn’t disappear — the center of value moved. Less repetitive execution. More supervision, quality, systems design, and exception handling. Software is heading in a similar direction. We’ll still need craft and product judgment, but more of the job shifts from “producing output” to steering systems that can produce output fast — without breaking trust.
If you’re leading an org right now, these are the questions that matter more than your hiring plan:
Where does work spend most of its time: building, or waiting?
Are your best engineers building the future, or preventing accidents in the present?
Is product discovery producing truth—or producing artifacts?
Does your structure reduce latency—or institutionalize it?
If you hired 50% more people tomorrow, would you go faster—or just create more friction?
There is no way back. The teams that adapt will feel unfairly fast. The teams that don’t will keep adding weight and calling it rigor.
If you’ve ever tried to understand your team through data — and felt the frustration of doing it with spreadsheets, or with tools that only rely on ticket lifecycles or member surveys, without getting a holistic and factual view — stay tuned. We’re building something for you.
And if you want to join a blue-ocean opportunity — and help shape how engineering teams navigate this new technology age — check out our careers page.
