The constraint didn’t disappear. It moved.

Why AI is flooding engineering teams with output, but not always progress.

The industry was built around the wrong assumption

For decades, the software industry optimized itself around a single constraint: writing code was the hardest, slowest, most expensive part of the system.

Everything else adapted to that reality: we protected developer time, we invested in tooling to speed up iteration, and we built entire organizational models around maximizing output from engineers. It made sense because code was the scarce resource.

This is a world that no longer exists.

AI has fundamentally changed the economics of code production. What used to take days can now take minutes. What used to require deep specialization can now be assisted or even automated. The constraint that shaped the industry for the last fifty years has been removed.

But removing a constraint does not eliminate it. It simply moves it somewhere else, and the real work is identifying the next bottleneck you will face.

More output, less clarity

What we are seeing now across engineering organizations is not necessarily better execution but more activity: more pull requests, more commits, more code being generated, more things happening at once.

At first glance, it can look like progress since teams are shipping faster, there is more visible movement, and the sense of productivity is high.

But when you look closer, something feels off.

  • Review processes are under pressure.

  • Testing systems are struggling to keep up.

  • Rework increases quietly in the background.

In a nutshell, the system absorbs more and more change, but not necessarily in a way that improves outcomes.

We have replaced scarcity with noise.

And noise is harder to manage than scarcity because it creates the illusion that things are working.

The new bottleneck is understanding what is good and what is bad

If code is no longer the constraint, then the real question becomes much more uncomfortable and difficult to answer.

Are we actually executing better?

And by “executing better” I don’t mean producing more or moving faster in isolation. I mean executing better as a system.

That requires understanding whether the work being delivered is meaningful, whether quality is improving or degrading over time, whether teams are aligned with what matters, and whether the introduction of AI is making the system more effective or simply more active.

These are not new questions, but they are now unavoidable.

The problem is that most organizations are still trying to answer them with tools and metrics designed for a world where output was the bottleneck.

A system that was never designed for this

Story points, tickets, velocity, lines of code… all of these were created to approximate progress when production capacity was limited. They were imperfect proxies, but they were useful enough in a world where writing code was expensive.

In a world where code can be generated at scale, those proxies break down.

They measure activity, not execution. They capture motion, not progress. And they make it almost impossible to understand whether the system is improving or slowly degrading under the weight of constant change.
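To make the contrast concrete, here is a minimal sketch of the difference between an activity proxy and an outcome-oriented signal. Everything in it is hypothetical: the `Commit` record, the 21-day rework window, and the idea of counting re-modified lines are illustrative choices, not a prescribed methodology.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical commit record: when it landed and which (file, line) pairs it touched.
@dataclass
class Commit:
    author: str
    timestamp: datetime
    touched: set  # set of (file, line) tuples modified by this commit

def activity(commits: list) -> int:
    """The old proxy: a raw commit count. Pure motion, says nothing about progress."""
    return len(commits)

def rework_ratio(commits: list, window: timedelta = timedelta(days=21)) -> float:
    """Fraction of touched lines that are re-modified within `window` of a
    previous change -- a rough signal that change is churning rather than
    compounding. The window length is an arbitrary illustrative choice."""
    last_touched = {}          # (file, line) -> timestamp of last modification
    total, reworked = 0, 0
    for c in sorted(commits, key=lambda c: c.timestamp):
        for loc in c.touched:
            total += 1
            prev = last_touched.get(loc)
            if prev is not None and c.timestamp - prev <= window:
                reworked += 1  # same line changed again soon after: likely rework
            last_touched[loc] = c.timestamp
    return reworked / total if total else 0.0
```

Under both lenses the same history tells different stories: three commits look like healthy activity, while a 25% rework ratio suggests a quarter of the change is being rewritten shortly after it lands.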

This is why so many teams feel like they are moving faster but not getting better.

What actually matters now

The shift that AI is forcing is not about adopting new tools or using the best model; it is about how we understand engineering performance.

We need to move:

  • From measuring output to understanding impact.

  • From counting what happens to understanding what it means.

  • From isolated signals to a coherent view of how work flows through the system.

Because if you cannot clearly see how your engineering organization performs, you cannot improve it. And in a world where AI is accelerating everything, the cost of not understanding does not stay fixed; it compounds.

This is why we needed to go back to First Principles and rethink the way we understand performance.

Know what's working, fix what's not

Pensero analyzes work patterns in real time using data from the tools your team already uses and delivers AI-powered insights.

Are you ready?
