The Risk of Burning your Budget
Why “AI is working” is the most dangerous assumption companies make.
Scaling AI costs without measuring is a fast way to lose control
A few days ago I came across a reflection from Aakash Gupta that I found very interesting. The core idea was simple: AI costs are scaling too fast, often faster than companies expected, and in many cases without a clear understanding of the return.
It’s also not new. We’ve seen this pattern before with cloud, with tooling, and with data. What changes is the speed and the narrative; AI accelerates both.
Measuring what actually matters
Most companies are still not set up to understand performance in a way that connects to business outcomes. They track activity because it’s easier: lines of code, tickets closed, content generated, outreach sent.
AI pours fuel on that system. Suddenly you have more output everywhere: more code, more experiments, and, in theory, more “productivity.”
But none of that answers the only questions that matter: are we delivering faster in a meaningful way, is quality improving or degrading, and is the business actually benefiting from it?
If you don’t have a way to measure delivery, quality, and flow consistently, AI doesn’t make you better. It just makes you busier, at an exponentially higher cost.
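As an illustration of what a baseline comparison looks like, here is a minimal sketch. Every number and metric name below is hypothetical; the point is only that delivery speed and quality have to be read together, which raw activity counts never show.

```python
from statistics import median

# Hypothetical delivery metrics, captured before and after an AI rollout.
# cycle_times: days per change shipped; defects_per_100: defects per 100 changes.
before = {"cycle_times": [4.0, 6.5, 3.0, 5.5, 4.5], "defects_per_100": 8}
after = {"cycle_times": [3.5, 2.5, 4.0, 3.0, 2.0], "defects_per_100": 14}

def summarize(label, metrics):
    ct = median(metrics["cycle_times"])
    print(f"{label}: median cycle time {ct:.1f}d, "
          f"{metrics['defects_per_100']} defects/100 changes")
    return ct

ct_before = summarize("before AI", before)
ct_after = summarize("after AI", after)

# Throughput improved while quality degraded: a pure activity
# metric (more changes shipped) would hide the second half.
print(f"cycle time change: {100 * (ct_after - ct_before) / ct_before:+.0f}%")
```

In this made-up dataset, cycle time drops by a third while the defect rate nearly doubles. Without both signals captured before the rollout, you could only report the first one.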
The dangerous assumption: “AI is working”
There’s an implicit assumption in a lot of these conversations that AI is already delivering value. That adoption equals impact. That’s a leap.
If you were not measuring performance before AI, you don’t have a baseline. And without a baseline, you can’t say if anything improved. You can say there’s more activity. You can say teams feel busier. You can say output volume increased.
But you cannot say impact improved in any defensible way.
And there’s a second, more uncomfortable layer to this.
A lot of the current AI economics are distorted. Providers are not necessarily pricing at sustainable levels. There are credible signals across the market that token-based models are being sold at a loss, in the same way early rides in Uber were heavily subsidized to drive adoption.
A $200/month plan that actually costs multiples of that to serve is not a stable equilibrium. It’s a growth tactic.
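To make the subsidy risk concrete, a back-of-the-envelope check is enough. The plan price, token volume, and provider serving cost below are all assumptions for illustration, not real figures from any vendor:

```python
# Hypothetical unit economics of a flat-rate AI plan (all numbers assumed).
plan_price = 200.00             # USD per seat per month
tokens_per_month = 150_000_000  # assumed heavy-user token consumption
provider_cost_per_1m = 3.00     # assumed USD to serve 1M tokens

cost_to_serve = tokens_per_month / 1_000_000 * provider_cost_per_1m
margin = plan_price - cost_to_serve

print(f"cost to serve: ${cost_to_serve:,.2f}/month")
print(f"margin on the plan: ${margin:,.2f}/month")
# A negative margin means the flat price is a subsidized growth tactic;
# re-pricing to break even would change your cost structure overnight.
```

Under these assumed numbers the provider loses $250 per heavy user per month, which is exactly the kind of gap that eventually gets corrected in the price.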
So if you are breaking your operating model because “AI is cheaper than humans,” there’s a real risk you are building on top of a temporary subsidy, not a durable cost structure.
When that gap corrects, and it will, the economics change overnight and you will be in deep trouble.
One language for performance
This is exactly the problem we are trying to solve with Pensero.
Not another dashboard, not another layer of activity metrics. A way for leaders and the entire engineering and product organization to look at the same signals and speak the same language about performance.
What is actually being delivered. How complex that work is. Where quality is holding or breaking. How flow is evolving. And critically, what is changing when AI is introduced into the system.
Because without that shared understanding, every conversation becomes opinion-driven. Engineering talks about output, not feelings. Finance talks about cost, not dreams. Leadership talks about strategy, not trends. And none of it connects unless you understand your reality.
The reality is simple: AI is already increasing the operating cost for most companies. The only question that matters is whether it’s also improving performance in a way that justifies it.
If you can’t measure that clearly, you’re not scaling intelligently; you’re just scaling spend. And it’s not only about understanding ROI today, but how it evolves as costs inevitably rise, before you find yourself on the wrong side of that curve.
And this moment, as highlighted in Aakash’s reflection, makes it clear that true understanding is no longer optional.