What is your AI strategy?
Why most companies don’t actually have an AI strategy.

Dave Garcia
Founder and Co-CEO
Apr 15, 2026

Most companies believe they have an AI strategy, but when you look closely, what they actually have is a collection of tools and some loosely tracked usage metrics. Buying licenses, rolling out copilots, or monitoring token consumption might feel like progress, but none of that constitutes a strategy. It simply means you are participating in the trend. If that is the extent of the effort, then what is happening is not strategic decision-making but reactive spending.
This is not unusual; most companies are in exactly the same position. AI is a genuinely transformative technology, but the conversation around it is rarely grounded in structured thinking. It is being shaped by urgency, by competitive pressure, and often by a subtle sense of fear that others might be moving faster. Vendors amplify this dynamic because their incentives are tied to adoption and consumption, which naturally pushes organizations toward using more rather than using better.
The difference between activity and strategy
A real AI strategy is not defined by the tools you deploy but by the outcomes you can clearly attribute to them. At its core, the objective is straightforward: AI should help your business either generate more revenue or operate more efficiently. These are the only two levers that ultimately matter. Everything else, whether it is increased activity, faster output, or higher engagement with tools, is secondary unless it translates into one of those outcomes.
If you are unable to connect AI usage to either increased revenue or reduced cost in a measurable way, then what you have is not an investment but an expense. Without that connection, decision-making becomes driven by perception rather than evidence, and budgets tend to expand without a clear understanding of the return they generate. That is where most organizations find themselves today, spending more each quarter while struggling to articulate what has actually improved.
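It helps to write that connection down as arithmetic, even crudely. The following is a minimal sketch in which every number is a hypothetical placeholder: the spend side is easy to know, while the benefit side is the part that has to be evidenced rather than assumed.

```python
# Back-of-the-envelope check: does the claimed benefit cover the spend?
# Every figure here is a hypothetical placeholder, not a benchmark.

monthly_license_cost = 30      # per developer, per month (assumed)
developers = 200
monthly_spend = monthly_license_cost * developers

blended_hourly_cost = 75       # assumed fully loaded cost of an engineering hour
hours_saved_per_dev = 2        # per month; this is the claim that must be measured, not assumed

monthly_benefit = blended_hourly_cost * hours_saved_per_dev * developers

print(f"Spend:   ${monthly_spend:,}/month")
print(f"Benefit: ${monthly_benefit:,}/month (valid only if the hours-saved figure is evidenced)")
print(f"Net:     ${monthly_benefit - monthly_spend:,}/month")
```

The arithmetic itself is trivial; the point is that the hours-saved line is the one most organizations never actually measure.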
The illusion of progress
One of the reasons this happens is that AI creates the appearance of rapid progress across almost every function. Sales teams can reach more prospects and personalize outreach at scale. Marketing teams can produce content faster and experiment more aggressively. Engineering teams can generate code at unprecedented speed. Finance teams can surface insights and automate reporting workflows. The surface area of potential improvement is enormous.
However, this is precisely where things begin to break down. Companies tend to deploy AI across multiple areas simultaneously without defining what success looks like in each case. They track adoption rates, usage frequency, or volume of output, assuming these are indicators of value. Over time, these metrics become proxies for success, even though they do not capture whether the business is actually improving.
The result is a growing gap between perceived progress and real impact. Teams feel more productive, but leadership cannot clearly explain which outcomes have improved, or in many cases, whether they have improved at all.
Why engineering exposes the problem
Engineering is one of the clearest examples of this dynamic. AI has significantly increased the ability of developers to produce code, and many organizations are already seeing higher levels of activity in terms of commits, pull requests, and overall output. On the surface, this looks like a clear win.
But the important questions sit one level deeper:
Are teams actually delivering features faster, or is the additional output creating more complexity to manage?
Is software quality improving, or are defects and regressions increasing over time?
Are teams collaborating more effectively, or are dependencies and bottlenecks becoming harder to identify?
Without a system that measures delivery, quality, and flow in a consistent way, it is very difficult to answer these questions. More code does not automatically translate into more value, and in many cases, it can introduce new forms of inefficiency that only become visible later.
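Answering those questions means computing metrics from the work itself rather than from tool dashboards. As a rough illustration, here is a minimal sketch of the kind of delivery and quality signals involved; the pull request fields and figures are hypothetical, and a real system would pull them from version control and issue tracking APIs.

```python
from datetime import datetime
from statistics import median

# Hypothetical pull request records standing in for data from the real tooling.
pull_requests = [
    {"opened": "2026-03-02", "merged": "2026-03-04", "reverted": False, "linked_defect": False},
    {"opened": "2026-03-03", "merged": "2026-03-10", "reverted": False, "linked_defect": True},
    {"opened": "2026-03-05", "merged": "2026-03-06", "reverted": True,  "linked_defect": False},
]

def days_between(start, end):
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Delivery: how long work takes to reach the main branch.
cycle_times = [days_between(pr["opened"], pr["merged"]) for pr in pull_requests]
print("Median cycle time (days):", median(cycle_times))

# Quality: how much merged work is later linked to a defect or undone entirely.
defect_rate = sum(pr["linked_defect"] for pr in pull_requests) / len(pull_requests)
rework_rate = sum(pr["reverted"] for pr in pull_requests) / len(pull_requests)
print(f"Defect-linked share: {defect_rate:.0%}   Rework (reverts): {rework_rate:.0%}")
```

None of these numbers is interesting in isolation; what matters is whether they move as AI output increases.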
What a real approach looks like
In practice, building a real AI strategy tends to be much less about scale and much more about focus. The organizations that are making meaningful progress are not trying to transform everything at once. Instead, they start with a clearly defined scope and a disciplined approach to measurement.
They begin with a small group of people who are motivated to experiment and improve their way of working. They select a single use case where the potential impact is meaningful and where outcomes can be observed. They define a set of metrics that reflect real business results rather than subjective perceptions, and they track those metrics consistently over time.
From there, they review what is happening, adjust their approach based on evidence, and only expand to other teams or use cases once they can clearly demonstrate that the initial effort is delivering value. This creates a feedback loop where learning compounds, rather than a fragmented rollout where activity increases but understanding does not.
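One way to make that feedback loop concrete is to hold the pilot to the same small set of metrics across a baseline window and an adoption window, and let the comparison drive the expansion decision. A purely illustrative sketch, with hypothetical metric names and numbers:

```python
# Illustrative only: comparing a pilot team's metrics across a baseline
# window and an AI-adoption window. Names and values are made up.
baseline = {"median_cycle_time_days": 6.0, "change_failure_rate": 0.18, "features_shipped": 9}
with_ai  = {"median_cycle_time_days": 4.5, "change_failure_rate": 0.21, "features_shipped": 12}

for metric in baseline:
    before, after = baseline[metric], with_ai[metric]
    change = (after - before) / before
    print(f"{metric:>28}: {before} -> {after} ({change:+.0%})")

# The decision to expand rests on whether the picture as a whole improved,
# not on any single number moving in the right direction.
```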
Why measurement is the hard part
The reason this approach is not more common is that measuring the impact of AI is inherently difficult. The effects are often indirect, and they tend to emerge over time rather than immediately. Most organizations do not have systems in place that connect day-to-day work with business outcomes in a reliable way, which makes it challenging to isolate the contribution of AI from other factors.
As a result, many teams fall back on metrics that are easy to collect but do not provide meaningful insight. Lines of code, number of prompts, or tool adoption rates are convenient, but they do not explain whether the organization is becoming more effective. They create a sense of control without delivering real understanding.
Developing the ability to measure impact requires a different level of rigor. It involves looking at how work actually flows through the organization, where it slows down, how often it needs to be reworked, and how these patterns change over time as AI is introduced.
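In practice, that kind of rigor often starts with something as simple as decomposing where a single change spends its time. A sketch of the idea, with hypothetical timestamps standing in for data that would come from the tooling itself:

```python
from datetime import datetime

# Sketch of flow decomposition: where does a change actually spend its time?
item = {
    "work_started":     "2026-03-02T09:00",
    "review_requested": "2026-03-03T17:00",
    "review_approved":  "2026-03-07T11:00",
    "deployed":         "2026-03-09T15:00",
}

stages = [
    ("coding",                "work_started",     "review_requested"),
    ("waiting for / in review", "review_requested", "review_approved"),
    ("approved to deployed",  "review_approved",  "deployed"),
]

for name, start, end in stages:
    hours = (datetime.fromisoformat(item[end]) - datetime.fromisoformat(item[start])).total_seconds() / 3600
    print(f"{name:>25}: {hours:5.1f} h")
```

Tracked consistently, this is what makes it possible to see whether AI is shortening the coding stage while quietly lengthening the review queue, or genuinely improving flow end to end.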
The only question that matters
Ultimately, every AI initiative should be evaluated against a single question: is this making the business better in a way that can be demonstrated with evidence? Not assumed, not inferred, but clearly shown through consistent measurement.
When the answer is yes, the path forward becomes obvious, and investment decisions become much easier to justify. When the answer is unclear, it is a signal that the strategy is not yet defined, regardless of how many tools have been deployed or how widely they are being used.
Where we’ve focused
In our case, we have focused on helping teams answer this question in the context of engineering, where the gap between activity and impact is particularly visible. Instead of relying on surveys or self-reported data, we look directly at the work itself, analyzing how it moves through systems, how teams collaborate, and what is ultimately delivered.
This does not eliminate the complexity, but it provides a foundation for understanding how AI is affecting delivery speed, quality, and efficiency. Once that connection is established, the conversation shifts. It is no longer about adopting tools or increasing usage, but about improving performance in a way that can be measured and sustained over time.
If you are trying to build that level of clarity and are struggling to define the right metrics, it is a problem worth solving properly.
