Measuring engineering performance is hard. Pretending otherwise is the real problem.
When metrics create uncertainty instead of clarity, trust breaks first.

Dave Garcia
Founder and Co-CEO
Feb 25, 2026

Engineers have told me this more than once: they walked into a performance review genuinely unsure whether the outcome would be recognition or a warning. From their perspective, the review felt random.
That uncertainty is not a small issue.
Most companies still struggle to measure engineering performance in a way that feels fair, consistent, and grounded in reality. As organizations grow, as teams scale, as layers of management increase, the signal gets noisier. Engineers ship features, fix incidents, review code, unblock teammates, refactor legacy systems, and quietly prevent disasters. Then they wait for a performance cycle to interpret all of that on their behalf.
When the interpretation feels like a coin toss, trust erodes.
Engineering productivity is inherently difficult to evaluate. It is not a transactional function; it is creative, iterative, and deeply contextual. In sales, outcomes often collapse into a clear number such as revenue closed, quota attainment, or conversion rate. Engineering rarely reduces so cleanly: much of the value lies in what did not happen because someone made a good architectural decision months earlier, and much of the impact is distributed across a system rather than attributable to a single moment.
This is why so many engineering productivity dashboards fail. They present numbers, but they do not necessarily present meaning. They create the appearance of objectivity without resolving ambiguity. Once those numbers are used to judge rather than to understand, teams adapt. They optimize for what is visible, they protect themselves and then, the conversation shifts from improvement to positioning.
Leaders understandably want clarity. They want a way to compare teams, identify underperformance, and reward excellence. The temptation is to search for a universal metric, something that behaves like a law of physics: a single number that reads as either a red flag or a green one.
In my experience as a CTO and engineer, engineering does not cooperate with that model. It is defined by trade-offs: speed competes with quality, short-term delivery competes with long-term maintainability, and shipping new features competes with paying down technical debt. Any metric that ignores those tensions risks rewarding the wrong behavior.
So when someone asks for the best engineering productivity metric, the only honest answer is that context matters: team maturity, product stage, and technical complexity all shape what a metric means. What is healthy in one organization may be destructive in another.
Measurement becomes dangerous when intent is unclear. The moment leaders announce that they are going to measure productivity, anxiety spikes. Engineers have seen systems where visibility gradually turned into surveillance, where dashboards became tools for individual ranking rather than team learning. Once people feel observed rather than supported, they stop exposing reality. The data remains, but the truth disappears.
If measurement is going to create trust instead of fear, it has to be explicit. People need to understand what is being measured, why it exists, how it will inform decisions, and equally important, what it will not be used for. Without that clarity, the system incentivizes self-protection rather than improvement.
When performance reviews feel arbitrary, the root cause is rarely bad intent. It is weak signals. Managers are often making high-stakes decisions with incomplete visibility. They lack a clear view of where delivery slowed down, where quality degraded, who consistently unblocked others, or where work vanished into coordination overhead. In the absence of reliable signals, decisions default to perception, recency, and confidence.
That is how you end up with engineers who cannot predict their own review outcome.
If someone is walking into a performance conversation unsure whether they will be promoted or penalized, the system is failing them. Not because engineering is simple, but because we have not been honest about its complexity. Measuring engineering performance is hard. Pretending it is easy is what makes it unfair.
In the next piece, I will explore what high performance in engineering actually looks like, and why output alone misses most of the story.

