High Engineering Performance Isn’t Output. It’s Context, Decisions, and Impact.
How misaligned metrics quietly undermine high-performing engineering teams.

Dave Garcia
Founder and Co-CEO
Feb 25, 2026

A lot of teams measure engineering performance by asking one question:
“How much did we ship?”
That’s a trap. Some of the best engineers I’ve worked with didn’t ship the most code; instead, they shipped the right decisions, because they were obsessed with context.
High performance in engineering is not about perfect syntax or elegant abstractions. It is about judgment: understanding the real problem, the business constraints, the technical debt already in the system, and the cost of being wrong. Great engineers constantly evaluate alternatives. They think through what happens if we choose option A instead of B. They consider the consequences of doing nothing, and they weigh short-term delivery against long-term stability.
That is performance.
Engineering is fundamentally a series of trade-offs. Speed and quantity compete with quality. Immediate delivery competes with architectural soundness, since shipping a feature today may increase complexity tomorrow. Any metric that ignores those tensions risks punishing good judgment. Over time, teams learn what is rewarded. If you reward speed alone, you get speed at the expense of resilience. If you reward visible output alone, you get output at the expense of depth.
The distortion compounds at senior levels, because senior engineers do not just deliver tasks: they shape direction, review critical changes, prevent incidents before they occur, and unblock teammates who would otherwise stall. In short, they influence architecture in ways that reduce future risk, which means much of their contribution is indirect. It shows up in system stability, in faster onboarding of juniors, in fewer escalations months later.
If your performance system only tracks delivery, you systematically undervalue the people who are holding the system together. They rarely make noise about it. They simply absorb the complexity until it becomes unsustainable. Eventually they disengage or leave, and the organization realizes too late how much invisible load they were carrying.
Platform and infrastructure teams suffer from a similar dynamic. They are often evaluated through the same lens as feature teams, which misses the point of their work. Their value lies in reliability, internal developer experience, and the removal of friction. When they succeed, incidents do not happen, build times shrink, and dependencies stabilize. Nothing dramatic occurs. And because nothing explodes, their impact can appear minimal on a simplistic dashboard.
Output is easy to count; impact requires interpretation. It requires understanding the system as a whole. When performance systems fail to reflect how engineering actually works, they do more than mismeasure reality: they reshape behavior in ways that degrade it.
In the next piece, I will look at how to measure engineering performance in a way that preserves trust rather than eroding it.

