What teams are really doing with AI coding tools and what that tells us
From Cursor’s stability to Claude Code’s surge and Copilot’s quiet transition: what engineers are actually using today.

Dave Garcia
Founder and Co-CEO
Mar 3, 2026

At Pensero, we sit in a privileged position.
Every day, we serve thousands of engineers who use AI coding assistants as part of their normal workflow. We don’t ask them what they feel they’re using. We see what actually happens in production: which tools show up in commits, how frequently they’re used, and how teams evolve around them.
Over the past months, some clear patterns have started to emerge.
They’re not always the ones you’d expect.
Cursor is still everywhere. It remains one of the most popular tools in absolute terms, deeply embedded in many teams’ daily workflows. Pensero’s own team used it for some time (spoiler: only half of us still do). The age of hypergrowth is over, though, and it feels stable now, less like a rising wave, more like infrastructure that’s already found its place.
Claude Code, on the other hand, is the new cool kid. Its adoption curve is steep, and teams are embracing it fast. Not just for experimentation, but for real work. We’re seeing something new here: teams liking it enough to formalize its presence, even to the point of registering autonomous synthetic contributors in their systems.
That’s a significant shift, and one we’ll dig into in a future post.
And then there’s Copilot…
This is the surprising part.
Copilot pioneered the category. It made AI coding mainstream. It sold at massive scale and became almost synonymous with “AI for developers.” And yet, when we look at actual engagement patterns today, enthusiasm appears muted. The tool exists, but the love for it is harder to find. Same as in real life.
So the obvious question comes up:
Is it time to say goodbye to an old friend?
Probably not. Or at least not yet: it’s never wise to bet against Microsoft.
What we’re seeing instead is something more nuanced. Copilot is changing. Models are improving. The depth of analytics is starting to resemble what newer players offer. And most importantly, there’s a clear signal from the top of Microsoft: leadership attention, investment, and what can only be described as founder-mode energy behind the product again.
The market has moved, and Copilot is adapting.
The broader lesson here isn’t about which tool is “winning.” It’s about how fast this space is evolving and how quickly yesterday’s assumptions become obsolete. AI coding tools are no longer just about autocomplete. They’re becoming collaborators. Some are better at certain tasks, some fit certain team cultures better, and many will coexist for a long time.
What matters most, though, isn’t which assistant you pick.
It’s whether your organization can actually understand what’s happening once these tools are in play. When usage explodes, when bots start committing alongside humans, when output increases across the board… can you still tell what’s really working? Can you distinguish real impact from noise?
That question is becoming far more important than tool choice.
And it’s the one we spend most of our time thinking about.

