Lemon Ice Cream
AI development tools are all around: Copilot, Cursor, Windsurf, Claude Code… they promise to boost our productivity and bring superpowers to our development teams, but many teams still struggle to adopt them, or reject them outright.

If it works, don’t touch it
If I had to choose one of the best qualities an engineer should have, it would be pragmatism.
I remember talking to a friend of mine, a lawyer, many years ago about the differences between lawyers and engineers (if there are any!), and he said: “In Argentina we have a joke: do you know why the engineer orders lemon ice cream? Because he tried it once, and he liked it.”
I was a bit shocked and started preparing my comeback, but after evaluating my own tastes I decided not to say anything.
I always try to apply proven patterns to similar problems, use the simplest approach, and almost always order the same flavor of ice cream.
Any sufficiently advanced technology is indistinguishable from magic
AI is here to change everything. It’s not new, but it went mainstream quite recently.
Until quite recently only some of us in tech were aware of what neural nets, classifiers, genetic algorithms and the like can do, but now everyone knows ChatGPT, and charlatans here and there have been promising the end of human labour, starting with no less than software engineering.
ChatGPT was initially perceived as almost magic: an almost human entity able to answer our questions with superhuman accuracy.
That was especially impressive in tasks where perfection was not required, like open-ended questions where the answer doesn’t have to be perfect, just good enough.
Software engineering is a bit trickier. There are multiple ways to solve a problem, but if the code is not syntactically correct, doesn’t run or just doesn’t solve the problem, the imperfections are very easy to spot.
Early models (especially anything that was not GPT-4) were pretty dull on that front. Malformed JSON, made-up libraries and the like frustrated early adopters, who wasted more time detecting bullshit than doing actual work.
Models have improved a lot and strong coding models like Claude and Qwen Code appeared, but many developers were already gone, frustrated with the experience.
Furthermore, unrealistic claims from newly created startups, saying a single prompt would replace all the people behind software creation, made it even harder to embrace this new way of working.
The (my) truth about code assisting tools
Those tools, from Cursor to Claude, or even pasting from ChatGPT, are great. They feel like a cheat code for development: I write what I want to do and they give me code that looks OK. So what is the problem?
The problem is that even the best models fail. Some get it right 60% of the time, some 80%, but that last 20% can kill you.
You really need to understand how to ask, how to interact and how to validate what the model gives you before adding it to the codebase.
We moved (a bit) from being mainly coders to being technical leads (we need to understand what the code does), QAs (we need to check that the code covers all possible scenarios) and product managers (we need to make sure it solves the problem).
These models are power tools, like a power drill for a construction worker: with the proper skills and supervision they can greatly boost your productivity, but without them you get the equivalent of a monkey with a shotgun, and the first incident can kill all the benefits (sorry, vibecoders, we are not there yet).
As in other industries (did I mention the power drill already?), if you want to stay productive in this new scenario you should master the tool.
No excuses like “but my code is artisanal craft, this cannot be solved by a machine”, yada yada yada.
Some of our code is critical and needs to be perfect, optimal and artistic; most of it doesn’t, so be pragmatic.
Tips and tricks
So far we have discussed how we were sold that AI is going to replace us, and how that turned out to be complete BS. Still, the current technology is amazing and can make our lives much better if we use it properly. But what does properly mean?
Everyone’s experience is different, and what works for me may not work for you, but I will summarize what works for me, my team and the people I talk to.
Let’s start by some ground rules:
Your code is yours; you own it the same way you would if you’d written it.
Now some tips in no particular order:
Use the best model you can afford. Don’t use expensive Claude for everything, but don’t use dumb 8B models either; that will kill all the benefits.
Understand everything you are suggested. If you don’t, ask the model to explain it until you do. Thank me later.
Never give the model permission to commit. Never ever: it could destroy everything in one shot.
Whenever you’re solving a problem, ask the model for a plan, then for a step-by-step process to implement it, and validate every step before moving to the next. This applies to methods, tests, etc.
If you use tools that support rules, like Cursor, spend a bit of time there. Explain how you like tests, design patterns or code architecture done, so it gets things right much faster.
Try different tools and find the one that works best for your stack, team and types of tasks.
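To make the rules tip more concrete, here is a rough sketch of what such a file might contain. The file name and every rule below are hypothetical, just to illustrate the idea; adapt them to your own stack and conventions. The format is plain language, since these rules get injected into the model’s context on every request:

```
# Hypothetical project rules file for an AI coding tool
- We use TypeScript in strict mode; never introduce `any`.
- Tests use Jest: one describe block per module, arrange/act/assert style.
- Prefer composition over inheritance; avoid classes unless there is state.
- Domain code must never import from the infrastructure layer.
- Keep functions short; extract helpers instead of nesting deeply.
```

The point is not these specific rules, but that once they are written down the model stops guessing your conventions from scratch on every request.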
Conclusions
If you skipped to the end, or if you take a single thing from this post, let it be this: do not miss this wave; get proficient with AI code assistants.
It’s not going to be a magical tool that reads your mind and creates a perfect implementation from a two-line prompt, but it is pretty damn good at helping you get the job done.
Maybe it’s not going to make us all 100x developers (yet), but if it makes me a 1.25x developer that’s pretty amazing, especially because that 25% is usually the boilerplate and repetitive tasks we all hate, freeing us to concentrate on the creative parts where the technology is still weak.
It is also improving and changing every single day, so these performance gains keep growing as we speak. Don’t stop paddling, and catch this wave.


