Developers used to laugh at those who spent hours configuring Neovim instead of simply using JetBrains (or VS Code) and writing code. Now, modern developers spend hours selecting and configuring AI coding tools, then pat themselves on the back for creating a supposedly great prompt. Would Cursor be better for my work? Or Windsurf? No, man, try Claude Code. Which model? Sonnet 3.7 or 4? I’d suggest GPT-4 or 5. No, no, use GPT-5 for writing CSS, and Sonnet 4 for writing JavaScript. Don’t forget about `.cursorrules` and `CLAUDE.md`. You need to write a ton of documentation so your agents know how to do their jobs. You need to create separate agents for the frontend and backend, and break them down even further to make them work well for you. The prompt can’t be too large or too small; it has to be just right. You need to test different prompts, on various tasks and with different models, and figure out which one works best when. Best of all, build yourself an Excel prompt library. Then a new version of the tool comes out, and it all stops working. Oh well, experiment again. You’ll surely find the best way to explain to the bot how to write the code for you.

That’s how I see working with AI tools. I’ve been through all of this myself, having tried it from the very beginning. Isn’t it pathological? Isn’t it analogous to those nerds who spent an hour configuring Neovim? Devs say they feel productive and work faster, but is that really the case? The illusion of productivity is certainly compelling, and using these tools can make you feel like magic is happening.

## Productivity illusion

Developers feel productive when they configure these tools. They feel like they’re riding a wave and doing something really good. I’ve done it myself. Anyone who wants to try AI coding tools has to, because in real life these tools don’t help without all the machinery I described above. The problem with configuring them is that it’s not something you can ever finish.
You’ll spend hours configuring it, but in reality you can tell the AI to do the same task five times and it’ll do it differently each time. Or the tool updates, and what you thought worked no longer works. But configuration and improvement give you a boost. You feel better. You’re working on something that’s mega-popular right now, so you feel satisfied. You can even give presentations and share your knowledge. In the long run, though, you’ll deliver just as many features as without it. No one can really measure its actual impact on productivity yet. We can measure it through the hype on X and YouTube, and we do, which only reinforces the false narrative.

## We measure the wrong things

A developer sees that an AI tool has generated a ton of code across a dozen or so files and is delighted. Do more lines of code and more files mean better? It should be the other way around: the less code you write to solve a problem, the better for you and others. AI generates a ton of code, and you can feel the vibe and accept it, but I deeply believe that serious developers who use agents and prompts to generate code also review it, improve it, and refactor it. That takes time. Sometimes it’s actually faster to refactor something using prompts, but not always. You feel 10x faster, but maybe you’re actually 20–30% faster; we don’t really know how to measure that.

Another thing: writing code has never been the biggest challenge in software development. It’s much more difficult, for example, to understand what the Product Owner means when presenting a new design. Or to debug a bug in a distributed architecture, when, at best, it takes you a few hours just to find a way to reproduce the problem. Creating new code is only a small part of everyday software development work. Unfortunately, we have focused so much on optimizing code generation and adding new features that we have forgotten this, leading us into a vicious cycle of investing time in optimization.
## This is a sandcastle

Back to configuring Neovim. You know, I started using Neovim about six months ago. I struggled terribly at first, but once I configured it to my liking, it worked for me, and I didn’t have to change much later. Configuring AI coding tools is quite the opposite. Every few months new models are released, and prompts that worked perfectly yesterday might be fit for the trash today. There are more and more tools, and thanks to powerful marketing, we’re bombarded from every side with gibberish about the superiority of tool A over tool B. You configure something that works now; in three months, you’ll have to configure it differently. You learn techniques that will be forgotten next year, and then you’ll be learning even newer ones. This investment doesn’t pay off. You pay more for it than you think. The real cost isn’t just wasted time, but also avoiding difficult work and the learning of technologies that will truly benefit you when you want to become an expert. These tools won’t help you when you have a huge codebase and need to fix edge cases. They won’t help you discuss a new feature with the Product Owner or understand another developer’s intentions. They won’t resolve conflicts within the team or conduct a security audit. Those are the complex parts of our daily work, not code generation. By spending time forcing technology to write code for you, you’re doing something relatively technical and easy; it’s a form of “productive procrastination.”

## Conclusion

Living and working in the AI bubble, we need more data and less hype. Each of us needs to experiment and find a way to determine whether something works. Just realizing how much time goes into configuring this technology is eye-opening. You also need to find a balance in using these tools. You don’t have to use them 100% of the time. Sometimes you just write the damn function yourself. Neovim users configured it to be better at crafting. We configure AI tools to avoid crafting.
--- #blog #SoftwareEngineering #ArtificialIntelligence #AI #AISkills #Programming #Productivity #AiCoding