Despite the excitement around autonomous coding agents, the highest-leverage way to use LLMs today is deliberately — staying in the loop. That will change as AI improves, and eventually your role will shift to defining the what and the why. But we aren’t there yet. Right now, staying in control is what lets you extract the sharpest output from these models: lean where lean is right, sophisticated where sophistication is warranted.
This approach lets you operate beyond the edges of your expertise while actually learning along the way, from LLMs as you would from books or colleagues. Everything you produce will reflect your judgment and standards rather than a probabilistic guess that silently breaks under pressure, and you retain genuine understanding of every design decision.
1. Avoid “vibe coding” (letting LLMs write everything). It works for small throwaway scripts, but for anything nontrivial, LLMs alone produce bloated, fragile, suboptimal codebases. The human+LLM combination beats either one working alone — but only if the human has strong communication skills and LLM experience.
2. Provide rich context. Don’t be stingy with what you feed the model. Include relevant code, papers, documentation, and a thorough “brain dump” — especially hints about which solutions look tempting but are actually bad, and which directions are promising. The more context, the better the output.
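The "rich context" habit can be mechanized with a small helper that bundles your brain dump and the full text of relevant files into one pasteable prompt. This is a minimal sketch, not a prescribed tool; the function name, section markers, and overall layout are my own choices, so adapt them to whatever format you find works:

```python
from pathlib import Path


def build_prompt(brain_dump: str, paths: list[str]) -> str:
    """Bundle a brain dump plus full file contents into one pasteable prompt.

    brain_dump: goals, constraints, tempting-but-bad directions, promising leads.
    paths: files the model should see in full (code, docs, notes).
    """
    parts = [brain_dump.strip(), ""]
    for p in paths:
        path = Path(p)
        # Delimit each file so the model can tell where one ends and the next begins.
        parts.append(f"--- {path} ---")
        parts.append(path.read_text())
    return "\n".join(parts)
```

Paste the result into the web interface yourself rather than letting a tool decide what the model sees; the point is that nothing relevant gets silently dropped.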
3. Use the right models. Prefer Claude Opus 4.6 or Claude Sonnet 4.6 (stronger reasoning, better at spotting complex bugs). Crucially: avoid agents, IDE integrations, and RAG-based tools. These hide context from the model and degrade performance. Stay in the loop: copy code into the web interface yourself, so you see everything the model sees.
4. Stay in control, but don’t overreact. Ceding all control to agents costs quality. But refusing to use LLMs out of ideology is equally costly — you’ll fall behind and miss a genuinely new set of skills. The sweet spot is the middle: human judgment driving the process, LLMs amplifying execution.
