
AI Wrote My Code. I Couldn't FEEL It.

11 min read · ai · agentic · dx · engineering

I've been using AI to write code for a while now. It's fast, it's getting more accurate, and the spec-first workflows are genuinely good. But I keep running into a feeling that I can't shake: when I let AI do too much, I feel disconnected from the codebase. I posted about this on r/ExperiencedDevs and the response confirmed something I was hoping for: it's not just me.

TLDR: Going fully agentic with AI coding made me faster but left me unable to understand my own codebase. I built a workflow where I write the skeleton and core logic by hand, let AI handle the deterministic stuff, and iterate on specs until they become the feature documentation. When I posted about this, hundreds of devs showed up with the same feeling, different coping mechanisms, and some sharp counterarguments that made me think harder. This post is everything I learned from that conversation.

The Disconnect

When I let AI go fully agentic on a feature, even with a good spec, I feel disconnected from the codebase. The code is there. It works. But I don't really get how it works. I know the result; I don't know what the code actually does. And that bothers me.

That's a weird place to be as a software engineer. You shipped the feature, tests pass, PR got merged. But if someone asks you why you chose this table structure, or what happens when this edge case hits, you're reading your own code like it's someone else's.

One commenter nailed it better than I could. They said there is none of your own intention in the codebase that you are reading. If code was something you could just "prompt, then review," then reviewing until you feel ownership would work outside of generated code too. But anyone who has been forced to maintain monolithic legacy codebases they didn't create will tell you that's not true.

That hit me. Because what they're really saying is: reviewing is not the same as understanding. You can read every line and still not have the mental model. The intention gap is real.

The Burn

I got burned once. Wrote a short prompt, let AI implement a whole feature, went to test it, and the thing totally diverged from what I wanted. I couldn't even course-correct because I had no idea what it built. Had to scrap everything and start over.

The Workflow I Landed On

Since the burn, I've been doing spec-first, but with my hands in it. I generate a spec, then I argue with it, poke holes in it, point out flaws, architect it myself. Once it's workable I implement the skeleton by hand. The schema, the core logic, the architecture. Then I feed it back to improve the spec more. As I implement I find more flaws, keep iterating, and eventually the spec becomes the documentation of the feature itself.

The deterministic functions, the boilerplate? AI can have those. But the core stuff, I need to touch it. I need to write it. Otherwise I don't feel ownership of it. When the shit hits the fan in production, I need to be able to jump right in and know where the logic goes and where it breaks.

When working with the spec, I also give the AI extra context and ask it to surface two or three aspects of the problem, then force myself to really think each one through. To sum it up: AI is my assistant.

The Speed Tradeoff

I know my approach is slower than the folks throwing unlimited tokens at the latest models and their 1M context windows. I accept that tradeoff. I still give all the f*cks I used to give, and I want to keep giving them.

That matters. Maybe not on the sprint board this week. But it matters when production goes down and your team is looking at you.

The Machine Spirit

There's this concept in Warhammer 40k called the machine spirit. Tech-priests pray to the machines, wave incense before them, and maintain a spiritual connection to the technology. Before AI I never thought about "connection to the codebase" as a thing. I coded 100% by hand, so the connection was just there. Now that I have the option to lose it, I can actually feel it. I need to feel the machine spirit of my code.

What the Community Told Me

I posted this on r/ExperiencedDevs. 324 upvotes, 154K views, and a comment section full of people who either feel the same thing or have strong opinions about why I shouldn't.

Here's what stood out.

"You Throw the First One Away"

One senior dev described a workflow that's almost the inverse of mine. They let agentic AI build the throwaway iterations on greenfield stuff, learn from the mistakes, then take a couple days to refactor into something reasonable. Their least favorite part of a project is when they don't know what they don't know, so they let AI eat that discovery phase.

But in an established codebase, they flip. They use AI as a rubber duck: following existing patterns, debating current design, working through alternatives. They said AI is not super helpful when you have a complex and working system, but it keeps them engaged with the codebase and thinking through refactors.

That's a useful frame. Greenfield vs. established codebase might need completely different levels of AI autonomy.

Building in Stages, Not Blobs

One developer described exactly why full agentic output feels wrong. When they write code manually, they build in stages. Write the most basic thing. Test it. Add the next layer. Test it. Wire up the data. Test. Each cycle lets you grow the code from nothing to something functional.

AI bypasses all of this. It just spews out a big blob, and sometimes it isn't even the blob you want. You're stuck debugging something that appears to have been written by a drunk intern who has no clue.

Their solution: have AI write in stages too. Same iterative approach, but with AI sketching each layer. Some steps are simple enough to just code yourself. The point is to build incrementally, not to receive a finished product you don't understand.
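To make the staged loop concrete, here's a toy sketch of what that rhythm looks like in practice. The function names and the price-parsing task are invented purely for illustration; the point is that each layer gets a quick check before the next one is built on top of it.

```python
# Stage 1: write the most basic thing, then test it before moving on.
def parse_price(raw: str) -> float:
    return float(raw.strip().lstrip("$").replace(",", ""))

assert parse_price("$1,234.50") == 1234.50

# Stage 2: add the next layer, built only on code we already trust.
def total(prices: list[str]) -> float:
    return sum(parse_price(p) for p in prices)

assert total(["$10", "$2.50"]) == 12.50

# Stage 3: wire up the data shape the feature actually needs. Test again.
def receipt(items: dict[str, str]) -> str:
    return f"{len(items)} items, ${total(list(items.values())):.2f}"

assert receipt({"coffee": "$4.00", "bagel": "$3.25"}) == "2 items, $7.25"
```

Each assert is a checkpoint: if stage 3 breaks, you know stages 1 and 2 are solid, so the bug has nowhere to hide. A big generated blob gives you none of those checkpoints.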

Review Fatigue is Real

Multiple people brought up review fatigue. A lot of the time, it's more work to review code than to write it. When you write it, you understand the intent. AI does silly things like adding a bunch of fallback code because it doesn't have enough info, so it implements cautiously. You tell it what not to do, but there's always something you forgot to include.

I felt that. There is definitely some review fatigue where I have to read too much code.

One dev pointed out there's no mandate at their workplace to go super-fast. Their team would rather they wrote high-quality code than a lot of code. If they need to spend a couple hours reviewing their own generated code, nobody holds it against them. They're already going twice as fast as they used to.

That's a healthy take. The speed pressure is partly self-imposed.

Another dev called this "busy waiting," borrowing from scheduling theory. The CPU is technically running instructions but producing zero useful output. With AI: you're generating code, accepting code, reviewing code, debugging code. Lots of motion. But comprehension is built through friction, not throughput. The commit history shows progress. Your mental model doesn't.

Productivity is measured in problems you understand, not tokens consumed.
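For anyone who hasn't hit the term before, busy waiting is exactly what the commenter is borrowing: a loop that burns CPU checking a condition instead of blocking until the condition is met. A minimal sketch of the contrast, using Python's standard `threading` module:

```python
import threading

done = threading.Event()
result = []

def worker():
    # Blocking wait: the thread sleeps, burning zero CPU,
    # until there is actually something to do.
    done.wait()
    result.append("did real work")

t = threading.Thread(target=worker)
t.start()

# Meanwhile, a busy-wait loop is all motion and no progress:
# millions of instructions executed, nothing accomplished.
spins = 0
while not done.is_set() and spins < 1_000_000:
    spins += 1

done.set()   # finally hand the worker something real
t.join()
```

The spinning loop is the analogy for the generate-accept-review-debug treadmill: the process counter is moving, but no useful state is being built.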

The Flip-Flop Problem

Someone raised a point I've been thinking about: when code is from a colleague, I already adjust my expectations and read it carefully. If something is unclear, I can poke my colleague and they'll explain why they did it that way.

With AI, you ask why, and it says "Absolutely, that is my mistake..." It flip-flops. You point out the error in its backtrack and it backtracks again. A human sometimes does this too, but at least both parties learn something in the process.

That matters more than it sounds. The dialogue you have with a human about code intent is fundamentally different from the dialogue you have with AI. A colleague can say "I copied from Stack Overflow and I don't fully understand it either." That honesty is more useful than AI's confident backtracking.

One Manual Task Per Session

One dev shared a practical rule: keep one task per session that you do fully manually, usually the thing closest to actual system behavior. It keeps the mental model alive. If they go fully hands-off for a week, they lose track of what the code actually does vs. what they think it does.

Simple. I like it.

The Navigator Mode Approach

Another dev wrote a Claude skill called Navigator mode. It's instructions for AI to act like a pair programmer sitting next to you. You look at the ticket together, it asks clarifying questions, it tells you what to do, and you drive. It cuts throughput way down, but everything that goes into the editor has to pass through your eyeballs and out your fingertips.

That's an interesting middle ground. AI as navigator, you as driver. Not the other way around.

The Sharpest Counterpoint

The comment that made me think the hardest came from a retired engineer. They said: back when third-party libraries started becoming widely available, a lot of devs were on the NIH (Not Invented Here) struggle bus. "How can I trust code written by people I don't know to do these important foundational things?" The concerns weren't unfounded. Some libraries were bad, all contained bugs. But in the long run, productivity gains won out, libraries got better, and now none of us think twice about it.

You don't feel ownership of those third-party libraries, nor should you. You feel ownership of what you build with them. They think AI code will eventually shake out the same way.

That's a strong argument and I genuinely appreciate them taking the time to write it out. I think they might be right about the long arc. But right now, today, with AI at this level, I still need my hands in it. Maybe that changes. Maybe I'm the dev in 2003 insisting on writing my own string parsing library. I'll accept that possibility.

I asked them if they're having fun in retirement. They said being retired is the bomb, and they're pretty sure they were born to sit around playing games and reading books in the sun. May we all get there someday.

A Good Project Isn't One You Know by Heart

The last piece of wisdom that stuck with me came from a dev who said: a good project isn't the one that you know by heart so you can change it with ease. A good project is one where a stranger can come into the codebase and find the relevant piece of code with ease. The stranger doesn't need to know the entire repo in order to make their fix.

That reframes the whole question. Maybe the goal isn't "I must understand every line." Maybe the goal is "the codebase must be structured so that understanding is possible." If AI-generated code is clean, well-documented, and navigable, then maybe my feeling of disconnection is a signal to improve the code's structure, not to write more of it by hand.

I'm still chewing on that. It's hard, and it points in a different direction than my instincts.

What AI Taught Me About Caring

Before AI, I honestly didn't think about the connection to the codebase as something important. I coded 100% by hand, so the connection was just there by default. I never had to think about it because I never had the option to lose it.

Now I can feel that feeling. The difference between code I understand and code that just exists in my repo. AI didn't just change how I write software. It made me realize I actually care about the craft in a way I couldn't articulate before.

I still give all the f*cks I used to give. And I want to keep giving them.

This is solvable. I'm trying to solve it. My workflow is one answer, not the answer. The community showed me there are a lot of different ways to keep your hands in it while still getting the speed benefits. The retired dev showed me this might all look different in ten years. The 40k fans reminded me that humans have always had a complicated relationship with the machines they depend on.

The only wrong move is to stop caring about what your code does.