
How to recover when AI starts rewriting working code

A practical recovery guide for the moment when AI stops helping and starts undoing known-good behavior across the codebase.

The failure mode

You had working code. Then a few prompts later, the AI "helped" and now:

  • a working feature is broken
  • a previous bug is back
  • the diff is wider than the task
  • nobody is sure where the last good state lived

This is one of the most demoralizing moments in AI-assisted development because it feels like progress turned into sabotage.

    How this problem usually shows up

  • the model rewrites files outside the requested scope
  • it removes conditions or guards that were there for a reason
  • it tries to "simplify" code by deleting important behavior
  • multiple retries make the repo less coherent, not more

    If you keep going in the same thread, it often gets worse.

    Why it happens

    This is usually a mix of:

  • context drift
  • poor checkpoints
  • prompts that were too broad
  • not enough human review of diffs

    The issue is rarely just "the model is dumb." The issue is that the workflow no longer has a reliable source of truth.

    This is especially visible in Cursor, Windsurf, and Cline, where the tools are powerful enough to create big diffs quickly.

    What builders get wrong

    They try to fix the broken state from inside the broken state

    That often creates a second-order mess.

    They do not isolate the last known good version

    If you cannot identify the last stable point, recovery becomes guesswork.

    They ask for broader cleanup

    Once things are drifting, "clean this up" is gasoline.

    What to do instead

    1. Stop the thread

    Do not keep prompting the same broken conversation if the model is already rewriting unrelated code.

    Freeze it.

    2. Find the last known good state

    That may be:

  • a git commit
  • a saved export
  • a downloaded version
  • a local copy that still worked

    If you do not have one, start making one after every stable milestone from now on.
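
    With git, that checkpoint habit can be sketched in a throwaway repo (the tag name stable-signup and the commit messages here are hypothetical; in a real project you tag your own repo after verifying a milestone):

    ```shell
    set -e
    # Demo in a temporary repo so the commands are runnable as-is.
    repo=$(mktemp -d) && cd "$repo" && git init -q
    git config user.email demo@example.com && git config user.name demo
    git commit -q --allow-empty -m "signup + dashboard verified"
    git tag stable-signup                  # name the last known good state
    git commit -q --allow-empty -m "AI-assisted refactor"
    git log --oneline -2                   # locate the checkpoint when things break
    git checkout -q stable-signup          # return to it
    ```

    A named tag beats scrolling through history under pressure: "last known good" becomes a name, not a memory.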

    3. Reduce the bug to one failing behavior

    Not:

  • "the app is broken"

    But:

  • signup used to create a profile row and now it does not
  • the dashboard card count used to render and now it is blank

    That is the unit you can actually recover.

    4. Open a fresh thread with hard boundaries

    Good recovery prompt:

    The last good behavior was: when a user signs up, a profile row is created and the dashboard loads. Current bug: signup succeeds but no profile row is inserted. Only inspect these files: app/signup/page.tsx, app/api/signup/route.ts, lib/db.ts. Do not change styling, billing, or other routes.

    That is much safer than asking for "fix the app."

    5. Review the diff before testing

    The model is allowed to propose. It is not allowed to silently decide.

    Read the files it touched.
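
    A minimal way to do that review with git, again sketched in a throwaway repo (the file name route.ts and its contents are illustrative):

    ```shell
    set -e
    repo=$(mktemp -d) && cd "$repo" && git init -q
    git config user.email demo@example.com && git config user.name demo
    printf 'guard clause\ncore logic\n' > route.ts
    git add . && git commit -q -m "working route"
    printf 'core logic\nstyle tweak\n' > route.ts   # the model's proposed change
    git diff --stat        # which files were touched, and how many lines?
    git diff -- route.ts   # read every hunk before you run anything
    ```

    The --stat summary tells you whether the change matches the task; the full diff tells you whether a guard quietly disappeared.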

    Good-enough recovery flow

  • Revert to last good state.
  • Name the exact broken behavior.
  • Limit the fix to the smallest relevant files.
  • Run the fix in a fresh session.
  • Test immediately.
  • Save the repaired version before the next prompt.

    This is the practical loop that keeps debugging from turning into repo roulette.
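
    Assuming git is your source of truth, the loop above compresses to something like this (tags, file names, and commit messages are all hypothetical):

    ```shell
    set -e
    repo=$(mktemp -d) && cd "$repo" && git init -q
    git config user.email demo@example.com && git config user.name demo
    echo 'signup creates a profile row' > signup.txt
    git add . && git commit -q -m "last good: signup works" && git tag stable-1
    echo 'regressed by a broad rewrite' > signup.txt    # the AI regression
    git checkout -q stable-1 -- signup.txt             # 1. revert to last good
    # 2-5: name the exact bug, scope the files, fix in a fresh session, test
    git commit -q --allow-empty -m "repaired: signup creates profile row"
    git tag stable-2                                   # 6. save before the next prompt
    git tag -l
    ```

    Each pass through the loop ends with a new named checkpoint, so the next recovery starts from a known point instead of guesswork.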

    Typical red flags

  • very large diffs for small bugs
  • renamed functions without clear reason
  • deleted guards
  • rewritten styling while fixing backend
  • repeated apologies from the model with no stable fix

    Those mean you should stop broad prompting and reset the frame.
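
    The first red flag, an oversized diff for a small bug, is cheap to check mechanically before accepting anything. A sketch, with the file name and the 100-line threshold chosen only for illustration:

    ```shell
    set -e
    repo=$(mktemp -d) && cd "$repo" && git init -q
    git config user.email demo@example.com && git config user.name demo
    echo 'one line of working code' > app.txt
    git add . && git commit -q -m "good"
    seq 1 300 | sed 's/^/rewritten line /' > app.txt   # a "small fix" gone wide
    # Sum added + deleted lines across the pending diff
    changed=$(git diff --numstat | awk '{s+=$1+$2} END {print s+0}')
    echo "lines changed: $changed"
    [ "$changed" -gt 100 ] && echo "red flag: diff far wider than the task"
    ```

    A hard number forces the question "why did a one-line bug need a 300-line diff?" before the change lands.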

    Related guides

  • Context window collapse: why AI starts breaking working code
  • Why builders get stuck at auth and databases
  • Why weak prompts create weak apps

    Builder takeaway

    When AI starts rewriting working code, the goal is not to win the current thread.

    The goal is to restore a trustworthy source of truth and make the next change smaller, clearer, and testable.

    That is what turns a panic session back into engineering.