
The last 20% problem: why AI-built apps stall before launch

You built 80% of your app in a weekend with AI. Then spent three weeks on the last 20%. Here's why the final stretch breaks builders and how to push through it.

The failure mode

You built 80% of the app in a weekend.

The layouts are there. CRUD works. The basic flows make sense. You showed a friend and they said "this is really cool." You felt like you were two days from launching.

That was three weeks ago.

The last 20% is the gap between "this works in a demo" and "this is ready for real users." AI tools are incredible at generating that first 80% — layouts, database schemas, basic flows, happy-path logic. But the final stretch involves edge cases, error handling, loading states, mobile responsiveness, email flows, payment failures, and polish. This is where builders stall. Not because they lack skill, but because the nature of the work changes completely.

The first 80% felt like building. The last 20% feels like debugging, and the dopamine disappears.

How this problem usually shows up

  • you have been "almost done" for two weeks
  • each fix creates a new bug somewhere else
  • the AI keeps breaking working code when you try to fix edge cases
  • you are spending tokens on the same three or four persistent issues
  • the motivation that came from rapid early progress has completely evaporated
  • you have stopped showing the app to people because you know what is broken
  • you keep adding small features instead of finishing what is already there
The project does not feel stuck in an obvious way. It feels stuck in a slow, frustrating way. You are still making progress, but it is measured in hours per fix instead of features per hour.

    Why it happens

    AI generates optimistic code. It writes for the happy path — the user who does everything right, with a fast connection, on a desktop browser, with a valid credit card, who never hits the back button at the wrong time.

    Real users are not like that.

    Here is what the last 20% actually contains:

  • Error boundaries and loading states. What does the user see when the API is slow? When it fails? When there is no data yet? AI rarely generates these.
  • Auth edge cases. Expired sessions, email verification flows, password reset, OAuth token refresh, what happens when a user closes the tab mid-signup.
  • Payment edge cases. Failed charges, webhook retries, subscription cancellation, proration, what happens when Stripe sends the same event twice.
  • Mobile layout breaks. The AI built everything at desktop width. On a phone, the layout collapses, buttons overlap, modals are unusable.
  • Context window collapse. The AI has been working on this project for weeks now. Its context window fills up, it starts forgetting earlier decisions, and it begins rewriting working code with subtle regressions.
    None of these are individually hard. But there are dozens of them, and they interact with each other in ways that make the project feel like it is fighting back.
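    Take the first item, loading and error states. AI-generated fetch code tends to handle only the success case. One way to force every state to exist is to name them all in the type system. Here is a minimal sketch in TypeScript — `RemoteData` and `classify` are illustrative names, not part of any framework:

```typescript
// The four states any data-driven view can be in. Happy-path code
// only models the last one; real users see all four.
type RemoteData<T> =
  | { state: "loading" }                 // request still in flight
  | { state: "error"; message: string }  // request failed
  | { state: "empty" }                   // succeeded, but nothing to show yet
  | { state: "ready"; data: T };         // succeeded with data

// Classify a fetch result. The key point: "empty" is not "error" --
// "you have no projects yet" is a normal state that needs its own UI.
function classify<T>(rows: T[] | null, errorMessage?: string): RemoteData<T[]> {
  if (errorMessage !== undefined) return { state: "error", message: errorMessage };
  if (rows === null) return { state: "loading" };
  if (rows.length === 0) return { state: "empty" };
  return { state: "ready", data: rows };
}
```

    Rendering code then switches on `state`, and with an exhaustive switch a forgotten branch can be caught at compile time instead of by the one user on a slow connection.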

    What builders get wrong

    Trying to fix everything with more prompts

    The instinct is reasonable: AI built the code, so AI should fix the code. But at this stage, the problems are specific and contextual. The AI does not remember why it made certain choices three days ago. Throwing more prompts at persistent bugs often creates new ones.

    Not scoping

    Every edge case feels urgent when you are in the weeds. But treating "the empty state on the settings page looks wrong" the same as "users can accidentally double-charge themselves" is a triage failure. Not all bugs are created equal.

    Perfectionism

    The app does not need to be perfect. It needs to be shippable. There is a significant difference. Most successful products launched with obvious rough edges. Users forgive imperfection if the core value works. They do not forgive never launching.

    Not asking the right question

    The question is not "is this bug fixed?" The question is: "Would a real user hit this edge case in the first month?" If the answer is "maybe one person, once," it is not a launch blocker.

    How to close the last 20%

    1. Triage ruthlessly

    Make a list of every remaining issue. Every bug, every missing state, every rough edge. Then categorize:

  • Launch blockers — the app crashes, users lose data, there is a security hole, payments are broken
  • Week-one fixes — annoying but not breaking, can be patched after launch
  • Later — nice-to-have polish that no one will notice on day one
    Only work on launch blockers. Move everything else to a list you will look at after you ship. This is not laziness. This is the same triage that professional teams do.

    2. Stop prompting, start reading

    At this stage of the project, you probably need to understand what the AI actually wrote. Open the files. Read the data flow. Trace how a user action moves from the frontend to the API to the database and back.

    You do not need to understand every line. But understanding the shape of the code — which files talk to which, where state lives, how auth works — makes you dramatically faster at fixing issues. Small manual fixes are often faster than re-prompting an AI that has lost context.

    3. Test like a real user

    Open the app in an incognito window on your phone. Sign up with a new email address. Go through the core flow as if you have never seen the app before. Try to break it.

    This 10-minute test reveals more real issues than two hours of AI prompting. It also shows you which bugs actually matter and which ones you were fixating on unnecessarily.

    4. Ship with known imperfections

    Write down what is imperfect. Look at the list. Decide you are OK with it. Ship.

    This is harder emotionally than it sounds. You have been staring at this project for weeks. You know every flaw. But your users do not have your context. They will see the thing you built, not the things you did not finish.

    The builders who launch are not the ones with perfect apps. They are the ones who decided "good enough" was good enough.

    5. Set a hard deadline

    "I ship on Friday regardless."

    This one sentence forces every triage decision that endless weekends do not. When the deadline is real, you stop debating whether the settings page empty state matters. You know it does not, because Friday is in three days and the payment flow still has a bug.

    A deadline turns "should I fix this?" into "can I afford to fix this?" and that is a much better question.

    Related failure modes

  • Context window collapse — when the AI starts breaking working code because it has lost track of the full project
  • Deployment anxiety — the fear of going live, which often compounds the last-20% stall
  • No MVP, endless scope — adding features instead of finishing, which disguises itself as progress
    Builder takeaway

    The last 20% is not a sign that something went wrong. It is a normal phase of building software. Professional developers spend most of their time in the last 20%. The difference is they expect it.

    What makes it feel worse with AI tools is the contrast. The first 80% was so fast that the last 20% feels broken by comparison. It is not broken. It is just a different kind of work — slower, more detailed, less exciting.

    The way through it is not more prompting. It is triage, reading, testing, and a deadline.

    Ship the app. Fix it in production. The version in your head will never exist. The version in front of your users is the one that matters.