Cursor, Claude Code and Codex all have a BIG problem

The video argues that modern AI-powered coding tools like Cursor, Claude Code, and Codex suffer from poor code quality and maintainability because they were built using immature AI models, leading to buggy and inconsistent user experiences. To combat this, the creator recommends aggressively refactoring code, separating features into distinct codebases, and maintaining both experimental and stable versions to prevent AI-driven technical debt.

The video discusses the core problems with modern AI-powered developer tools like Cursor, Claude Code, and Codex. The creator, who has used and even invested in some of these tools, argues that despite the promise of AI making coding easier, these tools are frustratingly inconsistent, buggy, and unpleasant to use. The main issue, he claims, is that these tools were built using the very AI models they are meant to showcase, leading to a kind of “slop” in their own codebases. Unlike the traditional, carefully crafted developer tools of the past, these new AI-driven tools often feel rushed and unstable, prone to breaking the user experience with frequent, poorly considered changes.

A significant point raised is the concept of “dogfooding”—using your own product to build itself—which historically has been beneficial (like writing a C compiler in C). However, the video argues that with AI tools, this approach backfired. Early versions of AI models like Sonnet 3.5 or GPT-3.5 were not robust enough to produce high-quality, maintainable code, yet companies doubled down on using them for their own development. As a result, the foundational codebases of these tools are riddled with bad patterns and technical debt, making them increasingly difficult to maintain or improve as time goes on.

The creator introduces the idea of “codebase inertia,” explaining that the quality of a codebase tends to plateau after about six months. If the codebase is already messy or poorly structured at that point, it only gets worse over time, especially when AI agents are used to propagate and multiply bad patterns. He emphasizes that AI models, when referencing existing code, will often copy and spread suboptimal solutions, accelerating the decline in code quality. This is compounded by the tendency of teams to add features and hacks directly into the main codebase rather than creating separate, focused projects.
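The “AI agents multiply bad patterns” claim can be made concrete with a small, hypothetical sketch (the function names and scenario are invented for illustration, not taken from the video): an ad-hoc retry hack that ships in one function tends to get copied by an agent into every new call site it generates, whereas extracting it once leaves only the good pattern for future code to reference.

```python
import time

# The "inertia" anti-pattern: an ad-hoc retry loop pasted inline.
# An AI agent that references existing code is likely to copy this
# whole block into every new network call it writes, multiplying
# the smell across the codebase.
def fetch_user(api, user_id):
    for attempt in range(3):
        try:
            return api.get(f"/users/{user_id}")
        except ConnectionError:
            time.sleep(2 ** attempt)  # crude exponential backoff
    raise RuntimeError("fetch_user failed after 3 attempts")

# The alternative: extract the pattern once, so there is a single
# good example for both humans and AI agents to imitate.
def with_retry(call, attempts=3):
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            time.sleep(2 ** attempt)
    raise RuntimeError(f"call failed after {attempts} attempts")

def fetch_repo(api, repo_id):
    return with_retry(lambda: api.get(f"/repos/{repo_id}"))
```

The point of the extraction is not just deduplication: it changes what the next generated change will look like, since agents pattern-match on whatever the codebase already contains.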

To address these issues, the video suggests several strategies: optimize for clarity, speed, and maintainability from the start; tolerate no bad patterns or code smells—remove them aggressively; and don’t be afraid to throw away and rewrite large sections of code, especially now that AI can help automate much of the grunt work. The creator also advocates for splitting unrelated features into separate codebases to avoid bloated, hard-to-maintain monoliths, and for spending more time planning and reviewing with AI agents to ensure quality before merging changes.
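To show what “tolerate no bad patterns — remove them aggressively” looks like in practice, here is a minimal, invented example of the kind of mechanical cleanup the video suggests AI can help automate: a nested-conditional smell rewritten as flat guard clauses with identical behavior (the `can_merge` scenario is an assumption for illustration, not from the video).

```python
# Before: a nested-conditional smell. Once merged, later
# AI-generated code is likely to imitate this shape.
def can_merge_before(pr):
    if pr is not None:
        if pr["approved"]:
            if pr["ci_passed"]:
                if not pr["draft"]:
                    return True
    return False

# After: the same logic as guard clauses -- easier for reviewers
# (and for agents referencing the codebase) to read and extend.
def can_merge(pr):
    if pr is None:
        return False
    if pr["draft"]:
        return False
    return pr["approved"] and pr["ci_passed"]
```

Because the rewrite preserves behavior exactly, it is cheap to verify in review, which fits the video's advice to spend more time planning and reviewing with AI agents before merging.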

Finally, the video proposes a future where teams maintain two versions of their products: a “slop” version for rapid prototyping and experimentation, and a polished, reliable version for production. This approach, inspired by how the game Vampire Survivors is developed, could allow for fast iteration without sacrificing long-term maintainability. The creator concludes that while AI tools have made it easier to write and rewrite code, they also make it easier to accumulate and spread bad code. Therefore, strong engineering discipline and a willingness to reset or refactor are more important than ever to prevent AI-accelerated technical debt.