AI coding agents are useless on large codebases. Unless you do THIS

The video explains that AI coding agents struggle with large legacy codebases mainly due to improper usage rather than tool limitations, and demonstrates how integrating semantic search tools like Serena and specialized refactoring MCP servers can drastically improve their efficiency and reduce costs. By using these techniques, AI agents can perform complex tasks such as large-scale refactoring much faster and more effectively, making them practical for real-world, large-scale projects.

The video addresses a common concern: that AI coding agents struggle with large legacy codebases because of their sheer size and complexity. The presenter argues that the problem usually lies not with the AI tools themselves but with how they are used. Likening the naive approach to eating spaghetti with a spoon instead of a fork, the presenter stresses that the right tools and techniques can dramatically boost an AI coding assistant's performance without resorting to costly or complex workarounds such as sub-agents.

The first tool introduced is Serena, an open-source semantic search and edit tool that improves code retrieval and editing by understanding code structure rather than relying on brute-force text searches. Serena integrates with AI coding agents to perform tasks like refactoring more efficiently. In a demonstration, refactoring a small codebase with Serena ran about twice as fast as doing it manually, used fewer tokens, and cut down on repeated compile-and-test cycles, a benefit that matters most on large legacy projects.
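The video does not show Serena's internals, but the gap between brute-force text search and symbol-aware lookup can be illustrated with a toy sketch (the file names, symbol index, and `OrderProcessor` class are invented for illustration):

```python
# Toy contrast: grep-style text search vs. a symbol-aware index.
# A semantic tool like Serena feeds the agent only the relevant locations,
# so far fewer tokens reach the model's context window.

CODEBASE = {
    "orders.py": [
        "class OrderProcessor:  # definition",
        "    def process(self): ...",
    ],
    "docs.md": [
        "OrderProcessor is described in the manual.",  # prose mention
        "# TODO: rename OrderProcessor someday",       # comment mention
    ],
    "billing.py": [
        "from orders import OrderProcessor",
        "p = OrderProcessor()",
    ],
}

def text_search(needle: str) -> list[tuple[str, str]]:
    """Grep-style search: returns every textual match, relevant or not."""
    return [(f, line) for f, lines in CODEBASE.items()
            for line in lines if needle in line]

# A semantic index knows where a symbol is *defined* and *referenced* as code,
# ignoring prose and comments entirely.
SYMBOL_INDEX = {
    "OrderProcessor": {
        "definition": ("orders.py", 0),
        "references": [("billing.py", 0), ("billing.py", 1)],
    }
}

def semantic_lookup(symbol: str) -> dict:
    return SYMBOL_INDEX[symbol]

print(len(text_search("OrderProcessor")))               # 5 hits, incl. docs
print(semantic_lookup("OrderProcessor")["definition"])  # ('orders.py', 0)
```

On a real codebase the text search would also surface hits in strings, changelogs, and unrelated identifiers that merely contain the name, which is exactly the noise a semantic index avoids.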

Next, the presenter benchmarks AI coding agent performance on Umbraco, a large open-source .NET codebase. Renaming a heavily used type by hand with traditional IDE tooling took about three minutes. When the same task was attempted naively with an AI agent (Claude) and no additional tools, it took three hours, roughly sixty times slower, because of repeated rebuilds and retries, illustrating how poorly a straightforward approach scales to large codebases.

To improve this, the video showcases the Refactor MCP server, a .NET-specific tool that provides Roslyn-based refactoring capabilities accessible via a simple interface for AI agents. Using this MCP server, the AI agent completed the renaming task autonomously in about five minutes, a 30x speed improvement over the naive approach. This method also drastically reduced token usage and cost, demonstrating how integrating specialized MCP servers can make AI coding agents far more practical and cost-effective for large-scale projects.
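The quoted speedups follow directly from the video's round timing figures, as quick arithmetic shows:

```python
# Sanity-check the timings quoted in the video (round figures from the source).
manual_minutes = 3          # rename by hand with IDE refactoring tools
naive_agent_minutes = 180   # Claude with no extra tooling (~3 hours)
mcp_agent_minutes = 5       # Claude + the Refactor MCP server

# Naive agent vs. manual IDE rename: 60x slower.
print(naive_agent_minutes / manual_minutes)     # 60.0
# MCP-assisted agent vs. naive agent: 36x faster, consistent with
# the "30x" figure the video rounds to.
print(naive_agent_minutes / mcp_agent_minutes)  # 36.0
```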

In conclusion, the video strongly recommends leveraging semantic search tools like Serena and refactoring MCP servers to enhance AI coding agents’ efficiency on large codebases. These tools reduce token consumption, speed up workflows, and minimize costly compile-fail-retry loops. The presenter encourages viewers to explore MCP servers relevant to their programming languages and share tips to further improve AI-assisted coding in complex environments, highlighting that with the right setup, AI agents can approach or even surpass human coding speeds in legacy systems.
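As a starting point for that exploration: registering an MCP server with an MCP-capable client typically comes down to a small JSON entry in the client's configuration. The sketch below uses the standard `mcpServers` schema; the server name, command, and project path are placeholders, not the video's exact setup:

```json
{
  "mcpServers": {
    "refactor-mcp": {
      "command": "dotnet",
      "args": ["run", "--project", "path/to/RefactorMcpServer"]
    }
  }
}
```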