Trisha G and Steve Smith agree that AI-generated code is likely to increase technical debt, given how quickly it can be produced and how variable its quality is, especially in the absence of strong engineering practices and AI training aligned with company standards. They emphasize that while AI can boost productivity for non-critical code, human oversight and disciplined software engineering remain essential for managing complexity, maintaining code quality, and debugging effectively in an AI-augmented development environment.
The video features a conversation between Trisha G and Steve Smith on the question: “Will AI code create mountains of technical debt?” Both agree that it likely will, owing to the sheer volume of code AI tools produce and the fact that their output does not always meet high quality standards. They emphasize that while tools like Cursor and Windsurf are exciting and can boost productivity, they can also exacerbate existing problems, especially in teams or organizations that lack strong engineering practices.
They explore the nature of technical debt, describing it as code that is hard, slow, or risky to change. Trisha shares her experience at LMAX, where a “refactoring list” was used to improve code quality incrementally, a practice that may need rethinking in an AI-driven development environment. They also note that AI-generated code might be “good enough” for certain utility or non-critical components, where the cost of perfecting the code is not justified; for critical systems, however, human oversight remains essential.
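The incremental-improvement idea behind a “refactoring list” can be illustrated with a small, hypothetical example (the function names and the list entry below are invented, not taken from the conversation): a hard-to-change helper is improved in one small, behavior-preserving step, and the next step is simply written down for later.

```python
# Before: a hard-to-change helper that mixes parsing, filtering, and formatting.
def report_before(raw: str) -> str:
    parts = raw.split(",")
    total = 0
    for p in parts:
        if p.strip().isdigit():
            total += int(p.strip())
    return "total=" + str(total)

# After one small refactoring step: parsing is extracted into its own function,
# so each piece can be tested and changed independently. Behavior is preserved.
def parse_values(raw: str) -> list[int]:
    return [int(p) for p in (s.strip() for s in raw.split(",")) if p.isdigit()]

def report_after(raw: str) -> str:
    return f"total={sum(parse_values(raw))}"

# A "refactoring list" entry just records the next small step, e.g.:
# - parse_values: decide how negative numbers should be handled
```

The point is not the code itself but the discipline: each change is small enough to verify (here, both versions produce identical output), which is exactly the kind of practice that becomes harder to sustain when AI generates large volumes of code at once.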
The conversation highlights the importance of training AI tools properly and aligning them with company-specific coding standards and technical values. They note that AI is only as good as the data it learns from, and much publicly available code (e.g., on GitHub) may be outdated or suboptimal, leading to poor AI outputs. This means companies will need to invest time in training AI models to generate maintainable, high-quality code, especially for complex or business-critical systems.
Trisha and Steve also discuss the challenges AI-generated code poses for debugging and incident management. Poorly understood or unreadable AI-generated code can make troubleshooting difficult, especially under pressure. They stress the continued importance of established engineering practices, such as small incremental commits, thorough testing, and clear coding standards, to mitigate these risks, and they foresee new AI-powered tools emerging to help analyze and manage code quality and failures more effectively.
In conclusion, both agree that AI will inevitably create more technical debt, but this is manageable with proper engineering discipline and strategic use of AI. They suggest using AI primarily for utility or “glue” code initially, while reserving critical, differentiating software for human engineers augmented by AI. The key takeaway is that good software engineering practices matter more than ever in an AI-augmented development world, and organizations must adapt to leverage AI effectively without compromising code quality or maintainability.