Linus Torvalds Pushes Back on AI Slop

The video covers a debate in the Linux kernel community over whether AI-generated code should be treated differently from other contributions. Some developers worry that "AI slop" could overwhelm maintainers and want explicit guidelines for rejecting low-quality AI submissions. Linus Torvalds pushes back, arguing that AI is just another tool, that existing review processes are sufficient, and that documentation should focus on code quality rather than on the tools used.

The discussion centers on transparency and the risk of "AI slop": low-quality, high-volume code submissions that could overwhelm maintainers. It began when Intel engineer Dave Hansen proposed new guidelines asking contributors to disclose when they use AI or other coding tools in their submissions. The proposed documentation introduces no new rules but clarifies existing expectations: contributors should make their work transparent and easy to review, including stating whether AI tools were involved.

A significant part of the debate centers on whether AI-generated code should be treated differently from code produced by traditional tools. Lorenzo Stoakes, another kernel developer, argues that large language models (LLMs) are fundamentally different from other tools because they enable people to submit patches without real expertise, potentially flooding the kernel with low-quality contributions. He suggests that maintainers should have explicit permission to reject suspected AI-generated “slop” outright, even without detailed technical justification, to protect limited review resources.

This stance is controversial because Linux kernel culture traditionally discourages blanket rejections: maintainers are expected to give technical reasons for turning down patches. Stoakes's concern is that the sheer volume of AI-generated submissions could outstrip the community's capacity to review code, and that existing rules may not be enough to handle this new challenge. He argues for clear guidelines that let maintainers reject low-quality AI-generated patches before review overload sets in.

Linus Torvalds, the creator of Linux, responds bluntly to these concerns. He dismisses the idea of focusing on “AI slop” in documentation, arguing that bad actors will ignore rules regardless of what the documentation says. According to Torvalds, documentation is for good-faith contributors, and the real issue is not the tool used (AI or otherwise) but the quality of the code submitted. He insists that AI should be treated as just another tool, and that maintainers should continue to handle low-quality patches as they always have, without creating special rules or statements about AI in the documentation.

The video concludes by reflecting on the broader implications of the debate. The core question is not whether AI is inherently good or bad for Linux, but whether the kernel community's review processes can scale as code generation becomes easier and more accessible. Some fear that AI will lower the bar and inundate maintainers with poor-quality patches; others counter that changing documentation or policy won't stop bad actors. The real challenge is balancing openness and innovation against the need to protect maintainers' time and uphold code quality as the development landscape evolves.