The creator undertook a 30-day challenge using “Vibe coding,” delegating all software development tasks to an AI agent, Claude Code, and found that while the AI excelled at straightforward coding and boosted productivity, it required human oversight for complex tasks and design refinement. Ultimately, the experiment shifted their perspective on AI-assisted development, highlighting its potential to transform workflows and emphasizing the evolving role of developers in managing and reviewing AI-generated code.
In this video, the creator embarks on a 30-day challenge to use “Vibe coding” exclusively, delegating all software development to an AI agent rather than writing code manually. This marked a significant departure from their usual hands-on style of coding without AI assistance. The challenge came with specific rules, including exceptions for critical bug fixes, learning new concepts, and content creation. The creator chose to test Vibe coding across three project types: rewriting an existing project in a new framework, maintaining and adding features to a production service, and building a new project from scratch. As the AI tool, they selected Claude Code, an agent that operates via the terminal, which aligned well with their workflow preferences.
The initial project involved rewriting their Dreams of Code website from Go to Next.js, allowing a direct comparison between manual and AI-generated code. Early on, the creator ran into cost problems with AI usage but resolved them by subscribing to the more affordable Claude Max plan, which proved cost-effective given the productivity gains. A more significant hurdle was the AI’s tendency to lose accuracy over long coding tasks, a phenomenon known as “agent half-life.” To mitigate this, the creator broke large features down into smaller, manageable tasks, effectively managing the AI like a junior developer. This strategy led to rapid progress, with the AI not only matching but sometimes improving on the original code.
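To make the rewrite concrete: an endpoint that would be an `http.HandlerFunc` in the Go version becomes a route handler in Next.js. The following is a minimal sketch assuming the App Router; the route, the `Post` shape, and the hard-coded data are invented for illustration and are not taken from the actual Dreams of Code codebase.

```typescript
// app/api/posts/route.ts (hypothetical example; the real site's routes aren't shown)
import { NextResponse } from "next/server";

// Assumed shape of a blog post record, purely for illustration.
interface Post {
  slug: string;
  title: string;
  publishedAt: string;
}

// GET /api/posts: what would be an http.HandlerFunc in the Go version
// becomes an exported route handler in the Next.js App Router.
export async function GET(): Promise<NextResponse> {
  // A real implementation would read from a database or the filesystem.
  const posts: Post[] = [
    { slug: "vibe-coding-30-days", title: "30 Days of Vibe Coding", publishedAt: "2024-01-01" },
  ];
  return NextResponse.json(posts);
}
```

Small, self-contained units like this are exactly the kind of scoped task the creator found the agent handled reliably.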
As the challenge progressed, the creator added new features, including an admin dashboard backed by database migrations, and noted that while the AI handled backend tasks well, it struggled with user interface design, often prioritizing function over form. They addressed this by iteratively refining UI prompts and leaning on existing design references to keep the interface consistent. Security concerns, initially a major worry, turned out to be manageable through careful code review and automated scanning tools, reinforcing the idea that AI-generated code requires human oversight. The creator also discovered that Vibe coding enabled a multitasking workflow, letting them make progress on code even while flying or playing video games, situations in which manual coding is impractical.
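The video doesn't show the migrations themselves, but a standalone script is one common shape for such a change in a Next.js project. Here is a rough sketch using node-postgres; the `admin_users` table, its columns, and the `DATABASE_URL` variable are assumptions for illustration, not the actual schema.

```typescript
// migrate.ts (hypothetical sketch of the kind of migration generated for the dashboard)
import { Client } from "pg";

const MIGRATION = `
  CREATE TABLE IF NOT EXISTS admin_users (
    id SERIAL PRIMARY KEY,
    email TEXT UNIQUE NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
  );
`;

async function migrate(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // Run inside a transaction so a failed migration leaves the schema untouched.
    await client.query("BEGIN");
    await client.query(MIGRATION);
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    await client.end();
  }
}

migrate().catch((err) => {
  console.error("migration failed:", err);
  process.exit(1);
});
```

Schema changes like this are also a good example of where the human-review step matters: a wrong migration is much harder to undo than a wrong UI tweak.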
In the latter part of the challenge, the creator built a new project in Next.js, Zenblog.ai: an automated generator that turns YouTube videos into blog posts. They ran into issues with the AI misinterpreting certain frameworks, which they overcame by giving it documentation URLs and project context files, improving its understanding and output quality. However, some complex tasks, such as advanced video processing, proved too difficult for the AI alone and required manual coding to provide reference implementations. This reinforced the conclusion that Vibe coding excels at straightforward tasks but still depends on human intervention for novel or complex problems.
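As a sketch of the core pipeline Zenblog.ai is described as automating (transcript in, blog post out), the following calls Anthropic's Messages API over HTTP. The file-based transcript input, the prompt wording, and the model ID are placeholders; the actual Zenblog.ai implementation is not shown in the video, and in particular the video-processing side that proved hard for the AI is left out here.

```typescript
// generate-post.ts (illustrative sketch, not Zenblog.ai's actual code)
import { readFile } from "node:fs/promises";

async function draftPost(transcriptPath: string): Promise<string> {
  // Assume the transcript has already been extracted to a text file;
  // the real service would pull it from the YouTube video itself.
  const transcript = await readFile(transcriptPath, "utf8");

  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20241022", // placeholder model ID
      max_tokens: 2048,
      messages: [
        {
          role: "user",
          content: `Rewrite this video transcript as a Markdown blog post with a title and section headings:\n\n${transcript}`,
        },
      ],
    }),
  });
  if (!res.ok) throw new Error(`Anthropic API error: ${res.status}`);

  const data = await res.json();
  return data.content[0].text; // the Messages API returns a list of content blocks
}

draftPost(process.argv[2]).then(console.log).catch(console.error);
```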
Overall, the creator found the 30-day Vibe coding experiment to be a positive and productive experience, turning their initial skepticism into appreciation for the method’s potential. While they do not plan to abandon manual coding entirely or adopt AI co-pilots within text editors, they see value in integrating AI agents like Claude Code into their workflow for rapid web application development. They also anticipate that software development roles will evolve to emphasize skills beyond coding, such as task delegation and code review, and foresee new challenges in managing the quality of AI-generated code. The video concludes with an invitation for viewers to share their experiences with AI coding tools and a promise of future content exploring these emerging aspects of software development.