10x Terminals with AI Coding Agents = 100x Dev? - Codex + Claude Code

The creator experiments with running ten AI coding agents in parallel, each assigned distinct tasks from a detailed Product Requirements Document, to collaboratively develop an image-merging app using Nano Banana and AI models like OpenAI Codex and Claude Code. Despite some challenges like uneven workloads and initial placeholders, the agents successfully complete the project with promising results, demonstrating the potential of multi-agent AI systems to accelerate software development.

In this video, the creator embarks on an experimental project to test the effectiveness of running multiple AI coding agents simultaneously to develop a software application. Specifically, they run ten terminals in parallel, split between OpenAI Codex running GPT-5 and Claude Code running Sonnet 4. The goal is to have each agent work on a distinct task within the same project directory, coordinated by a detailed Product Requirements Document (PRD) generated with the help of GPT-5. The PRD assigns specific responsibilities to each agent, with no overlap or direct communication between them, to see whether this approach can streamline development.
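The per-terminal setup described here could be scripted as a small launcher that builds one non-interactive command per agent. This is a sketch only: the `codex exec` and `claude -p` invocations, the task strings, and the half-and-half split between the two tools are assumptions for illustration, not the creator's actual configuration.

```python
# Sketch: one non-interactive CLI command per agent, all pointed at the
# same project directory. Task strings and the Codex/Claude split below
# are illustrative assumptions.
PRD_TASKS = {
    1: "Build the two-image upload UI",
    2: "Implement the /api/merge route against Nano Banana",
    3: "Write and refine the merge prompt template",
    # ... tasks 4-10 omitted
}

def agent_command(agent_id: int, task: str, total: int = 10) -> list[str]:
    """Build the argv for one agent; the first half run Codex, the rest Claude Code."""
    prompt = f"You are agent {agent_id} of {total}. Do only this task: {task}"
    if agent_id <= total // 2:
        return ["codex", "exec", prompt]   # Codex CLI non-interactive mode
    return ["claude", "-p", prompt]        # Claude Code print (non-interactive) mode

commands = [agent_command(i, t) for i, t in PRD_TASKS.items()]
```

Each resulting command would then be spawned in its own terminal (or via `subprocess.Popen`) against the shared project directory.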

The project chosen for this experiment is an app that uses the Nano Banana image model to merge two input images and return the generated result. The PRD outlines tasks such as UI design, API integration, prompt improvement, and validation, distributed among the ten agents. An eleventh agent is tasked with reviewing the completed codebase to identify any gaps or missing elements. The creator walks through setting up each terminal with its assigned agent number and task, then running them concurrently to observe how well the agents follow their instructions and collaborate indirectly through the shared codebase.
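Because the agents never communicate directly, the PRD's no-overlap guarantee has to hold at the file level. One way to sanity-check an assignment like this is to give each agent an exclusive slice of the codebase and verify that no file is claimed twice. The ownership table and paths below are hypothetical, not taken from the video:

```python
# Sketch: each agent owns a disjoint set of files; the reviewer (agent 11)
# owns nothing and only reads. Paths are illustrative assumptions.
OWNERSHIP = {
    1: {"src/ui/UploadForm.tsx", "src/ui/ResultView.tsx"},
    2: {"src/api/merge.ts"},
    3: {"src/prompts/merge_prompt.ts"},
    11: set(),  # reviewer: read-only pass over the finished codebase
}

def find_overlaps(ownership: dict[int, set[str]]) -> set[str]:
    """Return the set of files claimed by more than one agent."""
    seen: set[str] = set()
    overlaps: set[str] = set()
    for files in ownership.values():
        overlaps |= files & seen  # any file already seen is a conflict
        seen |= files
    return overlaps
```

Running `find_overlaps` before launching the terminals would catch a PRD that accidentally routes two agents into the same file.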

As the agents execute their tasks, the creator notes some challenges, such as uneven workload distribution (agent one's task was disproportionately large compared to the others) and the need for manual intervention to approve commands. Despite these issues, the agents largely follow the PRD and complete their assigned work. The eleventh agent's review catches minor issues, such as a missing timeout on the OpenAI API calls. However, the initial build left placeholders in some API routes, so the app was not fully functional at first, highlighting a limitation of the approach.
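The reviewer's timeout fix can be approximated generically: wrap the outbound model call so a hung request raises an error instead of blocking the route indefinitely. The wrapper below and its 30-second default are illustrative, not the actual change made in the video:

```python
import concurrent.futures

def with_timeout(fn, *args, timeout_s: float = 30.0, **kwargs):
    """Run fn in a worker thread; raise concurrent.futures.TimeoutError if it
    exceeds timeout_s. Note: the worker thread is abandoned, not killed."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn, *args, **kwargs).result(timeout=timeout_s)
    finally:
        pool.shutdown(wait=False)  # don't block waiting for a hung call
```

In practice the `openai` Python SDK accepts a `timeout=` argument on the client itself, which is the more direct way to apply the same fix.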

After fixing some validation errors with the help of GPT-5 Codex, the creator successfully runs the app and tests its core functionality by merging images with descriptive prompts. The results are promising, with the AI-generated images closely matching the input descriptions and demonstrating the potential of this multi-agent system. While the experiment was not flawless or practical for large projects, it showed that AI agents could work in parallel on a shared codebase with a well-structured PRD guiding their efforts.
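The kind of server-side validation involved can be illustrated with a small check on the merge request before it reaches the image model. The field names, size limit, and error messages here are invented for illustration:

```python
# Sketch: validate a two-image merge request. Allowed types, the size
# limit, and message wording are illustrative assumptions.
ALLOWED_TYPES = {"image/png", "image/jpeg", "image/webp"}
MAX_BYTES = 10 * 1024 * 1024  # assumed 10 MB cap per image

def validate_merge_request(images: list[tuple[str, bytes]], prompt: str) -> list[str]:
    """Return a list of validation errors; an empty list means the request is valid."""
    errors: list[str] = []
    if len(images) != 2:
        errors.append("exactly two input images are required")
    for i, (mime, data) in enumerate(images, start=1):
        if mime not in ALLOWED_TYPES:
            errors.append(f"image {i}: unsupported type {mime}")
        if len(data) > MAX_BYTES:
            errors.append(f"image {i}: exceeds {MAX_BYTES} bytes")
    if not prompt.strip():
        errors.append("a descriptive merge prompt is required")
    return errors
```

A route handler would return the error list with a 400 status when it is non-empty, and only then forward the two images and prompt to the model.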

In conclusion, the video presents an intriguing experiment in leveraging multiple AI coding agents to accelerate development through task division and parallel execution. The creator acknowledges the current limitations and areas for improvement, such as better workload balancing and eliminating placeholders. They express enthusiasm for further exploring Codex and GPT-5 capabilities and hope the experiment inspires others to try similar approaches or refine the process. Overall, the video offers valuable insights into the future possibilities of AI-assisted software development.