The creator tests OpenAI’s new GPT-5 Codex model by successfully switching their AI avatar model from OmniHuman to Kling within their MCP server setup, praising Codex’s speed and dynamic task handling despite some minor audio issues in the final video. While appreciating Codex’s capabilities, they remain undecided about fully replacing Claude Code, emphasizing the importance of flexibility in choosing AI tools amid a rapidly evolving landscape.
In this video, the creator shares their initial experience testing OpenAI’s newly released GPT-5 Codex model through the Codex CLI. They focus on switching an AI avatar model in their existing setup, which uses MCP servers to generate videos. The goal was to replace the problematic OmniHuman model with Kling’s AI avatar model and evaluate how smoothly the transition would go using GPT-5 Codex. The creator appreciates the speed improvements and the model’s dynamic thinking, which allocates more reasoning time to complex tasks while handling simpler requests quickly.
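To picture the setup, here is a minimal sketch of what an avatar-generation tool on one of those MCP servers might look like, assuming the official Python MCP SDK (FastMCP). The server name, tool name, and parameters are illustrative assumptions, not the creator’s actual code.

```python
# Minimal MCP server exposing a single avatar-video tool (illustrative sketch).
# Assumes the Python MCP SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("avatar-video")  # hypothetical server name

@mcp.tool()
def generate_avatar_video(image_path: str, audio_path: str) -> str:
    """Generate a talking-avatar clip from a source image and an audio file,
    returning the path (or URL) of the rendered video."""
    # The call to the avatar provider (OmniHuman, later Kling) lives here;
    # its request/response shape comes from the provider's API documentation.
    raise NotImplementedError("wire up the avatar provider's API here")

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, so a coding agent can attach to it
```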
The creator walks through the process of updating their repository and integrating the new Kling avatar model by reading its API documentation and instructing Codex to make the necessary code changes. They note that Codex tends to perform all of its tool calls upfront, which differs from their experience with Claude Code but does not cause any issues. The model successfully swaps out the avatar backend while keeping the same input arguments, demonstrating a smooth and efficient update; a sketch of that kind of change follows below. However, the creator mentions that they still find Claude Code somewhat easier to use for setting up MCP servers, though they plan to keep exploring Codex.
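The change Codex made can be pictured as swapping the backend call behind that tool while leaving its signature untouched, so downstream callers need no changes. The helper functions below are hypothetical stand-ins for the OmniHuman and Kling API clients; the real endpoints and payloads come from each provider’s documentation.

```python
# Hypothetical client wrappers; real request payloads are defined by the providers' API docs.
def render_with_omnihuman(image_path: str, audio_path: str) -> str:
    """Call an OmniHuman-style avatar endpoint and return the resulting video path."""
    ...

def render_with_kling(image_path: str, audio_path: str) -> str:
    """Call a Kling-style avatar endpoint and return the resulting video path."""
    ...

def generate_avatar_video(image_path: str, audio_path: str) -> str:
    # Same input arguments as before; only the backend implementation changes.
    # return render_with_omnihuman(image_path, audio_path)  # old backend
    return render_with_kling(image_path, audio_path)         # new backend
```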
Next, the creator runs a full workflow test to generate a video with the new avatar model. This involves feeding a source image and an audio clip into the MCP servers, splitting the audio into chunks, and applying different camera angles with the Nano Banana tool. The process produces multiple video segments, which are then merged into a final video, and background noise generated with ElevenLabs is added to enhance the audio. Although the final video is not perfect (some audio cuts occur mid-sentence), the overall test shows that the new GPT-5 Codex model can handle the task effectively.
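A rough sketch of that pipeline is below, assuming pydub for audio chunking and ffmpeg for the final concatenation. The chunk length, file names, and the apply_camera_angle helper are assumptions standing in for the MCP tool calls described above, not the creator’s actual implementation.

```python
# Illustrative end-to-end pipeline sketch: split audio, render one avatar segment
# per chunk, then merge the segments into a single video.
import subprocess
from pathlib import Path
from pydub import AudioSegment  # pip install pydub (requires ffmpeg on PATH)

CHUNK_MS = 15_000  # assumed chunk length of ~15 seconds

def split_audio(audio_path: str, out_dir: str) -> list[str]:
    """Split the source audio into fixed-length chunks and return their paths."""
    audio = AudioSegment.from_file(audio_path)
    paths = []
    for i, start in enumerate(range(0, len(audio), CHUNK_MS)):
        chunk_path = str(Path(out_dir) / f"chunk_{i:03d}.wav")
        audio[start:start + CHUNK_MS].export(chunk_path, format="wav")
        paths.append(chunk_path)
    return paths

def merge_segments(segment_paths: list[str], output_path: str) -> None:
    """Concatenate the rendered segments with ffmpeg's concat demuxer."""
    list_file = Path("segments.txt")
    list_file.write_text("".join(f"file '{p}'\n" for p in segment_paths))
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", str(list_file), "-c", "copy", output_path],
        check=True,
    )

def build_video(image_path: str, audio_path: str, output_path: str) -> None:
    segments = []
    for i, chunk in enumerate(split_audio(audio_path, out_dir=".")):
        framed = apply_camera_angle(image_path, angle_index=i)   # hypothetical camera-angle step
        segments.append(generate_avatar_video(framed, chunk))    # avatar tool sketched above
    merge_segments(segments, output_path)
```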
In their conclusion, the creator highlights the main advantages of GPT-5 Codex, particularly its speed and its ability to differentiate between simple and complex tasks. They remain undecided about fully switching from Claude Code to Codex, as they are currently subscribed to both services and weighing the benefits. They also mention that other providers like Anthropic have faced recent model issues, reinforcing the idea that users should remain flexible and choose the best available model rather than committing long-term to a single provider.
Finally, the creator encourages viewers to share their own plans regarding switching between Claude Code, Codex, or upcoming models like Gemini Ultra. They emphasize flexibility in choosing AI tools and advise against long-term subscriptions given the rapidly evolving landscape. Overall, the creator is satisfied with GPT-5 Codex’s performance so far and looks forward to further testing and improvements.