Horizon Alpha: Is It GPT-5?

The video offers a detailed review of Horizon Alpha, an AI model praised for its speed and some creative capabilities but limited by incomplete coding outputs and reasoning challenges, with the host concluding it is likely an OpenAI GPT-4 class model rather than GPT-5. Through various tests and comparisons with other AI tools, the host explores Horizon Alpha’s strengths and weaknesses, speculates on its origins, and discusses the broader AI landscape while encouraging viewers to experiment with the model themselves.

The video is an in-depth exploration and testing session of the AI model called Horizon Alpha, with the host sharing his hands-on experience and thoughts about its capabilities. He begins by discussing the model’s context window, initially believed to be 256K tokens, and compares it to the hype around GPT-5, which some predict will support up to a million tokens. The host tests Horizon Alpha’s image recognition abilities and coding skills, noting that while it performs well in some areas like text extraction from images, it struggles with more complex coding tasks and sometimes produces incomplete or blank outputs. He also mentions that the model claims to be an OpenAI GPT-4 class model but is not GPT-5, and speculates on whether it might be an open-source model or a smaller-scale OpenAI release.

Throughout the video, the host experiments with Horizon Alpha in various coding environments such as Roo Code and OpenCode, trying different prompts including building a 3D Minecraft clone and a recipe builder. He observes that Horizon Alpha is very fast and responsive, but its coding output is often basic, incomplete, or lacking in reasoning and chain-of-thought explanations. The model tends to add excessive whitespace to its code and sometimes fails to complete tasks fully. Despite these shortcomings, the host appreciates the model’s speed and some of its distinctive behaviors, such as generating SVG icons and responding quickly to large prompts.

The discussion also covers comparisons with other AI coding models and tools such as Qwen3 Coder, Kimi K2, Gemini CLI, and o3, highlighting their respective strengths and weaknesses. The host notes that while some models excel at design or debugging, others are better at problem-solving or tool calling. He explains the complexities of tool calling in AI coding assistants, distinguishing between native function calls and XML-based tool calls, and how each approach affects performance and cost. The host also touches on the marketing and popularity of various AI coding tools, expressing skepticism about some but acknowledging their growing user bases.
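The distinction between the two tool-calling styles can be sketched in code. This is an illustrative mock-up, not any specific assistant's real API: the tool name `read_file`, the argument `path`, and both payload shapes are assumptions chosen to show the pattern. With native function calling, the model returns structured JSON the client parses directly; with XML-style calls, the tool invocation is plain text inside the reply, so the client must scan for tags, which spends output tokens on markup (raising cost) and breaks if the model garbles the format.

```python
import json
import re

# Hypothetical native-function-call response: the tool call arrives as
# structured data alongside the model's text, ready to parse.
native_response = {
    "tool_calls": [
        {"name": "read_file", "arguments": json.dumps({"path": "src/main.py"})}
    ]
}

def parse_native(response: dict) -> list:
    """Extract (tool_name, arguments) pairs from a structured response."""
    return [
        (call["name"], json.loads(call["arguments"]))
        for call in response.get("tool_calls", [])
    ]

# Hypothetical XML-style response: the same call embedded as text,
# which the client must recover with pattern matching.
xml_response = (
    "I'll read that file first.\n"
    "<read_file>\n<path>src/main.py</path>\n</read_file>"
)

def parse_xml(text: str) -> list:
    """Extract tool calls of the form <tool><arg>value</arg></tool>."""
    calls = []
    for tool, body in re.findall(r"<(\w+)>\n(.*?)\n</\1>", text, re.DOTALL):
        args = dict(re.findall(r"<(\w+)>(.*?)</\1>", body))
        calls.append((tool, args))
    return calls

print(parse_native(native_response))  # [('read_file', {'path': 'src/main.py'})]
print(parse_xml(xml_response))        # [('read_file', {'path': 'src/main.py'})]
```

Both parsers recover the same call, but the XML route depends on the model emitting well-formed tags every time, which is one reason tool-calling reliability varies so much between models.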

In addition to coding, the host tests Horizon Alpha’s creative writing abilities, finding it surprisingly capable of generating long, coherent stories with vivid descriptions. He contrasts this with earlier AI models that struggled to produce lengthy, high-quality text. The video also includes a broader discussion of the AI landscape, including models like GPT-4.1 and GPT-4o, and speculation about whether Horizon Alpha could be related to these or represent a new open-source initiative. The host engages with the community through polls and chat, gauging opinions on Horizon Alpha’s origins and potential.

Overall, the video provides a comprehensive, candid review of Horizon Alpha, balancing its impressive speed and some novel features against its current limitations in coding and reasoning. The host remains curious and open-minded about the model’s true nature, leaning toward the idea that it might be an OpenAI model but not the anticipated GPT-5. He encourages viewers to try it out while it’s available and shares insights into the evolving ecosystem of AI coding assistants and large language models. The session ends with reflections on the future of AI models, ongoing testing plans, and appreciation for the community’s engagement.