In the video “AI Coding BATTLE | Which Open Source Model is BEST?”, the host compares three open-source coding models (DeepSeek Coder V2, Yi-Coder 9B, and Qwen 2.5 Coder 7B) on a powerful PC to see how they handle coding challenges, including building a Snake game and a Tetris game. Qwen 2.5 emerges as the fastest and most effective model, particularly in the Snake game challenge, while all three struggle with harder tasks such as Tetris and a prime number generation problem.
In the video titled “AI Coding BATTLE | Which Open Source Model is BEST?”, the host tests three open-source coding models to determine which performs best for local coding without internet access. The models compared are DeepSeek Coder V2, Yi-Coder 9B, and Qwen 2.5 Coder 7B. The host runs them on a powerful Dell Precision 5860 workstation with dual RTX A6000 GPUs and 96 GB of combined VRAM, enough to load all three models simultaneously. The testing environment is set up in LM Studio, of which the host is a fan.
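LM Studio also exposes an OpenAI-compatible local server, so prompts like the ones in the video can be scripted rather than typed into the chat UI. Below is a minimal sketch, assuming the server is running on LM Studio's default port (1234); the model identifier is a placeholder, not necessarily the exact name the host used.

```python
# Minimal sketch: sending a coding prompt to a model served locally by LM Studio.
# Assumes the local server is enabled on its default port; the API key can be
# any non-empty string, since LM Studio does not check it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="qwen2.5-coder-7b-instruct",  # placeholder; use whatever name LM Studio lists
    messages=[
        {"role": "user", "content": "Write a simple Snake game in Python using Pygame."}
    ],
)

print(response.choices[0].message.content)
```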
The first coding challenge is a simple Snake game in Python. DeepSeek Coder V2 generates code at about 30 tokens per second using the Tkinter library, but the resulting game does not work as expected. Yi-Coder 9B is faster at around 50 tokens per second and uses the Turtle library, producing a playable game, albeit with some alignment issues. Qwen 2.5 Coder 7B stands out, generating code at nearly 70 tokens per second with the Pygame library and delivering a fully functional Snake game. Qwen is declared the winner of this round.
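For context on what the models were asked to produce, here is a minimal Pygame Snake sketch. It is not the code generated in the video, just an illustration of the moving parts such a program needs: event handling, grid movement, collision detection, and food placement.

```python
import random
import pygame

CELL = 20           # pixel size of one grid cell
GRID_W, GRID_H = 30, 20

def main():
    pygame.init()
    screen = pygame.display.set_mode((GRID_W * CELL, GRID_H * CELL))
    pygame.display.set_caption("Snake")
    clock = pygame.time.Clock()

    snake = [(GRID_W // 2, GRID_H // 2)]   # list of (x, y) cells, head first
    direction = (1, 0)
    food = (random.randrange(GRID_W), random.randrange(GRID_H))

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN:
                # Change direction, but never reverse straight into the body.
                if event.key == pygame.K_UP and direction != (0, 1):
                    direction = (0, -1)
                elif event.key == pygame.K_DOWN and direction != (0, -1):
                    direction = (0, 1)
                elif event.key == pygame.K_LEFT and direction != (1, 0):
                    direction = (-1, 0)
                elif event.key == pygame.K_RIGHT and direction != (-1, 0):
                    direction = (1, 0)

        # Advance the head one cell in the current direction.
        head = (snake[0][0] + direction[0], snake[0][1] + direction[1])

        # End the game on wall or self collision.
        if head in snake or not (0 <= head[0] < GRID_W and 0 <= head[1] < GRID_H):
            running = False
            continue

        snake.insert(0, head)
        if head == food:
            # Grow: keep the tail and place new food (may land on the snake; fine for a sketch).
            food = (random.randrange(GRID_W), random.randrange(GRID_H))
        else:
            snake.pop()

        screen.fill((0, 0, 0))
        for x, y in snake:
            pygame.draw.rect(screen, (0, 200, 0), (x * CELL, y * CELL, CELL, CELL))
        pygame.draw.rect(screen, (200, 0, 0), (food[0] * CELL, food[1] * CELL, CELL, CELL))
        pygame.display.flip()
        clock.tick(10)

    pygame.quit()

if __name__ == "__main__":
    main()
```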
Next, the host moves to a more complex challenge: a Tetris game. DeepSeek Coder V2's attempt fails because of missing Pygame references, Yi-Coder 9B does not produce a working game at all, and Qwen 2.5, despite showing promise, also fails to deliver a functional Tetris. The host notes that Tetris is a difficult task for coding models, and none of the three succeeds in this instance.
The video then shifts to coding challenges from CodeWars, starting with a simpler task: moving each letter in a string forward by ten positions in the alphabet. All three models complete this challenge successfully, showing they handle short, well-defined problems comfortably. However, on a harder challenge involving prime number generation, all three time out, suggesting they struggle with more complex algorithms.
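As a rough illustration of the two CodeWars-style tasks, the sketch below shifts letters forward ten places with wraparound and generates primes with a simple sieve. The exact kata specifications (case handling, treatment of non-letters, input limits) are not shown in the video, so those details are assumptions.

```python
def move_ten(text: str) -> str:
    """Shift each letter forward ten places in the alphabet, wrapping Z back to A."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr((ord(ch) - base + 10) % 26 + base))
        else:
            out.append(ch)  # assumed behavior: leave non-letters untouched
    return "".join(out)


def primes_up_to(limit: int) -> list[int]:
    """Sieve of Eratosthenes; illustrative only, since the video's kata spec isn't shown."""
    if limit < 2:
        return []
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n :: n] = [False] * len(sieve[n * n :: n])
    return [i for i, is_prime in enumerate(sieve) if is_prime]


print(move_ten("testcase"))   # -> "docdmkco"
print(primes_up_to(30))       # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```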
In conclusion, the host summarizes the performance of the three models, highlighting Qwen 2.5 as the fastest and most effective overall, particularly in the Snake game challenge. The video emphasizes the practicality of running these models locally and showcases the capabilities of the Dell Precision 5860. The host invites viewers to suggest further tests for the coding models, thanks Dell and Nvidia for their sponsorship, and encourages viewers to like and subscribe for more content.