This week’s AI news features the launch of Vibe Jam, a competition for web-based multiplayer games, and the release of Mistral Small 3.1, a small open-weight model that outperforms larger closed models. Additionally, Claude has integrated web search functionality, OpenAI has introduced new audio models, and various advancements from Windsurf, Krea AI, NotebookLM, and Stability AI highlight the rapid evolution of AI technologies.
This week’s AI news starts with the launch of Vibe Jam, a competition for creating web-based multiplayer games using vibe coding. Following Levels.io’s successful demo of a vibe-coded flight simulator, the gaming community has embraced this style of development. Submissions for Vibe Jam already showcase impressive projects, including a Fortnite-inspired game with Minecraft aesthetics, a safari driving game, and a puzzle game built in just a few hours. The competition is generating significant interest, and playtesting for these games is currently underway.
In another notable development, Mistral has released a small open-weight model, Mistral Small 3.1, which outperforms comparable and even larger closed-source models on benchmarks while offering lower latency. The model has 24 billion parameters, is multimodal, and can run on a single RTX 4090 or a Mac with 32 GB of RAM. It boasts a context window of 128,000 tokens, making it suitable for advanced reasoning tasks. Users are encouraged to download and experiment with the model to explore its capabilities.
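The claim that a 24-billion-parameter model fits on a single RTX 4090 (24 GB of VRAM) or a 32 GB Mac implies running it quantized. A rough back-of-the-envelope estimate (my own arithmetic, not from the announcement, and ignoring activation and KV-cache overhead) shows why:

```python
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory needed for a dense model's weights alone,
    ignoring activation and KV-cache overhead."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9  # decimal gigabytes

# A 24B model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_memory_gb(24, bits):.0f} GB")
```

At full 16-bit precision the weights alone need roughly 48 GB, which exceeds both machines; an 8-bit quantization lands around 24 GB (borderline on a 4090 once overhead is counted), while a 4-bit quantization needs only about 12 GB and fits comfortably in 32 GB of RAM.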
Claude has finally integrated web search functionality, allowing it to compete with other AI models like ChatGPT and Perplexity. This feature is particularly beneficial for coding tasks, as Claude can now reference up-to-date API documentation and library information. This enhancement positions Claude as a more versatile tool for developers, further solidifying its reputation as one of the best coding models available.
OpenAI has also made significant updates to its audio models, introducing new speech-to-text models that outperform its earlier Whisper models, along with a new text-to-speech model that lets users provide specific instructions on how to deliver text, including tone and pacing. Additionally, OpenAI is holding a competition in collaboration with Teenage Engineering for the most creative text-to-speech creations, with prizes for the top submissions. This initiative encourages users to explore the new capabilities of OpenAI’s audio technology.
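The steerable delivery can be sketched with the `openai` Python SDK. This is a minimal, hedged example assuming an `OPENAI_API_KEY` is set; the model name, voice, and instruction text below are illustrative choices, not prescribed by the announcement:

```python
def build_tts_request(text: str) -> dict:
    """Assemble parameters for a steerable text-to-speech call.
    The `instructions` field describes how the text should be spoken."""
    return {
        "model": "gpt-4o-mini-tts",   # illustrative model choice
        "voice": "coral",             # illustrative voice choice
        "input": text,
        "instructions": "Speak warmly and slowly, like a bedtime story.",
    }

def synthesize(text: str, out_path: str = "speech.mp3") -> None:
    """Stream synthesized audio to a file (requires the `openai` package
    and a valid API key; imported lazily so the sketch stays standalone)."""
    from openai import OpenAI

    client = OpenAI()
    with client.audio.speech.with_streaming_response.create(
        **build_tts_request(text)
    ) as response:
        response.stream_to_file(out_path)
```

The key design point is the separate `instructions` field: instead of baking stage directions into the input text, delivery guidance (tone, pacing, emotion) travels alongside the words to be spoken.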
Lastly, several other AI advancements were announced, including updates from Windsurf, Krea AI, NotebookLM, and Stability AI. Windsurf introduced improvements to its tab completion feature, while Krea AI now offers video training capabilities for personalized AI video creation. NotebookLM can generate mind maps from provided documents, and Stability AI’s new virtual camera feature allows users to create immersive 3D videos from 2D images. These innovations reflect the rapid evolution of AI technologies and their increasing accessibility for creators and developers alike.