Claude Mythos, Deepseek v4, Happy Horse, Meta’s new AI, real-time video games: AI NEWS

This week’s AI news highlights major advances, including Anthropic’s Claude Mythos for cybersecurity, ZAI’s GLM 5.1 for complex agentic tasks, Alibaba’s Happy Horse 1.0 in video synthesis, and Meta’s new but moderately performing Muse Spark model. Further innovations span real-time 3D world models, AI-driven motion generation, improved video-generation accuracy, and applications ranging from virtual try-ons to automated floor-plan creation, showing rapid progress across AI domains.

This week in AI has been packed with groundbreaking developments across multiple domains. Anthropic unveiled a preview of its most powerful model yet, Claude Mythos, which excels at autonomously finding and exploiting deep software vulnerabilities in major operating systems and browsers. Because of the model’s potential for misuse, Anthropic is withholding a public release and instead collaborating with major tech companies through Project Glasswing to deploy it defensively. Some critics argue that the vulnerability claims are exaggerated and that smaller models can replicate part of the findings, but Mythos still represents a significant leap in AI-driven cybersecurity, albeit one with limitations such as occasional factual errors and a continued need for human oversight.

In the open-source arena, ZAI released GLM 5.1, currently the strongest open-source model for complex agentic tasks such as coding and reasoning; demonstrations include it autonomously building a fully functional Linux desktop with over 50 applications, showcasing its ability to sustain long, multi-step workflows. Meanwhile, Alibaba’s new video generator Happy Horse 1.0 emerged as a top contender in AI video synthesis, outperforming many competitors and highlighting China’s dominance in this space. New interactive, real-time 3D world models such as Spatial World and Overworld’s Waypoint 1.5 also enable immersive exploration and interaction on consumer GPUs, opening possibilities for gaming, robotics, and autonomous-driving training.

Meta released Muse Spark, its new AI model, built by a recently restructured team after earlier setbacks. Although Muse Spark is multimodal and powers Meta’s AI chat across platforms like Facebook and Instagram, its performance is middling and generally outpaced by competitors such as GPT 5.4 and Gemini 3.1 Pro; the model also remains closed source, limiting community adoption and experimentation. On the agent front, SkyClaw offers a cloud-based AI-agent platform that simplifies autonomous task execution with premium model access and integrated productivity skills, making complex workflows accessible without heavy setup.

Other notable innovations include Nvidia’s Komodo, an open-source tool that generates realistic 3D human and robot motions from text prompts, useful for virtual robot training. Video generation continues to advance with frameworks like mmfizz video, which incorporates physical understanding to produce more realistic motion and interactions; Numina, which improves generation accuracy by better following object counts in prompts; and Spatial Edit, which offers precise control over object placement and camera angles in images. Meanwhile, a breakthrough in AI memory compression, RotorQuant, surpasses Google’s TurboQuant, enabling faster and more efficient deployment of large models.

Finally, new AI applications span diverse fields: Vanast enables realistic virtual try-ons with pose animation, the Anima v3 preview offers a lightweight, fast anime image generator, and unified vector floor-plan generation translates textual room descriptions into structured floor plans using a specialized markup language. Together, the week’s developments highlight rapid progress in AI capabilities, from cybersecurity and coding to video synthesis and interactive avatars, signaling a future in which AI tools become ever more powerful, versatile, and integrated into everyday technology.
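To picture what markup-driven floor-plan generation could look like, here is a toy sketch. Everything in it is invented for illustration — the element names, attributes, and units are assumptions, since the actual markup language behind the floor-plan work is not described here. The idea is simply that a structured text description of rooms can be mechanically turned into vector geometry:

```python
# Hypothetical illustration only: this minimal XML-like format and its
# element/attribute names are invented, not the actual floor-plan markup.
import xml.etree.ElementTree as ET

PLAN = """
<floorplan>
  <room name="kitchen" x="0" y="0" w="300" h="400"/>
  <room name="bedroom" x="300" y="0" w="350" h="400"/>
</floorplan>
"""

def parse_plan(markup: str):
    """Turn each <room> element into (name, rectangle) pairs.

    The rectangle is a list of corner points (assumed centimetres),
    i.e. the kind of vector geometry a renderer could draw directly.
    """
    rooms = []
    for el in ET.fromstring(markup).iter("room"):
        x, y = int(el.get("x")), int(el.get("y"))
        w, h = int(el.get("w")), int(el.get("h"))
        rect = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
        rooms.append((el.get("name"), rect))
    return rooms

rooms = parse_plan(PLAN)
print(rooms[0])  # ('kitchen', [(0, 0), (300, 0), (300, 400), (0, 400)])
```

A real system would of course have a language model emit the markup from a natural-language prompt first; the sketch only covers the deterministic markup-to-geometry half of that pipeline.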