Realtime AI videos, new #1 open source model, AI reads minds, Google’s space GPUs, gynoids - AI NEWS

This week’s AI news highlights include ByteDance’s open-source BindWeave for high-fidelity, subject-consistent video generation; Alibaba’s UniLumos for fast, automatic video relighting; and BrainIT, a transformer model that reconstructs images from brain activity. In addition, the Allen Institute for AI released the geospatial foundation model OlmoEarth, Kimi K2 Thinking emerged as a powerful open-source trillion-parameter model rivaling top closed systems, and Google unveiled Project Suncatcher, a plan to deploy solar-powered TPUs in space. Together, these mark significant advances across AI video, brain decoding, environmental analysis, and infrastructure.

This week in AI has been packed with groundbreaking developments, starting with ByteDance’s release of BindWeave, a powerful open-source video generation model. BindWeave lets users upload reference photos of people, objects, or backgrounds and insert them into videos with impressive facial fidelity and consistency. Users control the content with text prompts and can combine multiple reference subjects seamlessly to create cinematic-quality videos. Although the model is large and requires high-end GPUs, its open-source release promises future optimizations and wider accessibility.
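BindWeave’s actual interface isn’t documented here, but the workflow it describes, tagging reference images and binding them to mentions in a text prompt, can be sketched as a small validation layer. Everything below (the `@tag` convention, the `ReferenceSubject` and `VideoRequest` names, the default frame count) is a hypothetical illustration, not the model’s real API:

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceSubject:
    """A reference photo plus a tag the prompt can mention, e.g. '@hero'."""
    tag: str
    image_path: str

@dataclass
class VideoRequest:
    prompt: str
    subjects: list
    num_frames: int = 81            # illustrative default, not BindWeave's
    resolution: tuple = (720, 1280)

def build_request(prompt: str, subjects: list, **kwargs) -> VideoRequest:
    """Assemble a generation request, checking that every @tag in the
    prompt has a matching reference image before anything is generated."""
    tags = {s.tag for s in subjects}
    missing = [w for w in prompt.split() if w.startswith("@") and w not in tags]
    if missing:
        raise ValueError(f"prompt references unknown subjects: {missing}")
    return VideoRequest(prompt=prompt, subjects=subjects, **kwargs)
```

The point of the sketch is the binding step itself: subject identity comes from the reference images, while the prompt only decides where and how each subject appears.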

Alibaba introduced UniLumos, an AI tool that automatically relights characters placed into new video backgrounds to keep color and lighting consistent. It greatly simplifies what was previously a manual, tedious compositing step, and it outperforms other video relighting models in both quality and speed, generating videos up to 76 times faster. UniLumos is already available for local use, making it a valuable asset for video editors and creators who need seamless composites.
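UniLumos itself is a learned model, but the underlying problem, making a pasted-in foreground match the lighting statistics of its new background, can be illustrated with a classic Reinhard-style color transfer. This is a toy baseline for intuition only, not UniLumos’s method:

```python
import numpy as np

def match_lighting(foreground: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Shift and scale each color channel of `foreground` so its mean and
    standard deviation match `background` (Reinhard-style statistics
    transfer). Inputs are float images of shape (H, W, 3) in [0, 1]."""
    fg = foreground.astype(np.float64)
    bg = background.astype(np.float64)
    fg_mu, fg_sd = fg.mean(axis=(0, 1)), fg.std(axis=(0, 1)) + 1e-8
    bg_mu, bg_sd = bg.mean(axis=(0, 1)), bg.std(axis=(0, 1))
    out = (fg - fg_mu) / fg_sd * bg_sd + bg_mu
    return np.clip(out, 0.0, 1.0)
```

A real relighting model goes far beyond global statistics (shadows, light direction, temporal consistency across frames), which is exactly why a fast automatic tool is valuable.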

A fascinating breakthrough comes from BrainIT, an AI that reconstructs images directly from brain activity recorded with fMRI. The transformer-based model decodes high-level semantic and structural features from brain scans and generates images that closely resemble what the person is seeing, down to poses and object orientations. While still in the research phase, with code forthcoming, BrainIT outperforms prior models in reconstruction accuracy, a significant step toward mind-reading AI.
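The core idea behind fMRI-to-image decoding is learning a mapping from voxel activity to an image-embedding space, which a generative model can then turn into pictures. BrainIT uses a transformer for this; the sketch below substitutes the simplest possible stand-in, a closed-form ridge regression on synthetic data, purely to illustrate the decoding step:

```python
import numpy as np

def fit_decoder(voxels: np.ndarray, embeddings: np.ndarray,
                lam: float = 1.0) -> np.ndarray:
    """Ridge regression W mapping fMRI voxel patterns X of shape (N, V)
    to image-embedding targets Y of shape (N, D):
        W = (X^T X + lam * I)^-1 X^T Y
    """
    V = voxels.shape[1]
    return np.linalg.solve(voxels.T @ voxels + lam * np.eye(V),
                           voxels.T @ embeddings)

def decode(voxels: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Predict image embeddings for new brain scans."""
    return voxels @ W
```

In a real pipeline, the predicted embedding would condition an image generator; the transformer earns its keep by capturing structure that a linear map like this one cannot.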

In geospatial AI, the Allen Institute for AI released OlmoEarth, an Earth observation foundation model trained on massive datasets of satellite imagery and environmental sensor readings. It comes in multiple sizes, from versions small enough for edge devices up to models that fit on consumer GPUs, and excels at tasks like deforestation detection, wildfire risk assessment, and ecosystem classification. Its open-source release gives researchers a powerful tool for analyzing and responding to environmental change worldwide.
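A common way to use a frozen Earth-observation foundation model is to embed satellite tiles and classify the embeddings with a very light head, so a handful of labeled tiles goes a long way. The nearest-centroid classifier below is a generic illustration of that pattern, with made-up class names; it is not OlmoEarth’s API:

```python
import numpy as np

def fit_centroids(embeddings: np.ndarray, labels: list):
    """Compute one mean embedding per class from a few labeled tiles."""
    classes = sorted(set(labels))
    mask = np.array(labels)
    cents = np.stack([embeddings[mask == c].mean(axis=0) for c in classes])
    return classes, cents

def classify(embeddings: np.ndarray, classes: list, cents: np.ndarray) -> list:
    """Assign each tile embedding to its nearest class centroid (Euclidean)."""
    d = ((embeddings[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    return [classes[i] for i in d.argmin(axis=1)]
```

Because the heavy lifting happens inside the foundation model, this kind of head can run on modest hardware, which is what makes edge-sized model variants practical.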

Finally, open-source AI took a major leap with Kimi K2 Thinking, a trillion-parameter mixture-of-experts model that excels at multi-step reasoning, coding, and complex autonomous tasks, matching or surpassing top closed models like GPT-5 and Claude 4.5 on benchmarks while remaining efficient and cost-effective. Alongside it, Google announced Project Suncatcher, a plan to put solar-powered TPUs in orbit to sidestep Earth-bound cooling and energy limits, potentially reshaping AI infrastructure. These advances, along with real-time video generation tools and improved humanoid robots, highlight a rapidly evolving AI frontier.
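The reason a trillion-parameter mixture-of-experts model can still be efficient is top-k routing: a small router picks a few experts per token, so only a fraction of the weights are active on any step. The numpy sketch below shows generic top-k gating, not Kimi K2’s actual architecture or dimensions:

```python
import numpy as np

def topk_moe(x: np.ndarray, gate_w: np.ndarray, expert_ws: list, k: int = 2):
    """Generic top-k mixture-of-experts layer.
    x: (T, d) token activations; gate_w: (d, E) router weights;
    expert_ws: list of E per-expert (d, d) weight matrices.
    Only k of the E experts run per token, so total parameters can be
    huge while per-token compute stays small."""
    logits = x @ gate_w                        # (T, E) router scores
    top = np.argsort(logits, axis=1)[:, -k:]   # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        probs = np.exp(sel - sel.max())
        probs /= probs.sum()                   # softmax over selected experts
        for w, e in zip(probs, top[t]):
            out[t] += w * (x[t] @ expert_ws[e])
    return out
```

With E experts and k active, roughly k/E of the expert parameters fire per token, which is how sparse models rival dense closed systems at a fraction of the inference cost.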