12 Days of OpenAI: Day 3

On Day 3 of OpenAI’s 12 Days of OpenAI event, the company launched Sora, a new video generation product aimed at enhancing user creativity through AI collaboration. The product features Sora Turbo for accelerated video creation, an “Explore” feed for community inspiration, and a “Storyboard” tool for detailed video direction. It is accessible to ChatGPT Plus and Pro users, with plans for broader availability.

The presentation opened with the Sora launch, with the team expressing excitement and emphasizing the importance of video in expanding creative tools for users. They highlighted three key reasons for developing Sora: fostering a co-creative dynamic between AI and users, expanding beyond text-based interactions, and aligning with OpenAI’s roadmap toward artificial general intelligence (AGI). The presentation included a demonstration of Sora’s capabilities, showcasing a feed of user-generated videos.

Sora is accessible to users with a ChatGPT Plus or Pro account, allowing them to start generating videos at no additional cost. The team introduced Sora Turbo, an accelerated version of the original model, which supports generating videos from text, animating images, and remixing existing videos into new styles. The presenters acknowledged that while this early version of Sora may make mistakes, it is already positioned to significantly augment human creativity.

The demonstration included a walkthrough of Sora’s interface, highlighting features like the “Explore” feed, where users can find inspiration from community-generated videos. Users can view the methods used to create these videos, allowing them to learn and incorporate new techniques into their own creative processes. The presenters also showcased the “Library” feature, which serves as a home base for users to organize their video generations and access various creation tools.

One of the standout features introduced was the “Storyboard” tool, which allows users to direct videos by sequencing actions across a timeline. This feature enables users to describe scenes, characters, and actions in detail, giving Sora the context needed to generate coherent video narratives. The presenters demonstrated how users could upload images and have Sora create videos based on those images, showcasing the model’s ability to understand and expand upon user input.

The presentation concluded with a discussion of Sora’s availability: it was set to launch in most countries, excluding most of Europe and the UK. Users at different subscription levels would receive varying numbers of video generations, and the team encouraged feedback on moderation and safety features. The presenters expressed their eagerness to see how users would leverage Sora to create innovative content, emphasizing that while the tool is powerful, it is meant to enhance human creativity rather than replace it.