
Try Gen-1, a generative AI for video creation.

What just happened? The startup behind one of the first machine learning models for AI art generation is back with a new product. Gen-1 can turn video clips into something completely different, with seemingly unmatched quality compared to similar tools.

In 2021, Runway worked with researchers at the University of Munich to create Stable Diffusion, one of the main machine learning models that brought generative AI into the spotlight. Now the company is back with Gen-1, a new model that can transform pre-existing videos based on user-provided text prompts.

As explained on the company's official website, Gen-1 can "synthesize new videos realistically and consistently" by applying the style of an image or text prompt to a source video. It's like filming something new "without filming anything at all," says Runway Research.

Gen-1 can run in five different modes: Stylization, which transfers the style of any image or text prompt to every frame of a video; Storyboard, which turns mockups into fully animated renders; Mask, which isolates subjects in a video and changes them according to a prompt (for example, adding black spots to a dog); Render, which turns untextured renders into "realistic results" via image or text input; and Customization, which "unleashes the full power of Gen-1" by fine-tuning the model.
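Gen-1 itself is cloud-hosted with no public code or API, but the basic idea behind the Stylization mode, prompt-guided restyling applied frame by frame, can be loosely illustrated with the open-source Stable Diffusion image-to-image pipeline that Runway co-developed. The sketch below is only an analogy, not Gen-1's actual method; the model checkpoint, prompt, and parameters are illustrative assumptions.

```python
# Loose analogy only: restyle a single video frame with Stable Diffusion's
# image-to-image pipeline (Hugging Face diffusers). This is NOT Gen-1's API.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Runway's open-source Stable Diffusion v1.5 checkpoint (illustrative choice)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One frame extracted from a source clip (hypothetical file name)
frame = Image.open("frame_0001.png").convert("RGB")

# "strength" controls how far the output drifts from the source frame;
# a dedicated video model like Gen-1 additionally keeps results consistent
# across frames, which a per-frame approach like this does not.
styled = pipe(
    prompt="a claymation street scene, stop-motion style",
    image=frame,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

styled.save("frame_0001_styled.png")
```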

The machine learning model behind Gen-1 is not the first video-generation AI to hit the market, as several companies released their own video-generation algorithms in 2022. Compared to Meta's Make-A-Video and Google's Phenaki and Muse, however, Runway's model aims to give both professional and amateur video creators higher-quality tools and more sophisticated capabilities.

According to Runway itself, user studies showed Gen-1's results to be superior to existing generative models for image-to-image and video-to-video conversion. Gen-1 was reportedly preferred over Stable Diffusion 1.5 by 73.53 percent of users and over Text2Live by 88.24 percent.

Runway certainly has the right expertise when it comes to video rendering and conversion: the company's AI-powered tools are already used on online video platforms such as TikTok and YouTube, in filmmaking (Everything Everywhere All at Once), and on TV shows such as The Late Show with Stephen Colbert.

Gen-1 builds on that experience and was developed with video production customers in mind, Runway said, after years of work on editing, visual effects, and post-production technologies for the film industry. The new generative tool runs in the cloud, and access is currently restricted to a small number of invited users; general availability is expected "in a few weeks."

