With Runway Gen-4, you can generate consistent characters, locations, and objects across scenes. Set your look and feel once, and the model maintains a coherent world environment while preserving the distinctive style, mood, and cinematographic elements of each frame. You can then regenerate those same elements from multiple perspectives and positions within your scenes.
Runway is a New York–based generative-AI platform that lets anyone create, edit, and stylize video, image, and 3D content directly in the browser. Its flagship Gen-4 model delivers high-fidelity text-to-video and image-to-video generation, while earlier releases such as Gen-1, Gen-2, and Gen-3 Alpha underpin a full creative suite for filmmakers, advertisers, and developers. Founded in 2018, Runway co-created the Stable Diffusion image model, its tools have been used on Oscar-winning productions such as Everything Everywhere All at Once, and it hosts an annual AI Film Festival showcasing AI-driven storytelling. With a browser-based UI, a REST API, and enterprise integrations, Runway lowers the technical barrier so storytellers of any background can prototype cinematic ideas in minutes.
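For developers, generation through the API is asynchronous: you submit a prompt (and optionally a reference image), get back a task ID, and poll until the output is ready. The sketch below illustrates that flow in Python; the base URL, version header, field names, and model identifier are assumptions modeled loosely on Runway's public developer documentation, not details taken from this text, so check the official docs before using them.

```python
import os
import time

import requests

# Assumed values -- confirm the real endpoint path, version header,
# and model identifiers in Runway's developer documentation.
API_BASE = "https://api.dev.runwayml.com/v1"   # assumed base URL
HEADERS = {
    "Authorization": f"Bearer {os.environ['RUNWAY_API_KEY']}",
    "X-Runway-Version": "2024-11-06",          # assumed API version header
    "Content-Type": "application/json",
}


def generate_video(image_url: str, prompt: str) -> dict:
    """Submit an image-to-video task and poll until it finishes."""
    resp = requests.post(
        f"{API_BASE}/image_to_video",
        headers=HEADERS,
        json={
            "model": "gen4_turbo",       # assumed model identifier
            "promptImage": image_url,    # reference frame to keep consistent
            "promptText": prompt,        # scene / camera direction
        },
        timeout=30,
    )
    resp.raise_for_status()
    task_id = resp.json()["id"]

    # Generation runs asynchronously: poll the task until it resolves.
    while True:
        task = requests.get(
            f"{API_BASE}/tasks/{task_id}", headers=HEADERS, timeout=30
        ).json()
        if task["status"] in ("SUCCEEDED", "FAILED"):
            return task
        time.sleep(5)


if __name__ == "__main__":
    result = generate_video(
        "https://example.com/reference-frame.png",
        "Slow dolly shot through a rain-lit alley, same character as the reference",
    )
    print(result["status"], result.get("output"))
```

The submit-then-poll pattern shown here is common to most hosted generation APIs, since a single clip can take minutes to render and a synchronous HTTP request would time out.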