The field of AI video generation has been advancing at an incredibly rapid pace over the past year. Two of the leading companies in this space, RunwayML and Pika Labs, have both recently released major updates that allow for much greater control and direction of AI-generated video.
RunwayML's new "Director Mode" for its Gen-2 model is a game-changer. Instead of just typing a text prompt and taking whatever video comes out, you can now direct camera movements such as zooms, pans, and tilts.
The ability to dictate these cinematic techniques makes the generated videos feel much more polished and intentional. While the underlying footage itself still looks quite dreamlike and distorted, this controllability is a huge step forward.
Some early testers, like Nick St. Pierre, have created impressive scenes using Director Mode. In one video, the camera smoothly zooms in on an airplane wing as the pilot moves into frame, pans across the cockpit, follows the pilot as he jumps out, pans down to the ground, and zooms back out. While not perfect, sequences like this showcase the new dramatic possibilities.
According to expert user David Villalva, horizontal panning works well, but vertical movements prove more difficult. Combining tilt and pan can lead to conflicts, with the best results coming from zooming. Videos over 4 seconds often morph or mutate. Overall the feature shows promise but has room for improvement.
Meanwhile, Pika Labs has released a similar "Dash Camera" feature for controlling direction and movement. Here the parameters must be typed out manually rather than selected with buttons, as in RunwayML's interface.
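For a sense of what typing those parameters involves, a Pika Labs prompt with a camera direction appended might look something like the line below. Treat the flag name and values as an illustrative assumption rather than exact syntax; the dash-prefixed parameters in Pika's Discord bot have changed over time, so check its documentation for the current options.

/create prompt: macro shot of a bee landing on a sunflower, soft morning light -camera pan right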
But the quality of Pika Labs' videos appears slightly better, with more detailed textures and higher fidelity. Generation is also faster. However, the movements seem limited to slower pans, without any quick motions.
Some users have created impressive scenes, like peaceful macro footage of bugs and flowers. But it's clear we're still in the early stages of this technology. You won't be making a Pixar film yet. The videos remain short, with obvious distortions. Yet it's incredible to witness such rapid advancement in less than a year. What was once non-existent is now creating beautiful, if eerie, moving imagery.
While AI image generation exploded with numerous competitors, RunwayML and Pika Labs appear to be the ones making real strides in video creation so far. We can expect further enhancements as they continue iterating. Imagine the possibilities once longer-form generation improves. For now, artists are experimenting with these tools to showcase their potential and push boundaries.
The rapid evolution across AI creative fields is staggering. Video lagged behind image generation until recently, when these new controls began to unlock its promise. It will be fascinating to see what creative minds produce as the technology matures.
We are glimpsing the future of automated video production. While ethical concerns remain, the momentum toward ever more powerful and accessible AI creation tools is undeniable. The seeds have been planted for a content generation revolution.