Meta brings us one step closer to AI-generated movies

Like “Avengers” director Joe Russo, I’m becoming increasingly convinced that completely AI-generated movies and TV shows will be possible within our lifetime.

Several AI unveilings over the past few months, most notably OpenAI’s ultra-realistic-sounding text-to-speech engine, have given glimpses of this brave new frontier. But Meta’s announcement today brought our AI-generated future into particularly sharp relief – at least for me.

Meta debuted Emu Video this morning, an evolution of the tech giant’s image creation tool, Emu. Given a caption (e.g. “a dog running across a grassy knoll”), an image or a photo paired with a description, Emu Video can generate a four-second-long animated clip.

Emu Video clips can be edited with a complementary AI model called Emu Edit, which was also announced today. Users can describe the modifications they want to make to Emu Edit in natural language – for example “same clip, but in slow motion” – and see the changes reflected in a newly generated video.

Now, video generation technology isn’t new. Meta has experimented with it before, as has Google. Meanwhile, startups like Runway are already building businesses on it.

But Emu Video’s 512×512, 16-frames-per-second clips are among the best I’ve seen in terms of fidelity – to the point that my untrained eye has difficulty distinguishing them from the real thing.

Image Credits: Meta

Well – at least some of them. Emu Video seems to be most successful at animating simple, mostly static scenes (like waterfalls and timelapses of city skylines) that stray from photorealism – that is, in styles like cubism, anime, “paper cut craft” and steampunk. One clip of the Eiffel Tower at dawn “as a painting,” with the tower reflected in the River Seine below it, reminded me of an e-card you might see on American Greetings.

Image Credits: Meta

However, even in Emu Video’s best work, AI-generated strangeness creeps in – like bizarre physics (e.g. skateboards that move parallel to the ground) and freakish appendages (toes that curl behind feet and legs that blend into each other). Objects often appear and disappear from view without any logic, such as the birds overhead in the Eiffel Tower clip above.

After spending a lot of time browsing Emu Video’s creations (or at least the examples Meta cherry-picked), I began to notice another obvious tell: the subjects in the clips don’t… well, do much. So far as I can tell, Emu Video doesn’t have a strong grasp of action verbs, perhaps a limitation of the model’s underlying architecture.

Image Credits: Meta

For example, a cute anthropomorphized raccoon in an Emu Video clip will hold a guitar, but it won’t strum the guitar – even though the clip’s caption includes the word “strum.” Or two unicorns will “play” chess, but only in the sense that they’ll sit curiously in front of a chessboard without moving the pieces.

Image Credits: Meta

So clearly there’s work to be done. Still, Emu Video’s more basic B-roll wouldn’t be out of place in a movie or TV show today, I’d say – and the ethical implications of this frankly terrify me.

Image Credits: Meta

Deepfake risks aside, I fear for animators and artists whose livelihoods depend on crafting the sorts of scenes AI like Emu Video can now approximate. Meta and its generative AI rivals will likely argue that Emu Video – which Meta CEO Mark Zuckerberg says is being integrated into Facebook and Instagram (hopefully with better toxicity filters than Meta’s AI-generated stickers) – augments rather than replaces human artists. But I’d call that an optimistic, if not hypocritical, outlook – especially where money is involved.

Earlier this year, Netflix used AI-generated background images in a three-minute animated short. The company claimed that the technology could help alleviate anime’s perceived labor shortage – but it conveniently glossed over how low pay and often grueling working conditions are pushing artists out of the work.

In a similar controversy, the studio behind the credits sequence of Marvel’s “Secret Invasion” admitted that AI, chiefly the text-to-image tool Midjourney, was used to generate much of the sequence’s artwork. Series director Ali Selim made the case that the use of AI fit the show’s paranoid themes, but the bulk of the artist community and fans vehemently disagreed.

Image Credits: Meta

Actors, too, may be on the chopping block. One of the major sticking points in the recent SAG-AFTRA strike was the use of AI to create digital likenesses. The studios ultimately agreed to pay actors for their AI-generated likenesses. But will they reconsider as the technology improves? I think it’s a possibility.

Adding insult to injury, AI like Emu Video is typically trained on images and videos produced by artists, photographers and filmmakers – without notifying or compensating those creators. In a whitepaper accompanying the release of Emu Video, Meta says only that the model was trained on a data set of 34 million “video-text pairs” ranging from 5 to 60 seconds in length – not where those videos came from, their copyright status or whether Meta licensed them.

Image Credits: Meta

There have been steps toward industry-wide standards that would allow artists to “opt out” of training or receive payment for AI-generated works they contributed to. But if Emu Video is any indication, the technology – as is so often the case – will soon far outpace ethics. Perhaps it already has.
