
How to recognize an AI-generated video

Generative AI is filling the internet at a rapid pace, and as we weigh the consequences for education, energy, and human creativity, it's becoming increasingly difficult to trust everything that appears online. Is it real, or is it generated by AI?

Currently, we don't have a way to detect AI content 100 percent reliably every time, but there are some telltale signs of computer-generated text, audio, images, and videos to look out for. By adding a little human intelligence, it's usually possible to tell when something is most likely AI-generated.

Here we focus on AI-generated videos created by tools like OpenAI's Sora, and we've included some examples. Next time you come across a video you're not sure about, check it against these criteria.

Bad text

You'll notice that many AI videos (and images) are missing text. Generative AI can't handle text well because it doesn't understand letters or language – at least not the way humans do. Signs in AI videos often look like they're written in a foreign language, so watch for garbled text or no text at all.

That's not to say there's never good text in an AI video, but if there is, it was probably added after the fact. In this Monster Camp trailer generated by Luma AI (embedded below), some of the signage looks fine (and was most likely added manually), but check the lettering on the bus and the carnival stalls to spot gibberish. You'll have to look closely, because the weird text isn't visible for long.

Fast (or slow) cuts

This brings us to another characteristic of AI-generated videos: you'll often notice that the cuts are very short and the action moves very quickly. This is partly to hide inconsistencies and inaccuracies in the videos you're shown – the idea is to give you the impression of something real, rather than the real thing itself.

On the other hand – and this cuts the opposite way – the action is sometimes significantly slowed down. The end goal is the same, though: to keep the limits of the AI's capabilities from becoming apparent.

In this AI-generated music video (embedded below) by Washed Out, the former is the case: everything is fast and furious and over before you can really take it in. Try pausing the video at different points to see how much weirdness you can spot (we noticed at least one person melting into a wall).

Bad physics

Generative AI knows how to stitch moving pixels together to show something that resembles a cat, a city, or a castle—but it doesn't understand the world, three-dimensional space, or the laws of physics. People disappear behind buildings or look different in different scenes, buildings are constructed in strange ways, furniture isn't aligned correctly, and so on.

Consider this drone shot (embedded below) created by OpenAI's Sora. Keep an eye on the group of people walking in the bottom-left corner of the scene: they absorb each other and eventually blend into the railing, because the AI sees them as pixels, not people.

The Uncanny Valley

AI videos often have an unnatural sheen to them, and the term "uncanny valley" already describes computer-generated graphics that try to replicate the real and natural. Watching AI videos, you'll often dip into the uncanny valley, even if only for brief moments.

If you watch Toys R Us' branded film, created using Sora, you'll notice that the little boy's smile and hair movements look suspiciously unnatural. He also looks like a different boy from scene to scene, because he isn't an actor: he's a generated 2D representation of what the AI thinks a boy should probably look like.

Perfect (or imperfect) elements

There's a bit of a contradiction here too, because AI videos can be betrayed by elements that are either too perfect or not perfect enough. These clips are, after all, generated by computers, so designs on buildings, vehicles, or materials can repeat over and over, in patterns too regular to exist in real life.

On the other hand, AI still has trouble with anything natural, whether it's a hand, a chin, or leaves blowing in the wind. In this Runway AI video of an astronaut running, you'll notice the hands are a mess (and much of the physics in the background is wrong, and the text is blurred).

Check the context

Then there are all the tools we already had for detecting misinformation: Photoshop existed before generative AI, so some of the rules for spotting fakes remain the same.

Context is crucial: if a video comes from the New York Times' official social media account, it's most likely trustworthy. If it's forwarded by a faceless account on X whose username contains a lot of numbers, it may be less so.

Check whether the action in the video was captured from other angles, whether it has been covered in detail elsewhere, and whether it even makes sense. You can also check with the people featured in the video: if there's narration from Morgan Freeman in the background, Morgan Freeman can confirm or deny it.