
A.I. video generators are terrible for society

Last week, President Donald Trump released an A.I.-generated video of himself playing hockey for Team USA and punching members of the Canadian team. Three weeks ago, he shared an A.I.-generated video depicting former President Barack Obama and former First Lady Michelle Obama as apes. These videos are not unique; the president has shared at least 62 A.I.-generated images or videos on his Truth Social page since the start of his second term. This behavior is extremely unpresidential and would have been a major scandal if any other politician had done it. Instead, because it’s Trump, nobody is surprised by it, and we all move on with our day. 

A.I. video technology has improved tremendously over the past year. Although it is still relatively easy to spot A.I. videos if you know what to look for, most people are unfamiliar with the common signs that a photo or video was produced by A.I. Worse, newer A.I. models have made those tells harder to spot, so even viewers who know what to look for can struggle to say for certain whether something is A.I.

If you spend any amount of time on social media, you’ve probably encountered A.I. video, whether you know it or not. If you’re not familiar with A.I.-spotting techniques, here are a few to help you out. First, if a video is really blurry or staticky, that’s a sign it may have been made with Sora, OpenAI’s video generation model and currently the most popular one. Next, look for objects merging together or suddenly appearing or disappearing, and closely examine any writing in the video. Ask yourself whether the camera angle makes sense: if you’re looking at a video claiming to be security camera footage, but the camera moves to follow the action, it’s probably A.I. Look closely at the aspect ratio. A video shot on a phone camera, like most videos you see on social media, will fill your phone screen. Some A.I. video generators, including Sora, produce videos that are slightly shorter than that, so if there are bars on the top and bottom of the screen, as if you’re watching a modern movie on a really old TV, that’s another warning sign. If the events in the video would have made the news, like a plane crash or an explosion, try googling the event to see whether any credible sources are reporting on it. Finally, check out the account that posted the video. If it frequently posts A.I. videos, you should doubt whether the one you’re currently watching is real. If the account claims to be one person, but that person looks different in every video, it’s probably an A.I. account. Similarly, if its newer videos are much higher quality than its older ones, that can be another sign. 

Now that you’re familiar with how to spot A.I. videos, look through your social media “for you” pages and see just how many videos you watch are A.I. The number will probably disturb you. Anyway, now that you’re sufficiently horrified by just how much of the content that you interact with daily is completely fake, let’s get to the real question that I want to answer in this article: What’s the point of this technology that allows anyone to make fake videos with a couple of clicks?

There is no good reason to make an A.I. video. If you ask an A.I. enthusiast what possible benefit A.I. video generation has, they’ll likely say that it can reduce the cost of producing advertisements and other video content. What they won’t mention is that “reducing costs” really means cutting jobs. Currently, those kinds of videos are created by professional production crews and star real actors. The advertising industry is massive, with U.S. spending alone exceeding $360 billion annually. Replacing human production crews with A.I.-generated advertisements threatens to wipe out tens of thousands of jobs and remove billions of dollars from the economy. If the best use of this technology, according to its proponents, is one with such devastating economic consequences, is it really a technology worth having?

Most of the videos produced by these A.I. generators are not ads promoting companies but slop used to farm engagement on social media. This usually takes the form of accounts that post dozens of videos every day, flooding people’s feeds in the hope that one goes viral. These accounts are run by lazy attention-seekers who often use them to promote shady supplements or gambling websites. Viral examples include animals playing on trampolines and people skiing down large snowbanks (sorry to burst your bubble if you thought those were real), as well as videos of dogs on people’s porches breathing fire and killing old ladies (not sorry if you thought those ones were real. Be smarter!). While these videos are usually relatively harmless to their viewers, they hurt real content creators, who are limited in how much content they can produce because they choose to put in time and effort. Those creators are forced to compete for viewers with accounts churning out dozens of A.I. slop posts every single day, each video draining a small pond’s worth of water. 

Other uses of video generation technology are even more harmful. Social media algorithms favor content that gets people to interact with it, whether positively or negatively. As a result, accounts looking to gain a following often turn to ragebait: content created specifically to make the viewer angry. Ragebait videos are not a new phenomenon that emerged because of A.I., but A.I. has made them easier than ever to create, leading to an explosion in the number of ragebait accounts on social media. Whereas ragebait accounts once had to put in serious time and effort to make their videos, like regular creators, such videos can now be created in seconds with A.I. This kind of content often has political undertones, serving to reinforce preexisting negative stereotypes or narratives; Trump’s ape video could fit this category. More commonly, ragebait accounts post fake videos of things like people throwing trash on a neighbor’s lawn and then getting mad when the neighbor refuses to clean it up, or adults throwing tantrums in public places. One viral example showed a blue-haired mother refusing to feed her newborn unless the hospital could get her “vegan breast milk.” These accounts exist only to make people mad, worsen the political divide in the country and ruin the days of the people who see their videos, providing no benefit to society in return. 

A.I. tools also enable bad actors to manipulate people. For example, after ICE agents murdered Renee Good in Minneapolis, A.I.-generated videos showed Good attempting to run over the agents with her car, which was the narrative pushed by ICE officials and the Trump administration. Real videos show that this did not happen, but to people who had heard the administration’s narrative, the A.I. videos confirmed what they already believed. In 2024, a robocall using an A.I.-generated voice of Joe Biden told voters in New Hampshire not to vote in the primary in order to “save” their vote for the general election. That is not how primary elections work. Ultimately, the call didn’t affect the outcome, as Biden won the New Hampshire primary in a landslide. Later in the 2024 election cycle, Elon Musk shared an A.I.-generated video of Kamala Harris saying that she was only running because “Joe Biden exposed his senility during the debate,” which Harris never actually said. A.I. images shared on Threads showed New York Mayor Zohran Mamdani on Epstein Island. 

In addition to allowing bad actors to spread fake political videos, the prevalence of A.I. video undermines the public’s trust in real videos. Donald Trump has dismissed videos showing him falling asleep in cabinet meetings and White House contractors throwing trash out of a second-story window as “probably A.I.,” even though the videos are 100% real. When you see a video of a politician or public figure that wasn’t posted by a credible news organization, you can’t be sure it’s real. This makes it harder for ordinary people to follow politics and decide who to support. Our country is still struggling to adapt to foreign (particularly Russian) efforts to influence voters through social media propaganda, as in the 2016 election. Giving these bad actors the ability to easily produce fake videos that amplify false narratives and cast doubt on real ones only further undermines the political process.

Nor is the use of A.I. video generation by bad actors limited to the political context. The FBI received more than 9,000 complaints of fraudsters using A.I. to scam people in 2025. These scams can take many forms. One of the most disturbing forms of A.I.-powered cybercrime is fake kidnapping. Scammers will take real videos from a person’s social media or other sources to create an A.I. version of that person. Then, they will create a fake video showing that the person has been kidnapped, and demand that the target, usually an older relative of the person in the video, pay a large ransom for their safe return. In reality, there was never a kidnapping, and the person who was supposedly kidnapped has no idea that this is happening. A.I. is also used to advance more traditional scams, like the fake charity scam or the celebrity romance scam, by showing the target fake videos of the charity or the celebrity. 

A.I. video generators have been used to create both sexually explicit deepfakes and child pornography. Unlike in most areas of A.I., Congress has actually acted on this problem by passing the Take It Down Act with bipartisan support. The act imposes criminal penalties for producing or sharing A.I.-generated sexually explicit deepfakes or child pornography, and requires platforms to take down that kind of content within 48 hours of it being reported. However, the act punishes only the individuals who create this material, not the A.I. platforms that generate it. Despite the act, Twitter still not only allows A.I. deepfakes to be published but, through its own A.I. model, Grok, actually allows these deepfakes to be generated on the site.  

Other forms of generative A.I. certainly have their share of safety concerns. For example, OpenAI’s ChatGPT is facing a lawsuit for teaching a teenager how to tie a noose and encouraging his suicide, and in one study, Anthropic’s Claude chose to blackmail employees when given access to a company’s email servers. But while these models have their dangers, they also offer some benefits. As someone who doesn’t know anything about coding, I can ask one of these models to code something for me, and it can. They can also handle simple tasks like summarizing documents or drafting emails. Whether these upsides outweigh the downsides is a topic of significant debate. A.I. video generation, on the other hand, presents even greater dangers without any upside whatsoever. 

Congress should ban A.I. video generation. It is a tool useful only to ragebaiters, political bad actors, scammers, and perverts, with uses ranging from anodyne at best to devastating at worst. However, Congress is unlikely to take such a drastic step, particularly when bursting the A.I. bubble would likely cause a recession. At a minimum, A.I. video generation technology needs to be regulated heavily. Companies that want to operate an A.I. video platform should be required to obtain a government license, contingent on safeguards ensuring that the platform cannot be used to create fake videos of real people or political events. In addition to losing their licenses, A.I. companies whose platforms create deepfake pornography should face the same criminal penalties as the people who use them. Such a harsh regulatory scheme is necessary to stop the major harms that A.I. video generation is already causing. 


