Google’s latest AI video generator, Veo 3, has become an unlikely tool for spreading racist and antisemitic content online, especially on TikTok. The model, released in May, is being used to create harmful content aimed at Black, immigrant, and Jewish communities.
Over the past few weeks, there has been a shocking increase in AI-generated videos on TikTok that traffic in racist stereotypes. According to a Media Matters report, numerous accounts are posting short videos that perpetuate negative stereotypes of Black people, depicting them as criminals, deadbeat dads, and worse.
Why is Google’s Veo 3 a new frontier for hate speech?
These eight-second videos carry the signature “Veo” watermark, confirming that they originated from Google’s advanced AI platform.
The content doesn’t stop there. Other AI-generated videos use antisemitic symbols and stereotypes to target Jewish people and immigrants. What makes this especially troubling is the visual quality of Veo 3’s output, which makes the content look more realistic, and potentially more believable, than earlier AI-generated material.
Testing shows that it is surprisingly easy to create this kind of content with Veo 3. Basic prompts can replicate content similar to the racist videos circulating online, indicating that the safety guardrails are not as strong as advertised. The model appears to be more permissive than Google’s earlier models, making it easy for malicious actors to bypass content restrictions.
Part of the problem is the nuance of racist content. When creators use coded language and imagery, such as depicting monkeys in place of people in certain scenarios, the AI may not recognize the racist intent behind the prompt. That ambiguity creates loopholes that let users stay technically within the rules while still generating harmful content.
Both Google and TikTok have explicit policies against this kind of material. TikTok’s Community Guidelines prohibit hate speech and attacks on protected groups, and Google’s Prohibited Use Policy bars using its services to promote harassment, bullying, and abuse.
Enforcement, however, is patchy. TikTok uses both automated systems and human moderators to identify rule-breaking content, but the sheer volume of videos posted makes timely moderation virtually impossible.
Why content moderation can’t keep up
Many of the videos had already racked up significant views, even though a TikTok spokesperson said that more than half of the accounts flagged in the Media Matters report had been suspended before the report was published.
The problem is not specific to TikTok. X (formerly Twitter) has also been criticized for lax content moderation, providing fertile ground for hateful AI content. The situation could get worse as Google plans to bring Veo 3 to YouTube Shorts, opening up yet another massive platform to the same kind of content.
This is not the first example of generative AI being misused to produce inflammatory content. Ever since these tools became available, people have found ways to create racist and harmful material despite the safeguards built into them. But Veo 3’s unprecedented realism makes it even more appealing to those who want to spread hateful stereotypes.
This episode highlights a tension at the heart of AI development: capability versus safety. Google emphasizes the safety measures built into its AI releases, but the reality is that persistent users tend to find workarounds. As they stand, the company’s guardrails are too weak to stop the creation of patently destructive content.
AI, platforms, and the surge in racist content: a crisis of responsibility
This case raises fundamental questions about platform responsibility and AI regulation. As AI video generation becomes more sophisticated and more accessible, the potential for abuse grows dramatically. The virality of social media means problematic content can reach millions of viewers before platforms take any action.
The crisis shows that platform policies against hateful content are not enough on their own; AI developers need to do more to prevent racist content from being created and spread in the first place. Until then, the combination of powerful AI tools and inadequate content moderation will keep producing these disturbing results.
Ultimately, the problem is not just technical. It is about building systems that understand the context, intent, and real-world impact of the content they help create.