Racist AI videos are blowing up on TikTok | DW News

Racist AI-generated videos, including harmful stereotypes of Black people and other minorities, are spreading rapidly on TikTok, often created with Google’s text-to-video tool Veo 3, highlighting the challenges of moderating AI content. Despite efforts by TikTok and Google to remove such content, experts stress the need for stronger ethical frameworks to prevent generative AI from being misused to promote hate and discrimination.

Racist AI-generated videos are rapidly gaining popularity on TikTok, with some created using Google’s new text-to-video AI tool, Veo 3. These videos often depict Black people through harmful stereotypes, including offensive portrayals of Black women as primates. Other clips feature antisemitic themes and racist depictions of immigrants and Asian people. Despite their offensive nature, these videos are attracting millions of views, fueled by TikTok’s algorithm, which tends to amplify viral content.

Google’s Veo 3 allows users to generate video clips with audio simply by entering text prompts, making it a powerful creative tool. However, instead of using the technology for storytelling or positive content creation, some users exploit it to recycle and spread long-standing racist tropes. This misuse highlights a significant challenge with generative AI: creators cannot anticipate all the ways their tools might be abused to promote hate and discrimination.

Experts emphasize the deep historical roots of such racist imagery. Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, explains that racist caricatures have a long history dating back to slavery, when Black people were drawn with exaggerated features to emphasize so-called primal traits. This historical context underscores why these AI-generated videos are particularly offensive and harmful: they perpetuate longstanding stereotypes that have been used to justify discrimination and violence.

TikTok has stated that it removes hateful content and has banned accounts linked to these racist videos, but many offensive clips remain accessible and continue to gain traction. Google says that Veo 3 blocks harmful prompts and watermarks generated content, yet it has not directly addressed the specific issue of racist videos created with its tool. This gap in response points to the broader difficulty of moderating AI-generated content effectively.

The rise of these racist AI videos illustrates the dual nature of generative AI technology. While it offers exciting possibilities for creativity and innovation, it can also amplify harmful content when safeguards are insufficient. As Meredith Broussard, a professor at New York University, notes, AI developers often cannot foresee all the ways their tools might be misused, underscoring the urgent need for more robust ethical frameworks and moderation strategies to prevent the spread of hate online.