Racist videos created with Google's Veo 3 sweep TikTok

Veo 3, Google's latest AI for generating video from text prompts, was released last May. It surprised with its realism and, above all, with its ability to generate audio together with the image, which allows it to create clips that in many cases can pass for real footage rather than an AI creation. Naturally, all kinds of users have rushed to produce all kinds of content. And, according to a report from Media Matters, racist videos created with Veo 3 are going viral on TikTok.

The organization, dedicated to monitoring misinformation and bias in US media, notes that the videos do not exceed 8 seconds, or are combinations of clips of that length, the maximum duration of a video created with Veo 3. In addition, they carry the Veo watermark, which confirms their origin.

These videos, one of which received 14.2 million views, use racist stereotypes to attack mainly Black people, as well as immigrants and Jews. The former, for example, are depicted as 'usual suspects' in crimes and as monkeys. Media Matters emphasizes that the problem is not limited to the videos themselves: numerous comments from other users on these posts echo that racism. TikTok has assured The Verge that many of the accounts that published these videos were removed before the report came out.

https://www.youtube.com/watch?v=c16Ozpkeg-u

The terms of service of both companies prohibit this type of content. TikTok, in particular, states that 'we do not allow hate speech, hateful behavior, or the promotion of hateful ideologies. This includes explicit or implicit content that attacks a protected group.' However, it acknowledged to Ars Technica that, although it uses both automated and human moderators to identify content that violates its policies, the volume of videos published is too large to control immediately.

Google, like any AI company, touts the safeguards it builds into its products to prevent uses that violate its policies, but these are not always effective. Take the case of Gemini: in February 2024 it gained the ability to generate images with the Imagen 2 model. It was so inclusive that it made glaring mistakes, such as depicting historical figures with their race or sex changed. Google withdrew that capability from Gemini and did not restore it until September, with Imagen 3, by then free of these problems.

The problem with safeguards is the ambiguity with which language can be used, and how subtle details in a prompt make it possible to obtain a result that is not allowed. For example, Veo 3 does not understand the context in which the depiction of monkeys is being used, and that makes its safeguards easy to circumvent.

This is not a problem unique to TikTok and Google. These videos are also circulating on X and Instagram, although their impact there is smaller. For now the favorite tool may be Veo 3, available in Spain since yesterday, but other tools that generate synchronized image and audio will soon arrive, perhaps with better safeguards or worse ones, and they will go viral too. Google, in any case, has not commented on the situation.