By now it is clear that artificial intelligence is a double-edged sword and needs regulation so that we can take advantage of its potential without it harming us in the long term. With this in mind, a team of researchers led by William Isaac of Google DeepMind has published a study warning that generative AI is ruining areas of the internet with fake content.
The study notes that the vast majority of generative AI users are leveraging the technology to “blur the line between authenticity and deception” by posting fake or manipulated AI content, such as images or videos, on the internet. Isaac’s team also analyzed previous studies on generative AI and about 200 news articles reporting on its misuse.
“Human image manipulation and evidence falsification are the basis of the most frequent tactics in real-world abuse cases,” the authors conclude. “Most of these were implemented with a discernible intention to influence public opinion, enable fraudulent activities or scams, or generate profits.”
To compound the problem, generative AI systems are becoming more advanced and more readily available; in fact, they “require minimal technical experience.” The study says this situation is distorting people’s “collective understanding of sociopolitical reality or scientific consensus.”
The problem is that the study portrays only the flip side of AI: these systems were designed to carry out this type of task as well; it is in their DNA. Seen this way, the issue is not the technology itself but who designed it and with what objectives. And this is where Google bears some responsibility of its own, for allowing this fake content to proliferate, or even for being its source, whether fake images or false information.
According to the authors, this dilemma is also testing people’s ability to distinguish the fake from the real: “Similarly, the mass production of low-quality, spam-like, and harmful synthetic content risks increasing people’s skepticism towards digital information and overloading users with verification tasks,” they explain in their conclusions.
So much so that the study describes cases in which “high-profile individuals can explain away unfavorable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways.” And the reality is that this is just beginning.