The amount of AI-generated content on social networks keeps growing. Artificial influencers are no longer a novelty, and neither is handing accounts over to ChatGPT, for example. The good news is that, from now on, we will know when that is the case, according to a recent statement from Meta.
“We are making changes to the way we handle manipulated media on Facebook, Instagram and Threads,” Meta explains, “based on feedback from the Oversight Board. These changes also draw on our policy review process, which included extensive public opinion surveys and consultations with academics, civil society organizations and others.”
In 2018, Mark Zuckerberg met with Harvard Law School professor Noah Feldman, who had proposed creating a quasi-judicial body for Facebook: the Oversight Board the statement refers to. Zuckerberg originally described it as a kind of “Supreme Court,” given its role in conciliation, negotiation and mediation, including the power to overrule company decisions.
It was precisely this “court” that pointed out the need for labels of this kind to “address manipulation that shows a person doing something they did not do.” Of course, some of the terrain remains poorly defined. The Board also argued that Meta runs “the unnecessary risk of restricting freedom of expression when we remove manipulated media that do not otherwise violate our standards,” and recommended a “less restrictive” approach to manipulated media, such as labels that add context.
It is not yet clear exactly how this will be carried out or how that “context” will be determined. The “Made with AI” label will start appearing in May on AI-generated videos, audio and images, and will rely on “industry-shared signals of AI images or people self-disclosing that they are uploading AI-generated content,” the statement explains. “We also have a network of nearly 100 independent fact-checkers who will continue to review false and misleading AI-generated content. When fact-checkers rate content as false or altered, we show it lower in the feed so fewer people see it. Additionally, we reject an ad if it contains debunked content, and since January, advertisers have been required to disclose in certain cases when they create or digitally alter an ad about a political or social issue.”
Another factor that will be taken into account is whether digitally created or altered images, videos or audio pose a particularly high risk of materially misleading the public on a matter of importance. In such cases, “we may add a more prominent label so that people have more information and context,” Meta explains.