Meta today announced a historic change to its content moderation policies. In a statement signed by Joel Kaplan, Director of Global Affairs at Meta, the company acknowledged ‘errors’ made by its automated moderation systems and by the third-party fact-checking agencies that work with its platforms. For these reasons, the company is undertaking a restructuring in which fact-checkers will be replaced by a system similar to X’s Community Notes, along with a new moderation policy that will make more room for diverse points of view and impose fewer restrictions on frequently discussed topics that may be considered controversial, such as immigration and gender identity. All this in order to ‘return to that fundamental commitment to freedom of expression’, something Meta admits to having abandoned in recent years.
‘We have seen that this approach works on X, where they give their community the power to decide when posts could be misleading and need more context. People with a diverse range of perspectives decide what type of context is useful for other users to see. We think this might be a better way to fulfill our original intention of giving people information about what they are seeing, and one that is less prone to bias,’ says Kaplan.
X’s Community Notes arrive on Facebook, Instagram and Threads
The Community Notes feature will be implemented first in the United States ‘over the next few months’, according to Meta, and will display a discreet label indicating that additional information is available on a post, replacing the full-screen warnings that users had to tap to dismiss. Kaplan states that, like the X feature, Meta’s Community Notes ‘will require agreement between people with a variety of perspectives to help prevent biased assessments’.
The idea behind Community Notes, a system that only makes visible the notes that reach agreement among enough users with a history of differing political positions, originated under Twitter’s previous leadership, but its global rollout came with Musk. The tycoon was the subject of much criticism for replacing traditional moderation with this system, but time, and now Zuckerberg, seem to have proven him right.
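As a rough illustration of what that ‘diverse agreement’ gate means in practice, consider the toy sketch below. To be clear, this is not X’s actual ranking algorithm (which scores notes with a matrix-factorization model over rating data); the threshold, group labels and function names here are invented for the example.

    # Toy sketch of a "diverse agreement" rule: a note becomes visible only
    # if enough raters from at least two differing political leanings found
    # it helpful. Hypothetical simplification, not X's real algorithm.
    from dataclasses import dataclass

    @dataclass
    class Rating:
        rater_leaning: str   # e.g. "left" or "right", inferred from rating history
        helpful: bool

    def note_is_visible(ratings: list[Rating], min_per_group: int = 3) -> bool:
        """Show the note only if at least `min_per_group` raters from each of
        at least two different leanings rated it helpful."""
        helpful_by_group: dict[str, int] = {}
        for r in ratings:
            if r.helpful:
                helpful_by_group[r.rater_leaning] = helpful_by_group.get(r.rater_leaning, 0) + 1
        groups_in_agreement = [g for g, n in helpful_by_group.items() if n >= min_per_group]
        return len(groups_in_agreement) >= 2

    # Agreement across leanings makes the note visible...
    assert note_is_visible([Rating("left", True)] * 3 + [Rating("right", True)] * 3)
    # ...while one-sided support, however numerous, does not.
    assert not note_is_visible([Rating("left", True)] * 10)

The point of such a rule is that volume alone is not enough: a note backed only by one side stays hidden no matter how many ratings it collects.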
Mark Zuckerberg, with a speech on the state of freedom of expression that Elon Musk himself could have signed
But how did this 180-degree turn come about? Mark Zuckerberg, CEO of Meta, explained in a video posted on Facebook that ‘we are going to return to our roots and focus on reducing errors, simplifying our policies and restoring freedom of expression on our platforms’. He points to the US elections as one of the reasons for the company’s decision and criticizes ‘governments and traditional media’ for allegedly pushing ‘to censor more and more’. ‘The recent election also feels like a cultural turning point towards, once again, prioritizing freedom of expression,’ he states.
Beyond the new political climate that comes with the Donald Trump administration and the need to adapt to it, Meta has expanded on the system’s failures. In fact, Meta’s statement can be read as a wholesale repudiation of what content moderation has been on Facebook and Instagram over the last decade, driven ‘partly in response to social and political pressure to moderate content’.
Those responsible for censorship: biased fact-checkers, political pressure and automated systems
‘We are making too many mistakes, frustrating our users, and too often standing in the way of the freedom of expression we set out to allow. Too much harmless content is censored, too many people are unfairly locked up in “Facebook jail”, and we are often too slow to respond when they are,’ explains Kaplan.
It was in 2016 that Facebook launched its fact-checking program, reaching agreements with ‘independent organizations’ to delegate the task. ‘Experts, like everyone, have their own biases and perspectives. This was reflected in the decisions some made about what to check and how. Over time, we ended up with too much fact-checked content that people understood as legitimate political discourse and debate,’ says Kaplan. ‘A program intended to inform too frequently became a tool for censorship.’
To reduce the number of errors, the automated systems Meta uses to detect violations of its policies will no longer target every possible infraction; they will be applied only to the most serious ones, such as terrorism, child sexual exploitation, drugs, fraud and scams. The approach maintained until now ‘has made our rules too restrictive and too prone to over-application’ and ‘has given rise to too many errors and to the censorship of too much content that should not have been’. Most of the Meta systems that automatically predict whether a post might violate its policies and demote its visibility will also be removed.
For less serious violations, ‘we will rely on someone reporting a problem before taking any action’.
Before deleting a post, Meta will begin requiring that several reviewers agree on the decision, and it will also use artificial intelligence models to ‘provide a second opinion on some content before taking enforcement action’. To simplify and speed up account recovery and the review of moderation decisions, the company will hire more staff.