Meta, the parent company of Facebook and Instagram, has come under scrutiny following significant changes to its content moderation policies. In January 2025, the company implemented a policy overhaul that included reducing fact-checking efforts and easing restrictions on discussions of contentious topics. These changes have drawn criticism from Meta’s own Oversight Board, which has raised concerns about the potential global impact and the lack of human rights due diligence in implementing these policies.
Overview of Policy Changes
The policy overhaul introduced by Meta in January 2025 marked a shift towards promoting “more speech” on its platforms. Key changes included:
- Elimination of U.S. Fact-Checking Program: Meta ended its partnerships with third-party fact-checkers in the United States and replaced them with a crowdsourced “Community Notes” system, in which users add context to posts.
- Relaxation of Content Moderation: The company reduced restrictions on discussions related to immigration, gender identity, and other sensitive topics.
- Reduction in Proactive Scanning: Meta announced it would stop proactively scanning its platforms for “less severe policy violations,” focusing instead on detecting content related to terrorism, child exploitation, and fraud.
These changes were implemented shortly before the commencement of President Donald Trump’s second term, leading to speculation about the motivations behind the timing.
Oversight Board’s Response
Meta’s Oversight Board, an independent body funded by the company, strongly criticized the policy changes. The board faulted their hasty implementation and the lack of transparency, stating that the changes were made “in a departure from the regular procedure, with no public information shared as to what, if any, prior human rights due diligence the company performed.”
The board expressed particular concern over the potential adverse effects of these changes, especially in countries experiencing crises. It emphasized the need for Meta to assess the global impact of its policies and to strengthen enforcement against bullying, harassment, and hateful ideologies.
Specific Cases Highlighting Concerns
The Oversight Board highlighted several instances where Meta’s new policies failed to address harmful content effectively:
- UK Riots: During riots in the UK following the Southport attack, Meta was slow to remove anti-Muslim posts that incited violence. These posts included AI-generated imagery targeting Muslims.
- Hate Speech in Europe: The board ordered the removal of posts containing hate speech targeting people in Poland and Germany, criticizing Meta’s delayed response in both cases.
These examples underscore the Oversight Board’s concerns about the effectiveness of Meta’s revised content moderation policies.
Recommendations from the Oversight Board
In response to the policy changes, the Oversight Board issued 17 recommendations to Meta, including:
- Assessing Global Impact: Conduct thorough assessments of how policy changes affect users worldwide, particularly in crisis-affected regions.
- Enhancing Transparency: Provide clear information about content moderation policies and the rationale behind changes.
- Strengthening Enforcement: Improve enforcement of policies against bullying, harassment, and hate speech.
- Evaluating Community Notes: Regularly assess the effectiveness of the new “Community Notes” system and disclose findings every six months.
Meta has committed to responding to these recommendations within 60 days.
Meta’s Commitment to the Oversight Board

Despite the criticism, Meta has maintained its commitment to the Oversight Board. The company continues to fund the board through an irrevocable trust, ensuring its operational independence. Meta has also consistently referred new cases to the board and followed up on its recommendations.
Frequently Asked Questions (FAQs)
What prompted Meta’s policy changes in January 2025?
Meta aimed to promote “more speech” on its platforms, leading to reduced content moderation and the elimination of its U.S. fact-checking program.
What is the “Community Notes” system?
Meta introduced “Community Notes,” a crowdsourced tool that lets users add context to posts, replacing the company’s third-party fact-checkers in the United States.
Why did the Oversight Board criticize Meta’s policy changes?
The board expressed concerns about the hasty implementation, lack of transparency, and potential global impact of the changes.
How has Meta responded to the Oversight Board’s recommendations?
Meta has committed to responding within 60 days and continues to fund the board through an irrevocable trust.
What are the potential risks of reduced content moderation?
Reduced moderation may lead to increased spread of misinformation, hate speech, and harmful content, especially in vulnerable communities.
How does the Oversight Board operate independently?
Although funded by Meta, the board operates independently through an irrevocable trust, ensuring that the company does not influence its decisions.
What specific cases highlighted the shortcomings of Meta’s new policies?
Instances such as the delayed removal of anti-Muslim posts during the UK riots and of hate speech in Poland and Germany showcased the shortcomings of the revised policies.
What steps can Meta take to address the Oversight Board’s concerns?
Meta can enhance transparency, conduct thorough impact assessments, and strengthen enforcement against harmful content to align with the board’s recommendations.
Conclusion
Meta’s recent policy changes have sparked significant debate about the balance between promoting free expression and ensuring user safety. The Oversight Board’s criticism highlights the importance of transparency, due diligence, and global considerations in content moderation policies. As Meta navigates these challenges, its responses to the board’s recommendations will be crucial in shaping the future of its platforms.
