Prince Harry and Meghan Markle today warned they were ‘deeply concerned’ by Meta scrapping its third-party fact-checking and loosening content moderation.
The parent company of Facebook, Instagram and Threads is switching to an X-style Community Notes system where users flag content as false or misleading.
But the move has been seen by some as an attempt by Meta chief executive Mark Zuckerberg to curry favour with the incoming Donald Trump administration in the US.
And online safety campaigners are concerned it will allow misinformation to spread more easily and leave children and young people vulnerable to harmful content.
Now, the Duke and Duchess of Sussex have waded into the row with a 638-word statement warning that the latest news about changes to Meta's policies 'directly undermines free speech'.
However, the statement has put Harry on a collision course with his friend, the professional polo player Nacho Figueras, who responded to Meta's announcement last week in an Instagram story saying: 'This is so important. Return of free speech.'
But the Sussexes, who are now online safety campaigners, said ‘allowing more abuse and normalising hate speech serves to silence speech and expression, not foster it’.
In a statement on Sussex.com titled 'Fact-checking Meta', they added: 'In an already confusing and, in many instances, intentionally disruptive information environment, Meta has shown their words and commitments have very little meaning or integrity.
‘As they announce these changes undoubtedly responding to political winds, they once again abandon public safety in favour of profit, chaos, and control.
Prince Harry and Meghan’s full statement on Meta
FACT-CHECKING META
Jan. 2025
It doesn’t matter whether your views are left, right or somewhere in between—the latest news from Meta about changes to their policies directly undermines free speech.
This should deeply concern us all.
Contrary to the company’s talking points, allowing more abuse and normalizing hate speech serves to silence speech and expression, not foster it.
In an already confusing and, in many instances, intentionally disruptive information environment, Meta has shown their words and commitments have very little meaning or integrity. As they announce these changes undoubtedly responding to political winds, they once again abandon public safety in favor of profit, chaos, and control. The company’s decision to rollback protections is so far away from its stated values and commitments to its users—including the parents and families calling for change around the globe—that it’s now deeply deceptive.
Millions of people are using Meta’s platforms in the United States. Hundreds of millions more are using them globally. Many use the platform to spread joy, build community, and share empowering information. Unfortunately, Meta’s recent decisions go directly against its stated mission to ‘build human connection’ and instead prioritize those using the platforms to spread hate, lies and division at the expense of everyone else.
Given the profound global impact Meta’s decisions have on the world—of which many are still recovering from or actively suffering from—the politics of one country should never determine whether freedom of expression and civil and human rights are protected in the online spaces so clearly shaping or destroying democracy.
Online spaces must be designed with public safety and well-being at their core, resilient against political pressures and lapses in corporate leadership. This latest move from Meta is an example of a social media company—fully aware of their power to shape public discourse—disregarding any responsibility to ensure that power is not abused and instead allowing either ego or profit, likely both, to guide decisions that affect billions.
We are particularly alarmed by plans to abandon commitments to diversity and equity, coupled with internal policy changes that undermine protections for marginalized communities. These decisions echo what experts, whistleblowers, and families have raised in hearings on online harm, especially regarding children’s safety: platform design, dictated by internal policies, directly determines our online experience.
To ignore this is knowingly putting everyone in harm’s way and contributing to a global mental health crisis.
Meta’s changes to its ‘Hateful Content Policies’ do not protect free expression but instead foster an environment where abuse and hate speech silence and threaten the voices of whole communities who make up a healthy democracy.
We urge Meta to reconsider and reinstate policies to protect all users. We also call on leaders across industries to uphold their commitments to integrity and public safety in online spaces, and we applaud leaders who refuse to kowtow to bullying.
In response to these harmful setbacks, we continue to act. We proudly support organizations like the Center for Critical Internet Inquiry, Parents Together, 5Rights Foundation, Accountable Tech, HalfTheStory, The Marcy Lab School, and the Responsible Tech Youth Power Fund, who advocate for accountability and fairness online. Building on this commitment, we are making new contributions to those working to help policymakers better understand platform design impacts and partnering with Screen Sanity to expand their curriculum with a safety-by-design module for parents navigating the digital age.
Having worked in this space for the last five years and witnessing the real-world devastation these decisions have, we feel there is no justification for why this industry behaves as if they are exempt from the ethical and moral standards everyone else abides by.
We at The Archewell Foundation remain committed to promoting accountability, safeguarding information integrity, and protecting all communities in the digital age. We hope and expect those enabling Meta’s profits, like advertisers and shareholders, to do the same.
'The company's decision to rollback protections is so far away from its stated values and commitments to its users – including the parents and families calling for change around the globe – that it's now deeply deceptive.'
Harry and Meghan also said Meta’s decision serves to ‘prioritise those using the platforms to spread hate, lies and division at the expense of everyone else’.
And they claimed the ‘politics of one country should never determine whether freedom of expression and civil and human rights are protected in the online spaces so clearly shaping or destroying democracy’.
The couple said online spaces ‘must be designed with public safety and well-being at their core, resilient against political pressures and lapses in corporate leadership’.
They warned Meta was ‘fully aware of their power to shape public discourse’ but has been ‘disregarding any responsibility to ensure that power is not abused and instead allowing either ego or profit, likely both, to guide decisions that affect billions’.
In particular, they said they were ‘alarmed by plans to abandon commitments to diversity and equity, coupled with internal policy changes that undermine protections for marginalized communities’.
Harry and Meghan said: ‘These decisions echo what experts, whistleblowers, and families have raised in hearings on online harm, especially regarding children’s safety: platform design, dictated by internal policies, directly determines our online experience.
‘To ignore this is knowingly putting everyone in harm’s way and contributing to a global mental health crisis.’
The couple called on Meta to 'reconsider and reinstate policies to protect all users' and urged 'leaders across industries to uphold their commitments to integrity and public safety in online spaces', adding: 'We applaud leaders who refuse to kowtow to bullying.'
The Sussexes also said that ‘in response to these harmful setbacks, we continue to act’, citing their support for the Center for Critical Internet Inquiry, Parents Together, 5Rights Foundation, Accountable Tech, HalfTheStory, The Marcy Lab School, and the Responsible Tech Youth Power Fund.
And they spoke of how they are ‘making new contributions to those working to help policymakers better understand platform design impacts and partnering with Screen Sanity to expand their curriculum with a safety-by-design module for parents navigating the digital age’.
In conclusion, they wrote: ‘Having worked in this space for the last five years and witnessing the real-world devastation these decisions have, we feel there is no justification for why this industry behaves as if they are exempt from the ethical and moral standards everyone else abides by.
‘We at The Archewell Foundation remain committed to promoting accountability, safeguarding information integrity, and protecting all communities in the digital age. We hope and expect those enabling Meta’s profits, like advertisers and shareholders, to do the same.’
The Duke and Duchess hugged residents and spoke to emergency crews at a meal distribution site for people affected by the Los Angeles wildfires last Friday.
On January 7, Meta announced that it was to scrap its longstanding fact-checking programme in favour of a community notes system similar to that on Elon Musk’s X.
Instead of using news organisations or other third-party groups as it does currently, Meta will rely on users to add notes to posts that might be false or misleading.
The changes will affect Facebook and Instagram, the company’s two largest social media platforms which have billions of users, as well as its newer platform Threads.
The policy signals a move towards a more conservative-leaning focus on free speech by Mr Zuckerberg, who met Mr Trump in November following the Republican's victory in the US election.
A community notes system is likely to please the president-elect, who criticised Meta’s fact-checking feature for penalising conservative voices.
Meta donated $1 million to support Mr Trump's inauguration in December, and has since appointed several Trump allies to high-ranking positions at the firm.
Nick Clegg, the former UK deputy prime minister who had been Meta's president of global affairs, also left the company earlier this month.
Mr Clegg has been replaced by Joel Kaplan, a prominent Republican and former senior adviser to George W Bush.
Dana White, the head of the Ultimate Fighting Championship and a close ally of Mr Trump, was also appointed to Meta’s board.
Meta said it plans to bring in the community notes function in the US over the next few months and will ‘continue to improve it’ over the year.
It will also stop demoting fact-checked posts and make labels indicating when something is misleading less ‘obtrusive’.
Since Mr Musk bought X in 2022, it has faced heavy criticism over its approach to posts containing misinformation or hateful content.
In November, the Centre for Countering Digital Hate (CCDH), a non-profit organisation, left the platform, saying the billionaire had turned it into a ‘dangerous, troubled space where hate, conspiracy theories and lies have privileged access to the megaphone’.
The centre had previously published research which found the platform, formerly known as Twitter, failed to act on 99 per cent of hate posted by paid subscribers.
Mr Musk sued the CCDH but the case was thrown out by a US judge who said it was ‘evident’ X Corp did not like being criticised.
Yesterday, the Science Secretary Peter Kyle said British law has not changed and tech giants must still obey it.
Asked whether social media companies had ‘changed the game’ by moving away from content moderation, he told the BBC’s Sunday With Laura Kuenssberg that Meta’s announcement had been ‘an American statement for American service users’.
Mr Kyle said: ‘There is one thing that has not changed and that is the law of this land and the determination of this Government to keep everyone safe.’
He added: ‘Access to the British society and economy is a privilege, it is not a right. If you come and operate in this country you abide by the law, and the law says illegal content must be taken down.’
But campaigners have argued that the law does not go far enough in preventing harm.
Andy Burrows, chief executive of the Molly Rose Foundation – named after Molly Russell who killed herself after viewing harmful content online – said Mr Kyle was ‘right that companies must follow UK laws’ but said those laws were ‘simply not strong enough to address big tech’s bonfire of safety measures’.
He said: ‘The frontline of online safety now sits with this Government and action is needed to tackle widespread preventable harm happening on their watch.’
His comments follow an intervention by Molly’s father Ian Russell, who on Saturday warned that the UK was ‘going backwards’ on online safety.
Mr Russell said the implementation of the Online Safety Act had been a ‘disaster’ that had ‘starkly highlighted intrinsic structural weaknesses with the legislative framework’.