Meta systemically censors pro-Palestinian content, report claims

Meta has enabled “systemic and global” censorship of pro-Palestinian content on its platforms since the start of the ongoing Israel-Hamas conflict, a new report by Human Rights Watch claims.

The 51-page report documents more than 1,000 instances between October and November when Meta, the parent company of Facebook and Instagram, allegedly removed content or suspended accounts involving or in support of Palestinians. 

The international organization claims this “pattern” is due to flaws in and inconsistent application of the social media giant’s policies, “apparent deference” to government requests and the company’s reliance on automation for content removal and moderation.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, HRW’s acting associate technology and human rights director. “Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

HRW says it reviewed 1,050 cases across 60 countries for its report, though it notes that hundreds of people continued to share their experiences of censorship after the study concluded. It says 1,049 of the reviewed cases involved content that was unduly suppressed or censored after expressing peaceful support for Palestine or engaging in public debate about Palestinians’ rights, while the one remaining removed case had been in support of Israel. 

In hundreds of cases, HRW claims Meta applied its “Dangerous Organizations and Individuals” (DOI) policy, which aims to prevent “organizations or individuals that proclaim a violent mission or are engaged in violence” from having a presence on its platforms. The report says this policy in practice has restricted “legitimate speech around hostilities between Israel and Palestinian armed groups.”

Other removed posts were subject to a misapplication of Meta’s “newsworthy allowance” policy, which was used to justify the removal of content “documenting Palestinian injury and death” despite its news value, HRW claimed.

HRW alleges Meta is aware of its flawed policy enforcement, particularly as it relates to Israel and Palestine, having raised similar warnings in a 2021 report.

And Meta appeared to acknowledge those faults in a statement regarding the organization’s most recent findings, saying to the Guardian its errors are “frustrating” but that “the implication that we deliberately and systemically suppress a particular voice is false.”

“This report ignores the realities of enforcing our policies globally during a fast-moving, highly polarized and intense conflict, which has led to an increase in content being reported to us. Our policies are designed to give everyone a voice while at the same time keeping our platforms safe,” the company said in its statement.

But to meet its “human rights due diligence responsibilities,” HRW says Meta needs to align its content moderation policies with human rights standards, such as ensuring that decisions to remove content are consistent and not overly broad or biased.

“Instead of tired apologies and empty promises, Meta should demonstrate that it is serious about addressing Palestine-related censorship once and for all by taking concrete steps toward transparency and remediation,” Brown said.