
Justice & Accountability

How Facebook Failed in Syria


The system for reporting content has long allowed pro-Assad users to successfully press for the removal of content published by members of the political opposition.

January 21, 2021




In October 2020, in the wake of the decapitation of the French high school teacher Samuel Paty, a Syrian user took to Facebook to praise the brutal murder. After SJAC staff reported the post, Facebook’s technology assessed whether it “goes against our Community Standards” and informed us that it did not. The post, and the violence it sought to incite, remains visible to this day. One year earlier, Facebook had decided to remove posts condemning the beheading of Syrian human rights activists by ISIS. Facebook (or rather its algorithms) deemed them in violation of the Community Standards even though the posts were clearly intended to highlight the plight of the victims of these murders. Despite the reasonable, measured tone struck by Facebook’s automated messages to us, subsequent attempts to appeal this mystifying decision to human moderators fell on deaf ears.

As Congress contemplates new laws regulating social media companies, and as Facebook’s “Supreme Court,” its 40-person Oversight Board, begins resolving its first disputes, the Syrian experience should figure prominently in the debate. It illustrates how Facebook’s content moderation, the process by which it decides to keep or discard posts, is deeply flawed. Facebook systematically enables hate speech and repression while giving insufficient weight to the judgment of trusted human specialists.

While the technology once held promise, Facebook’s record in Syria has become troubling. At the outset of the conflict in 2011, the platform helped organizers of civilian protest movements coordinate action and document government repression. Facebook also helped secure the accounts of Syrian protesters upon notification that activists had been arrested, to prevent local authorities from forcibly accessing their data. On the other hand, it has facilitated vast disinformation campaigns coordinated by the Syrian and Russian governments to delegitimize the political opposition. Facebook’s response to such abuses of its platform in Syria was dangerously slow and halting. This can be attributed to the general reluctance of U.S. media companies like Facebook to regulate content in the name of “free speech,” as well as to a financial imperative to generate ad revenue by promoting viral, often extremist content that sustains user attention.

After Facebook’s role in fomenting the Rohingya genocide in Myanmar and spreading disinformation related to the 2016 U.S. presidential election became public, Facebook signaled greater interest in mitigating harm. For example, it started to preemptively remove content that incited violence and discrimination. Facebook now devotes significant resources to regulating content in places like Germany, where it has paid for over one thousand content moderators. However, Facebook’s attention to content from the Middle East in general, and Syria specifically, has been limited and inconsistent. Facebook did hold a roundtable in Beirut in 2019 with representatives of local civil society organizations to organize a regional network of so-called Trusted Partners. Under the Trusted Partners program, activists and NGOs can bypass normal protocols for reporting dangerous content and immediately alert Facebook managers. Unfortunately, little materialized from the 2019 meeting, and Facebook’s periodic financial support to regional Trusted Partners lacks a coherent, long-term plan. It is also not clear that content reported by Trusted Partners is systematically acted upon.

In Syria, Facebook’s moderation has been profoundly harmful, largely because it prioritizes algorithmic formulas over proactive human interpretation. As SJAC staff discovered firsthand, Facebook algorithms designed to block hate speech from militant groups and their sympathizers have ended up removing posts by Syrian Trusted Partners. Subsequent attempts to appeal those decisions often go unanswered, even as genuinely violent content continues to circulate through and beyond the platform. Those same algorithms have meanwhile promoted, and even auto-generated, viral extremist content in and about Syria.

The system for reporting content has long allowed pro-Assad users to successfully press for the removal of content published by members of the political opposition. Such users report the content as “graphic” so as to prevent the documentation of war crimes committed by the Syrian government. This has led to many opposition accounts being banned outright by Facebook. More recently, pro-government forces have turned to copyright law as a means of suppressing anti-government content. Last summer, Facebook was compelled to remove posts detailing anti-government protests in Sweida on the pretext that they infringed on a pro-Assad media company’s digital copyright. It is difficult to conceive how the newly operational Facebook Oversight Board will benefit protesters in Sweida and elsewhere, many of whom are unaware of how they can appeal the decision to remove their posts. Their plight will not be helped by the fact that the sole representative of the Middle East among the Board’s membership is an expert in neither content moderation nor digital rights.

Human rights law and consultation with specialists on Syria should anchor Facebook’s response to these problems. According to UN experts, freedom of expression and the obligation of non-discrimination are mutually reinforcing and must be upheld in a balanced way by companies as well as States. This is especially the case for a company like Facebook, which has effectively become the main space of public debate for people living under governments that are hostile to independent media, as in Syria. Facebook therefore has an obligation to protect both its users’ freedom of expression and their freedom from violent, deceptive information. Although the Oversight Board’s emphasis on human rights has garnered some praise from civil rights groups, critics at the UN and elsewhere have noted the Board’s limited conception of hate speech and pointed out that the Board will only review decisions about content that Facebook has taken down. Content that is wrongly retained on the site will not be considered by the Board.

It is crucial, therefore, that Facebook involve human rights organizations and media activists at every stage of content moderation regarding Syria. They are well suited to clarify the social context of politically sensitive content, identify where certain acts of expression threaten particular communities, and highlight content that is relevant to justice processes in Syria and should be preserved. Facebook’s algorithms are no substitute for this blend of expertise and concern for the most vulnerable. Adopting an inclusive approach to content moderation will require a concerted organizational effort and additional financial commitments. But Facebook cannot discharge its responsibility to implement a moderation system that fully respects human rights by delegating that task to an Oversight Board with limited powers.

If Facebook does not commit to reforms, calls for government regulation, financial penalties, or civil and criminal lawsuits will become more attractive.


SJAC