by David Greene, Paige Collings, and Christoph Schmon, Electronic Frontier Foundation
Reprinted under Creative Commons Attribution license
Government involvement in content moderation raises serious human rights concerns in every context, and these concerns are even more troubling when the involvement originates with law enforcement. We recently filed a comment with the Meta Oversight Board urging it to treat this issue seriously.
When platforms cooperate with government agencies, they become inherently biased in favor of the government's preferred positions. That cooperation gives government entities outsized influence to manipulate content moderation systems for their own political goals—to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such systems are established, it is easy for government—and particularly law enforcement—to use them to coerce and pressure platforms into moderating speech they would not otherwise have chosen to moderate.
For example, Vietnam has boasted of its increasing effectiveness in getting Facebook posts removed but has been accused of targeting dissidents in doing so. Similarly, the Israeli Cyber Unit has boasted of high compliance rates of up to 90 percent with its takedown requests across all social media platforms. But these requests unfairly target Palestinian rights activists, news organizations, and civil society, and one such incident prompted the Facebook Oversight Board to recommend that Facebook "Formalize a transparent process on how it receives and responds to all government requests for content removal, and ensure that they are included in transparency reporting."
Issues with government involvement in content moderation were addressed in the newly revised Santa Clara Principles 2.0, in which EFF and other organizations called on social media companies to "recognize the particular risks to users' rights that result from state involvement in content moderation processes." The Santa Clara Principles also affirm that "state actors must not exploit or manipulate companies' content moderation systems to censor dissenters, political opponents, social movements, or any person."
Specifically, users should be able to access:
- Details of any rules or policies, whether applying globally or in certain jurisdictions, which seek to reflect requirements of local laws.
- Details of any formal or informal working relationships and/or agreements the company has with state actors when it comes to flagging content or accounts or any other action taken by the company.
- Details of the process by which content or accounts flagged by state actors are assessed, whether on the basis of the company's rules or policies or local laws.
- Details of state requests to action posts and accounts.
User access to this information is even more pertinent when social media sites have granted government authorities "trusted flagger" status to inform the platform about content that is illegal or that violates its Community Guidelines or Terms of Service. This status has been bestowed on governments even when their own civil liberties records are questionable, enabling censorship of discourse that challenges government-imposed narratives.
These concerns about government influence over the content available to users online are even more dire given that the EU's Digital Services Act (DSA) will soon introduce new mechanisms allowing platforms to designate governmental agencies—and potentially law enforcement agencies such as Europol—as trusted flaggers, giving governments priority status to "flag" content for platforms. Although trusted flaggers are only supposed to flag illegal content, the preamble of the DSA encourages platforms to empower trusted flaggers to act against content incompatible with their terms of service. This opens the door to law enforcement overreach and to platforms' over-reliance on law enforcement for content moderation.
Moreover, government entities may simply lack the expertise to flag content effectively across a variety of platforms. This is evident in the United Kingdom, where London's Metropolitan Police Service, or the Met, has consistently sought to remove drill music from online platforms based on the mistaken, and frankly racist, belief that it is not creative expression at all, but a witness statement to criminal activity. In a global first for law enforcement, YouTube gave officers from the Met trusted flagger status in 2018 to "achieve a more effective and efficient process for the removal of online content." This pervasive system of content moderation of drill music is governed by the Met's Project Alpha, in which police officers from gang units operate a database that includes drill music videos and monitor social media sites for intelligence about criminal activity.
The Met has denied accusations that Project Alpha suppresses freedom of expression or violates privacy rights. But reports show that since November 2016, the Met has made 579 referrals for the removal of "potentially harmful content" from social media platforms, and 522 of those items were removed, predominantly from YouTube. A 2022 report by Vice also found that 1,006 rap videos have been added to the Project Alpha database since 2020, and a heavily redacted official Met document noted that the project was to carry out "systematic monitoring or profiling on a large scale," with males aged 15 to 21 as the primary focus. Drill lyrics and music videos are not simple or immediate confessions of involvement in criminal activity. Yet law enforcement's "street illiteracy" reinforces the idea that drill music is a record of real-life activities the artists have themselves seen or done, rather than artistic expression communicated through culturally specific language and references that police officers are seldom equipped to decode or understand.
Law enforcement officers are not experts on music, and they have a history of linking it to violence. As a result, the flags police raise to social media platforms are entirely one-sided, rather than informed by experts on both sides. And it is especially troubling that law enforcement is advancing its concerns about gang activity through partnerships with social media platforms that disproportionately target youth and communities of color.
Indeed, the removal of a drill music video at the request of unnamed "UK law enforcement" is the very case the Oversight Board is considering, and on which we commented.
All individuals should be able to share content online without their voices being censored by government authorities because their views are oppositional to those of the powerful. Users should be informed when government agencies have requested the removal of their content, and companies should disclose any back-channel arrangements they have with government actors—including trusted or other preferred flagger systems—and reveal the specific government actors to whom such privileges and access are granted.