As social media platforms increasingly become global gatekeepers, deciding what we see and don’t see, who has a voice and who is suppressed, the myriad decisions they make each day in deleting content and suspending accounts are facing growing scrutiny for the way those many small choices profoundly affect our shared global discourse and our understanding of the world around us. Three recent events put the impact of these choices in stark relief: the Rohingya crisis, corruption claims in China and assault allegations in the US.
Last month a wave of media reports claimed that Rohingya activists attempting to document and share what they said were the conditions and atrocities they faced were having their Facebook posts deleted and their accounts suspended, and that the company was not responding to their requests to have the content restored. Given that Facebook in particular is increasingly becoming the global news front page, with an outsized influence on what news we see and don’t see, when it begins systematically removing content, that content for all practical purposes ceases to exist for much of the world.
As US Supreme Court Justice Anthony Kennedy put it earlier this year, social media sites “for many are the principal sources for knowing current events … speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to ‘become a town crier with a voice that resonates farther than it could from any soapbox.’”
When asked about the Rohingya activist posts, a Facebook spokesperson responded by email that “We allow people to use Facebook to challenge ideas and raise awareness about important issues, but we will remove content that violates our Community Standards. … In response to the situation in Myanmar, we are only removing graphic content when it is shared to celebrate the violence, versus raising awareness and condemning the action. We are carefully reviewing content against our Community Standards and, when alerted to errors quickly resolving them and working to prevent them from happening again.”
The spokesperson further clarified that, for all posts it reviews, the company has native-language speakers who understand the local context of each situation, to ensure that its policies are correctly applied. However, given the relatively small size of its reviewer workforce, that language expertise and contextual knowledge likely varies dramatically by geography, language, culture and situation, and members of minority groups are likely to be far less represented on its reviewer teams.
When pressed on how Facebook determined that the deleted posts “celebrate[d] the violence” when media reports seemed to suggest that many of the posts being removed and accounts being suspended were of Rohingya activists documenting atrocities on the ground, a spokesperson would respond only that the company acknowledges making mistakes.
Yet such mistakes can have grave consequences. Given the media’s vanishingly short attention span, social media is one of the very few outlets oppressed groups have to document their daily lives, to build awareness of their suffering and to reach out to groups that might be able to help with both immediate and long-term needs.
Thus, removal of such documentation from a social media platform can have the same effect as airbrushing that history away, making it invisible to an easily distracted world and depriving those involved of a voice to tell the world their side of a conflict. While a social media platform removing a photo of a nude art sculpture might be unfortunate, the effective wholesale blocking of countless posts and activists documenting a humanitarian crisis has a very real and profound impact on society’s awareness of that crisis and, in turn, on the ability of affected groups to generate the kind of public outcry that could bring about change.
In short, the growing influence of platforms like Facebook means the digital decisions they make can profoundly affect the real world, with real life-and-death human consequences when it comes to crises.
This imbalance of power between activists and the platforms they use to document and spread word of what they experience and uncover extends beyond humanitarian crises. At the end of last month, a Chinese activist who has used Facebook to publish accusations of what he claims is corruption by Chinese government officials had his account suspended by the company on the grounds that he had “publish[ed] the personal information of others without their consent.”
While the company noted to the Times that the suspension was based on a complaint that had been lodged about the posts, it declined to say whether the Chinese government was behind the complaint. When asked specifically whether Facebook had conversations about the posts with representatives or affiliates of the Chinese government before suspending the user, a company spokesperson responded by email that the company was explicitly declining to comment on whether the Chinese government was behind the suspension. He clarified that all reports of violations of its community guidelines are treated confidentially, so even if a national government official formally requested that specific content be removed, the company would not disclose that.
The company further clarified that it applies a standard very different from that of traditional news reporting in how it handles the publication of personal information. While major news outlets like the Times may publish certain personal information about public officials when reporting on allegations of wrongdoing, Facebook emphasized that its community guidelines do not apply such a “news standard” to its platform, meaning that professional journalists, citizen journalists and activists are not treated any differently from ordinary users when writing about issues of public interest.
This is itself a critical distinction, and one that bodes ill for investigative journalism and public accountability. News outlets can adhere to standard journalistic practice and accepted norms when publishing stories on their own websites, but as Facebook becomes a gateway to the news and tries to become a native publishing platform rather than merely an external link-sharing site, journalism standards will be forced to give way to Facebook’s arbitrary and ever-changing rules. Instead of occupying a privileged role in the information ecosystem, journalists will be subject to the same restrictions as any other citizen, on a platform where journalistic firewalls between advertisers and content may not be so strong, meaning that content guidelines could, over time, curtail reporting that advertisers view negatively.
Both of these examples reflect ongoing events. What happens when a public-interest breaking news story bursts onto the scene, with large numbers of involved individuals coming forward to share what they claim are their experiences and knowledge of the event in question? How do social media companies handle their role as publishers of criminal allegations that the accused may vehemently deny, as well as the deluge of harassment and hate speech that often follows in the wake of such allegations? How does a company balance giving voice to formerly voiceless potential victims while preventing its platform from being used to launch false attacks or spread hate speech?