Facebook’s Dilemmas With Hate Speech, Fake Speech And Free Speech
If Facebook were an experiment in creating a true “marketplace of ideas,” the results are disappointing, naively utopian and sometimes dangerous. Yet the platform has no easy solution to the daunting problem of moderating more than two billion users.
Social media provides an unprecedented mechanism for all people, or at least the many with some form of internet access, to broadcast their words to the world and to connect simultaneously with multitudes of others. The most prominent platform, Facebook, has an extraordinary “active user” base of 2.38 billion people, roughly one third of the world’s population. After a decade and a half of social media, its advantages and disadvantages for society, perhaps not predictable in its early days, are becoming apparent. Unfortunately, the remedies for its disadvantages may not be so readily apparent.
Early Optimism
In the days of the Arab Spring, crowds of pro-democracy protesters thanked Facebook for providing a major platform upon which protests were organised. There was an overall feeling of optimism, including from this author, regarding the impact of social media on society. Social media seemed to provide a valuable tool for connecting, galvanising and organising ordinary people against oppressive instruments of power.
Growing Pessimism
Fast forward from 2011 to 2019 and that optimism has dissipated. The Arab Spring has thus far not turned out well. While Tunisia freed itself from dictatorship, oppression has increased in Egypt, while Syria, Libya and Yemen are mired in vicious civil wars. It is possible that social media facilitated premature revolutions that were overly vulnerable to co-option or subversion by reactionary forces. Having said that, the use of social media for progressive revolutionary purposes continues, for example in the ouster of President Omar al-Bashir in Sudan this year.
Social media has also facilitated the spread of “fake news,” which poisons democratic processes and election outcomes by prompting people to vote on the basis of false stories, often delivered through elaborate micro-targeting. Facebook has been criticised for its failure to control Russian influence operations during the 2016 US presidential election, while its subsidiary WhatsApp was used to spread false stories during the more recent Brazilian election. Indeed, targeted fake news stories can be expected to become a feature of all forthcoming elections in countries where social media operates relatively freely.
Even worse, the Islamic State group has used social media with murderous ingenuity as a tool of recruitment and intimidation. Facebook was used by the Christchurch shooter to live-stream his massacre. Furthermore, Facebook has been implicated in the spread of hate speech around the world, which at its worst can contribute to violent ethnic conflict, most notably the genocide against the Rohingya in Myanmar in 2017.
The Myanmar Catastrophe
Despite Myanmar’s emergence from the shadows of isolationist military dictatorship, the Myanmar military retains significant power in the country. Furthermore, long-standing ethnic conflicts continue, unresolved and largely unrestrained. The most extreme hatred is directed at the Muslim Rohingya minority in Rakhine State. The Rohingya are denied citizenship, as they are viewed by the Burmese majority as illegal immigrants from Bangladesh; hence, they are essentially a stateless people. Yet the Rohingya have lived in what is now western Myanmar for at least decades and probably centuries.
The persecution of the Rohingya has grown over the last three decades and erupted into extreme violence in 2012. That conflagration set the stage for even worse atrocities in 2017. A UN fact-finding mission reported that a “clearance operation” was conducted from August 2017 within Rakhine State, resulting in the deaths and gang rapes of thousands of Rohingya, while hundreds of thousands more fled. Seventy percent of the homes in Maungdaw township, where the majority of Rohingya lived, were destroyed. The fact-finding mission found sufficient evidence to justify the investigation and prosecution of Myanmar army officials for the crime of genocide.
The mission declared the role of social media to be “significant” in facilitating atrocity: “Facebook [was] a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the internet.” Indeed, a New York Times investigation detailed the deliberate incitement of genocide by senior officers in the Myanmar military. Facebook has conceded that its platform played a significant role in the catastrophe. It commissioned a human rights impact assessment of its role in Myanmar from BSR (Business for Social Responsibility), which was released in November 2018.
Several factors contributed to the profound nature of Facebook’s role in Myanmar. First and most importantly, an extremely hostile atmosphere with regard to the Rohingya already existed, as confirmed in 2012, and digital literacy in Myanmar is low, as internet use has grown exponentially in the few years it has been broadly available. Secondly, Facebook literally is the internet for many in Myanmar, for a number of reasons: Facebook uses comparatively little data, and under many phone plans it was not counted within the data quota. This is significant in a poor country like Myanmar, where people pay as little as possible for data plans. Hence, people in Myanmar often lack access to, or do not seek out, sources of information beyond Facebook. Thirdly, the Zawgyi font commonly used to write Burmese is not Unicode-compliant, which made it much more difficult for Facebook’s algorithms to detect harmful posts for removal.
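To illustrate the Zawgyi problem, consider a minimal sketch in Python, with a hypothetical blocklist and an approximate Zawgyi rendering of the word “Myanmar.” Because Zawgyi reuses the Unicode Myanmar codepoint block with different meanings and orderings, a filter configured on standard Unicode strings never matches the Zawgyi-encoded form of the same word:

```python
# Illustrative sketch only, not Facebook's actual system: the same Burmese
# word is stored as different codepoint sequences depending on whether it
# was typed for a standard Unicode font or for the Zawgyi font.

# "Myanmar" in standard Unicode order (ma, medial ra, na, asat, ma, aa)
unicode_word = "\u1019\u103C\u1014\u103A\u1019\u102C"
# The same word as commonly typed for Zawgyi (approximate), where the
# medial ra precedes the consonant and a different codepoint marks the asat
zawgyi_word = "\u103B\u1019\u1014\u1039\u1019\u102C"

blocklist = {unicode_word}  # hypothetical list of flagged terms

def naive_filter(post: str) -> bool:
    """Flag a post if it contains any blocklisted term verbatim."""
    return any(term in post for term in blocklist)

print(naive_filter(f"... {unicode_word} ..."))  # True: caught
print(naive_filter(f"... {zawgyi_word} ..."))   # False: same word, missed
```

Open-source detection and conversion tools, such as Google’s myanmar-tools library, have since emerged to bridge the two encodings, but the split illustrates how a purely technical quirk can blind automated moderation.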
Solutions Amidst Competing Demands
Facebook has promised to do better at controlling the spread of fake news and at stopping its platform from being used to catalyse violence and even genocide. Unfortunately, the Facebook mea culpa is becoming a depressing ritual that undermines confidence that it will in fact “do better.”
In fairness to Facebook, it is difficult to “do better” when it is subjected to competing demands. On the one hand, it has been criticised for being overly censorious; on the other, for allowing too much nasty speech. While the need for some censorship should be obvious, as with the Christchurch video and apparently in Myanmar, it is not always easy to identify actual “hate speech,” especially across the vast number of differing cultural contexts in the world. Furthermore, the removal of objectionable content can also destroy evidence of war crimes. Similarly, while Facebook has been rightly condemned for routinely compromising users’ privacy, it has also been criticised for failing to make content available for the gathering of evidence of atrocities.
Having said that, Facebook is taking steps to prevent a repetition of monstrous uses of its platform. It has, for example, committed to taking up the BSR recommendations. It has also created an in-house Strategic Response Team, consisting of former diplomats, human rights specialists and other relevant experts, to avoid fuelling another genocide. Its creation is testament to the very real and consequential political impacts of Facebook’s platform. It is to be hoped that the team is not US-centric; such a bias could make Facebook a de facto foreign policy arm of the US government.
The Unwanted Political Impact of Social Media
Social media companies, and especially Facebook given its sheer enormity, are struggling with the reality of their transformational political and social impact. That profound impact was probably neither predicted nor wanted. Indeed, Facebook was noticeably reticent about its role in the Arab Spring: it declined to join Google and Twitter in a euphoric, self-congratulatory response, while its “real names” policy deters online activism in repressive states. One gets the impression that its founder Mark Zuckerberg would prefer that people spend their time on the platform posting memes of cute cats or even, in a throwback to Facebook’s origins, rating the “hotness” of women.
Yet the reality of that political and social impact cannot be ignored and social media companies as well as governments and societies have to figure out what to do about it. This is not necessarily easy, given the competing rights considerations regarding free speech, hate speech, fake speech and privacy. For Facebook, the appropriate moderation of over two billion users may well be impossible.
Conclusion: A Free Speech Experiment or a Manipulated “SNAFU”?
In his classic nineteenth-century tome on liberal society, On Liberty, J.S. Mill mused that truth would prevail if ideas and speech were permitted to flow freely. This notion morphed into the “marketplace of ideas,” in which good ideas are expected to rise to the top and vanquish bad ones.
Social media may be the closest we have ever come to a real life marketplace of ideas. It has created a space where good and bad ideas from all sorts of people are planted beyond the perimeters established by traditional gatekeepers, such as governments and the mainstream media.
From the heady perspective of 2011, it seemed that Mill may have been correct in his implicitly benevolent view of human nature. From the current perspective of 2019, the results of this “experiment” seem disappointing, naively utopian and dangerous.
Having said that, social media is not an unfettered marketplace. Quite apart from extant domestic and international regulation, it is manipulated by algorithms and bots. Social media companies must rigorously investigate the impacts of their own market manipulation and make adjustments as necessary, as has apparently happened recently with YouTube. The jury is currently out on whether algorithms built to maximise user engagement, and therefore the platforms’ profitability, are compatible with a healthy, pluralist, democratic society.
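To make concrete what “built to maximise user engagement” means, here is a minimal, hypothetical sketch of an engagement-ranked feed; all names and weights are invented for illustration and do not describe any platform’s actual system:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    comments: int
    dwell_seconds: float

def engagement_score(p: Post) -> float:
    """Hypothetical scoring function; the weights are invented."""
    return 1.0 * p.clicks + 5.0 * p.shares + 3.0 * p.comments + 0.1 * p.dwell_seconds

def rank_feed(posts: List[Post]) -> List[Post]:
    # Nothing here asks whether a post is true, hateful or incendiary;
    # the only optimisation target is engagement.
    return sorted(posts, key=engagement_score, reverse=True)
```

Truthfulness never enters the objective: an inflammatory false story that attracts shares will outrank a sober correction that does not, which is precisely the tension between engagement-driven ranking and a healthy marketplace of ideas.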
Professor Sarah Joseph is an internationally-recognised human rights scholar and the director of the Castan Centre for Human Rights Law at Monash University, a position she has held since 2005.
This article is published under a Creative Commons Licence and may be republished with attribution.