Can Facebook Help Steal an Election?
Facebook’s Mark Zuckerberg recently testified that more than 300,000 Australians were affected by the Cambridge Analytica privacy scandal linked to the 2016 US election. But what evidence is there that Facebook can have a genuine impact on how we vote?
Last month there was public outcry when the British consulting firm Cambridge Analytica was accused of obtaining and using the personal data of more than 80 million Facebook users for political campaigning. Since then, lawmakers in the European Union, Australia and the United States have begun investigating the scale of the data breach, the CEO of Cambridge Analytica has been suspended, and Facebook is facing growing pressure over its privacy practices.
Meanwhile the media are tripping over themselves with spectacular headlines about large-scale election tampering, suggesting that Cambridge Analytica used psychological targeting in social media to interfere in elections and manipulate public opinion. But can you really steal an election through Facebook?
Social media platforms like Facebook do hold a lot of data and offer enormous potential for psychographic microtargeting: the analysis of data to predict people’s behaviour, interests, opinions and personality types, followed by targeting them with the advertisements and messages they are most likely to respond to.
Psychological researchers from Cambridge University showed that the things we ‘like’ on Facebook can be used to predict our intelligence, sexual orientation and other personal attributes. They reached this conclusion by deploying a personality quiz on Facebook to assess users’ personality traits and personal characteristics, then correlating those answers with the participants’ Facebook likes. In a follow-up study, the researchers showed that matching the content of ads to individuals’ psychological characteristics resulted in up to 40 per cent more clicks and up to 50 per cent more purchases than unpersonalised ads.
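For readers curious about the mechanics, the sketch below shows the basic idea in Python on purely synthetic data: represent each user as a row of page likes, then fit a simple linear model against self-reported quiz scores so that the trait can later be predicted for users who never took the quiz. The variable names, model choice and data sizes here are illustrative assumptions; the actual studies used far larger datasets and more elaborate models, so this is only a minimal illustration of the principle.

```python
# Toy sketch of psychographic prediction from Facebook likes.
# All data below is synthetic; the model, variable names and sizes are
# illustrative assumptions, not the researchers' actual method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n_users, n_pages = 1000, 50

# Binary user-by-page matrix: 1 if a user liked a page, 0 otherwise.
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Simulate a quiz-reported trait score (e.g. "openness") that is loosely
# driven by a handful of pages, plus noise.
page_effects = rng.normal(0.0, 1.0, n_pages)
trait_scores = likes @ page_effects + rng.normal(0.0, 2.0, n_users)

# Fit on quiz takers, then check how well likes alone predict the trait
# for unseen users (held-out R^2).
X_train, X_test, y_train, y_test = train_test_split(
    likes, trait_scores, test_size=0.25, random_state=0
)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```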
Companies like Cambridge Analytica use this psychological assessment of digital footprints to match political messages to the specific interests of voters, and then use social media to spread those messages through a network of social bots and mutually reinforcing hyper-partisan sites. But swaying an entire election is a whole different ball game from creating engaging personalised ads to boost clicks, and it is unclear whether psychographic microtargeting even works for political campaigns.
Empirical studies examining the causal link between personality traits and political behaviour have found that such traits are only somewhat useful in predicting voting preferences and voter turnout. Philip Chen from the University of Minnesota found mixed evidence on the ability of personality traits to predict who is most likely to turn out to vote. Moreover, researchers at New York University found that simply asking people whether they were liberal or conservative was far more predictive of voting behaviour in the 2008 US election than any personality trait.
These results are consistent with a meta-analysis of 36 voter mobilisation experiments suggesting that social media use has minimal impact on participation in election campaigns and that online communication may not be an effective tool for changing people’s political opinions. Microtargeted political ads online have even been associated with lower persuasiveness: the greater the audience’s selective exposure to media content, the less likely media messages are to do anything other than reinforce prior predispositions. In other words, it is very hard to change a voter’s political leanings if they only receive personalised information that confirms their existing opinions and judgments. Personalised ads are therefore more likely to act as accelerators for one’s own political camp, appealing to voters who would not have voted for another politician or party anyway.
Even in the 2016 US election, which was allegedly subject to an enormous Russian-backed misinformation campaign and large-scale psychographic microtargeting, it is very unclear what impact these efforts had on the result. Drawing on web-browsing data and the results of an online survey, researchers concluded that fake news changed vote shares by an amount on the order of just hundredths of a percentage point: far less than Trump’s margin of victory in the pivotal states on which the outcome depended.
When comparing vote shares across demographic groups, there was no evidence that Trump drew a greater proportion of the vote from groups with a high propensity to use the internet than previous Republican candidates did. Overall, the data is still too thin to clarify the effects of psychographic microtargeting on real-world voting behaviour. And, contrary to some sensational headlines, there is no systematic empirical evidence that Cambridge Analytica or any other company could steal an election using Facebook.
This does not mean that Facebook has no effect on voting behaviour at all. A compelling 2012 study of more than 61 million Facebook users found that an election-day Facebook message motivated about 340,000 extra people to vote in the 2010 US congressional elections. However, to achieve this, the designers of the experiment had to rely on the power of real-life social networks and include in the message the names of up to six friends who had already voted. People who saw the message were 0.3 per cent more likely to seek information about their local polling place and about 0.4 per cent more likely to get to the polls than those who received either a purely informational message about voting or no message at all. It was not an effect of psychological profiling; it was pure peer pressure.
This indicates that online messages can trigger a variety of offline behaviours that influence elections. But the thing to fear is not a few shady data brokers targeting your inner demons; it is the indirect effect of computational propaganda on our democratic society. Phenomena such as the information-disinformation paradox of the digital age, fake news and post-truth politics can amplify existing social divisions and help create a mood of insecurity.
Hence, the recent story should open the door to a discussion that takes into account not only the technological and legal aspects but also the more indirect effects computational propaganda has on society. This will help define how best to respond to such new threats to the political system. As media consumption continues to shift from traditional media to social media platforms, we should continue to monitor their usage and impact on the political sphere. But until there is evidence that changes the above conclusions, we should not be hoping for alternative Facebook feeds and new data laws to solve our political problems.
Jason Young is pursuing a Master’s degree in sociology and political science at the University of Lucerne. He is a researcher at the AIIA National Office.
This article is published under a Creative Commons Licence, and may be republished with attribution.