It sounded like the start of a glorious future. ‘We have a new North Star,’ proclaimed Facebook CEO Mark Zuckerberg, as he introduced the company’s ambitious new vision of a virtual-reality-based ‘Metaverse’ and unveiled a new company name, ‘Meta’. Yet even as Zuckerberg was announcing the rebrand, Facebook was telling employees to preserve all communications from 2016 onwards in preparation for legal action. The vision of a virtual future was all for show: the latest desperate attempt to deflect the avalanche of scrutiny now descending on Facebook from all sides.
The Facebook Papers
The source of this criticism is the Facebook Papers: internal documents disclosed to the US Securities and Exchange Commission and Congress by former Facebook employee Frances Haugen, copies of which have been obtained by a number of news organisations. Facebook has denied the findings, claiming that the documents’ contents have been exaggerated and taken out of context. But the leaked documents still offer an unflattering glimpse into the inner workings of the social media giant.
Many of them make it clear that Facebook has struggled to cope with the spread of misinformation and far-right movements on its platform. The company was woefully unprepared to deal with post-election violence coordinated on the site, rolling back key safeguards put in place for the 2020 US election, only to hastily reinstate them when rioters stormed the US Capitol on January 6. Despite flagging the far-right groups that participated in the insurrection as the fastest-growing groups on the platform, Facebook was unable to control their ‘meteoric growth’. Researchers admitted that the lack of a cohesive strategy made it difficult for content moderators to distinguish between ‘protected free expression’ and a ‘coordinated effort to delegitimise the election’.
The ability of the groups who attacked the Capitol to grow so rapidly on Facebook had its roots in an overhaul of the News Feed algorithm that the company introduced in 2018. The aim was to boost ‘meaningful social interactions’ between friends and family and make it more difficult for politically divisive content to spread. In reality, the change had the opposite effect. Employees quickly realised that the new system favoured content that used outrage and sensationalism to gain comments and reactions, allowing ‘misinformation, toxicity, and violent content’ to spread rapidly.
Facebook’s Structural Issues
The deeper issue here is that not even Facebook fully understands what kind of impact its systems have on society. One 2019 internal memo concluded that the basic building blocks of the platform itself (the ‘Like’ and ‘Share’ buttons, the News Feed, Facebook groups) were ‘a significant part of why these types of speech flourish on the platform’. Yet any attempt to fix these structural issues is hampered by the fact that these features are crucial to the central pillar of Facebook’s business model: keeping people engaged. Facebook makes the vast majority of its $86 billion in annual revenue by selling user attention to advertisers, and the leaked documents show there was a clear mandate that changes to improve Facebook’s algorithms would not go forward if there was ‘a material trade off’ with engagement.
The situation is made worse by the fact that Facebook does not moderate misinformation and hate speech consistently across different countries. For instance, 87 per cent of Facebook’s global budget for combating misinformation is spent on the US, a country that accounts for less than 10 per cent of Facebook’s daily active users. The Facebook Papers provide insight into the ‘tier list’ Facebook uses to decide where to allocate these resources. At the end of 2019, the US, Brazil, and India were placed in ‘tier zero’, the top tier, with a further 27 countries in tiers one and two. The rest of the world was put in tier three, with little to no additional support.
This ranking system raises an obvious question: if Facebook’s moderation of misinformation was that bad in the US, how bad was it in other countries with similar levels of divisiveness but much lower levels of investment?
The short answer: dire, and often with far deadlier consequences. One internal report found that whilst women in the US reported 30 per cent more cases of online harassment than men, in Indonesia, the Philippines and Brazil they often reported over 100 per cent more abuse than male users. Meanwhile, the 2018 adjustment to Facebook’s algorithm had similarly disastrous consequences in countries such as India, Facebook’s largest market with over 340 million users. Facebook struggled to deal with anti-Muslim content in the region, with Facebook groups serving as ‘perfect distribution channels’ for Islamophobic hate speech. Poor coverage of posts in Hindi and Bengali, two of India’s most widely spoken languages, meant that much of the content targeting Muslims was never flagged or actioned. These issues were exacerbated by Facebook’s reluctance to ban nationalist anti-Muslim organisations with close ties to the BJP, India’s ruling party, due to ‘political sensitivities’.
The lack of local language support was at the heart of many of these failings. In Ethiopia, a country in the midst of a brutal civil war, Facebook’s automated systems for catching misinformation were unable to detect misleading content in Oromo or Amharic, the two most widely spoken languages, and the fact-checking organisation Facebook partnered with to combat hate speech employed just six full-time fact-checkers covering the country’s four main languages.
Facebook’s coverage of Arabic, meanwhile, is particularly woeful, with only 6 per cent of Arabic-language hate content on Instagram (which Facebook owns) being identified and removed in 2020. In Yemen and Iraq, two of the most war-torn countries on the planet, Facebook employed almost no content moderators who spoke the local dialects, allowing groups such as ISIS to use local Arabic slang to spread hate speech without being banned. Poor moderation even allowed Facebook to be used for outright criminal activity, with the site inadvertently doubling as a black market for buying and selling domestic workers as slaves, a problem Facebook took only ‘limited action’ to fix until Apple threatened to remove its products from the App Store.
Knowingly Exposing Users to Harm
Haugen has claimed that the damaging impact of Facebook beyond the West was a key part of her decision to turn whistleblower, along with what she called the company’s ‘great hesitancy to proactively solve problems’. The documents she has provided only reinforce the overwhelming impression that Facebook has placed profit over societal good at every opportunity. How else can you account for the fact that Facebook has concentrated the vast majority of its resources on the US and other western countries while leaving more vulnerable nations at the mercy of misinformation and hate speech?
Facebook might insist otherwise, and attempt to dazzle us with its vision of the metaverse. But it is telling that the most damning indictments of the company’s behaviour come from its own employees. In leaked internal comments and exit statements, many of them agonise over the societal harm Facebook has inflicted. ‘Out of fears of potential public and policy stakeholder responses, we are knowingly exposing users to risks of integrity harms,’ claimed one employee. ‘I think integrity at Facebook is incredibly important’, wrote another, in an internal post announcing their departure from the company. ‘The truth is, I remain unsure that Facebook should exist’.