After Facebook's Oversight Board upheld the company's decision to suspend Donald Trump's account, a report suggests the social network removed less rogue content in 2020 than in 2019, despite a surge in hate speech during the US election.
The original ban was placed on the then US president in early January after posts that both Twitter and Facebook deemed inflammatory were followed by the storming of the US Capitol. Trump had only recently finished addressing supporters in Washington before rioters sought to prevent Congress from certifying Joe Biden's victory. Five people died in the wake of the violence.
Although the Oversight Board, which operates as an independent entity, appeared to back Facebook's stance of clamping down on hate speech, social media agency Reboot has raised questions. Figures released by the major social media sites indicate Facebook is the only one to have reduced the amount of content it takes down. Reboot said Facebook's own transparency reports showed it removed 12.4 billion pieces of content in 2020, a 21% reduction on the 15.6 billion taken down in 2019.
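As a rough sanity check on Reboot's headline claim, the year-on-year change can be reproduced from the two totals above. The short Python sketch below uses only the figures quoted in this article and is purely illustrative:

```python
# Minimal sketch: reproduce Reboot's year-on-year comparison from the
# two totals quoted above (figures as reported, in billions of items).
removed_2019 = 15.6
removed_2020 = 12.4

change = (removed_2020 - removed_2019) / removed_2019 * 100
print(f"Change in content removed, 2019 to 2020: {change:.1f}%")  # about -20.5%, the roughly 21% cut Reboot cites
```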
In comparison, the agency said, the same figures showed removal action increased by 427% over the same period at Instagram, which is owned by Facebook. Reports from rival platforms showed that content removal increased by 135% at YouTube and by 112% at TikTok, while Twitter was relatively unchanged.
Facebook rejected Reboot's methodology. It claimed a comparison between different sites is impossible because each has its own criteria for classifying material and taking removal action. It also pointed out that individual sites can change their classification of harmful material and add new categories over time. Amalgamating quarterly figures into an annual figure can create an inaccurate picture, the social network said.
Less to take down?
However, Reboot's co-founder and managing director Shai Aharony insisted that the numbers come from Facebook's own publicly released figures, as well as those published by rival sites. He rejected Facebook's suggestion that there is no merit in comparing content deletion rates between sites, or even between different years on the same site.
"We agree that content removal metrics vary from platform to platform, but the factors and rates remain largely consistent," he maintained.
Aharony welcomed the fact that Facebook and Instagram are constantly updating their policies and improving detection techniques to, in his words, make social media "safer". However, he stood by the finding that Facebook took down 21% less content in 2020 than in 2019, and believes it points to two possibilities.
"It can only be assumed from the data that either less is being picked up by their evolving metrics, or less removal-worthy content is being published on the platform to begin with," he said.
More hate, seen less often
One thing Aharony and Facebook do agree on is that 2020 brought no reduction in the amount of hate speech the site had to remove.
The site's own figures show it took down nearly five times as much hate content in Q4 2020 as in Q4 2019 (26.9 million posts, up from 5.5 million). In the same quarter, which covered the US election, removals of hate content from dangerous groups quadrupled to 6.4 million posts, from 1.6 million in Q4 2019.
Concerns over Facebook's alleged failure to remove hate speech from its platform led to an advertiser boycott last July, focused on the US, that eventually involved the majority of the country's biggest consumer brands.
Facebook said that although removals are up, the prevalence of harmful material, meaning how often people actually see it, has been reduced. This is due to improvements in detection technology that remove harmful posts before they are widely seen.
By that measure, the rate at which hate speech was seen dropped in Q4 2020 from 10 in every 10,000 "views of content" to seven. This is a more meaningful figure, the site claimed, because it shows a decline in how much harmful material is consumed rather than a rise in how much is posted and promptly deleted.
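For readers who want to see how the two measures differ, the sketch below illustrates the prevalence idea: hate speech as a share of content views rather than a count of removals. The per-10,000 rates are the ones quoted above; the total view count is an arbitrary illustrative number, not a Facebook figure.

```python
# Minimal sketch of the "prevalence" metric described above: hate speech
# measured as a share of content views rather than a count of removals.
# The per-10,000 rates are the ones quoted in the article; the view total
# is an arbitrary illustrative number, not a Facebook figure.

def hateful_views(total_views: int, rate_per_10k: float) -> float:
    """Estimated number of content views that land on hate speech."""
    return total_views * rate_per_10k / 10_000

total_views = 1_000_000  # illustrative only

print(hateful_views(total_views, 10))  # 1000.0 hateful views at the earlier rate
print(hateful_views(total_views, 7))   # 700.0 at the Q4 2020 rate
```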
Is AI the answer?
Detection is key for Facebook. It claims that the rise in content removal at Instagram reflected improvements in the AI it uses to spot rogue content. The technology was improved this year, for both sites, by a better understanding of Arabic, Spanish and Portuguese, which could explain why more material was spotted.
Daily use across Facebook and Instagram was up 15% in December 2020 compared with a year earlier, so more people were posting and consuming content on each site, and both were using better detection technology.
This raises the question, though: why would Facebook be removing less content in 2020, despite a rise in hate speech and its technology getting better at spotting rogue content? The answer could be better detection of fake accounts. These are often set up, by individuals or as part of orchestrated bot networks churning out misinformation, to send spam or spread harmful messages anonymously.
Drilling down into Facebook's figures, Reboot found that the social network removed 700 million fewer fake accounts in 2020 than in 2019. Over the same period, 2.6 billion fewer spam posts were taken down. Facebook's figures show that these two categories accounted for 97% of all content and account removals in both years.
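Taken together, those two declines come to roughly 3.3 billion, which on Reboot's reading of the figures is enough to cover the entire 3.2 billion year-on-year fall in total removals. The sketch below simply combines the numbers already quoted in this article:

```python
# Minimal sketch combining the figures quoted above (all from Reboot's
# reading of Facebook's transparency reports; values in billions).
total_removed_2019 = 15.6
total_removed_2020 = 12.4
drop_fake_accounts = 0.7   # 700 million fewer fake accounts removed
drop_spam_posts = 2.6      # 2.6 billion fewer spam posts taken down

overall_drop = total_removed_2019 - total_removed_2020          # about 3.2 billion
drop_in_two_categories = drop_fake_accounts + drop_spam_posts   # about 3.3 billion

print(f"Overall drop in removals: {overall_drop:.1f}bn")
print(f"Drop in fake accounts and spam alone: {drop_in_two_categories:.1f}bn")
```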
Technological advances in spotting fake accounts, and closing them down before they send out malicious or spam content, could well be the most likely explanation for why content removal went down in 2020.
Facebook claimed it has become much better at spotting such rogue activity. Crucially, it does not report the number of accounts its technology closes down automatically before they have a chance to spread hate or spam. However, it does include those that manage to evade initial detection and are shut down once they post rogue content.
This could well mean content removals are down because more spam accounts are being blocked before they can post at all: the spam they would have produced never appears, and the pre-emptively blocked accounts themselves never show up in the official figures.
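As an illustration of that reporting effect, the toy model below (all numbers hypothetical, not drawn from Facebook's reports) shows how better pre-emptive blocking can cut reported removals even if the volume of bad accounts being created never changes:

```python
# Toy model (all numbers hypothetical) of the reporting effect described
# above: accounts blocked before they post never add to the removal count,
# so better pre-emptive detection lowers reported removals even if the
# number of bad accounts being created stays constant.

def reported_removals(bad_accounts: int, preempt_rate: float, posts_each: int) -> int:
    """Only posts from accounts that slip past pre-emptive detection are
    published and then removed, so only those show up in the figures."""
    evading_accounts = round(bad_accounts * (1 - preempt_rate))
    return evading_accounts * posts_each

bad_accounts = 1_000_000  # hypothetical bad accounts created in a year
posts_each = 10           # hypothetical spam posts each would publish

print(reported_removals(bad_accounts, 0.50, posts_each))  # 5,000,000 with weaker pre-emptive detection
print(reported_removals(bad_accounts, 0.80, posts_each))  # 2,000,000 with stronger detection
```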
Improved enforcement and AI detection of fake accounts would appear to be the most logical explanation for why hate speech spiked in 2020 yet overall content removal fell.