Meta, the parent company of Facebook and Instagram, is struggling to control and tackle hate speech ahead of the U.S. election, according to exclusive research shared with the Thomson Reuters Foundation.
Global Witness, a non-profit organization, analyzed 200,000 comments posted on the Facebook pages of 67 U.S. Senate candidates between September 6 and October 6 to evaluate how the platform handles hate speech ahead of the presidential election.
Global Witness researchers identified 14 comments that breached the hate speech rules in Meta’s “community standards” and flagged them using Facebook’s reporting tool, but Meta took several days to respond.
The comments flagged by the researchers were derogatory toward Muslims and Jews, and included dehumanizing speculation about one candidate’s sexual orientation.
Meta removed some, but not all, of the 14 comments from Facebook, and only after Global Witness emailed the company directly, the researchers said.
Ellen Judson, the Global Witness researcher who led the assessment, said there was a significant delay before the posts were reviewed.
Meta has repeatedly faced scrutiny from researchers, watchdog organizations and lawmakers for failing to foster a healthy information environment during elections around the world.
In April, the European Commission opened an inquiry into whether Meta had breached EU online content rules ahead of the European Parliament elections.
Facebook’s handling of the comments flagged by Global Witness points to a failure in the platform’s approach to tackling hate speech, Judson said.
A Meta spokesperson said by email that Global Witness had assessed only a small sample of comments, and that any comments found to violate Meta’s policies had been removed.
The spokesperson said the assessment did not accurately reflect the work of Meta’s teams, which include some 40,000 people dedicated to keeping the platform safe and secure in the run-up to the election.
Under Facebook’s community standards, content that targets people on the basis of their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disability or disease breaches the platform’s rules.
Judson said it is unclear how many users were exposed to the hate speech, but she stressed that its impact could be significant.
She said the psychological toll of online abuse can lead people to reconsider their involvement in politics, while onlookers may see such interactions as hostile and be deterred from entering the arena.
Even a small amount of abuse can cause significant harm.
A lack of investment?
Theodora Skeadas, a former public policy official at Twitter, now known as X, said underinvestment in preparedness for the upcoming U.S. vote was part of a broader pattern of failures.
Skeadas, now chief executive of Tech Policy Consulting, which works on issues such as AI governance and information integrity, said staff cuts had reduced the resources available for monitoring political content.
Meta has cut jobs across various departments in recent years.
Facebook and Instagram are the second and third most widely used social media platforms in the United States, according to the U.S.-based Pew Research Center, with 68% of American adults saying they use Facebook and 47% Instagram.
More than a third of users rely on the platforms for news about current events, Pew found.
Meta says the prevalence of hate speech on its platforms is very low, at about 0.02% of content viewed on Facebook and 0.02%-0.03% on Instagram, meaning that out of every 10,000 content views, an estimated two to three would contain hate speech.
A Meta spokesperson told the Thomson Reuters Foundation that in the second quarter of 2024, Facebook took action on 7.2 million pieces of content for breaching its hate speech rules and on 7.8 million pieces for violating its bullying and harassment policies.
Jeff Allen, co-founder of the non-profit Integrity Institute and a former data scientist at Meta, said automated hate speech detection systems have a significant weakness: they often miss nuance, struggling to interpret the context of comments and sometimes being fooled by slang or subtle phrasing.
Allen said platforms such as Facebook are also wary of being too aggressive in removing posts, because takedowns can reduce the time users spend on the platform.
Taking a harder line on content removal can cause engagement to drop, he said, describing it as a trade-off.
Nick Clegg, Meta’s president of global affairs, said in a February blog post outlining the company’s election strategy that no company does more, or invests more, to protect elections online than Meta, not just during election periods but at all times.
Clegg said Meta had invested more than $20 billion in safety and security since 2016, highlighting the company’s commitment to transparency around political advertising and to the teams tasked with tracking hate groups operating on the platform.
Unveiling the Truth
Despite its pledges to protect elections, Facebook has faced criticism for accepting false advertising, spreading election misinformation and permitting hate speech, according to several recent reports.
In an October study testing the advertising systems of major social media platforms, Global Witness found that Facebook, despite improvements to its review process, was still accepting and displaying some paid ads containing election misinformation.
Forbes reported in October that Facebook had run more than $1 million worth of deceptive ads suggesting the U.S. election could be postponed or manipulated, while the Bureau of Investigative Journalism reported in November that e-commerce businesses had used Facebook to sell merchandise carrying similar misinformation. Meta reportedly said it was investigating both incidents.
Experts including Allen said Meta could be more transparent about how it handles hate speech, for example by publishing data on how many users encounter such content, clarifying how often posts are reviewed by human moderators, and disclosing more about how its automated systems work.
In August, Meta shut down CrowdTangle, a tool widely used by outside researchers to monitor viral misinformation, drawing criticism from groups and experts who relied on it. Meta said new tools it had rolled out provided a more comprehensive view of activity on its platforms.
Allen stressed the importance of measurement in assessing the scale of harms.
Global Witness said Meta had not engaged with the organization about the findings of its investigation, leaving it in the dark about how hate speech is being managed in the run-up to the U.S. election.
Without greater transparency, Judson said, it is unclear how committed the platforms are to tackling abuse at this critical moment, leaving it to external researchers to uncover breaches of the company’s own rules.