Social media companies increase policing of COVID viewpoints, election-related content
Companies signal enhanced enforcement of health guidance rules and election content oversight.
Major social media companies are signaling their intent to enforce their respective content policies regarding both the COVID-19 pandemic and the 2020 presidential race, setting the stage for a contentious election season as well as a protracted debate over acceptable scientific interpretations of the worldwide coronavirus outbreak.
Companies like Facebook and Twitter have for years fielded sustained criticism over how they handle controversial content posted on their platforms, with many claiming that the social media giants don't do enough to police offensive and potentially dangerous content and others accusing the companies of biased enforcement of speech restrictions.
Now, these social media firms and others are taking much more aggressive stances on content related to both the coronavirus pandemic and this year's presidential race, informing users that they will be subject to enhanced oversight of posts regarding those two pivotal topics.
Unclear guidance on COVID-19 posts
Numerous tech companies have adjusted their speech policies in the wake of the COVID-19 pandemic. Much of the ongoing conversation around the coronavirus outbreak has taken place on social media, with users sharing opinions, evidence, figures and charts related to the disease's transmission, spread and effects.
Both Facebook and Twitter have indicated that, where contentious scientific debate is concerned, they intend to side with established public health authorities, effectively curating a public conversation centered on the conclusions of global and national public officials.
Twitter, for instance, says on its website that it will "require people to remove Tweets" that include "statements which are intended to influence others to violate recommended COVID-19 related guidance from global or local health authorities to decrease someone's likelihood of exposure to COVID-19." Among the problematic statements the company lists under that category is "social distancing is not effective."
Facebook on its website outlines a similar policy using similar language, with the company stating that it will "remove content with false claims or conspiracy theories that have been flagged by leading global health organizations and local health authorities," including "claims that are designed to discourage treatment or taking appropriate precautions."
YouTube has adopted a virtually identical policy, stating that it "does not allow content that spreads medical misinformation that contradicts the World Health Organization (WHO) or local health authorities' medical information about COVID-19." The video-sharing company provided a glimpse of how it will apply that policy earlier this week when it took down a video in which White House coronavirus adviser Scott Atlas contended that children are extraordinarily unlikely to transmit the coronavirus.
An inherent limitation of such policies, of course, is that there are many "local health authorities" across the world that fundamentally disagree on COVID-19 mitigation measures. Ensuring that a claim about the coronavirus falls in line with all of them would seem to be a virtually impossible task.
In Sweden, for instance, health authorities eschewed nearly all of the strict lockdown and social distancing measures adopted by many of the world's governments since the start of the pandemic. (Echoing Atlas's contention, Sweden's government says that Swedish children "represent only a small proportion of the reported cases of COVID-19" in that country, and that "transmission between children is limited.")
Other local authorities, meanwhile, such as the state government of South Dakota and those of several other states, have also abjured many or most of those measures, declining to issue mask mandates or stay-at-home orders.
Facebook and Twitter did not respond to queries regarding the potential self-contradictions of the policy insofar as competing health guidance is concerned. YouTube spokeswoman Ivy Choi, meanwhile, was unwilling to address the tension between YouTube's content policies and the diverse viewpoints of "local health authorities" across the globe.
Asked if the company would remove a user-posted video demanding that Sweden's no-mask, no-distancing approach be applied in the U.S., Choi said: "We enforce against the content and the context surrounding it, so it's hard for me to answer in broad hypotheticals."
"If you do see any videos on YouTube that you'd like for us to review, please send them over," she added.
Election content also to come under scrutiny
Both Facebook and Twitter have also announced their intent to monitor and police election-related content as the presidential race continues and Nov. 3 draws closer.
Last month, Facebook was reportedly developing a "contingency plan" intended to address scenarios in which President Trump or his campaign attempts to dispute or delegitimize the results of the 2020 election.
Earlier this month, Mark Zuckerberg posted on the site that among the steps Facebook is taking to "fight misinformation" around the election, the company will "attach an informational label to content that seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods, for example, by claiming that lawful methods of voting will lead to fraud."
"This label will provide basic authoritative information about the integrity of the election and voting methods," he added.
Twitter last week, meanwhile, affirmed its intent to "help stop the spread of harmful misinformation that could compromise the integrity of an election or other civic process." To that end, the platform said it will "label or remove" posts that make "misleading claims about the results or outcome of a civic process which calls for or could lead to interference with the implementation of the results of the process."
As examples of such content, Twitter cited "claiming victory before election results have been certified" and "inciting unlawful conduct to prevent a peaceful transfer of power or orderly succession."
The company also said it would enforce policies against "disputed claims that could undermine faith in the process itself, e.g. unverified information about election rigging, ballot tampering, vote tallying, or certification of election results."
The company does not specify how information becomes "verified" to the point that it satisfies the platform's content policies.