‘Virality Circuit Breakers’: Taxpayer-funded researchers devise new stealth censorship strategies
Taxpayer- and billionaire-backed researchers admit new techniques could allow Big Tech to minimize “public relations challenges” from the direct removal of social posts.
A digital free-speech watchdog is warning that federally funded researchers captured and examined millions of Twitter posts they deemed “misinformation” during the 2020 election, then used them to devise new strategies that could one day empower Big Tech to censor or throttle content while keeping the affected users and the public in the dark.
The Foundation for Freedom Online says taxpayer-funded researchers at the University of Washington Center for an Informed Public last year devised strategies like “virality circuit breakers” and “nudges” that could keep certain users from spreading content without leaving any apparent evidence that they were being censored.
The study is a roadmap for “how to censor people using secret methods so that they wouldn't know they're being censored, so that it wouldn't generate an outrage cycle, and so that it'd be more palatable for the tech platforms who wouldn't get blowback because people wouldn't know they're being censored,” Mike Benz, a former State Department diplomat specializing in U.S. foreign policy on international communications and information technology, told the "Just the News, No Noise" television show Monday night.
Much of the academic research appeared in a little-noticed article published last summer in the journal Nature Human Behaviour, titled “Combining interventions to reduce the spread of viral misinformation,” which identified a cocktail of four tools the researchers projected could reduce the spread of content deemed misinformation by as much as 63% without the direct removal of posts or time-consuming fact checks.
The researchers – who included the University of Washington’s Jevin D. West and Kate Starbird – acknowledged that new tactics like “virality circuit breakers” are less detectable to users and the public and therefore could spare social media giants the public outcry that erupted in 2020 when stories like the Hunter Biden laptop exposé were openly removed from platforms.
“This approach allows platforms to consider ethical ramifications while minimizing the public relations challenges accompanying direct forms of action,” they wrote.
While acknowledging more research needed to be done, the researchers concluded their “framework” was one “that can be adopted in the near term without requiring large-scale censorship or major advances in cognitive psychology and machine learning.”
They also acknowledged their work was furthered by two taxpayer-funded grants from the National Science Foundation.
In a statement to Just the News on Monday evening, West said the ideas in the published paper were theoretical only and concern about the impact of the research on free speech involved a “fundamental misunderstanding of the paper that appears to be based on non-factual distortion and falsehoods.”
In other words, he considered criticism of the work to be disinformation.
“This research was entirely theoretical, and aimed only to assess the impact that different potential social media interventions would have on the spread of COVID-19 misinformation and disinformation,” he said.
“Furthermore, the paper made no policy or tactical recommendation to social media platforms or the federal government. There was no follow-up from them and we have no idea what, if anything, any of those entities did with the learnings from our paper,” he added.
West’s study acknowledged support for the research from two NSF grants: a $197,538 grant to the University of Washington in 2020 for a project titled “How Scientific Data, Knowledge, and Expertise Mobilize in Online Media during the COVID-19 Crisis” and a $550,000 grant to the same university for a project titled “Unraveling Online Disinformation Trajectories: Applying and Translating a Mixed-Method Approach to Identify, Understand and Communicate Information Provenance.”
In addition to those grants, federal records show Starbird’s center at the University of Washington received a five-year, $2.25 million grant from the NSF running through 2026 for “Rapid-Response Frameworks for Mitigating Online Disinformation.”
The researchers acknowledged in the study that they modeled some of their work on techniques used to validate climate models. “We take a similar approach to climate models to validate our model internally,” they wrote at one point, while also acknowledging they believed “models of greenhouse gas reduction remain our best hope at reversing climate change.”
Elsewhere, the researchers also declared their hope that their theoretical approach one day could be used to combat resistance to several of liberals’ favorite policy objectives. “Our results highlight a practical path forward as misinformation online continues to threaten vaccination efforts, equity and democratic processes around the globe,” they wrote.
Benz, whose nonprofit has played a major role in exposing public-private efforts to censor content online that are now the subject of congressional investigations and lawsuits, said the study appeared to open a new front in the disinformation wars, one that could further disguise censorship so it can’t be contested or shamed.
The groups, he argued, appear to be trying to create an “information purgatory” in which to place largely conservative, populist or heterodox opinions and stop them from going viral. He added that he feared the motive of such censorship was to promote liberalism and crush alternative political philosophies.
“They explicitly say that the purpose of this censorship psychology study is to eliminate resistance to vaccination efforts, equity and democratic processes, meaning elections,” Benz said. “So they want to be able to control and prevent all opposition to election procedures that they want in place, to vaccination campaigns and to what appears to be racial and climate equity initiatives.”
The research comes as a federal judge has ruled that government agencies like the FBI and Homeland Security Department used third parties as proxies to coerce social media firms into censorship during the 2020 election and the pandemic.
The Biden administration “seems to have assumed a role similar to an Orwellian ‘Ministry of Truth,’” Trump-appointed U.S. District Judge Terry Doughty wrote last month in imposing a preliminary injunction banning federal agencies from having conversations with Big Tech companies aimed at censoring political speech.
Meanwhile, the House Judiciary Committee has launched a far-reaching inquiry into censorship, most recently exposing Facebook files showing the Biden White House held meetings with executives from that platform to discuss tweaking its algorithms to diminish the audience of conservative sites like The Daily Wire while increasing traffic to traditional news outlets like The New York Times.
That concept also was touted in the University of Washington researchers’ work, specifically a tactic they called “virality circuit breakers.”
“A more plausible approach could involve ‘virality circuit breakers’, which seek to reduce the spread of a trending misinformation topic without explicitly removing content—for example, by suspending algorithmic amplification,” the researchers wrote. “This approach allows platforms to consider ethical ramifications while minimizing the public relations challenges accompanying direct forms of action. This could aid in lowering the threshold for fact-checking and therefore enable quicker response times.”
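The paper offers no implementation beyond that description, but the mechanic it sketches can be illustrated in a few lines of code. The sketch below is purely illustrative: the threshold, the cooldown period, and the platform methods (set_amplification, queue_for_fact_check) are assumptions for this article, not anything drawn from the study.

```python
# Illustrative sketch of a "virality circuit breaker": when a topic's
# share velocity crosses a threshold, algorithmic amplification is
# suspended for a cooldown period, but no posts are removed. All names
# and numbers here are hypothetical assumptions.

VELOCITY_THRESHOLD = 500  # shares per hour that trip the breaker (assumed)
COOLDOWN_HOURS = 24       # how long amplification stays suspended (assumed)

def check_circuit_breaker(topic, shares_last_hour, platform):
    """Suspend algorithmic boosting of a trending topic without deleting posts."""
    if shares_last_hour > VELOCITY_THRESHOLD:
        # Posts stay visible to direct followers; only recommendation and
        # trending surfaces stop boosting them, so users see no removal.
        platform.set_amplification(topic, enabled=False, hours=COOLDOWN_HOURS)
        # A tripped breaker also buys time for review, which is how the
        # paper suggests it could "lower the threshold for fact-checking."
        platform.queue_for_fact_check(topic)
```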
Another tactic they recommended was called a "nudge," which they said involved "warning users when they encounter potentially false or misleading information." Such warnings or flags were shown to readers, the researchers said, "to improve discernment of false information by 10–20 percent."
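The study likewise gives no code for the “nudge.” A minimal sketch, assuming a hypothetical classifier interface and made-up label wording, might look like this:

```python
# Minimal "nudge" sketch: attach a warning label when content matches a
# flagged claim rather than removing it. The classifier interface and
# label wording are hypothetical assumptions.

def apply_nudge(post, flagged_claims, classifier):
    """Warn the reader about disputed content instead of deleting it."""
    if classifier.matches_any(post.text, flagged_claims):
        post.warning_label = ("Independent fact-checkers have disputed "
                              "claims made in this post.")
    return post
```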
The research also revealed the size and scope of the data collected by nonprofit projects like the Election Integrity Partnership, which worked with the Homeland Security Department during the 2020 election and includes the University of Washington, the Atlantic Council’s Digital Forensic Research Lab and the Stanford Internet Observatory. According to the University of Washington, the research was funded in part by charities headed by e-commerce billionaires Craig Newmark and Pierre Omidyar.
“This dataset was extracted from a broader set of 1.04 billion election-related posts collected between 1 September 2020 and 15 December 2020. To construct our dataset, we first identified 430 incidents—distinct stories that included false, exaggerated or otherwise misleading claims or narratives. Search terms were devised for each incident, extracting 23 million posts generated by 10.8 million accounts from the broader collection,” research authors explained.
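The two-stage filtering the authors describe (a broad collection narrowed to incident-level subsets by hand-built search terms) is a standard pipeline. A schematic sketch, with hypothetical field and attribute names, follows:

```python
# Schematic of the two-stage filtering described in the study: a broad
# election-related collection is narrowed to incident-level subsets via
# per-incident search terms. Field and attribute names are hypothetical.

def extract_incident_posts(broad_collection, incidents):
    """Map each incident's search terms over the broad post collection."""
    dataset = {}
    for incident in incidents:                 # 430 incidents in the study
        terms = [t.lower() for t in incident.search_terms]
        dataset[incident.id] = [
            post for post in broad_collection  # ~1.04 billion posts
            if any(term in post.text.lower() for term in terms)
        ]
    return dataset  # the study's version yielded ~23M posts from 10.8M accounts
```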
The researchers encouraged social media platforms to take their research and test it further.
“In our case, the gold standard would be to have Twitter implement our recommended policies in some locations but not others and examine subsequent engagement with viral misinformation,” they wrote.
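That “gold standard” is a conventional cluster-randomized experiment: apply the interventions in randomly chosen regions, hold the rest out as controls, and compare engagement with flagged content afterward. In outline, with hypothetical names:

```python
# Outline of the geographic A/B test the authors propose. The function
# name and the even split are illustrative assumptions only.

import random

def assign_regions(regions, seed=42):
    """Randomly split regions into treatment and control groups."""
    rng = random.Random(seed)        # fixed seed for a reproducible split
    pool = sorted(regions)           # deterministic base ordering
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment, control)
```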
The controversy is heating up as more evidence is disclosed that Big Tech platforms such as Twitter, Facebook and Google colluded with the Biden administration to censor, or drive traffic away from, stories and posts the administration found embarrassing or dissenting from its "approved" view. Although private actors are usually not accountable for violating speakers' First Amendment rights, the law recognizes an exception when private entities act "under color of law."
Robert F. Kennedy Jr. has invoked this exception, filing lawsuits Aug. 4 against YouTube and Google alleging the sites caved to pressure from the federal government and his Democratic primary rival, President Joe Biden, to “silence” him.
The Facts Inside Our Reporter's Notebook
Links
- University of Washington Center for an Informed Public
- study is a roadmap
- academic research appeared in a little-noticed article published last summer in the journal Nature Human Behaviour
- National Science Foundation
- $197,538 grant
- $550,000 grant
- federal judge has ruled that government agencies like the FBI and Homeland Security Department used third parties as proxies to coerce social media firms into censorship
- U.S. District Judge Terry Doughty wrote
- Facebook files