As Russia criminalizes fake news, U.S. lawmakers push to regulate 'harmful content' on social media
The NUDGE Act is a "Tech Trojan Horse" for censorship, using federal agencies to "cloak content modification in pseudo-scientific terms," legal commentator Jonathan Turley wrote recently. Its Democratic sponsor says it would stop "misinformation," while its GOP cosponsor says it's about limiting screen time.
In the wake of the U.S. campaign against Islamic terrorism, autocrats worldwide started referring to their persecution of dissidents as antiterrorism programs.
Two decades later, the West's new foe is purported misinformation, particularly around election and COVID-19 claims, and Vladimir Putin is co-opting the lingo to control the optics of Russia's latest foreign adventure.
Russia's new law criminalizing "false information" about the military led TikTok to suspend new uploads there and media companies to pull their employees out of the country. The open question is whether Putin's move could erode Western governments' credibility as they seek to regulate social media in the name of protecting users.
Two weeks before Russia invaded Ukraine, U.S. lawmakers introduced the Social Media NUDGE Act (S-3608) to "reduce addiction and the amplification of harmful content" through federal research agencies and a regulatory rulemaking process.
The legislation applies to commercial enterprises specializing in user-generated content with more than 20 million monthly active users for most of a 12-month period. They must devise plans for "content-agnostic interventions" and submit them to the Federal Trade Commission (FTC) for approval.
Some of the proposed interventions are already common, such as Twitter's prompt to users to read links before retweeting them.
While S-3608 never refers to misinformation or false information, it also never defines the "harmful content" it seeks to prevent, allowing its bipartisan cosponsors to put their own spin on what it would accomplish.
In a joint press release, Minnesota Democratic Sen. Amy Klobuchar said S-3608 would protect against "dangerous content that hooks users and spreads misinformation." Wyoming Republican Sen. Cynthia Lummis said it would protect children "from the negative effects of social media" without "dictating what people can and can't say."
Lummis' interest in expanding federal authority may surprise Hill watchers, given her reputation as the Senate's foremost defender of cryptocurrency, a technology designed to operate outside centralized government control.
But she raised similar issues when Facebook whistleblower Frances Haugen testified before Congress last fall, asking how members could protect free speech even as she agreed it was fair to compare social media to the tobacco industry.
The senator later asked Meta CEO Mark Zuckerberg for documentation of the company's knowledge that Facebook and Instagram harm children, while reminding him she was concerned about "censorship that occurs on your platforms."
Some law professors expressed skepticism that the bill's interventions are genuinely "content-agnostic" or could survive constitutional scrutiny.
George Washington University's Jonathan Turley called it a "Tech Trojan Horse" for censorship, using the National Academies of Sciences, Engineering, and Medicine and FTC to "cloak content modification in pseudo-scientific terms."
Congressional Democrats have long argued that social media companies "need to protect citizens from bad choices by using beneficent algorithms to guide us to 'healthier' viewing and reading habits," he wrote in an op-ed for The Hill.
"The term 'content-agnostic' is unlikely to save the bill" from First Amendment problems, Ellen Goodman, who co-directs the Rutgers Institute for Information Policy & Law, wrote in a bill analysis.
The legislation's undefined harms "will be generated by particular kinds of content being algorithmically amplified," she wrote. It's hard to stay agnostic when designing interventions to "de-amplify certain content, or nudge users towards other content," and it's impossible if ads and other "manipulative" speech are treated differently.
A Lummis aide pushed back on that skepticism, telling Just the News the bill doesn't distinguish between messages such as "I love puppies" and "I hate puppies."
Rather, it targets "underlying design decisions" such as autoplay and "endless scroll" that keep users on social media platforms for longer periods, which harms their mental health but earns the companies more money. That's separate from the discrimination that conservative content already faces on social media, the aide said.
The bill lays out a highly deliberative process, with ongoing congressional involvement and a court option for covered platforms to challenge FTC decisions, the aide said. Lummis is not asking these companies to "roll out of bed one morning and implement everything overnight."
As for the bill's failure to define "harmful content," the aide said Lummis plans to clarify that the term doesn't refer to user-generated content, just how platforms operate.
It's important to remember the broader context for this legislation, which would have to be folded into a larger bill dealing with antitrust, privacy, child protection and the lightning rod of Section 230 immunity for platforms hosting third-party content, the aide said.
For D.C.'s leading progressive think tank, content neutrality is the problem. It's doubtful that federal agencies can single out "harmful content" without distinguishing it from material in the public interest, wrote Brookings Institution Senior Fellow Mark MacCarthy, a former Hill telecommunications policy staffer.
The FTC declined to comment on whether it has sought such authority as the bill would give it. Klobuchar's office didn't respond to queries on whether "harm" refers to purported misinformation or a broader category that includes it.
The bill's name is taken from nudge theory, which was developed by University of Chicago behavioral economist Richard Thaler and first implemented as government policy by the U.K.'s Behavioral Insights Team.
The best-known success of the so-called Nudge Unit, created in 2010 and spun off in 2014, was using peer pressure to convince Britons to pay their taxes. But the organization's cofounder has recently expressed second thoughts.
"In my mind, the most egregious and far-reaching mistake made in responding to the pandemic has been the level of fear willingly conveyed on the public," Simon Ruda wrote in an Unherd essay. "Nudging made subtle state influence palatable, but mixed with a state of emergency, have we inadvertently sanctioned state propaganda?"
Nudge theory has also been applied to environmental activism. The right-leaning National Association of Scholars published a report in 2015 arguing that college campus "sustainability" programs use nudging to squelch debate and promote groupthink.