It’s not the first time that Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey and Google CEO Sundar Pichai have been grilled by lawmakers about how they moderate content, but the coronavirus pandemic and the election season have put a larger spotlight on the topic. The virtual hearing comes as US lawmakers consider new regulation that could put more pressure on online platforms to do a better job of combating lies.
The House subcommittee on communications and technology and the House subcommittee on consumer protection and commerce are holding the joint hearing.
“For far too long, big tech has failed to acknowledge the role they’ve played in fomenting and elevating blatantly false information to its online audiences. Industry self-regulation has failed. We must begin the work of changing incentives driving social media companies to allow and even promote misinformation and disinformation,” Energy and Commerce Committee Chairman Frank Pallone Jr. (Democrat of New Jersey), Communications and Technology Subcommittee Chairman Mike Doyle (Democrat of Pennsylvania), and Consumer Protection and Commerce Subcommittee Chair Jan Schakowsky (Democrat of Illinois) said in a statement in February.
Democrats, civil rights groups, celebrities and others have scrutinized tech companies for not doing enough to address this problem. At the same time, the platforms are also trying to fend off accusations they’re censoring speech from conservatives, which they repeatedly deny.
The hearing is called “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation.” Here’s what you need to know:
The hearing is scheduled for Thursday at 12 p.m. ET/9 a.m. PT.
What to expect
In prepared testimony released Wednesday, the three CEOs outlined the steps they’ve taken to curb the spread of misinformation and disinformation. Their efforts probably won’t be enough to satisfy lawmakers.
“The consequences of disinformation and extremist content on these platforms are apparent,” a memo from staff members of the Committee on Energy and Commerce said ahead of the hearing. “Many experts agree that disinformation about COVID-19 has greatly intensified an already deadly public health crisis. Experts also acknowledge that misinformation about the 2020 presidential election and extremist content has further divided the nation and provoked an insurrection.”
The companies have labeled misinformation and directed people to authoritative sources during the 2020 US presidential election and the pandemic, though it isn’t clear how effective these efforts have been. Facebook partners with third-party fact-checkers to flag misinformation and says it’ll show these posts lower within people’s feeds.
On Monday, Facebook said in a blog post, which was also published in Morning Consult, that “tackling misinformation actually requires addressing several challenges, including fake accounts, deceptive behavior and misleading and harmful content.” The company said it disabled more than 1.3 billion fake accounts between October and December 2020.
Twitter has been working on a new community-driven forum called Birdwatch that lets users identify misleading tweets. Google-owned YouTube said it also reduces recommendations for harmful misinformation and that “human evaluators” help determine if a claim is inaccurate or a conspiracy theory. The three platforms have also removed health misinformation if it includes false claims that could lead to physical harm.
All these efforts, though, didn’t stop misinformation from spreading. False claims that vaccines are toxic and that 5G caused the coronavirus can still be found widely on social media. During the election, misinformation about voter fraud, the QAnon conspiracy theory and other online lies spread on social media. Social networks have also had to grapple with fake accounts created to sow discord and disinformation during elections. And the deadly Capitol Hill riot on Jan. 6 was another reminder of how online hate can lead to violence in the real world.
Former President Donald Trump was also notorious for spreading misinformation on social media during the election season. All three platforms suspended Trump because of concerns about inciting violence following the deadly Jan. 6 Capitol Hill riot. Facebook’s oversight board is currently weighing whether to keep Trump’s ban in place. Twitter’s ban is permanent, and YouTube said it would lift the ban on Trump’s channel when the risk of violence decreases.
Lawmakers are also exploring potential regulation, including changes to a law called Section 230 that shields online platforms from liability for content posted by users. Pichai said in his prepared remarks that he’s concerned changes to the law could have unintended consequences such as “harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.”
Zuckerberg has told lawmakers numerous times that he’s in favor of updating Section 230. “Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection — that would be impractical for platforms with billions of posts per day — but they should be required to have adequate systems in place to address unlawful content,” Zuckerberg said in his prepared remarks.