Zuckerberg and Dorsey Face Harsh Questioning From Lawmakers
Jack Dorsey defended Twitter’s moderation policies against attacks from Republicans and Democrats and urged lawmakers to focus instead on oversight of the algorithms that help moderate and recommend content.
Senators hammered Mr. Dorsey over his decision to add labels to false and misleading election-related tweets, which Republicans said displayed bias against conservatives and Democrats said had not gone far enough to check misinformation. Mr. Dorsey, who attended the hearing virtually from what appeared to be a kitchen, resisted being drawn into debate with lawmakers.
As he did in a hearing three weeks ago, Mr. Dorsey defended Twitter’s labeling tactics, although he admitted that in some cases the company had mistakenly labeled tweets that did not violate its policies. The task of moderation is incredibly challenging, Mr. Dorsey argued.
“We are facing something that feels impossible,” Mr. Dorsey said. “We are required to help increase the health of the public conversation while at the same time ensuring that as many people as possible can participate.”
He also continued his call for senators to focus on Section 230 reforms that would provide more oversight to algorithms. Algorithms, Mr. Dorsey said, should be the top priority for lawmakers, and users should be given a choice to turn them off or select alternatives.
Section 230 “has created so much goodness and innovation. If we didn’t have those protections when we started Twitter 14 years ago, we could not start,” Mr. Dorsey said. “I think we need a line around the problem we’re trying to solve.”
Midway through the hearing, Mr. Dorsey had faced a few more questions than Mark Zuckerberg, according to a tally by The New York Times.
Mr. Dorsey was subject to particular scrutiny from Senator Dianne Feinstein, Democrat of California, and Senator Ted Cruz, Republican of Texas. Ms. Feinstein argued that Twitter should have taken more direct action against President Trump’s tweets that made baseless claims of election fraud, while Mr. Cruz insisted that Twitter overstepped in its moderation.
Democrats showed no signs of letting up on criticisms of Facebook and Twitter at the hearing despite greater efforts by the companies to act on misinformation in the recent election.
Instead, several Democratic lawmakers blamed Mark Zuckerberg of Facebook and Jack Dorsey of Twitter for a surge of hate speech and election disinformation after the election. They pointed to comments on Facebook from Steve Bannon, the former senior adviser to President Trump, who called for the beheading of Dr. Anthony Fauci, and to posts in Facebook groups that spread false conspiracy theories about voter fraud.
“I think you can and must do better,” said Senator Patrick Leahy, Democrat of Vermont.
Democratic lawmakers called for a slew of legislation directed at the tech sector.
Senator Richard Blumenthal of Connecticut called for tougher data privacy laws, changes to a law that gives the companies legal protection for content posted by users, and greater antitrust action.
“You have built terrifying tools of persuasion and manipulation — with power far exceeding the robber barons of the last Gilded Age,” Mr. Blumenthal said. “You have made a huge amount of money by strip mining data about our private lives and promoting hate speech and voter suppression.”
The calls for changes could portend a legislative agenda aimed at Silicon Valley in the next Congress. Republicans have also called for reforms to the legal shield that protects platforms from liability for third-party speech, known as Section 230 of the Communications Decency Act.
Several Democratic members pointed to calls for violence and protests on the companies’ platforms after the election. Some pro-Trump groups organized on Facebook to stop the vote count in some states, for instance, before the groups were removed.
“What are your concerns about the spread of misinformation, like Trump’s claims about the election that may incite violence?” Ms. Feinstein asked.
Mr. Zuckerberg promised to be vigilant.
“I’m very worried about this, especially any misinformation that could incite violence in such a volatile period like this,” Mr. Zuckerberg said.
The committee’s Republican members attacked the power that social media companies have to moderate content on their platforms, accusing them of making politically slanted calls while hiding behind a decades-old liability shield.
“I don’t want the government to take over the job of telling America what tweets are legitimate and what are not,” said the panel’s chairman, Senator Lindsey Graham of South Carolina. “But when you have companies that have the power of government, have far more power than traditional media outlets, something has to give.”
President Trump and his allies have spent years attacking the Silicon Valley platforms for what they say is bias against conservatives, pointing to the liberal politics of the companies’ employees and instances of moderation that affected Republicans or conservative media. Their evidence for these claims has always been anecdotal, and many right-wing personalities have built big followings online.
Mr. Zuckerberg and Mr. Dorsey said that while their companies had sometimes made mistakes, their policies were fair and supported the best interests of their users.
Republicans spent much of their time focusing on individual decisions made by the companies. Mr. Graham took exception to the way Twitter and Facebook had initially limited the reach of a New York Post article about Hunter Biden, the son of President-elect Joseph R. Biden Jr. The article prompted the committee to demand that the chief executives of the two companies testify.
“That, to me, seems like you’re the ultimate editor,” Mr. Graham said.
Their comments reflected how conservatives are increasingly attacking the companies over their handling of a fractious period after the presidential election, in which President Trump has refused to concede despite Mr. Biden’s significant lead.
Mr. Graham questioned Twitter’s decision to label a post from a Republican politician as making a “disputed” claim about election fraud. Senator Mike Lee, a Republican from Utah, said one of his Facebook posts about the election had been labeled by the platform.
“Now, maybe these kinds of concerns are out of the mainstream in Palo Alto,” said Mr. Lee, referring to the city in Silicon Valley not far from where Facebook is based. “But they’re not out of the mainstream in the rest of America.”
World leaders generally have wider latitude on Twitter and Facebook because their comments and posts are regarded as political speech that is in the realm of public interest. But what will happen to President Trump’s accounts on the social media platforms when he leaves office?
At Tuesday’s hearing, Jack Dorsey, Twitter’s chief executive, said the company would no longer make policy exceptions for Mr. Trump after he leaves office in January. During Mr. Trump’s time as a world leader, Twitter allowed him to post content that violated its rules, though it began adding labels to some of the tweets starting in May to indicate that the posts were disputed or glorified violence.
“If an account suddenly is not a world leader anymore, that particular policy goes away,” Mr. Dorsey said.
In contrast, Mr. Zuckerberg said at the hearing that Facebook would not change the way it moderates Mr. Trump’s posts when he leaves office. Since Election Day, Facebook has labeled a few of Mr. Trump’s posts and has pointed users to accurate information about the results of the election, but it generally takes a hands-off approach. Facebook does not fact-check world leaders, but it could fact-check Mr. Trump after his term as president ends.
Most Twitter users must abide by a litany of rules, including ones forbidding threats, harassment, impersonation and copyright violations. If someone violates the rules, they are often required to delete the offending tweet or are temporarily banned.
“A critical function of our service is providing a place where people can openly and publicly respond to their leaders and hold them accountable,” a Twitter spokesman said. “With this in mind, there are certain cases where it may be in the public’s interest to have access to certain tweets, even if they would otherwise be in violation of our rules.”
A law that has legally shielded online platforms — Section 230 of the Communications Decency Act — has long been mentioned by lawmakers as a potential target for reform.
President Trump signed an executive order in May to curtail the law. And the legal shield, which largely protects tech companies from liability for what their users post, has been the topic of other congressional hearings.
Yet until now, the debate over Section 230 had produced little concrete discussion. At a hearing last month with chief executives of the social media companies, there was little substantive debate and few suggestions about how to reform the law.
Not on Tuesday. At the Senate Judiciary Committee hearing with Mark Zuckerberg of Facebook and Jack Dorsey of Twitter, lawmakers approached Section 230 differently out of the gate. They began with a bipartisan call to change the “golden goose” legal shield, with a substantive focus on legislation that will probably take center stage in the next Congress.
Senator Lindsey Graham, Republican of South Carolina and the chairman of the Senate Judiciary Committee, opened the hearing by taking direct aim at the legal shield.
“We have to find a way when Twitter and Facebook make a decision about what’s reliable and what’s not, what to keep up and what to keep down, that there is transparency in the system,” Mr. Graham said. “Section 230 has to be changed because we can’t get there from here without change.”
Republicans have pointed to the law as a crutch that lets online platforms censor conservative content, claims that are unfounded. Democrats have agreed that the law needs reform, but for the opposite reason: they have said Section 230 has allowed disinformation and hate to flourish on the social media sites.
“Change is going to come. No question. And I plan to bring aggressive reform to 230,” Senator Richard Blumenthal, Democrat of Connecticut, said in opening remarks.
Mr. Blumenthal was a leading proponent of the first reform to Section 230, in 2018, which made the platforms liable for knowingly hosting content related to sex trafficking.
But he was careful to distance himself from Republicans’ worries of censorship.
“But I am not, and nor should we be in this committee, interested in being a member of the speech police,” Mr. Blumenthal said.
Mr. Zuckerberg and Mr. Dorsey said they would be open to some reforms to the law. Mr. Zuckerberg added that he could see reform that required more transparency from the companies. Neither executive elaborated, but Mr. Dorsey’s Twitter account posted support for reforms on transparency, the ability to appeal moderation decisions, and giving users choice over the algorithms that determine what content they see.
Requiring 1) moderation process and practices to be published, 2) a straightforward process to appeal decisions, and 3) best efforts around algorithmic choice, are suggestions to address the concerns we all have going forward. And they all are achievable in short order.
— jack (@jack) November 17, 2020
Mark Zuckerberg, the chief executive of Facebook, and Jack Dorsey, Twitter’s chief, are appearing before members of the Senate Judiciary Committee to defend their companies’ actions to moderate speech. It is the second time in two months that the two C.E.O.s are testifying, but this hearing will probably have more fireworks than their last appearance, given the central role their companies played during the recent election.
They will probably face many questions about how their social networks handled vote-related posts, videos and photos. Both companies increased their labeling of election misinformation, including posts by President Trump, while false and misleading content surged.
The committee chairman, Lindsey Graham of South Carolina, called the hearing in October after Twitter and Facebook labeled or limited the reach of a New York Post article about Hunter Biden, the son of President-elect Joseph R. Biden Jr., because it contained leaked and misleading information.
The executives, who have each appeared before Congress several times in recent years about data privacy, disinformation in the 2016 election and content moderation, will face new questions, including whether a continued ban on political ads could jeopardize the Senate runoffs in Georgia and why hateful content is still allowed on their sites.
President Trump and his Republican allies have balked at actions by Twitter and Facebook to repeatedly label and hide the president’s posts for violations of policies against spreading false and misleading information about the election. Twitter was particularly active in labeling Mr. Trump’s tweets on the day of the election and days after.
Democrats, meanwhile, say Facebook and Twitter have been too lax on disinformation and hate speech, allowing Steve Bannon, who recently called for Dr. Anthony Fauci’s beheading, to maintain his Facebook account. They will also point to a rise in anti-Muslim content on Facebook and a rise in hate content across social media.
At Tuesday’s hearing on social media and misinformation, much of the discussion focused on the minutiae of how Facebook and Twitter carry out the process of moderating the billions of pieces of content regularly posted to their networks.
Both Democrats and Republicans zeroed in on the issue, according to a tally by The New York Times. Out of 127 total questions, more than half — or 67 — were about content moderation. Democrats asked 12 questions aimed at how Facebook and Twitter could increase their moderation efforts around topics like hate speech, while Republicans asked 37 questions about why some points of view were censored online and how content moderation could be decreased in some areas, according to the tally. (The remainder of the questions about content moderation did not indicate a clear desire for more or less moderation.)
In particular, Republican senators like Josh Hawley of Missouri, Mike Lee of Utah and Ted Cruz of Texas focused on the unproven idea that Facebook and Twitter unduly moderated posts by conservatives, compared with the amount of time spent labeling or taking down posts made by liberals.
That has been a recurring refrain from conservative Americans over the past few weeks as scores of people have claimed they will leave Facebook and Twitter for more permissive platforms like Parler, Rumble and MeWe. Facebook and Twitter have maintained that political affiliation has no bearing on how they enforce their rules.
On the other side of the aisle, Democrats said the companies had not gone far enough to moderate harmful content. Senator Richard Blumenthal, Democrat of Connecticut, brought up how the Facebook account of Steve Bannon, a former strategist for President Trump, was not taken down despite Mr. Bannon recently suggesting the beheading of Dr. Anthony Fauci, the nation’s top infectious disease expert.
Mr. Zuckerberg said the account was given a “strike” and the post was taken down, but that Facebook’s policies do not require the account to be immediately banned. Twitter, by contrast, permanently suspended the account.
What Republicans and Democrats agreed on was that Facebook and Twitter have enforced their policies inconsistently, and often without elucidating why they had taken the steps that they did.
“We’re going to have to have more visibility into what’s occurred, and what has produced certain outcomes,” said Senator Thom Tillis, Republican of North Carolina, who remarked on how one of his Facebook posts on Veterans Day was moderated, without a clear reason as to why.
Mr. Zuckerberg and Mr. Dorsey agreed that the rules around how content is moderated should be revisited. Mr. Zuckerberg has welcomed the idea of a new regulatory framework that could encompass content moderation across many of the largest tech platforms. Mr. Dorsey said his focus was on giving users more tools to control the content they see, perhaps through the use of algorithms tailored to individual users’ preferences.
“A centralized global content moderation system does not scale,” Mr. Dorsey said.