
The Technology 202: Activists want to school Congress on extremism ahead of CEO hearing


The briefings and a flurry of new public reports about online harms reflect an effort to influence lawmakers’ line of questioning as they prepare for their first hearing with the chief executives since the Jan. 6 attacks on the Capitol. 

Lawmakers appeared to be interested in cornering the CEOs on the tech companies’ role in creating the conditions for the insurrection, said one person familiar with the conversations, who was not authorized to speak about them publicly. 

Another exchange focused on the companies’ ability to enforce their policies against vaccine misinformation, and advocates stressed the urgency of lawmakers pressing the companies to step up enforcement immediately to save lives during the pandemic, another person familiar with the meetings said.

There’s also a broad effort among lawmakers and their staffs to craft questions that don’t give the executives an easy out, the first person said. At past CEO hearings, members of Congress have at times struggled to pin down the executives, who often say they’ll have their staff follow up on complex or controversial matters.

Public hearings are a key venue for lawmakers to pressure some of the world’s largest companies.  

Though CEOs have been appearing on Capitol Hill more frequently, the violent fallout of the 2020 election and the recent push to vaccinate Americans and end the public health crisis have only raised the stakes. 

“The reason we’re having this hearing is because the spread of disinformation and extremism has just been growing online, particularly on social media where there are little or no guardrails,” said Frank Pallone Jr., the New Jersey Democrat who chairs the Energy and Commerce Committee, in an interview with The Technology 202. “This stuff doesn’t just stay online.”

Pallone said he hears concerns about disinformation and extremism from his constituents and colleagues. The committee, he said, is considering a wide range of policy moves, including empowering the Federal Trade Commission to better protect consumers and reforming Section 230, the decades-old law that shields tech companies from lawsuits over content that other people post on their services.

“The purpose of the hearing is to see what legislative responses there should be,” Pallone said. 

A rush of new reports, briefing documents, letters and data reflects the issues that advocacy groups hope lawmakers will address.

  • Avaaz released a new report that detailed election-related falsehoods on Facebook in the lead-up to the 2020 election. The report concluded that the company was “a significant catalyst in creating the conditions that swept America down the dark path from election to insurrection.” It called for Congress to take immediate action to rein in social media, and it warned there could be further violence without it.
  • The Anti-Defamation League released a new report on online hate and harassment, which found that 41 percent of Americans surveyed said they had experienced online harassment over the past year, down from 44 percent in the same report last year. The group said the findings show that the companies’ attempts at self-regulation are not meaningfully curbing online hate and harassment.
  • Color of Change has not publicly released any reports ahead of the hearing, but a lawmaker briefing document viewed by The Technology 202 shows that the group is focused on how misinformation and disinformation harm Black people. It detailed how misinformation could dampen their civic engagement and how social media amplifies conspiracy theories and white nationalism.
  • Coalition for a Safer Web shared a new complaint with The Technology 202 that was sent to Facebook’s Oversight Board. The letter called on the board, which is tasked with overseeing Facebook’s content moderation decisions, to review Facebook’s enforcement of policies related to the QAnon conspiracy theory, after researchers said they continued to find posts and accounts that appeared to break the social network’s rules. (The Facebook Oversight Board has said it isn’t set up to conduct such a review, as it reviews only specific user appeals and cases referred by Facebook. “It’s important to recognize the Board wasn’t created to be a quick-fire solution to every issue playing out on Facebook,” John Taylor, a Facebook Oversight Board spokesman, said in a statement.)
  • Anti-Vax Watch is planning to release new data today related to the companies’ handling of vaccine misinformation.

The companies are on the defensive. 

Ahead of the hearing, Facebook’s vice president of integrity, Guy Rosen, published an op-ed in Morning Consult touting the company’s work to reduce misinformation across its products. For instance, he wrote that the company disabled more than 1.3 billion fake accounts between October and December of 2020.

Facebook spokesman Andy Stone pushed back on some of the reports released ahead of the hearing. He challenged the methodology of the Avaaz report, saying it “distorts the serious work we’ve been doing to fight violent extremism and misinformation on our platform.” In response to the ADL report, Stone said the company has updated its policies to address implicit hate, and its technologies have gotten better at proactively detecting such content. In response to the Coalition for a Safer Web complaint, Facebook removed three posts related to the QAnon conspiracy theory that the researchers had flagged. Stone said the company is always working to address QAnon content that breaks its rules, and that it has removed more than 3,300 Pages, 10,500 groups, 510 events, 18,300 Facebook profiles and 27,300 Instagram accounts for violating its QAnon policies.

Twitter has also been highlighting its work to reduce extremism. Vijaya Gadde, Twitter’s legal, policy and trust & safety lead, posted a Twitter thread earlier this month detailing the company’s work in the two years since the attacks in Christchurch, New Zealand.

Our top tabs

Leaked Facebook content moderation guidelines reveal the company’s approach to harassment and violence around the world.

Moderators were told that threats of violence were permitted as long as their targets were public figures who weren’t tagged in the posts, The Guardian’s Alex Hern reports. In another report, Hern detailed how the company allows users to praise mass murderers and “violent non-state actors” in certain situations, highlighting how it operates in repressive regimes.

The policies draw renewed scrutiny to the company’s content moderation just days before Zuckerberg appears on Capitol Hill.

“We think it’s important to allow critical discussion of politicians and other people in the public eye. But that doesn’t mean we allow people to abuse or harass them on our apps,” a Facebook spokesperson said. “We remove hate speech and threats of serious harm no matter who the target is, and we’re exploring more ways to protect public figures from harassment.”

The leaked policies came as Reporters Without Borders, an advocacy group, filed a lawsuit in France against the company for failing to provide a “safe” environment for its users. The company said that it has “zero tolerance for any harmful content on our platforms and we’re investing heavily to tackle hate speech and misinformation.”

Parler’s ousted CEO sued the company, saying that it took his ownership stake.

The conservative-leaning social network’s founder, John Matze, sued the company and top investors in Nevada, Rachel Lerman reports. The site came back online just over a month ago, after the Jan. 6 riot at the Capitol prompted tech companies to cut off services to the platform.

Rebekah Mercer, a Republican megadonor who owns a controlling stake and has taken on a more visible role in the company, was named in the suit. Matze said in the lawsuit that Mercer and others took over his 40 percent stake in the company after arguing that it was worth just $3. The “outlandish and arrogant theft” is “the product of a conspiratorial agreement,” the suit said.

Amazon drivers must agree to get tracked by AI-enabled cameras, renewing concerns about its workplace practices.

The company’s 75,000 drivers must agree to biometric data collection, like facial recognition, or they will be fired, Motherboard’s Lauren Kaori Gurley reports. The effectively mandatory sign-up process comes nearly two months after the company announced that it would be equipping its delivery trucks with the cameras to boost driver safety.

Democratic senators have taken notice of the technology. Earlier this month, five senators sent Amazon CEO Jeff Bezos, who owns The Washington Post, more than a dozen questions about the cameras. The company did not immediately respond to a request for comment.

Rant and rave

Some people were alarmed by Amazon’s rollout of AI-enabled cameras, among them internet entrepreneur Mark Ghuneim. Andrew J. Hawkins, a transportation reporter at The Verge, had a different reaction.

Mentions

  • Tableau CEO Adam Selipsky will return to Amazon to lead its cloud-computing division, Amazon Web Services.
  • Capitol Hill Policy Group registered to lobby for T-Mobile effective Feb. 1. James Reid and Todd Bertoson, two former aides on the Senate Commerce Committee, plan to lobby on issues including Internet service provider regulations and spectrum policy.

Daybook

  • Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey testify on misinformation before the House Energy and Commerce Committee on Thursday at noon.
