
The Technology 202: Social networks scramble to address the aftermath of first presidential debate


Twitter said it took down the accounts based on intelligence provided by the FBI. The company shared some examples of the tweets, which it said “did not make an impact on the public conversation” and had very low engagement.

The activity indicates that social networks are in for a rocky ride for the remainder of election season. 

Tech companies have been revamping their policies and aggressively hiring to address such campaigns in the wake of Russian interference on their platforms surrounding the 2016 election. Yet the companies remain exceptionally vulnerable to a range of evolving and expanding forces that could undermine the election. 

The first presidential debate also sparked an onslaught of extremism and misinformation online. And this is just the beginning: The companies have to contend with three more debate nights as well as a high-stakes Election Day and the delicate post-election period, which might be prolonged this year because of an expected increase in mail-in voting. 

Companies are taking some additional steps to prepare for this chaos, but critics say it’s still not enough. 

Facebook yesterday announced new policies intended to strengthen its defenses against voter suppression and other election interference on its platform. The company said it would reject ads that seek to delegitimize election outcomes, such as calling a voting method inherently corrupt or using an isolated incident of fraud to dispute election results. 

These policies will apply even to ads from politicians, who generally enjoy more lenient policies on Facebook. That’s likely to set the social network up for a battle with Trump, who has been making false claims about mail-in voting and casting doubt on the legitimacy of the election process. 

Yet Facebook’s critics, including the Biden campaign, noted this wouldn’t apply to regular posts that the president or other politicians might share with their millions of followers.

Facebook also reported taking down some extremist and misleading content amid the fallout from the debate. 

Brian Fishman, the company’s director of counterterrorism and dangerous organizations, reported that Facebook spotted an increase in activity related to the Proud Boys, including memes featuring Trump’s “stand back and stand by” language. The company is removing the memes when they are shared in support of the Proud Boys or others banned from the service, he said. 

Fishman noted the majority of the posts Facebook spotted were condemning the organization and Trump’s comments about it. Facebook banned the Proud Boys in October 2018. 

However, he acknowledged that researchers and journalists were still able to find such activity on the platform.

As Facebook and other major social networks crack down on the group, my colleagues Elizabeth Dwoskin and Craig Timberg report its members are shifting toward channels with less oversight, including the conservative social media site Parler and channels on the encrypted chat app Telegram.

Facebook also said it would ban ads supporting QAnon and any militarized social movements as it seeks to take a harder line on extremism. 

The company also banned misleading ads from Trump that falsely suggested that expanding entry of refugees to the United States would increase the public’s exposure to the novel coronavirus. There were more than 30 versions of the ad running on the social network, according to Facebook’s ad transparency library, and together they had gathered between 200,000 and 250,000 impressions.

“We rejected these ads because we don’t allow claims that people’s physical safety, health, or survival is threatened by people on the basis of their national origin or immigration status,” Facebook spokesman Andy Stone said in a statement.

However, Facebook gave some misleading Trump ads a green light, including a series of ads that falsely implied Biden was wearing an earpiece. The social network’s third-party fact-checkers debunked and labeled similar claims from other accounts that weren’t operated by politicians. 

Videos spreading health misinformation about Joe Biden also were widely shared on TikTok. 

Even TikTok, which has sought to distance itself from political discourse, was grappling with misinformation after the debate. Four videos that falsely said Biden was wearing a wire during the debate gained more than half a million combined views yesterday on the short-form video sharing service, the left-leaning media watchdog group Media Matters told Elizabeth. 

TikTok told her it would remove the Biden video. The company’s policies ban misinformation that “misleads community members about elections or other civic processes.”

Our top tabs

Amazon is leaving its employees in the dark about the rate of coronavirus infection in its warehouses. 

NBC News spoke with 40 Amazon employees from 23 facilities who say that Amazon hasn’t been following through on the safety measures it promised earlier in the coronavirus pandemic. A lack of federal coronavirus protections for workers is contributing to the problem and making it almost impossible to track how many workers have been infected.

The information Amazon does provide workers isn’t enough to determine exposure, workers say.

“The texts we get distinguish between whether there was one case or multiple cases found that day, but that’s as specific as it gets. You don’t know whether they were on your shift or in the same section as you,” said John Hopkins, who works at the DSF4 Amazon fulfillment center in San Leandro, Calif. 

Because of the lack of information from the company about infection rates, some workers have taken to creating their own tracking systems. 

Trying to ascertain infection rates from local health agencies also proved difficult, NBC News found. Of 25 public health departments in areas with high numbers of Amazon warehouses, only five departments provided relevant records. Many states don’t require health departments to track outbreaks at places of employment. Public health officials who have worked with Amazon to collect the data report mixed experiences with the company’s willingness to cooperate.

Amazon spokeswoman Lisa Levandowski said that the company reports all cases to both workers and health departments but doesn’t release totals by facility.

“We believe that sharing a case count is misleading, and lacks a significant amount of context,” she said.

Twitter’s two-year journey to working from home indefinitely could provide a model for the rest of the tech workforce.

The pandemic forced many workplaces to scramble to figure out remote setups. But Twitter has been experimenting with a remote work revolution since 2018, Elizabeth Dwoskin reports. 

“We’ve already been on this path, and the crisis just catapulted us into a future state,” said Twitter human resources chief Jennifer Christie, who called flexible work the “fourth industrial revolution.”

She added: “The future of work is offering employees more optionality.”

That transition goes well beyond allowing employees to decamp to Hawaii or the Midwest. Twitter has experimented with creating sign-language systems and other video conference etiquette. The company is also rethinking performance review systems. Another team devised the acronym ELMO — “Enough! Let’s move on” — for chat systems when meetings veer off-topic.

Twitter, which estimates that up to 50 percent of its workforce will be remote after the pandemic, has had its own setbacks. Allowing workers to disperse across time zones makes meetings harder to schedule, and a hack this summer disrupted operations. The loss of social connection is another potential barrier to permanent remote work. 

“Right now, no one is in the office and we’re not missing out,” said an employee who spoke on the condition of anonymity for fear of retribution. “But FOMO starts to happen when people come back to work, and it will be harder to stay out of the office.” 

Palantir’s public listing sparked congressional scrutiny.

Rep. Alexandria Ocasio-Cortez (D-N.Y.) and Rep. Jesús “Chuy” Garcia (D-Ill.) called on the Securities and Exchange Commission to investigate the data-mining company before it began trading yesterday, Alfred Ng at CNET reports. The lawmakers expressed concerns about the company’s lack of transparency over its funding from the CIA, contracts with Qatar’s government, data protections and shareholder structure.

“Palantir reports several pieces of information about its company — and omits others — that we believe require further disclosure and examination, as they present material risks of which potential investors should be aware and national security concerns of which the public should be aware,” they wrote in a letter sent on Sept. 17 and released to reporters yesterday.

Shareholder groups and human rights organizations such as Amnesty International expressed similar concerns leading up to the company’s public filing.

“Palantir’s numerous transparency and governance issues have been flying completely under the radar as they launch their public offering,” Jacinta Gonzalez, a senior campaign organizer at immigration advocacy group Mijente, said in a statement to CNET. “We’re glad Representatives Ocasio-Cortez and Garcia tasked the SEC with investigating and are eager to find out why the agency has not done so.”

Hill happenings

Antitrust scrutiny of the tech industry gets another day on the Hill.

The House Judiciary antitrust subcommittee will hear ideas from an array of academics and policy experts today on how to restore competition online and possible avenues to strengthen antitrust law. The hearing comes as the House Judiciary Committee’s long-awaited antitrust report is expected to be released as soon as next week.

The Senate Commerce Committee also wants to put Big Tech in the hot seat.

U.S. Sen. Roger Wicker (R-Miss.) will hold a vote this morning to authorize the subpoena of Twitter chief executive Jack Dorsey, Google chief executive Sundar Pichai and Facebook chief executive Mark Zuckerberg for a hearing on a decades-old communications law that prevents tech companies from being held liable for content posted by users. President Trump has pushed Congress to update the law, Section 230 of the Communications Decency Act.

Meanwhile, the only Section 230-related legislation to get out of the gate has finally found bipartisan sponsors for a House companion bill. Reps. Sylvia Garcia (D-Tex.) and Ann Wagner (R-Mo.) introduced a House version of the Eliminating Abuse and Rampant Neglect of Interactive Technologies (EARN IT) Act. The bill, which would strip tech companies of their legal protections if they don’t follow government-issued guidelines to crack down on online child abuse, already passed out of the Senate Judiciary Committee.

Mentions

  • The Center for Democracy and Technology has hired Iverna McGowan to direct its European Union office. McGowan joins CDT from U.N. Human Rights.

Daybook

  • The House Judiciary Committee will hold a hearing on proposals to strengthen antitrust laws and restore competition at 1 p.m.
  • New America’s Open Technology Institute will hold a virtual panel exploring how Internet platforms are addressing the spread of election-related misinformation at 1:30 p.m.

Before you log off

The Washington Post is launching its first investigative podcast.


