
50,000 Signed Letter Calling for AI Advancement Pause, Nonprofit Says



A recent open letter calling for a pause on artificial intelligence advancement has been signed by more than 50,000 people, including over 1,800 CEOs and over 1,500 professors, according to the nonprofit that issued it.

“The reaction has been intense,” said the Future of Life Institute (FLI), a nonprofit seeking to mitigate large-scale technology risks, on its website.

“We feel that it has given voice to a huge undercurrent of concern about the risks of high-powered AI systems not just at the public level, but top researchers in AI and other topics, business leaders, and policymakers.”

Some prominent figures have added their names to the letter, including Tesla and SpaceX CEO Elon Musk; John Hopfield, inventor of associative neural networks; Yoshua Bengio, scientific director of the Montreal Institute for Learning Algorithms; and Stuart Russell, professor and AI researcher at the University of California, Berkeley.

The letter says that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and should be developed with sufficient care and forethought.

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” it says, asking AI developers to pause the “training” of AI systems more advanced than OpenAI’s recently released GPT-4.

“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it argues.

Research doesn’t need to stop, the letter reads, but should instead steer away “from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

The letter’s webpage displayed some 2,200 signatures as of the afternoon of March 31. Many signatories didn’t identify themselves as experts in the AI field. FLI said it has slowed the addition of new names to the list so it can vet them.

Missing from the list are executives of the top AI developers, whether Alphabet’s DeepMind, ChatGPT developer OpenAI, or other big players such as Meta, Amazon, and Microsoft. Also missing are virtually all heads of top university AI research departments.

It’s not clear whether any of these individuals are among the thousands of signatures not yet added to the list. FLI didn’t respond to emailed questions.

On a March 31 FAQ page, the nonprofit acknowledged that initially, some signatures on the letter were fake.

“Some individuals were incorrectly and maliciously added to the list before we were prepared to publish widely,” the page says. “We have now improved our process and all signatories that appear on top of the list are genuine.”

The FAQ page likens the call for a pause on AI advancement to the 1975 Asilomar Conference on Recombinant DNA.

“The conference allowed leading scientists and government experts to prohibit certain experiments and design rules that would allow actors to safely research this technology, leading to huge progress in biotechnology,” it says.

“Sometimes stepping back to reassess and reevaluate can engineer trust between scientists and the societies they operate in and subsequently accelerate progress.”

Musk’s AI Concerns

Musk has long been vocal about the dangers posed by advanced AI.

In previous talks, he has opined that as AI develops, it’s likely to far surpass human intelligence. At that point, even if it turns out to be benevolent, it may treat humans as a lower life form.

“We’ll be like the house cat,” he said at Recode’s Code Conference in 2016.

Musk co-founded OpenAI in 2015 but is no longer associated with the company.

He recently said that some of his actions may have exacerbated the AI problem.


