
Elon Musk in Interview With Tucker Carlson Warns AI Could Cause ‘Civilizational Destruction’



Tech tycoon Elon Musk is sounding the alarm about the risks of artificial intelligence (AI)—specifically, its potential for “civilizational destruction.”

In an April 14 preview of his interview with Fox News’ Tucker Carlson, Musk stresses that the ramifications of such technology could be disastrous for humanity.

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production in the sense that it has the potential—however small one may regard that probability, but it is non-trivial—it has the potential of civilizational destruction.”

And the CEO of Tesla, SpaceX, and Twitter should know, given that he also co-founded OpenAI—the nonprofit lab that created ChatGPT—in 2015.

‘Profound Risks’

ChatGPT, an interactive chatbot, launched as a prototype in November 2022 to much fanfare and has since attracted more than 100 million users. But not all of the feedback has been positive.

A growing sense of unease over AI and its implications has begun to give many, like Musk, pause.

Last month, the billionaire, who is no longer associated with OpenAI, joined dozens of other industry experts and executives in signing a March 22 open letter calling on all AI labs to pause training of systems more powerful than OpenAI’s GPT-4 for at least six months. The letter has since garnered more than 25,000 signatures.

Holding that AI can pose “profound risks to society and humanity,” the AI experts asserted: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects.”

Calling out OpenAI in particular, the signatories also noted that the organization itself had recently acknowledged that, “at some point,” it might be necessary to impose limitations on such systems’ rate of growth.

“We agree,” they wrote. “That point is now.”

However, while participating in a Massachusetts Institute of Technology discussion on April 13, OpenAI CEO Sam Altman said he felt the letter was missing “most technical nuance” about where and how efforts should be paused.

While agreeing that safety should be a concern, he clarified that the lab is not currently training GPT-5.

“We are not and won’t for some time,” Altman said. “So, in that sense, it was sort of silly.”

Musk’s “Tucker Carlson Tonight” interview is set to air in two parts on April 17 and April 18 at 8 p.m. ET. Other topics he will reportedly address include his Twitter takeover and future plans for the social media platform.




