Artificial intelligence (AI) could evolve to kill “many humans” in two years’ time, unless properly controlled and regulated, a government tech adviser has warned.
The near-term risks posed by AI are “pretty scary,” said the prime minister’s adviser Matt Clifford, speaking to TalkTV on Monday. AI could be used to create biological weapons or launch large-scale cyberattacks, Clifford said, while the long-term risk is the creation of a new species that surpasses human intelligence.
Clifford serves on the UK government’s Foundation Model Taskforce, aimed at the development of safe and reliable AI and AI foundation models, such as ChatGPT and Google Bard.
Asked about the threat of AI wiping out humanity, Clifford echoed Elon Musk’s view, saying the risk was “not zero.”
“If we go back to things like the bio weapons or cyber [attacks], you can have really very dangerous threats to humans that could kill many humans—not all humans—simply from where we would expect models to be in two years’ time. I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don’t,” Clifford said.
Following the interview, Clifford took to Twitter to say that while short and long-term risks of AI are real and “it’s right to think hard and urgently about mitigating them,” there are “a wide range of views and a lot of nuance here.”
He warned against over-regulation of narrow AI, a type of AI which focuses on solving a specific single problem.
“I want it to be that you can go to a hospital and have a human radiologist but with an AI co-pilot who is really good at detecting cancers that the radiologist might miss. I think it would be a real disaster to regulate that into oblivion and make those benefits not possible,” Clifford said.
He said in general, risk and opportunity need to be balanced.
“General AI done right would be the best technological breakthrough our species has ever made,” Clifford said.
Guardrails for AI
Clifford, who chairs the Advanced Research and Invention Agency, argued that AI should be regulated on both a national and global scale.
This comes after senior AI experts, including those at Google DeepMind and Anthropic, signed a letter earlier this month saying that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
In response, Prime Minister Rishi Sunak said that the government was “looking very carefully” at the issue.
“Last week I stressed to AI companies the importance of putting guardrails in place so development is safe and secure,” Sunak said.
On May 4, the Competition and Markets Authority launched a review of the AI market to ensure that innovations in AI serve the UK economy, consumers, and businesses while maintaining appropriate transparency and security.
During his visit to the United States this week, Sunak plans to raise the issue of AI regulation and security with President Joe Biden. The prime minister’s spokesman said that he didn’t want to “preempt” the conversation between the two leaders but suggested that Britain could become a global leader in new AI tech and regulatory systems.
Meanwhile, the Labour Party holds the view that tech developers should be barred from working on advanced AI unless they have been licensed. The government should also introduce strict regulations for companies using foundation models like ChatGPT, Lucy Powell, Labour’s digital spokeswoman, told The Guardian.
Speaking at the TechUK Tech Policy Leadership Conference on Tuesday, government ministers Chloe Smith and Paul Scully spoke of the opportunities AI technology offers when it is seized “safely and responsibly,” with the government ensuring it is done “right for the whole of society.”