British MPs have raised concerns about the recent “explosion” of artificial intelligence (AI) and have asked what the government has done to prevent misuse of the fast-developing technology.
At a debate on AI regulation in the House of Commons on Wednesday, Tory former minister Tim Loughton said that AI is open to abuse by criminals and “malign players” such as the Chinese Communist Party (CCP).
He told MPs: “When advances in medical technology around genetic engineering, for example, raise sensitive issues, we have debates on medical ethics, we adapt legislation and put in place robust regulation and oversight.
“The explosion in AI potentially poses the same level of moral dilemma and it is open to criminal use, for fraud, impersonation and by malign players such as the Chinese government for example.
“As leaders in AI, what should the UK be doing to balance safety with opportunity and innovation?”
Science Secretary Chloe Smith, standing in for Michelle Donelan while she is on maternity leave, said the government recognises that “many technologies can pose a risk when in the wrong hands.”
But she added: “The UK is a global leader in AI, with the strategic advantage that places us at the forefront of these developments.
“Now, through UK leadership, including at the OECD and the G-7, the Council of Europe, and more, we are promoting our vision for a global ecosystem that balances innovation and the use of AI underpinned by our shared values, of course, of freedom, fairness, and democracy. Our approach will be proportionate, pro-innovative and adaptable.”
She also said that the government’s “integrated review” of the UK’s security and foreign policy “recognises the challenges that are posed by China.”
Election Integrity
Labour’s Darren Jones raised concerns over the potential for AI to be used to the detriment of the country’s democratic process.
Jones, who chairs the Business, Energy and Industrial Strategy Committee, asked ministers what steps they are currently taking to protect the integrity of elections in light of AI’s ability to create convincing images, audio, and video hoaxes.
Smith replied: “I can understand his concerns and the anxiety that sits behind his question.
“We have a fully developed regime of electoral law that already accounts for election offences such as false statements by candidates, but in addition to the existing regulations we are setting out an approach on AI that will look to regulators in different sectors to apply the correct guidance.
“We will also add a central coordinating function that will be able to seek out risks and deal with them flexibly, appropriately, and proportionately.”
Global Standards
Tory former Cabinet minister Greg Clark, chairman of the Science and Technology Committee, spoke about the need for the UK to set out an international approach to the standards in AI.
Clark said, “At its best, Britain has been highly influential in setting international standards, combining confidence with security.”
He asked if the minister agreed that “the UK should now seize the initiative and set out an international approach to the standards in AI so that we can gain all of the benefits that come from AI but make sure we don’t suffer the harms attendant on it.”
Smith responded, “I think the short answer there is yes.”
She said that the UK does have a “global leadership position” and will therefore “seek a leadership role so any regulation of AI that may be needed reflects our values and strikes the correct balance.”
Risk to Humanity
AI has been on the rise, with ChatGPT coming to prominence in recent months after the chatbot was released to the public last year.
Following the launch of the latest version of ChatGPT in March, some AI professionals signed an open letter, written by the nonprofit Future of Life Institute, warning that the technology poses “profound risks to society and humanity.”
Tesla CEO Elon Musk, who was among the signatories, has been outspoken about his concerns with AI in general, holding that it poses a serious risk to human civilization.
“AI is perhaps more dangerous than, say, mismanaged aircraft design, or production maintenance, or bad car production, in the sense that it is, it has the potential—however small one may regard that probability, but it is nontrivial—it has the potential of civilizational destruction,” he told Fox News in a recent interview.
Musk also voiced concern that AI is being trained in political correctness, which he maintained is just a form of deception and “saying untruthful things.”
Geoffrey Hinton, the British computer scientist often called the “Godfather of AI,” recently left his position as a vice president and engineering fellow at Google so he could join the dozens of other experts in the field speaking out about the risks posed by the technology.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton, 75, told The New York Times in an interview.
Samantha Flom and PA Media contributed to this report.