The UK and US have signed a landmark agreement to collaborate on testing advanced artificial intelligence (AI).
The agreement signed on Monday says the two countries will work together to develop “robust” methodologies for assessing the safety of AI tools and the systems that support them.
This is the first bilateral agreement of its kind.
The UK’s technology secretary, Michelle Donelan, said this was “the defining technology challenge of our generation”.
“We have always been clear that ensuring the safe development of AI is a shared global challenge,” she said.
“Only by working together can we tackle the technology’s risks head-on and harness its huge potential to help us all live easier and healthier lives.”
Ms Donelan added that the agreement builds on commitments made at the AI Safety Summit at Bletchley Park in November 2023.
The summit, attended by AI leaders including OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and tech billionaire Elon Musk, saw both the UK and US establish AI Safety Institutes aimed at evaluating open and closed-source AI systems.
While things have felt quiet on the AI safety front since the summit, the AI sector itself has been very busy.
Competition among the biggest AI chatbots, such as ChatGPT, Gemini, and Claude, remains fierce.
So far, the almost exclusively US-based companies behind all this activity are still cooperating with the concept of regulation, and regulators have not yet curtailed anything these companies are trying to achieve.
Similarly, regulators are not demanding access to information that AI companies are reluctant to share, such as the data used to train their tools or the environmental costs of running them.
The EU’s AI Act is on its way to becoming law, and once it comes into force, developers of certain AI systems will be required to be upfront about the risks they pose and to share information about the data used to train them.
This could prove significant: OpenAI recently announced that it would not release the voice-cloning tool it had developed, citing the “serious risks” the technology poses, especially during an election year.
In January, a fake AI-generated robocall claiming to be from US President Joe Biden urged voters to skip the New Hampshire primary.
Currently, most AI companies in the US and UK are self-regulating.
Concerns about AI
Currently, most AI systems are capable of performing only a single intelligent task that would typically be done by a human.
Known as “narrow” AI, these systems handle tasks ranging from quickly analyzing data to providing desired responses to prompts.
But there are concerns that more intelligent “general purpose” AI tools, which can complete a variety of tasks normally performed by humans, could put humanity at risk.
“Like chemical, nuclear and biological sciences, AI has the potential to be weaponized and used for good or ill,” Professor Sir Nigel Shadbolt told the BBC’s Today programme.
But the Oxford University professor said concerns around the existential risks of AI are “sometimes a little overblown”.
“We really do have to support and appreciate efforts to get those with great AI capabilities to think about and study what’s at risk,” he said.
“We need to understand how susceptible these models are and how powerful they are.”
US Secretary of Commerce Gina Raimondo said the agreement would give both governments a better understanding of AI systems and allow them to offer better guidance.
“It will accelerate both institutes’ efforts to address all risks, whether to national security or to broader society,” she said.
“Our partnership makes clear that we are not running away from these concerns, but rather running towards them.”