The British and U.S. governments announced Monday that they will collaborate on safety testing of the most powerful artificial intelligence models. The agreement, signed by U.K. Secretary of State for Science, Innovation and Technology Michelle Donelan and U.S. Secretary of Commerce Gina Raimondo, lays out a plan for cooperation between the two governments.
“[The agreement] is the next chapter in our journey on AI safety, in collaboration with the U.S. government,” Donelan told TIME in an interview at the British Embassy in Washington, D.C., on Monday. “I see the role of the U.S. and the U.K. as the real driver of what will ultimately become a network of institutes.”
The U.K. and U.S. AI Safety Institutes were established just one day apart, around the time of the first AI Safety Summit, hosted by the U.K. government at Bletchley Park in November 2023. Collaboration between the two organizations was announced at the time of their launch; Donelan said the new agreement will “formalize” that cooperation and “put flesh on its bones.” She added: “It provides an opportunity for them [the U.S. government] to rely on us a little bit as they set up and formalize their institute, because ours is up and running and fully functional.”
The two AI safety bodies will develop a common approach to AI safety testing, using the same methodology and underlying infrastructure, according to a press release. They aim to exchange personnel and share information with each other “in accordance with national laws and regulations, and contracts.” The release also states that the two bodies will carry out joint testing on publicly available AI models.
“The U.K. and U.S. have always been clear that ensuring the safe development of AI is a shared global challenge,” Raimondo said in a press release accompanying the announcement of the partnership. “Reflecting the importance of continued international cooperation, with today’s announcement the two countries will share critical information on the capabilities and risks associated with AI models and systems, as well as fundamental technical research on AI safety and security.”
Safety tests, such as those being developed by the U.K. and U.S. AI Safety Institutes, are set to play a key role in the efforts of lawmakers and tech executives to reduce the risks posed by rapidly advancing AI systems. OpenAI and Anthropic, the developers of the chatbots ChatGPT and Claude respectively, have published detailed plans for how they hope safety testing will inform future product development. The recently passed E.U. AI Act and U.S. President Joe Biden’s executive order on AI both require companies developing powerful AI models to disclose the results of safety tests.
Read More: No One Knows How to Test AI for Safety
The U.K. government, under Prime Minister Rishi Sunak, has played a leading role in marshaling an international response to the most powerful AI models, often referred to as “frontier AI,” convening the first AI Safety Summit and announcing the first AI Safety Institute. But despite its economic heft, and the fact that almost all of the leading AI companies are based on American soil, the U.S. has so far committed just $10 million to the U.S. AI Safety Institute. (The National Institute of Standards and Technology, the government agency that houses the U.S. AI Safety Institute, suffers from chronic underinvestment.) Donelan denied that the U.S. is failing to play its part, arguing that $10 million is not a fair representation of the resources being devoted to AI across the U.S. government.
“They are investing time and energy into this topic,” Donelan said shortly after her meeting with Raimondo, adding that the U.S. government is “fully aware of the need for us to work together to understand the risks in order to seize the opportunities.” In addition to the $10 million in funding for its AI Safety Institute, Donelan argued, the U.S. government is also “leveraging the wealth of expertise that already exists across government.”
Despite taking the lead on some aspects of AI, the U.K. government has decided against passing legislation to reduce the risks posed by frontier AI. Donelan’s opposite number, Peter Kyle, the U.K. Labour Party’s shadow secretary for science, innovation and technology, has repeatedly said that a Labour government would pass legislation compelling tech companies to share the results of AI safety tests, rather than relying on voluntary agreements. Donelan, however, says the U.K. will refrain from regulating AI in the short term, to avoid stifling the industry’s growth or passing laws that are rendered obsolete by technological advances.
“I don’t think it’s right to rush into legislation. We’ve been very outspoken about that,” Donelan told TIME. “That’s where we differ from the E.U. We want to encourage innovation, and we want to grow this sector in the U.K.”
The memorandum commits both countries to developing similar partnerships with other countries. “Many countries are establishing, or considering establishing, their own institutes,” Donelan said, declining to specify which. (Japan announced the establishment of its own AI Safety Institute in February.)
“AI doesn’t respect geographic boundaries,” Donelan said. “To really make sure this is a force for good for humanity, we must collaborate internationally on this topic, share information, and share expertise.”