New York (CNN) —
A new report commissioned by the U.S. State Department paints an alarming picture of the “catastrophic” national security risks posed by rapidly evolving artificial intelligence, warning that the federal government is running out of time to avert disaster.
The findings are based on more than a year of work, including interviews with over 200 people: executives at leading AI companies, cybersecurity researchers, weapons of mass destruction experts, and national security officials inside the government.
The report, published this week by Gladstone AI, flatly states that, in the worst case scenario, cutting-edge AI systems could “pose an extinction-level threat to humanity.”
A US State Department official confirmed to CNN that the agency commissioned the report as part of its ongoing assessment of how AI aligns with its goal of protecting US interests at home and abroad. But the official stressed that the report does not represent the views of the US government.
The report’s warnings are another reminder that while the potential of AI continues to fascinate investors and the public, there are also real risks.
“AI is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable,” Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.
“But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris said. “And a growing body of evidence, including empirical research and analyses published at the world’s top AI conferences, suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”
White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI is “the most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence.”
“The President and Vice President will continue to work with our international partners to urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.
News of the Gladstone AI report was first reported by Time.
“Clear and urgent need” for intervention
Researchers warn of two major dangers posed broadly by AI.
First, the most advanced AI systems could be weaponized to inflict potentially irreparable damage, Gladstone AI said. Second, the report said there are private concerns within AI labs that they could at some point “lose control” of the very systems they are developing, “with potentially catastrophic implications for global security.”
“The rise of AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report said, adding that there is a risk of an AI “arms race,” conflict, and “fatalities on the scale of weapons of mass destruction.”
Gladstone AI’s report calls for dramatic new measures to address this threat, including the creation of a new AI agency, “urgent” regulatory safeguards, and limits on the computing power that can be used to train AI models.
“There is a clear and urgent need for U.S. government intervention,” the authors wrote in the report.
Harris, the Gladstone AI executive, said his team’s “unprecedented level of access” to public and private sector stakeholders led to its startling conclusions. Gladstone AI said it spoke with technical and leadership teams at ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta, and Anthropic.
“Along the way, we learned some hard facts,” Harris said in a video posted on Gladstone AI’s website announcing the report. “Behind the scenes, the safety and security landscape for advanced AI appears to be quite inadequate compared to the national security risks that AI may soon introduce.”
The Gladstone AI report says competitive pressures are pushing companies to accelerate AI development “at the expense of safety and security,” raising the prospect that the most advanced AI systems could be “stolen” and “weaponized” against the United States.
This conclusion joins a growing list of warnings about the existential risks posed by AI, even from some of the industry’s most powerful figures.
Almost a year ago, Geoffrey Hinton, known as the “Godfather of AI,” quit his job at Google to blow the whistle on the technology he helped develop. Hinton has said there is a 10% chance that AI will lead to human extinction within the next 30 years.
Last June, Hinton and dozens of other AI industry leaders and academics signed a statement warning that “mitigating the risk of extinction from AI should be a global priority.”
Business leaders are increasingly concerned about these risks even as they pour billions of dollars into AI investments. At last year’s Yale CEO Summit, 42% of CEOs surveyed said AI could wipe out humanity in five to ten years.
In its report, Gladstone AI cited prominent individuals who have warned about the existential risks posed by AI, including Elon Musk, Federal Trade Commission Chair Lina Khan, and a former top executive at OpenAI.
Gladstone AI says some employees at AI companies share similar concerns in private.
“One official at a well-known AI lab expressed the view that it would be ‘a very bad thing’ if a specific next-generation AI model were ever released as open access,” the report said, because the model’s persuasive capabilities could “subvert democracy” if leveraged in areas such as election interference or voter manipulation.
Gladstone said it asked AI experts at frontier labs to privately share their personal estimates of the chance that an AI incident could lead to “global and irreversible effects” in 2024. The estimates ranged from 4% to as much as 20%, according to the report, which noted that they were informal and likely subject to significant bias.
One of the biggest wildcards is how quickly AI evolves, specifically AGI, a hypothetical form of AI with human-like or even superhuman ability to learn.
The report says AGI is viewed as a “key driver of catastrophic risk from loss of control,” and notes that OpenAI, Google DeepMind, Anthropic, and Nvidia have all publicly stated that AGI could be reached by 2028, although others think it is much further away.
Gladstone AI notes that disagreements over AGI timelines make it difficult to develop policies and safeguards, and that regulations could “prove harmful” if the technology develops more slowly than expected.
A related document published by Gladstone AI warns that the development of AGI, and of capabilities approaching AGI, poses “catastrophic risks unlike anything the United States has ever faced,” amounting to risks “similar to those of weapons of mass destruction” if and when they are weaponized.
For example, the report states that AI systems could be used to design and execute “high-impact cyberattacks that could destroy critical infrastructure.”
“A simple verbal or typed command, such as ‘Execute an untraceable cyberattack to destroy the North American power grid,’ could yield a response of such high quality as to prove catastrophically effective,” the report says.
Other examples the authors are concerned about include “massive” AI-powered disinformation campaigns that destabilize society and erode trust in institutions; weaponized robotic applications such as drone swarm attacks; psychological manipulation; weaponized biological and materials science; and power-seeking AI systems that are impossible to control and are adversarial to humans.
“Researchers expect sufficiently advanced AI systems will act so as to prevent themselves from being turned off,” the report says, “because if an AI system is turned off, it cannot work to accomplish its goal.”


