The document attributes the various uses of AI to two Chinese state-linked hacker groups and one group each from Russia, Iran, and North Korea, the four countries of greatest concern to Western cyber defenders.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand the potential value to their operations and the security controls they may need to circumvent,” Microsoft wrote in a summary of the findings.
Microsoft said it had cut off the groups’ access to tools based on OpenAI’s ChatGPT. The company said it will notify the makers of other tools it sees being used and will continue to share which groups are using which technologies.
The company said it has not found any large-scale AI-powered attacks, but has identified early-stage research into specific security flaws, defenses, and potential targets.
Sherrod DeGrippo, Microsoft’s director of threat intelligence strategy, acknowledged that the company does not necessarily catch every use of the technology, and that blocking some accounts would not deter attackers from simply creating new ones.
“Microsoft doesn’t want to enable attackers to run campaigns against anyone,” she said. “Our role is to go after them as they evolve.”
State-sponsored hacking groups identified in the report include:
- A top Russian team associated with the military intelligence agency GRU used AI to research satellite and radar technologies that could be relevant to the conventional war in Ukraine.
- North Korean hackers used AI to research experts on the country’s military and to learn more about publicly reported vulnerabilities, including a 2022 flaw in Microsoft’s own support tools.
- A team from Iran’s Islamic Revolutionary Guard Corps sought AI’s help to find new ways to electronically deceive people and to develop methods for evading detection.
- One Chinese government group explored using AI to create programs and content, while another was evaluating the effectiveness of LLMs in sourcing information on “potentially sensitive topics, high-profile individuals, regional geopolitics, and U.S. influence,” as well as military and internal affairs, Microsoft wrote.