The promise and perils of advanced artificial intelligence were laid bare this week at a meeting hosted by the Department of Defense to consider the military’s future use of the technology. Government and industry stakeholders discussed how tools such as large language models (LLMs) can be used to maintain the U.S. government’s strategic lead over rivals, particularly China.
In addition to OpenAI, Amazon and Microsoft were also among the companies that demonstrated the technology.
Not all of the discussion was upbeat. Some speakers cautioned against rushing to deploy systems that researchers are still working to fully understand.
“There are pressing concerns about the potential for catastrophic accidents due to AI malfunction and the risk of significant damage from hostile attacks targeting AI,” South Korean Army Lt. Col. Kang-min Kim said at the symposium. “Therefore, it is of paramount importance that AI weapon systems be carefully evaluated from the development stage.”
He told Pentagon officials that they needed to address the issue of “liability in the event of an accident.”
Craig Martell, director of the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO), told reporters Thursday that he is aware of such concerns.
“If you’re shipping something that you don’t know how to evaluate, you’re moving too fast,” he said. “I don’t think you should ship something you don’t know how to evaluate.”
Although LLMs like ChatGPT are best known as chatbots, industry experts say chat is unlikely to be the military’s main use for them. They are more likely to be applied to tasks that would take too long, or be too complex, for humans to complete, with trained professionals wielding them as powerful computing tools.
“Chat is a dead end,” said Shyam Sankar, chief technology officer at Pentagon contractor Palantir Technologies. “Instead, we’ve reimagined LLMs and prompts as being for developers rather than end users. … It even changes what you use them for.”
The backdrop to the symposium was the U.S. technological competition with China, which has grown increasingly reminiscent of the Cold War era. The United States remains firmly in the lead in AI, researchers say, and is hindering China’s progress through a series of sanctions. But U.S. officials worry that China has already achieved enough AI proficiency to bolster its intelligence gathering and military capabilities.
Although Pentagon leaders were reluctant to discuss China’s AI capabilities when questioned by the audience this week, some of the industry experts invited to speak were willing to weigh in, and a few sounded optimistic.
Alexandr Wang, CEO of San Francisco-based Scale AI, which works with the Department of Defense on AI, said Thursday that China was far behind the U.S. in LLMs just a few years ago but has closed much of the gap through billions of dollars in investment. He said the U.S. appears poised to maintain its lead unless it commits unforced errors, such as underinvesting in AI applications or deploying LLMs in the wrong scenarios.
“This is an area where we as Americans have to win,” Wang said. “If we try to utilize technology in scenarios where it’s not suitable for use, it’s going to fail. We’re going to shoot ourselves in the foot.”
Some researchers are warning against the temptation to push emerging AI applications out into the world before they are ready simply because of concerns about China catching up.
“What we’re seeing is a fear of falling behind, the same dynamic that led to the development of nuclear weapons and, later, the hydrogen bomb,” said John Wolfsthal, director of global risk at the Federation of American Scientists, who attended the symposium. “Perhaps these moves are inevitable, but neither the government nor the AI development community has become sufficiently sensitive to these risks, and they are not factoring into decisions about how deeply to integrate these new capabilities into our most sensitive systems.”
Rachel Martin, director of the Pentagon’s Maven program, which analyzes drone surveillance video, high-resolution satellite imagery and other visual information, said the program is looking to LLMs for help in sorting through “millions to billions” of images and videos, a volume she called “probably unprecedented in the public sector.” The Maven program is run by the National Geospatial-Intelligence Agency and the CDAO.
Martin said it remains unclear whether commercial LLMs trained on public internet data are best suited to Maven’s work.
“There’s a big difference between pictures of cats on the internet and satellite imagery,” she said. “I’m not sure how useful a model trained on those kinds of internet images would be for us.”
A presentation on ChatGPT by OpenAI’s Knight drew particular interest. Last month, OpenAI removed restrictions on military use from its usage policy, and the company has begun working with the Defense Advanced Research Projects Agency (DARPA).
Knight said LLMs are well suited to conducting sophisticated research across languages, identifying vulnerabilities in source code, and performing needle-in-a-haystack searches that would demand great effort from humans. “Language models don’t get tired,” he said. “They could do this all day long.”
Knight also said LLMs could aid in disinformation operations by generating sock puppets, fake social media accounts complete with a “baseball card bio,” a task that would be time-consuming if done by humans.
“Once you have sock puppets, you can simulate them arguing,” Knight said, showing a demonstration in which a phantom right-winger and left-winger debated each other.
“If someone doesn’t want that underlying model to be used by the Department of Defense, it won’t be,” said U.S. Navy Capt. M. Xavier Lugo.
The CDAO, the office that hosted this week’s symposium, was created in June 2022 when the Department of Defense merged four departments related to data analytics and AI. CDAO Deputy Director Margaret Palmieri said centralizing AI resources in a single office reflects the department’s interest in not only experimenting with these technologies but also deploying them broadly.
“We look at the mission through a different lens, and that lens is scale,” she said.