The Army this week announced steps to strengthen its ability to employ artificial intelligence effectively under a 500-day plan aimed at keeping troops safe.
The Army’s Acquisition, Logistics and Technology (ALT) office announced two new initiatives on Wednesday, “Break AI” and “Counter AI,” that will test evolving AI technologies for reliable use in the field and guard against adversarial use of AI against the United States, Federal News Network reported this week.
The Army is not only looking at how to safely implement AI across the force, but also how to work with external parties to safely develop it.
“One of the obstacles to adoption is how do we look at the risks around AI? We need to look at issues like tainted data sets, adversarial attacks, Trojan horses,” Young Bang, principal deputy to the assistant secretary of the Army for ALT, was reported as saying at a technology conference in Georgia on Wednesday.
“If you develop in a controlled, trusted environment, [the Department of Defense] or the Army owns it, and we’ll do all of that,” he added. “But this is really looking at how we can bring third-party or commercial vendor algorithms into our program so that we don’t have to compete with them.”
“We want to adopt them.”
Bang’s announcement comes as the Army wraps up a 100-day effort to explore how to incorporate AI into its acquisition process.
The goal, Federal News Network reported, was for the Army to explore ways to develop its own AI algorithms while also working with trusted third parties to develop the technology as safely as possible.
The Army is now using what it learned in the 100-day sprint to test and secure its AI implementations across the board, strengthening defenses against adversarial use of AI as well as developing the systems the Army will use.
The “Break AI” initiative will focus on how AI will evolve toward artificial general intelligence (AGI) – software that aims to match or exceed human cognitive abilities, with the potential for advanced decision-making and learning.
The technology is not yet fully realized, but it aims to improve on current AI software, which can only generate predictive outcomes based on the data provided to it.
This next phase will require not only developing this still-nebulous technology but also defending against it, and the Army has a lot of work to do.
“This is about the concept of how do you actually test and evaluate artificial intelligence,” Bang was quoted as saying. “As we move towards AGI, how do you actually test something where you don’t know what the outcome or behavior is going to be?”
“You can’t test it in the same way you test a deterministic model, so we need industry collaboration.”
The second part of the Army’s 500-day plan is a bit simpler, explained Jennifer Swanson, the Army’s deputy assistant secretary for data, engineering and software.
“We want to make sure that our platforms, our algorithms, our capabilities are secure from attacks and threats, but it’s also about how we counter what our adversaries have,” she was quoted as saying. “We know we’re not the only ones investing in this. There’s a lot of investment being made in countries that pose significant adversarial threats to the United States.”
Army officials have remained tight-lipped about the specifics of how the service will develop AI capabilities, citing operational security concerns.
But, Swanson said, “As we learn and begin to understand what we need to do, we’ll have some things to share.”