A new kind of cold war is here.
Military forces around the globe are in a covert arms race to develop terrifying new AI weaponry, a new documentary exploring the future of artificial intelligence in battle reveals.
“World leaders in Russia and China, people in the US military have said, whoever gets the advantage in AI is going to have an overwhelming technical advantage in war,” Jesse Sweet, director of “UNKNOWN: Killer Robots,” premiering Monday on Netflix, told The Post.
“This revolution is happening now, but I think our awareness [is] lagging behind,” Sweet, an Emmy Award-winning filmmaker and producer, warned. “Hopefully, it doesn’t take a mushroom cloud to make us realize, ‘Oh man, this is a pretty potent tool.’”
The use of weaponized robots and drones in combat isn’t new, the documentary shows. But AI software is, and it’s enhancing — in some cases, to the extreme — the existing hardware, which has been modernizing warfare for the better part of a decade.
Now, experts say, developments in AI have pushed us to a point where global forces now have no choice but to rethink military strategy — from the ground up.
“It’s realistic to expect that AI will be piloting an F-16, and that’s not that far out,” Nathan Michael, Chief Technology Officer of Shield AI, a company whose mission is “building the world’s best AI pilot,” says in the episode.
However, the filmmakers express concern — like many working in the field of AI — over rapid robotic militarization, essentially warning that we don’t truly comprehend what we’re creating.
“The way these algorithms are processing information, the people who programmed them can’t even fully understand the decisions they’re making,” Sweet said. “It gets moving so fast that even identifying things like, ‘Is it supposed to kill that person or not kill that person?’ [It’s] this huge conundrum.”
There are also fears that a comfortable reliance on the technology’s precision and accuracy — referred to as automation bias — may come back to haunt us, should the tech fail in a life-or-death situation.
One major worry revolves around AI facial recognition software being used to enhance an autonomous robot or drone during a firefight. Right now, a human being behind the controls has to pull the proverbial trigger. Should that be taken away, militants could be mistaken for civilians or allies at the hands of a machine, the director warns.
“[AI is] better at identifying white people than non-white people,” Sweet said. “So it can easily mistake people with brown skin for each other, which has all sorts of horrifying implications when you’re in a battle zone and you are identifying friend or foe.”
And remember when the fear of our most powerful weapons being turned against us was just something you saw in futuristic action movies?
With AI, that’s very possible, said Sweet. The mere thought is already causing “tension within the military,” according to the director.
“There is a concern over cybersecurity in AI and the ability of either foreign governments or independent actors to take over crucial elements of the military,” he said. “I don’t think there’s a clear answer to it yet. But I think everyone’s aware that the more automation goes into the military, the more room there is for bad actors to take advantage.”
The lack of sophistication needed now to pull off a breach of such magnitude should worry us all, say those in the know.
“It used to be that you had to be a computer genius to do that. Like in the 80s movies, the kid would have to be some sort of prodigy,” Sweet said. “But now you could be kind of like a B student who downloaded the YouTube video that’s going to show you how.”
And while AI is making strides in medical and pharmaceutical technologies to cure and treat disease, scientists warn that something as simple as flipping a zero to a one in a program can produce chemical weaponry, by running thousands of simulations that wind up yielding toxic compounds.
Dr. Sean Ekins, CEO of Collaborations Pharmaceuticals, tells a story in the film about how in 2021, he was tasked by a Swiss AI watchdog group to experiment with the possibility of designing a chemical weapon.
“We’ve been building a lot of machine learning models to try to predict whether a molecule was likely to be toxic. We just flip that model around and say, ‘Well, we’re interested in designing toxic molecules,’” Ekins told The Post.
“Literally, we did flip the switch on one of the models and overnight, it generated [chemical weapon] molecules…a small company, doing it on a 2015 desktop Mac.”
Among the generated molecules were compounds similar to VX — one of the deadliest nerve agents known to the world.
“We were using generative technologies to do it, but they were pretty rudimentary generative tools,” the CEO added. “Now, nearly two years later, I think what we did is kind of baby steps compared to what would be possible today.”
Ekins is fearful that “one rogue scientist” or even someone less qualified could have the means to create homemade variations of VX and other chemical weapons, with AI “lowering the barrier.”
“I think the very real danger is to get to the point where you come up with new molecules that are not VX that are much easier to synthesize,” he said. “That’s really worth worrying about. What we showed was that we could very readily come up with lots — tens of thousands — of molecules that were predicted to be more toxic.”
While Ekins and his team have published a paper on the potential deadly misuse and have sounded the alarm to create sophisticated checks and balances, the cries have fallen on deaf ears, he said.
“The industry hasn’t responded. There’s been no push to sort of set up any safeguards whatsoever,” Ekins added. “I think to not realize the potential danger there is foolish…I just don’t think the industry, in general, is paying much heed to it.”
He compared the rapid acceleration of machine learning in his field to that of the scientists responsible for the atomic bomb, who were “not thinking about the consequences” nearly 85 years ago.
“Even the godfathers of the technologies, as we call them, are now only realizing there’s a potential genie that they’ve let out,” Ekins said. “It’s going to be very difficult, I think, to put it back into the bottle.”