Will artificial intelligence (AI) take humans out of the loop in planning and waging war? Can humans muddy the sources of information that AI relies on in order to make it commit errors faster than humans could?
A point of weakness to attack military AI, no?
A new study by researchers from Rice University and Stanford University in the US offers evidence that when AI engines are trained on synthetic, machine-made input rather than text and images made by actual people, the quality of their output starts to suffer.
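The dynamic the study describes is often called "model collapse": a model retrained on its own synthetic output tends to lose rare "tail" content with each generation. Below is a toy sketch of that dynamic, not the study's actual method; it assumes the "model" is just a token-frequency distribution and that the rarest quarter of the vocabulary never shows up in the synthetic corpus each round.

```python
# Toy illustration of model collapse (an assumption-laden caricature,
# not the Rice/Stanford study's method): a "model" here is a
# token-frequency distribution, and "retraining on synthetic data" is
# approximated by dropping the rarest quarter of tokens each generation.
import math

def entropy_bits(dist):
    """Shannon entropy (bits) of a distribution; a rough proxy for
    output diversity."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def next_generation(dist, keep_frac=0.75):
    """One round of retraining on synthetic output: the rarest tokens
    never appear in the synthetic corpus, so they vanish; renormalize
    what survives."""
    ranked = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(ranked[: max(1, int(len(ranked) * keep_frac))])
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# A Zipf-like vocabulary: a few common tokens, a long tail of rare ones.
raw = {f"tok{i}": 1.0 / (i + 1) for i in range(200)}
z = sum(raw.values())
model = {tok: w / z for tok, w in raw.items()}

print(f"gen 0: {len(model):3d} tokens, {entropy_bits(model):.2f} bits")
for gen in range(1, 4):
    model = next_generation(model)
    print(f"gen {gen}: {len(model):3d} tokens, {entropy_bits(model):.2f} bits")
```

Run it and the vocabulary shrinks from 200 tokens to 84 in three generations while the entropy (diversity) of the output falls each round: the common stuff gets more common, and the unusual stuff disappears entirely.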
People keep saying persistent battlefield surveillance makes achieving surprise impossible. I think every new advance in surveillance or communications leads people to believe that, as planes and radios did prior to World War II. But as I’ve long observed, surprise is in the minds of the enemy commanders. If they wrongly believe what they see conforms to their existing ideas of what an enemy plans to do, one can achieve surprise. As Ukraine did prior to its Kursk incursion.
The Russians had reports of Ukraine preparing to attack:
A report was submitted to Russian military leadership about a month before the attack saying that “forces had been detected and that intelligence indicated preparations for an attack,” Andrei Gurulyov, a prominent member of Russia’s parliament and a former high-ranking army officer, said after the incursion.
“But from the top came the order not to panic, and that those above know better,” Gurulyov lamented on national television.
Any movement could have been misconstrued as a new defensive posture. ...
Ukraine shuffled parts of brigades into the Sumy area under the pretense of training and picking up new equipment, said one brigade’s deputy commander[.]
The Ukrainians definitely didn't make the Russians' analysis job easier.
New means of watching and distributing that information certainly make it harder to hide what you are doing. It was far easier when a hill or tree line hid your movements from the Mark I eyeball. But the flood of information can make you miss the forest for the trees.
The Army has a strategy for adopting AI. The Army has high hopes, as Deputy Defense Secretary Kathleen Hicks stated on its release nearly a year ago:
From the standpoint of deterring and defending against aggression, AI-enabled systems can help accelerate the speed of commanders' decisions and improve the quality and accuracy of those decisions, which can be decisive in deterring a fight and winning in a fight.
Accelerating the adoption of these technologies presents an unprecedented opportunity to equip leaders at all levels of the Department with the data they need, as well as harness the full potential of the decision-making power of our people.
Sounds awesome! So awesome in its speed and authority that competing information that contradicts what the AI conveys may be disregarded by commanders. Hell, information to sow seeds of doubt might lag so far behind the AI that commanders never even see it.
Army headquarters might need staff officers purely focused on that Mark I information dribbling in while the commanding officer plans his battlefield Triumph, like a slave whispering into the ear of a conquering Roman commander as he received the adulation of the crowds: "Remember, thine information art digital."
If you believe AI will help you accurately sort out that rampaging flow of information, an enemy who corrupts what the AI passes on to the meat-sack commanders will achieve a surprise all the more complete—and deadly.
The speed of FUBAR could be simply awesome.