Military Cannot Rely on AI for Strategy or Judgment, Study Suggests

AI will almost certainly play a role in future military applications; in many of them it can increase productivity, reduce user workload, and operate faster than humans. Continuing research will improve its capability, explainability, and resilience.

Using artificial intelligence (AI) for warfare has long been a promise of science fiction and politicians, but new research from the Georgia Institute of Technology contends that only so much can be automated and demonstrates the value of human judgment.

“All of the hard problems in AI are really judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war,” said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. “You need human sense-making and moral, ethical, and intellectual decisions in an incredibly confusing, fraught, and frightening situation.”

AI-enabled decision-making has four key components: data about a situation, interpretation of those data (prediction), determination of the best way to act in line with goals and values (judgment), and action. Machine learning advancements have made prediction cheaper, which increases the value of data and judgment. And although AI can automate everything from commerce to transportation, human judgment is still required. Lindsay and University of Toronto Professor Avi Goldfarb make this argument in “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” published in the journal International Security.
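The researchers' division of labor can be sketched schematically: the machine handles prediction, while judgment stays with a human. The function names and toy logic below are illustrative assumptions, not from the paper.

```python
# Sketch of the four-component decision loop: data -> prediction -> judgment -> action.
# All names and logic here are illustrative, not taken from the study.

def predict(data):
    """Machine-automatable step: turn data into a forecast."""
    # Stand-in for a trained model estimating outcome likelihood.
    return {"outcome_likely": len(data) > 0}

def judge(prediction, goals):
    """Human step: weigh a prediction against goals and values."""
    # The key point: this weighing is not derivable from the prediction alone.
    return prediction["outcome_likely"] and goals["acceptable_risk"]

def act(decision):
    """Action taken once judgment is rendered."""
    return "proceed" if decision else "hold"

data = ["sensor reading"]                                # 1. data
prediction = predict(data)                               # 2. prediction (machine)
decision = judge(prediction, {"acceptable_risk": True})  # 3. judgment (human)
print(act(decision))                                     # 4. action, prints "proceed"
```

The point of the sketch is that automating `predict` leaves `judge` untouched: cheaper prediction makes the human-supplied goals and values more, not less, consequential.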

Many policymakers believe that human soldiers can be replaced by automated systems, reducing militaries’ reliance on human labor and increasing battlefield effectiveness. This is known as the substitution theory of AI, but Lindsay and Goldfarb argue that AI should be viewed as a supplement to existing human strategy rather than a replacement.

“Machines are good at prediction,” he said, “but they rely on data and judgment, and the most difficult problems in war are information and strategy. Because of war’s unpredictability, the conditions that make AI work in commerce are the conditions that are most difficult to meet in a military environment.”

Lindsay and Goldfarb cite Rio Tinto as an example of a company that uses self-driving trucks to transport materials, lowering costs and risks for human drivers. Barring road closures or obstacles, the abundance of data on traffic patterns and maps means the trucks require little human intervention.

War, on the other hand, often lacks abundant unbiased data, and judgments about objectives and values are inherently contentious. That does not rule AI out, but according to the researchers it would be best used task by task in bureaucratically stable environments.

“All the excitement and the fear are about killer robots and lethal vehicles, but the worst case for military AI in practice is going to be the classically militaristic problems where you’re really dependent on creativity and interpretation,” Lindsay said. “But what we should be looking at is personnel systems, administration, logistics, and repairs.”

According to the researchers, using AI has consequences for both the military and its adversaries. If humans are central to deciding when to use AI in warfare, then military leadership structure and hierarchies may shift depending on who is in charge of designing and cleaning data systems and making policy decisions. This also implies that adversaries will seek to compromise both data and judgment, as both will have a significant impact on the war’s trajectory. Competing against AI may encourage adversaries to manipulate or disrupt data, making sound judgment even more difficult. Human intervention will be even more necessary as a result.

Yet this is just the start of the argument, and of the innovations to come.

“If AI is automating prediction, then judgment and data become extremely important,” Lindsay explained. “We’ve already automated a lot of military action with mechanized forces and precision weapons, then data collection with intelligence satellites and sensors, and now prediction with AI. So, when are we going to automate judgment, and what aspects of judgment cannot be automated?”

Until then, however, human tactical and strategic decision-making remains the most important aspect of warfare.