
How is artificial intelligence changing warfare?
A pilot views the FPV drone's video feed through goggles; AI also plays a role in drone control. Photographed in eastern Ukraine, 2025.

Last fall, Mark Milley, former chairman of the US Joint Chiefs of Staff, and Eric Schmidt, former CEO of Google, made puzzling statements in the American magazine Foreign Affairs. They first wrote that new technologies such as artificial intelligence and drones were changing the nature of war. Later in the same article, they stated that new technologies were fundamentally influencing only the character, or form, of war, not its nature.


What now? Nature or form, or both? The contradiction in Milley and Schmidt's article shows how confused the current debate is. As so often, the 19th-century war theorist Carl von Clausewitz can provide guidance. After all, it was he who most prominently drew the distinction between the nature and the form of war.

What has changed?

For Clausewitz, the form of war can be found in every single conflict in human history: in the Peloponnesian War, the Thirty Years' War, the Second World War, the Ukraine War, and so on.

The nature of war, however, includes what Clausewitz calls "the trinity." First, war is like a wrestling match in which two parties attempt to impose their will on each other. War is physical coercion, antagonism, and is driven by a blind, natural urge to destroy the enemy. Second, war is not an isolated system, but the continuation of politics by other means. Third, war is like a card game, meaning it is also subject to probability and luck. It requires talented, experienced, and courageous commanders and soldiers with what is called a "lucky hand" to prevail in its complex situations.

For Clausewitz, the distinction between the nature and form of war is a heuristic tool. It allows one to contrast the appearance of many concrete wars in history with abstract definitions, so that one can then ask what has actually changed.

Seen in this light, the nature of war shifts only if, for example, the element of combat disappears from war altogether, or some other fundamental aspect of the trinity falls away. Anyone who still spoke of war then would no longer understand what the term means. After all, what is war without violence, political decision-makers, an armed people, or commanders? For Clausewitz, the nature of war is therefore unlikely to change.

The form of war is another matter. For Clausewitz, war is a chameleon: it changes its appearance, not least through advances in technology and science. When war changes its form, the way people wage it changes. New tactical combat procedures are developed, and the materiel and technology employed adapt, as does the structure of larger operations, ideally framed by strategic objectives. This was recently on display in the Ukrainian drone attack on air bases in Russia.

Ukrainian soldier with a combat drone in the Donetsk region, May 2025.

Ukrainian Armed Forces/Reuters

Artificial Intelligence during World War II

AI is, in Clausewitz's sense, both a technology and a science. Its partnership with the military is by no means new: it dates back to World War II, when cryptanalysts at Bletchley Park in England worked on breaking the German Enigma cipher machine. The Allies intercepted encrypted radio messages and sought to decrypt them in order to counter attacks by German submarines and bombers.

Alan Turing was one of these cryptanalysts. He developed his own electromechanical machine, the Bombe, whose automated operation led him to ask whether machines can think, that is, whether they are intelligent.

Two years after Turing's death, in 1956, the term "artificial intelligence" was first used at the Dartmouth Conference. This marked the beginning of symbolic AI. As the name suggests, such systems manipulate abstract symbols, that is, notations of formal logic.
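To make the idea concrete: symbolic AI derives new statements from given ones by applying formal rules. The following minimal Python sketch illustrates forward chaining with modus ponens; the facts and rules are invented for illustration and are not drawn from any historical system.

```python
# Minimal sketch of symbolic AI: forward chaining with modus ponens.
# Facts and rules are toy examples, not taken from any real system.

facts = {"enemy_spotted"}
rules = [
    ("enemy_spotted", "raise_alarm"),   # if enemy_spotted, then raise_alarm
    ("raise_alarm", "mobilize_unit"),   # if raise_alarm, then mobilize_unit
]

# Apply the rules repeatedly until no new symbol can be derived.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'enemy_spotted', 'raise_alarm', 'mobilize_unit'}
```

The program knows nothing about alarms or units; it only shuffles symbols according to rules, which is precisely what early symbolic systems did.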

The American Defense Advanced Research Projects Agency (DARPA), which conducts research projects for the armed forces, got involved. Among other things, it funded DART (Dynamic Analysis and Replanning Tool), a program that helped solve logistical supply-chain problems during the first Gulf War and saved the American military millions.

Also funded by DARPA, the company iRobot developed the PackBot robot in the 1990s. After 9/11 and Fukushima, PackBot helped search the rubble. In Iraq and Afghanistan, it assisted in defusing booby traps.

While these examples show that AI has been used in a variety of ways during and since World War II, the technology did not fundamentally change the nature or form of war. Rather, it enhanced existing tactical and logistical processes: message decryption, supply-chain management, bomb disposal.

The same applies to the use of AI on today's battlefields. The Israel Defense Forces use AI-assisted detection and recommendation systems to identify Hamas targets. The Ukrainian army uses AI-assisted drones that autonomously detect and attack targets. In hybrid warfare, Russia uses AI for deepfakes in disinformation campaigns, just as non-state terrorist groups such as al-Qaeda and ISIS use generative AI for propaganda and recruitment.

AI is a general-purpose technology. It can embed itself in any digital system, amplifying it and making it more efficient. But disinformation, propaganda, recruitment, and bombing are not new inventions at the tactical level.

Drone footage of an area in the Donetsk region.

Ethical problems

The numerous ethical problems surrounding AI are often cited to claim that AI is rapidly changing not just the form but even the nature of war. That is questionable. New technologies can certainly make war more brutal, but in doing so war merely realizes its very nature as physical coercion: it becomes more intense, not fundamentally different.

With AI, for example, the question arises whether automated weapons systems can distinguish between civilians and combatants, a distinction central to international humanitarian law. This matters all the more because AI is known to make simple errors in image recognition; its decisions are not always reliable.
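How little it can take to flip an automated classification can be shown with a toy example. The following Python sketch uses a made-up linear classifier with invented weights and inputs; it stands in for no real system and only illustrates how a small change in input features can push a score across a decision boundary.

```python
import numpy as np

# Toy linear classifier: score = w . x + b.
# A positive score is read as "combatant", a negative one as "civilian".
# Weights, bias, and inputs are invented for illustration only.
w = np.array([0.9, -0.4, 0.2])
b = -0.05

x = np.array([0.10, 0.30, 0.20])      # original input features
print(w @ x + b)                      # approx. -0.04 -> "civilian"

# A small perturbation of the input flips the decision.
x_perturbed = x + np.array([0.05, -0.05, 0.05])
print(w @ x_perturbed + b)            # approx. 0.035 -> "combatant"
```

Real image classifiers are vastly more complex, but the underlying fragility is of the same kind: inputs near a decision boundary can be tipped by small, seemingly irrelevant changes.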

But humans have weaknesses too, for example the so-called automation bias: people place excessive trust in automated decision-making tools and often follow machines blindly. Moreover, if a machine acts autonomously, the question arises as to who is actually responsible for its actions: the developer and manufacturer of the machine, the commander, or even the machine itself?

Under international humanitarian law, responsibility lies with the commander. Yet even though this issue appears settled, legal problems remain. There is no international agreement, comparable to the treaties governing nuclear weapons, that uniformly regulates autonomous weapons systems. Legislation in this area remains the responsibility of individual states, and the EU's AI Act also reveals gaps when it comes to regulating autonomous weapons systems.

These still-unresolved ethical and legal issues invite threatening fictional scenarios. There is talk, for example, of a singularity battlefield, a kind of hyperwar in which only machines make decisions, resulting in completely automated warfare.

However, it is highly questionable whether such a scenario will ever occur. According to Clausewitz, war is always simple and logical on paper: the use of maximum force is sufficient to subdue the enemy. Taken to its extreme, AI would therefore be used mercilessly, without any handbrake.

In reality, things are different. War is characterized by ignorance and uncertainty, by coincidence and accident. A fog surrounds war, frictions constantly arise, and resistance delays the execution of plans: misunderstandings, weather, dangers, mistakes.

Nothing is more dangerous than war; its course always remains unpredictable. Uncontrolled escalation harms one's own ranks and political interests. A singularity battlefield would be militarily, politically, and socially suicidal, just like a war fought with strategic nuclear weapons.

The dangers of AI are real, they are already present, and they will increase in the future. However, neither the nature nor the form of war currently appear to be significantly changed by them. Rather, AI will be integrated into existing tactical procedures and ethical and legal regulations.

If Eric Schmidt, in particular, wants to see the form or nature of war changed, it is probably because his new startup, White Stork, specializes in AI-assisted military drones; his interest lies in armament and profit maximization. Where he is right, however, is that AI is here to stay in the military.

Olivier Del Fabbro works as a senior assistant at the Chair of Philosophy at ETH Zurich. His interests include complex systems and artificial intelligence, the philosophy of medicine, and the philosophy of war.
