Viewpoint: Should we trust machines to fight wars?

Autonomous weapons powered by artificial intelligence are portrayed as "the third revolution in warfare," following gunpowder and nuclear arms. The moral stakes are high as autonomous systems reshape the world’s arsenals.

Will these weapons challenge our thresholds for ethics and accountability? Most likely. But let’s explore a few of the considerations, moral and legal, through the prism of how people will be increasingly removed from battlefield decision-making as conflict unfolds at machine speed.

Militaries define autonomous weapons as "systems that, once activated, can select and engage targets without further intervention by a human operator."

Over the longer term, these so-called "intelligentized" systems will evolve to independently surveil, spot, identify, engage, and precisely target the adversary, and, proponents argue, to do so with better ethical outcomes than when people are at the controls.

The core of these weapons’ “brain” is advanced artificial intelligence. The marriage of AI algorithms, deep machine learning, and massive data sets with sophisticated technology will transform the world’s arsenals. Scientific progress, and caution over removing human operational control, may zigzag, but the allure of intelligent, nimble, precise, fast, and cheaper systems will prove irresistible.

Russian President Vladimir Putin made no bones about this transformation, reportedly saying in a 2017 broadcast that "whoever becomes the leader in [AI] will become the ruler of the world." Hyperbole? Maybe; maybe not. Regardless, the force-multiplying intersection of artificial intelligence and weapons functionality will prove consequential.

The drive to prevent adversaries from acquiring a monopoly on autonomous weapons will lead to the competitive leapfrogging of weapons design with which we’re historically familiar: a technological vaulting across the military domains of land, ocean surface, undersea, air, and space. Nations will feel compelled not to cede ground to rivals.

All the more reason we can’t lose sight of the ethical issues in this arena, where utilitarian calculus takes the form of measures built into the decision loop to avoid or minimize harm to civilian lives and property. Some people, indeed, view automated weapons as an existential threat.

The question often asked is: Ought we trust machine autonomy to do war-fighting right, upholding our values? Perhaps the more pertinent question, however, is this: Ought we continue to trust people to do war-fighting right, given the unpredictability of human decision-making and behavior?

The assumption here is that humans are prone to errors exceeding those of a smart autonomous weapon: a human controller is more likely to make misjudgments and miscues resulting in civilian casualties or in attacks against hospitals, schools, homes, and houses of worship. Modern history is replete with such incidents, in violation of humanitarian law.

Machine precision, processing speed, analytical scope, and the ability to deconstruct complexity, handle war’s chaotic nonlinearity, and cut through its fog and friction all intersect with just-war doctrine, which governs how war may be conducted according to moral and legal principles. All of these matter greatly.

Human agency and accountability will shift to decisions about how to design, program, and deploy autonomous weapons, rather than resting on visceral decisions made by combatants on the battlefield. This sets new grounds and precedents as to who is responsible for outcomes.

Accountability will also be bound by the Martens Clause, restated in the 1977 Additional Protocol I to the Geneva Conventions, which says this: “[C]ivilians and combatants remain under the protection and authority of . . . international law derived from established custom, from the principles of humanity and from the dictates of public conscience.”

There are no moral take-backs. Avoiding faulty calls, with their unintended harm, is critical in calculating the appeal of replacing hands-on humans with the unbiased automaticity of machines. Autonomous weapons will outperform humans in consistently implementing the ethical and legal imperatives that determine whether conflicts are fought justly.

Such imperatives include discrimination, targeting only combatants; proportionality, keeping force commensurate with the military advantage sought; accountability of participants; and necessity, choosing the least-harmful military means in terms of weapons, tactics, and amount of force applied.

Treaty bans on these systems’ development, deployment, and use likely won’t stick, given furtive workarounds and the enticement of geostrategic advantage. Regulations developed by multidisciplinary groups, including ethicists alongside technologists, policymakers, and international institutions, are expected to slow the advance only spottily.

Ethics must be scrupulously factored into these calculations from the start — accounting for "principles of humanity and the dictates of public conscience" — so that nations make policy with their moral charters intact.

Keith Tidman is an author of essays on social, political and scientific opinion.