Artificial Intelligence on the Front Lines
Autonomous technologies are rapidly advancing in multiple industries; from transport to manufacturing, artificial intelligence is gaining steam. Military AI, however, faces a unique set of challenges: the lethal capabilities of drones pose moral, ethical, and legal questions. How does the international community regulate the use of AI in conflicts?
Lethal autonomous weapon systems (LAWS)—better known as killer robots—do not exist yet, but as the Open Letter from AI and Robotics Researchers points out, even their prospective existence is of great concern. But the concern about the new weapons systems may be misguided because it does not encompass the threat posed by the artificial intelligence (AI) programs behind their operation. International efforts to ban LAWS have been side-tracked by the concept of killer robots, when it is the intelligence of the robots that is the real issue.
The true nature of LAWS is contested. Working definitions range from remotely operated drones through to machines that can move independently, select targets without human intervention and discharge lethal materiel at will. The scale and variety of definitions has much to do with various interest groups seeking definitions that either patently breach or comply with international humanitarian law (IHL).
A LAWS definition that breaches IHL will more easily convince the UN that the weapons need to be banned. Arguments for pre-emptively banning LAWS stem from the imagined inevitability that these weapons will breach IHL. However, as the French working paper at the April 2016 Meeting of Experts on LAWS explained, the assumption cannot be dismissed that LAWS may be better at complying with IHL than humans. These machines will not feel fear, panic or a desire for vengeance, and that may make them far more effective at respecting human rights than breaching them.
Broadly speaking, there are two types of AI, narrow AI and general AI. Narrow AI attempts to pre-empt every conceivable variable within an anticipated context. The problem is that humans can’t conceive of every variable ahead of time and definitely can’t program responses for them. In conflict the context changes rapidly—goals change, targets change, objectives change—humans are good at dealing with this, narrow AI is not. Narrow autonomous systems therefore have humans in the decision making loop (flying the drone) or on the decision making loop (able to disengage targeting). For example, the program enabling a drone to maintain flight during a communications cut-out is bespoke engineered for each type of drone.
Over the next 5 to 10 years, the expectation is that many military systems will stop being narrowly programmed for specific purposes and will begin to use general AI. General AI uses artificial neural networks and reward-based learning to train AI in specific capabilities. It attempts to mimic the human brain and over the past two years it has become increasingly effective. More importantly, general AI can learn whereas narrow AI cannot.
The movement towards general AI is going ahead while international negotiations continue over LAWS and the Convention on Certain Conventional Weapons. The argument is that if blinding lasers and anti-personnel mines can be banned, so should LAWS. This argument is intimately bound up with IHL; it’s emotive and linked to a fear of killer robots.
One of the problems that has popped up in research into general AI has been termed the ‘reward hack’. Programmers set a reward structure for an AI and then let the intelligence devise the best way to reward itself, thereby achieving the programmer’s objectives more effectively and efficiently than they could imagine. For drones this is pretty simple, punish the AI if it crashes and reward it if it flies safely. It doesn’t take long for an AI to self-level a drone with this basic mechanism.
The ‘reward hack’ is an error that occurs when the AI finds a quick and simple way to achieve its reward contrary to the programmer’s intent. For example, a drone AI might decide that if it never takes off, it will never crash. The ‘reward hack’ becomes far more problematic in lethal systems: what happens if an AI ‘reward hacks’ how it kills or who it kills?
These are systems that constantly learn and constantly experiment with new techniques for achieving greater rewards. Even a system that had proven itself stable for 20 years could not be guaranteed against a ‘reward hack’ in a highly compromising way. More importantly, how do you reward a lethal system for killing humans in a way that can’t be hacked by that same system, let alone by a tactical cyber-attack that might interfere with the reward structure?
The movement to ban LAWS is just one step. An important discussion we need to have is how much autonomy should be delegated to general AI at all. There is a very strong argument that all AI should be banned from taking human life, and more specifically, humans should be banned from coding AI that is rewarded for killing humans. Such a ban would need to be across all platforms and encompass any reward structure designed for human targets. Such a ban would necessarily need to capture the AIs that will assist in the command and control of warfare, not just the operation of unmanned assets.
The emotive fear of killer robots will distract international efforts from the real issue. LAWS do not currently exist, general AI does. Any international regulatory movement should be aimed at the programs that operate equipment with lethal capabilities, rather than a holistic concept of killer robots. Quite simply, we need to regulate the AI, not the robot.
Thom Dixon is a councillor with AIIA NSW and a young leader with the Pacific Forum CSIS’s Young Leaders Program. He works as the project officer within the Office of the Deputy Vice-Chancellor at Macquarie University.
This article is published under a Creative Commons Licence and may be republished with attribution.