Command and [Robot] Control: Why Everyone May Be Getting Killer Robots Wrong

Policy arguments over autonomous weapons boil down to a disagreement over how best to achieve “meaningful human control” over weapon systems in hazardous scenarios. One side argues that such control is best achieved by adding a machine into the mix, insisting that the only way to maintain meaningful human control on future battlefields is to take the human, to varying degrees, out of the picture. The other side frets about the difficulty of controlling a machine in battle and argues that automation will be the last step before any semblance of humanity is severed from warfare.

Both may be wrong. What if autonomous systems aren’t as militarily effective as everyone believes them to be? Military weapons are ultimately instruments for achieving political goals. If they cannot be controlled, channeled, and guided, military strategists and political leaders think twice about relying on them. If autonomous weapons cannot be controlled in a way that accomplishes strategic objectives, then regulating them heavily, or out of existence altogether, may be easier than many believe. If they can be wielded as meaningful tools of political policy, however, the skies of future battlegrounds will be dark with the shadows of drone swarms. The answer lies somewhere between dud and killer app; the challenge is finding it.