Arms Control for Killer Robots
As dangerous as nuclear weapons, but better at chess. It’s time for a programmable Geneva Convention
Science fiction got one thing right: ‘killer robots’ (properly called Lethal Autonomous Weapon Systems, or LAWS) are not easy to stop. Ten years ago, military uses for artificial intelligence were sporadic and experimental. Ten years from now, it’s likely that every facet of modern warfare will incorporate autonomous machine thinking.
Where reality departs from science fiction is the cause of this rapid automation: it is we humans who are removing ourselves from the nasty business of fighting wars. The world’s major powers are locked in an arms race over military AI, one that is driving rapid advances in increasingly sophisticated technology.
Out of the Loop (and a few other problems)
Critics warn that humanity is teetering on the edge of a dangerous precedent, with technology nearly capable of letting LAWS make their own targeting decisions without human input (a state known as having humans “out of the loop”). The argument goes that letting robots make these decisions crosses a moral threshold by empowering a machine to decide whether a human lives or dies.