After more than 250 posts, I think “Reblog It” is a nice alternative. This is from The Duck of Minerva.
My comment is: Can we take the way we would program robots to be humanitarian (normative) and provide that information to the non-profit sector in developing countries to advance humanitarianism (empirical)?
The “Evitability” Of Killer Robots
A common refrain from critics of the campaign to ban autonomous weapons is that these weapons are “inevitable.” If that’s true, then efforts to mitigate or pre-empt their use are not just a waste of time but a dangerous distraction from the real issue: staying ahead in an impending robot arms race or, at least, making sure that the weapons (which will definitely be built and deployed) have the best “humanitarian” programming possible. My colleague Michael Horowitz made the former argument recently in Foreign Policy. Kenneth Anderson and Matthew Waxman (Waxman is slated to speak tomorrow at the Experts’ Meeting on Autonomous Weapons in Geneva) are known for making the latter argument. According to these commentators, resistance is futile.
But as I’ve written before, resistance is actually not entirely futile. Indeed, a fair amount of humanitarian disarmament history demonstrates that just because countries can develop and deploy a weapon doesn’t mean they necessarily will. Mines Action Canada points out the same in a Memorandum to CCW Delegates circulating at the meeting. Therein, they remind diplomats that a barely-deployed weapon has been banned before: blinding lasers were banned in 1995 in response to humanitarian concerns about the superfluous injury caused by such weapons, before they were widely used, despite the fact that many nations had invested in blinding laser technology and that the weapons had considerable military utility.
MAC’s memo focuses on blinding lasers because of the diplomatic analogies between today’s meeting and that earlier CCW process, but there are even earlier examples of emerging military technologies being banned early and quickly for humanitarian reasons. Exploding bullets, whose horrors were evident after the American Civil War, were banned by the St. Petersburg Declaration of 1868, the first multilateral agreement to prohibit a weapon. Expanding or “dum-dum” bullets, designed to flatten upon impact and thus create superfluous wounds, were developed in the late 19th century and quickly banned outright by the Hague Convention of 1899 before they were widely used on the international battlefield (though they had been used in British colonial wars). According to Robin Coupland and Dominique Loye, the ban has been widely adhered to, though according to Wikipedia expanding bullets remain in use for hunting and (perhaps ironically) in police operations. At any rate, they too represent a case of an emerging military technology with clear utility that was abandoned through international declaration before it was widely used.
In short, no military technology is “inevitable.” And neither are killer robots. Whether we plunge forward simply because we can or establish a precautionary principle or red-line ban against their use will be entirely a matter of political will and imagination.