San Francisco canceled its killer robot plan

A week is a long time in politics, especially when the question at hand is whether it is acceptable to grant robots the right to kill humans on the streets of San Francisco.

At the end of November, the city’s Board of Supervisors voted to give the local police the right to kill a criminal suspect using a remote-controlled robot, if officers believe that failing to act would endanger members of the public or the police. The rationale behind the so-called “killer robot plan” was that it could stop atrocities like the 2017 Mandalay Bay shooting in Las Vegas, which killed 60 victims and injured more than 860 others, from happening in San Francisco.

Yet just over a week later, those same lawmakers reversed their decision, sending the plan back to a committee for further review.

The reversal was partly due to the huge public outcry and lobbying that followed the initial approval. Concerns were raised that removing humans from key decisions about life and death was a step too far. On December 5, a protest took place outside San Francisco City Hall, and at least one supervisor who initially approved the policy later said he regretted his choice.

“Despite my own deep concerns about the policy, I voted for it after additional guardrails were added,” Gordon Mar, supervisor for San Francisco’s Fourth District, tweeted. “I regret that. I’ve grown increasingly uncomfortable with our vote and the precedent it sets for other cities without as strong a commitment to police accountability. I do not think making state violence more remote, distanced, and less human is a step forward.”

The question being asked by supervisors in San Francisco is fundamentally about the value of a life, says Jonathan Aitken, a senior university teacher in robotics at the University of Sheffield in the UK. “The act of applying lethal force is always a profound consideration, both in police and military operations,” he says. Those deciding whether or not to pursue an action that could take a life need significant contextual information to make that judgment thoughtfully, and that context may be lacking when operating remotely. “The small details and elements are crucial, and the spatial separation removes them,” says Aitken. “Not because the operator cannot take them into account, but because they may not be contained in the data presented to the operator. This can lead to errors.” And errors, when it comes to acts of lethal force, can literally mean the difference between life and death.

“There are many reasons why arming robots is a bad idea,” says Peter Asaro, an associate professor at The New School in New York who studies the automation of policing. He believes the move is part of a broader push to militarize the police. “You can construct a potential use case where it’s useful in the extreme, like a hostage situation, but there’s all sorts of mission creep,” he says. “It hurts the public, and especially communities of color and poor communities.”

Asaro also dismisses the suggestion that the guns on robots could be swapped for bombs, saying the use of bombs in a civilian setting could never be justified. (Some police forces in the United States already deploy bomb-carrying robots; in 2016, Dallas police used one to kill a suspect in what experts called an “unprecedented” moment.)
