'Very urgent': Activists want global treaty to ban killer robots by 2019

From CBC - April 10, 2018

Pitted against the glacial pace of the UN's discussion process, activists hoping for an international ban on killer robots have repeatedly been left fuming and frustrated.

Pitted against each other on the battlefield, lethal autonomous weapons systems, or LAWS, could in short order cause "absolute devastation," according to one of those activists.

That scenario, says the activist, Prof. Noel Sharkey of the University of Sheffield, is not as farfetched as it might have been even five years ago, when he helped found the Coalition to Stop Killer Robots, a group of 64 NGOs dedicated to the cause.

And it's that belief that brings him and other academics, scientists and activists back to Geneva this week to yet more discussions involving more than 80 countries.

Their hope is that the UN process moves from discussion to formal negotiations next year, producing a pre-emptive treaty banning killer robots by 2019.

The activists' chief concern is not the military's delegation of tasks to autonomous machines, which can be useful in search and rescue, bomb disposal and myriad other tasks too dangerous or too onerous for humans.

Instead, the coalition and others pushing for a treaty specifically want to ban LAWS with the "critical functions" of selecting a target and then killing, without meaningful human control.

More autonomy

"I think it's very urgent that we do this now," says Sharkey, describing the UN process as "frustrating." Countries that do not want a ban just keep slowing it down, he says.

"Our mandate is to get a treaty for emerging weapons, so if they slow us down long enough, they will have emerged and we will have no chance."

Thus far, no fully autonomous weapons are known to have been unleashed on the battlefield, although the development of precursors is well underway, with growing degrees of autonomy and intelligence, even the ability to learn.

In the video below, the Coalition to Stop Killer Robots makes its case for banning autonomous weapons.

Recently, such development has stirred controversy. At Google, staff wrote an open letter to management last week demanding it suspend work on a U.S. military project involving drones and artificial intelligence.

And also last week, dozens of scientists and academics wrote a letter to the Korea Advanced Institute of Science and Technology in Seoul threatening a boycott over a project developing artificial intelligence for military use. The university has since promised it would not produce LAWS.

Still, Sharkey goes so far as to describe what is happening now as a new arms race, with militaries and companies competing to acquire increasingly autonomous and smarter weapons.

Since the UN discussions started back in 2014, lightning-fast advances in the fields of robotics and artificial intelligence have made it possible to build LAWS in short order, according to experts.

Beyond science fiction

"You could build an autonomous weapon system with open source technology now; the question is if it's good enough to meet our standards as advanced nations," says Ryan Gariepy, CEO of Clearpath Robotics, a Canadian firm that was the first company to endorse a ban on killer robots.

So in the near future, says Sharkey, battlefields could move too fast for the pace of human decision-making: "war starts automatically, 10 minutes later, there's absolute devastation, nobody's been able to stop it."

That's most dangerous of all, he says.

"I am not talking about science fiction here. I am not talking about AI [artificial intelligence] suddenly becoming conscious," he said in an interview.

"I am talking about stupid humans developing weapons that they cannot control."

There are ample examples of the growing role of autonomous functions in military and police operations.

Autonomous fire

Put aside for a moment the Terminator idea of human-like soldiers and consider the Samsung Techwin SGR-A1.

It patrols the South Korean border and has the ability to autonomously fire if it senses an infiltrator. Right now, it prompts an operator first.

Or what about the Russian semi-autonomous T-14 Armata tank, or the British BAE Systems' Taranis aircraft, both human-controlled but both also capable of semi-autonomous operation. Kalashnikov has also built some prototypes with "neural networks" modelled on the human brain.


Compromise instead of treaty?


Continue reading at CBC »