
Call to ban killer robots in wars

Image caption: This is not about terminator robots but "conventional weapons systems with autonomy" (attack drone artwork; image source: Getty Images)

A group of scientists has called for a ban on the development of weapons controlled by artificial intelligence (AI).

It says that autonomous weapons may malfunction in unpredictable ways and kill innocent people.

Ethics experts also argue that it is a moral step too far for AI systems to kill without any human intervention.

The comments were made at a scientific meeting in Washington DC.

Human Rights Watch (HRW) is one of the 89 non-governmental organisations from 50 countries that have formed the Campaign to Stop Killer Robots, to press for an international treaty.

Among those leading efforts for the worldwide ban is HRW's Mary Wareham.

"We are not talking about walking, talking terminator robots that are about to take over the world; what we are concerned about is much more imminent: conventional weapons systems with autonomy," she told ³ÉÈË¿ìÊÖ News.

"They are beginning to creep in. Drones are the obvious example, but there are also military aircraft that take off, fly and land on their own; robotic sentries that can identify movement. These are precursors to autonomous weapons."

Ryan Gariepy, chief technology officer at Clearpath Robotics, backs the ban proposal.

His company takes military contracts, but it has denounced AI systems for warfare and stated that it would not develop them.

"When they fail, they fail in unpredictable ways," he told ³ÉÈË¿ìÊÖ News.

"As advanced as we are, the state of AI is really limited by image recognition. It is good but does not have the detail or context to be judge, jury and executioner on a battlefield.

"An autonomous system cannot make a decision to kill or not to kill in a vacuum. The de-facto decision has been made thousands of miles away by developers, programmers and scientists who have no conception of the situation the weapon is deployed in."

According to Peter Asaro, of the New School in New York, such a scenario raises issues of legal liability if the system makes an unlawful killing.

"The delegation of authority to kill to a machine is not justified and a violation of human rights because machines are not moral agents and so cannot be responsible for making decisions of life and death.

"So it may well be that the people who made the autonomous weapon are responsible."
