Why ‘killer robots’ are becoming a real threat – and an ethics test

Nations are busy putting guns into the hands of robots.

Generals find that attractive for many reasons. Smart machines can take on the dull and dangerous work that soldiers now do, such as surveillance and mine removal, without getting bored or tired. In combat, they can reduce the costs of war, not only in dollars but also in human casualties.

But many governments and artificial intelligence (AI) researchers are worried. The threat at present is not that robots are so smart that they take over the world Hollywood-style. It’s that today’s robots won’t be smart enough to handle the new weapons and responsibilities they’re being given. And because of the rapid advances in AI, experts worry that the technology will soon cross a line where machines, rather than humans, decide when to take a human life.

This was supposed to be the year when governments began to address such concerns. After three years of discussion about limits on military robots, some 90 countries were expected in August to formalize the debate under the aegis of the United Nations. And in the United States, the Trump administration was due to update an expiring Obama-era directive on autonomous weapons.

Instead, the UN canceled the inaugural meeting set for this month because Brazil and a few other smaller countries had not paid their contributions to the UN Convention on Certain Conventional Weapons. The lack of payment also imperils a meeting scheduled for November. Meanwhile, due to an administrative change, the Pentagon has eliminated its deadline, leaving the current directive in place despite criticism that its language is too ambiguous.

The private sector has stepped into this vacuum, warning in an open letter to the UN on Aug. 21 that “lethal autonomous weapons threaten to become the third revolution in warfare [following firearms and nuclear weapons]. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

The letter, signed by 116 founders of robotics and artificial intelligence companies from 26 countries, asks the new UN group on autonomous weapons to “find a way to protect us all from these dangers.”

WHAT’S UNDER WAY

Killer robots – as opponents of the technology like to call them – are already being tested and deployed. For example:

• On the southern edge of the Korean Demilitarized Zone, …
