
Mint, New Delhi Leslie D'Monte column
[June 01, 2013]

Jun 01, 2013 (Mint - McClatchy-Tribune Information Services via COMTEX) -- Humans have a sense of right and wrong. They can make moral decisions, and if humans are kept out of the loop, as would be the case with autonomous robots, the result could be indiscriminate killing, especially on battlefields.

Moreover, if humans err in judgement, they can be pulled up by the courts. They can be punished.

But what does one do with autonomous robots, unless states evolve some legislation? This line of thinking is central to any discussion on autonomous robots, even as the prospect of replacing humans with robots in war-like situations, to reduce human casualties, remains attractive.

So it's no surprise that the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, has called for a global pause in the development and deployment of lethal autonomous robots (LARs), to allow "serious and meaningful international engagement on this issue before we proceed to a world where machines are given the power to kill humans". "While drones still have a 'human in the loop' who takes the decision to use lethal force, LARs have on-board computers that decide who should be targeted," said Heyns in a 30 May media statement.

At an April presentation to the UN Human Rights Council, Heyns said if machines take the decision "on who is alive or dies", it "may make it easier for states to go to war; and raises the question whether they can be programmed to comply with the requirements of international humanitarian law, especially the distinction between combatant and civilians and collateral damage".

Heyns' report followed the April launch of the "Campaign to Stop Killer Robots" by an international coalition of non-governmental organizations (NGOs), including Human Rights Watch, working to ban fully autonomous weapons.

Heyns acknowledges that while much of the development of LARs is shrouded in secrecy, robots with full lethal autonomy have not yet been deployed. However, robotic systems with varying degrees of autonomy and lethality are currently in use, and Heyns has cited many such examples in his report.

Take the case of unmanned combat aerial vehicles (UCAVs), commonly known as drones, which enable those who control lethal force not to be physically present when it is deployed, but rather to activate it while sitting behind computers in faraway places, staying out of the line of fire. LARs, if added to the arsenals of states, would add a new dimension to this distancing, in that targeting decisions could be taken by the robots themselves, argues Heyns.

The US Phalanx system for Aegis-class cruisers, for instance, automatically detects, tracks and engages anti-air warfare threats such as anti-ship missiles and aircraft. The US Counter Rocket, Artillery and Mortar (C-RAM) system can automatically destroy incoming artillery, rockets and mortar rounds.

Israel's Harpy is a "fire-and-forget" autonomous weapon system designed to detect, attack and destroy radar emitters.

The UK Taranis jet-propelled combat drone prototype can autonomously search for, identify and locate enemies, but can engage a target only when authorized by mission command. It can also defend itself against enemy aircraft.

The Northrop Grumman X-47B is a fighter-size drone prototype commissioned by the US Navy to demonstrate autonomous launch and landing on aircraft carriers, as well as autonomous navigation.

The Samsung Techwin surveillance and security guard robots, deployed in the demilitarized zone between North and South Korea, detect targets through infrared sensors. They are currently operated by humans but have an "automatic mode".

Jeffrey S. Thurnher, a member of the faculty in the international law department at the US Naval War College, said in a December media statement issued by the National Defense University Press: "...The US will likely face asymmetric threats in military campaigns of the future. Whether the threat is the substantial jamming and cyber-attack capabilities of the People's Republic of China or the legions of swarming Iranian patrol boats, LARs may provide the best way to counter it. LARs have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communication links have been severed. Autonomous targeting technology will likely proliferate to nations and groups around the world. To prevent being surpassed by rivals, the US should fully commit itself to harnessing the potential of fully autonomous targeting." He pointed out that from 2000 to 2010, the number of US unmanned aerial vehicles (UAVs) grew from fewer than 50 to over 7,000, "with similarly astounding increases among land- and sea-based unmanned systems".

But he added a rider that LARs should be required to have some version of a human override, sometimes referred to as software or ethical "brakes".

"The ethical problem is that no autonomous robots or artificial intelligence (AI) systems have the necessary skills to discriminate between combatants and innocents," argued Noel Sharkey, a Professor of AI and Robotics and Professor of Public Engagement EPSRC Senior Media Fellow at the University of Sheffield, in an October 2008 paper.

Kenneth Anderson of American University's Washington College of Law and Matthew C. Waxman of Columbia Law School argued in a 28 April paper that advances in sensor and computational technologies will supply "smarter" machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one's own personnel, as well as civilian persons and property, will demand continuing research, development and deployment.

"The process will be incremental because non-lethal robotic systems (already proliferating on the battlefield) can be fitted in their successive generations with both self-defensive and offensive technologies. As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe -- but as both the decision-making power of machines and the tempo of operations potentially increase, that human role will likely but slowly diminish. Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them -- US policy for resolving those dilemmas should be built on these assumptions," they said.

However, not all talk about robots needs to be grim.

Robots help in surgeries. They help the elderly and act as companions. They automate tasks and ease the burden on humans. Robots such as snake bots, aerial vehicles and rovers today search for and rescue earthquake survivors from debris, especially in areas humans can't reach. These robots also have names. Quince, for instance, is being deployed to sense chemical, biological, radiological or nuclear dangers in areas that firefighters can't reach.

Developed by the Chiba Institute of Technology and Tohoku University, it has a camera, a microphone, a position-sensitive detector (PSD), a laser range finder, Wi-Fi, and an infrared thermography camera and carbon-dioxide sensors to locate survivors by detecting body heat and breathing. It can move on its wheels at about 5.2 feet per second.

India's Defence Research and Development Organisation (DRDO) has its own "Sanjeevani" (meaning 'life giver') for similar situations, the initial version of which was used to detect survivors after the Gujarat earthquake in 2001. Available as a portable pack, the robot's probe head can be used in air, water or mud.

These robots are categorized as "assistive" robots, similar to those that help humans by cleaning rugs (like Roomba), washing windows, or doing dangerous, repetitive tasks on industrial shop floors to minimize human accidents.

However, these still look like machines, while another category of "social" humanoid robots (even androids) has started looking and behaving more like humans. Some even serve as human companions, like eldercare bots and nanny bots, and there has been talk of "marrying robot companions" too. To be categorized as "social", these robots need autonomy (not remote control). Examples include Kismet, Leonardo, Maggie, Domo, Mertz, AIBOs, Furbies, Cog and Paros. Honda's ASIMO, also referred to as a humanoid robot, even celebrated its 10th birthday on 31 October 2010.

Other examples include Vikia from Carnegie Mellon University (CMU), which had a flat-screen monitor displaying an animated computer-graphics model of a female face. Then came Valerie, touted as one of the world's first storytelling "roboceptionists", with sensors alerting it to the presence of people, to whom it would offer assistance and directions. Valerie also spent time on the telephone, imitating a human receptionist. Tank was a later version of Valerie.

As robots have gained a more human-like appearance and become more social with advances in AI over the past 50 years, new challenges have also emerged.

Sherry Turkle, an MIT professor, points out one such dilemma in her book, Alone Together: Why We Expect More from Technology and Less from Each Other. "...These days, insecure in our relationships and anxious about intimacy, we look to technology for ways to be in relationships and protect ourselves from them at the same time...it can happen when interacting with a robot." In sum, robots can be social, lovable and lethal, too. But human beings ought to have a say in their behaviour. Else, things could go out of hand.

(c) 2013 the Mint (New Delhi). Visit the Mint (New Delhi) at www.livemint.com. Distributed by MCT Information Services.
