News Article Release
Professor Addresses Ethical Issues Posed by Military Robotics
Posted on: January 24, 2013 12:00 EST by MC2 Alexia Riveracorrea
While many countries are rushing to develop military robots capable of autonomous combat operations, one robotics expert is considering the legal and moral implications of this technology and sharing his research with future military leaders.
Professor Noel Sharkey of Sheffield University, United Kingdom, spoke to Naval Academy midshipmen, faculty and staff about ethical issues posed by military robotics during a lecture Jan. 23.
He focused on the concept of robots firing weapons without a human in the decision chain.
“Autonomous robots can’t reason or distinguish civilians from combatants,” said Sharkey.
This violates the principles of distinction and proportionality set down in the Geneva Conventions, two cornerstones of just war theory, he said.
Sharkey has worked closely with policy makers and the military to create awareness about the limitations of artificial intelligence and the dangers of automated warfare. As a professor of AI robotics and public engagement, Sharkey has moved freely across academic disciplines, lecturing in engineering, philosophy, psychology, cognitive science, linguistics, artificial intelligence, computer science and robotics. He holds a doctorate degree in experimental psychology. Sharkey’s core research interest is now in the ethical application of robotics and AI in areas such as military, child care, elder care and law enforcement.
“I like talking to the midshipmen because they are the most junior officers,” said Sharkey. “I like to put these ideas to young people because essentially they are the ones who are going to have to deal with this.”
Sharkey became concerned about these issues while researching plans outlined by the U.S. and other nations to robotize their military forces. More than 4,000 semi-autonomous robots are already deployed by the U.S. in Iraq. Other countries, including several European nations, Canada, South Korea, South Africa, Singapore and Israel, are developing similar technologies, he said.
“Governments and robotics engineers should re-examine current plans, and perhaps consider an international ban on autonomous weapons for the time being,” Sharkey said. “Having robots decide for themselves who to kill is too dangerous, and they are just not adequate to do that.”