US Military Wants Battlefield Robots with Morals
The United States military will award $7.5 million in grant money over five years to university researchers to explore how to build a sense of moral consequence into autonomous robotic systems.
By Steve Crowe - Filed May 15, 2014

The debate over the morality of autonomous "killer robots" continues. In early May 2014, critics urged Canada to ban autonomous weapons, and the United Nations (UN) is currently meeting in Geneva with the goal of reaching an international agreement prohibiting killer robots.

Now the US military is wading into the debate, funding research to develop autonomous robots with ethical reasoning, reports Defense One. The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

"Even though today's unmanned systems are ‘dumb' in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we've seen before," Paul Bello, director of the cognitive science program at the Office of Naval Research told Defense One. "For example, Google's self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake."

The US military prohibits lethal, fully autonomous robots. Semi-autonomous robots, meanwhile, can't "select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator," even if contact with the operator is cut off, according to a 2012 Department of Defense policy directive.

"Even if such systems aren't armed, they may still be forced to make moral decisions," Bello said. For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning. "While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can't take the idea of in-theater robots completely off the table," Bello said.

Source: Defense One

