What can 'RoboCop' teach us about robot ethics?


February 19, 2014

The recent remake of the classic action film "RoboCop" provides a neat case study of how to get robot ethics completely wrong, argues Miles Brundage, a doctoral student at ASU’s Consortium for Science, Policy and Outcomes.

Brundage observes that the robots designed by OmniCorp, a fictional corporation in the world of the film, routinely violate the five ethical principles for robotics developed by researchers at the United Kingdom’s Engineering and Physical Sciences Research Council in 2010-2011. These principles, designed to keep humans safe from the robots we create to help us, include:

• Robots should not be designed as weapons, except for national security reasons.
• Robots should be designed and operated to comply with existing law, including privacy.
• It should be possible to find out who is responsible for any robot.

Brundage concludes by considering the broader implications of "RoboCop’s" fictional world: “With Google reportedly setting up an ethics board to address the societal aspects of the AI technologies it’s developing, RoboCop’s release and the issues it touches on are timely. It may not win any awards, but it does, like some of the best science fiction, present a vivid demonstration of the sort of future we should try to avoid.”

To learn more about what "RoboCop" can teach us about designing robots that enrich rather than endanger our lives, read the full article at Future Tense.

Future Tense is a collaboration among ASU, the New America Foundation and Slate magazine that explores how emerging technologies affect policy and society.

Article source: Slate magazine
