Moral decision making in autonomous systems: Enforcement, moral emotions, dignity, trust, and deception

Ronald Craig Arkin, Patrick Ulam, Alan R. Wagner

Research output: Contribution to journal › Article

67 Scopus citations

Abstract

As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. This includes: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) concluding with an approach towards the maintenance of dignity in human-robot relationships.

Original language: English (US)
Article number: 6099675
Pages (from-to): 571-589
Number of pages: 19
Journal: Proceedings of the IEEE
Volume: 100
Issue number: 3
DOIs
State: Published - Mar 1 2012

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering