Abstract:

This short paper explores the possibility that the appropriate decision procedures for artificial moral agents (AMAs) to utilize in their ethical decision-making are importantly different from the ones that are appropriate for human moral agents. It argues that the appropriate type of decision procedure for a given moral agent depends on the nature of the agent’s capacities, and thus certain kinds of AMAs should employ different decision procedures than the ones humans should use. If this conclusion is correct, then it has significant consequences for a number of issues, including the design of ethical artificial intelligence, the paradox of hedonism (and related puzzles), and the concept of virtue as it relates to AMAs. It is concluded that our commonsense views about certain ethical topics should be reconsidered in light of the relevant differences between artificial and human moral agents.