Abstract and Keywords
This chapter focuses on the production of normative codes or standards in response to ethical issues concerning artificial intelligence (AI). Codes can be a useful part of ethics, but they have limits and dangers. Standards can be especially useful for achieving technical goals and exploring possibilities. However, codes of ethics are embedded within far wider questions of value—values that may not be explicitly included in the codes themselves, but that are assumed or referenced in the broader societal values and norms within which the codes are nested. These values themselves can evolve. As such, when it comes to AI, people need to be prepared for even larger shifts in how they think about value. Moreover, given the power of AI to augment or replace human thought and human agency, people need to consider basic philosophical questions about human nature in order to assess how humans might fare in response to AI.