
Abstract and Keywords

This chapter focuses on the production of normative codes or standards in response to ethical issues concerning artificial intelligence (AI). Codes can be a useful part of ethics, but they have limits and dangers. Standards can be especially useful in the technical achievement of goals and in exploring possibilities. However, codes of ethics are embedded within far wider questions of value: values that may not be explicitly included in the codes themselves, but that are assumed or referenced within the broader societal values and norms in which the codes are nested. These values themselves can evolve. As such, when it comes to AI, people need to be prepared for even larger shifts in how they think of value. Moreover, given the power of AI to augment or replace human thought and human agency, people need to consider basic philosophical questions about human nature in order to assess how humans might fare in response to AI.

Keywords: normative codes, normative standards, artificial intelligence, ethics, codes of ethics, societal values, societal norms, human nature, human thought, human agency
