Abstract and Keywords
This chapter argues that international human rights standards offer the most promising basis for developing a coherent and universally recognized set of standards applicable to the full range of normative concerns currently falling under the rubric of AI (artificial intelligence) ethics. It then outlines the core elements of a human rights–centered design, deliberation, and oversight approach to the governance of AI. This approach requires that human rights norms be systematically considered at every stage of system design, development, and deployment, drawing upon and adapting technical methods and techniques for safe software and system design, verification, testing, and auditing to ensure compliance with human rights norms. The regime must be mandated by law, and it relies critically on external oversight by independent, competent, and properly resourced regulatory authorities with appropriate powers of investigation and enforcement. This approach will not, however, ensure the protection of all ethical values adversely implicated by AI, since human rights norms do not comprehensively cover all values of societal concern. Accordingly, a great deal more work remains to be done to develop techniques and methodologies that are robust and reliable yet practically implementable across the wide and diverse range of organizations involved in developing, building, and operating AI systems.