Giancarlo Frosio and Martin Husovec
This chapter summarizes recent developments in intermediary liability theory with special emphasis on the emergence of voluntary measures and private ordering. Looking at the legal liability rules always tells only half of the story. Legal rules are often only basic expectations which are further developed through market transactions, business decisions, and political pressure. Therefore, the real responsibility landscape is equally determined by a mixture of voluntary agreements, self-regulation, corporate social responsibility, and ad hoc deal-making. Accountability schemes can differ significantly, ranging from legal entitlements to request assistance in enforcement to entirely voluntary private-ordering schemes. This chapter provides a mapping of these basic approaches in order to illustrate the richness and trade-offs associated with such measures. Miscellaneous policy and enforcement tools, such as monitoring and filtering, graduated response, payment blockades and follow-the-money strategies, private Domain Name System (DNS) content regulation, and online search manipulation, are discussed to complement the typical legal liability view of the regulation of intermediaries. The discussion of these enforcement strategies will be framed within the investigation of notions such as market and private ordering, corporate social responsibility, assistance in enforcement by an innocent third party made accountable although not liable, and public deal-making.
Joshua A. Kroll
This chapter addresses the relationship between AI systems and the concept of accountability. To understand accountability in the context of AI systems, one must begin by examining the various ways the term is used and the variety of concepts to which it is meant to refer. Accountability is often associated with transparency, the principle that systems and processes should be accessible to those affected through an understanding of their structure or function. For a computer system, this often means disclosure about the system’s existence, nature, and scope; scrutiny of its underlying data and reasoning approaches; and connection of the operative rules implemented by the system to the governing norms of its context. Transparency is a useful tool in the governance of computer systems, but only insofar as it serves accountability. There are other mechanisms available for building computer systems that support accountability of their creators and operators. Ultimately, accountability requires establishing answerability relationships that serve the interests of those affected by AI systems.
The reach of privately ordered online content regulation is wide and deepening. It is deepening with reference to the internet’s protocol stack, migrating downward from the application layer into the network’s technical infrastructure, specifically, the Domain Name System (DNS). This chapter explores the recent expansion of intellectual property enforcement in the DNS, with a focus on associated due process and expressive harms. It begins with a technical explanation of the operation and governance of the DNS. It goes on to discuss existing and proposed alternative dispute resolution (ADR) regimes for resolving intellectual property complaints involving domain names. In doing so, it compares the long-running Uniform Dispute Resolution Policy (UDRP) for adjudicating trademark cybersquatting claims to newer ADR programmes targeting copyright infringement on websites underlying domain names.
Alessandro Cogo and Marco Ricolfi
In recent years, the number of countries which have opted for the involvement of administrative bodies in the enforcement of copyright has increased as a result of the remarkable difficulties that the enforcement of copyright faces in a digital environment. This chapter first describes the European landscape of administrative bodies entrusted with enforcing copyright online, with special emphasis on Greece, Italy, and Spain. Secondly, the chapter considers the legal framework in which these administrative bodies operate, with emphasis on the TRIPs Agreement, EU law, and the EU Charter of Fundamental Rights. Thirdly, the chapter looks into the essential features of these administrative enforcement systems, including procedural rules, allocation of costs, remedies, transparency, and safeguards against abuse. Finally, the chapter elaborates on the implementation of the AGCOM Regulation in practice, providing data from the case law developed so far.
This chapter focuses on how technologies used in the management of migration—such as automated decision-making in immigration and refugee applications and artificial intelligence (AI) lie detectors—impinge on human rights with little international regulation, arguing that this lack of regulation is deliberate, as states single out the migrant population as a viable testing ground for new technologies. Making migrants more trackable and intelligible justifies the use of more technology and data collection under the guise of national security, or even under tropes of humanitarianism and development. Technology is not inherently democratic, and human rights impacts are particularly important to consider in humanitarian and forced migration contexts. An international human rights law framework is particularly useful for codifying and recognizing potential harms, because technology and its development are inherently global and transnational. Ultimately, more oversight and issue-specific accountability mechanisms are needed to safeguard the fundamental rights of migrants, such as freedom from discrimination, privacy rights, and procedural justice safeguards, such as the right to a fair decision maker and the rights of appeal.
This chapter details how AI affects, and will continue to affect, the Global South. The term “South” has a history connected with the “Third World” and has referred to countries that share postcolonial history and certain development goals. However, scholars have expanded on and refined it to include different kinds of marginal, disenfranchised populations, such that the South is now a plural concept—there are Souths. The AI-related risks for Southern populations include concerns of discrimination, bias, oppression, exclusion, and bad design. These can be exacerbated in the context of vulnerable populations, especially those without access to human rights law or institutional remedies. The chapter then outlines these risks as well as the international human rights law that is applicable. It argues that a human rights–centric, inclusive, empowering, context-driven approach is necessary.
John Basl and Joseph Bowen
This chapter evaluates whether AI systems are or will be rights-holders. It develops a skeptical stance toward the idea that current forms of artificial intelligence are holders of moral rights, beginning with an articulation of one of the most prominent and most plausible theories of moral rights: the Interest Theory of rights. On the Interest Theory, AI systems will be rights-holders only if they have interests or a well-being. Current AI systems are not bearers of well-being, and so fail to meet the necessary condition for being rights-holders. This argument is robust against a range of different objections. However, the chapter also shows why difficulties in assessing whether future AI systems might have interests or be bearers of well-being—and so be rights-holders—raise difficult ethical challenges for certain developments in AI.
AI Governance by Human Rights–Centered Design, Deliberation, and Oversight: An End to Ethics Washing
Karen Yeung, Andrew Howes, and Ganna Pogrebna
This chapter argues that international human rights standards offer the most promising basis for developing a coherent and universally recognized set of standards that can be applied to meet any of the normative concerns currently falling under the rubric of AI (artificial intelligence) ethics. It then outlines the core elements of a human rights–centered design, deliberation, and oversight approach to the governance of AI. This approach requires that human rights norms are systematically considered at every stage of system design, development, and deployment, drawing upon and adapting technical methods and techniques for safe software and system design, verification, testing, and auditing in order to ensure compliance with human rights norms. The regime must be mandated by law and relies critically on external oversight by independent, competent, and properly resourced regulatory authorities with appropriate powers of investigation and enforcement. However, this approach will not ensure the protection of all ethical values adversely implicated by AI, given that human rights norms do not comprehensively cover all values of societal concern. As such, a great deal more work needs to be done to develop techniques and methodologies that are robust and reliable yet practically implementable across a wide and diverse range of organizations involved in developing, building, and operating AI systems.
There is an ongoing move towards privatization of law enforcement online through algorithmic tools. This chapter discusses algorithmic accountability and its relevance for intermediary liability and human rights. First, the chapter looks into open issues related to the specific nature of accountability within the context of algorithmic accountability, especially regarding ‘to whom’ and ‘for what’ algorithms should be accountable. In doing so, the chapter considers algorithmic accountability to users, listing a number of technical, organizational, and regulatory challenges to make accountability possible in ensuring access to data. Considering intermediary liability and algorithmic accountability more closely, the chapter describes specific provisions for ensuring algorithmic accountability by online intermediaries and platforms, contextualizing them within a proposal in which adherence to algorithmic accountability would lower the liability of intermediaries and contribute more effectively to ensuring compliance with human rights.
Ifeoma Ajunwa and Rachel Schlund
This chapter argues that the proliferation of automated algorithms in the workplace raises questions as to how they might be used in service of the control of workers. In particular, scholars have pointed to machine learning algorithms as prompting a data-centric reorganization of the workplace and a quantification of the worker. The chapter then considers ethical issues implicated by three emergent algorithmic-driven work technologies: automated hiring platforms (AHPs), wearable workplace technologies, and customer relationship management (CRM). AHPs are “digital intermediaries that invite submission of data from one party through preset interfaces and structured protocols, process that data via proprietary algorithms, and deliver the sorted data to a second party.” AHPs are used at every stage of the hiring process, from the initial sourcing of candidates to the eventual selection of candidates from the applicant pool. Meanwhile, wearable workplace technologies exist in a variety of forms that vary in terms of design and use, from wristbands used to track employee location and productivity to exoskeletons used to assist employees performing strenuous labor. Finally, CRM is an approach to managing current and potential customer interaction and experience with a company using technology. CRM practices typically involve the use of customer data to develop customer insight to build customer relationships.
This chapter examines the legitimacy of utilizing human biomedical interventions for regulatory purposes, drawing on regulatory governance scholarship, bioethical debates about human enhancement, and constitutional scholarship concerning fundamental rights. It considers whether the use of biomedical techniques to pursue regulatory and other public policy purposes is ethically equivalent to the use of traditional techniques that target the design of the social environment, including the alleged ethical ‘parity’ between social and biological interventions into the human mind. It argues that, when contemplating these techniques, we must consider who is seeking to utilize them, for whom, for what purpose, for whose benefit, and at what cost (and to whom). In wrestling with these questions, we must also attend to the social meanings associated with particular ends–means relationships, what it is that we value in human nature, and different understandings of ideas of human flourishing and the good life.
This chapter looks at the risks to sentient AIs from their human creators. Sentient AIs would represent, by all reasonable accounts, a new form of autonomous life. If that is so, they would presumptively command status as persons. The chapter then outlines four plausible future scenarios in which sentient AIs come into being, each of which poses deep questions for traditional philosophical discourses concerning nonhuman or nonstandard entities. These scenarios should give one pause about the personhood of generalized, potentially autonomous AIs. Indeed, the four scenarios bear on the question of rights for generalized autonomous AIs under existing property, torts, and rights law.
This chapter looks at the challenges, opportunities, and tensions facing the equitable development of artificial intelligence (AI) in the MENA region in the aftermath of the Arab Spring. While diverse in their natural and human resource endowments, countries of the region share a commonality in the predominance of a youthful population amid complex political and economic contexts. Rampant unemployment—especially among a growing young population—together with informality, gender, and digital inequalities, will likely shape the impact of AI technologies, especially in the region’s labor-abundant, resource-poor countries. The chapter then analyzes issues related to data, legislative environment, infrastructure, and human resources as key inputs to AI technologies, which in their current state may exacerbate existing inequalities. Ultimately, the promise of AI technologies for inclusion and helping mitigate inequalities lies in harnessing ground-up youth entrepreneurship and innovation initiatives driven by data and AI, with a few hopeful signs coming from national policies.
The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation
Joanna J. Bryson
This chapter provides an overview of the nature and implications of artificial intelligence (AI), with particular attention to how they impinge on possible applications to and of law. Any artifact that transforms perception into more relevant information, including action, is AI. There is no question that AI, and digital technologies in general, are introducing massive transformations to society. Nevertheless, these impacts should be governable by less transformative legislative change. Indeed, the vast majority of AI—particularly where it has social impact—is and will remain a consequence of corporate commercial processes, and as such subject to existing regulations and regulatory strategies. However, it is critical to remember that what is being held accountable is not machines themselves but the people who build, own, or operate them—including any who alter their operation through assault on their cybersecurity. It is thus important to govern the human application of technology—the human processes of development, testing, operation, and monitoring.
Audience Constructions, Reputations, and Emerging Media Technologies: New Issues of Legal and Social Policy
Nora A. Draper and Joseph Turow
This chapter traces how changes in media and surveillance technologies have influenced the strategies producers have for constructing audiences. The largely unregulated practices of information gathering that inform the measurement and evaluation of audiences have consequences for how individuals are viewed by media producers and, consequently, for how they view themselves. Recent technological advances have increased the specificity with which advertisers target audiences—moving from the classification of audience groups based on shared characteristics to the personalization of commercial media content for individuals. To assist in the personalization of content, media producers and advertisers use interactive technologies to enlist individuals in the construction of their own consumer reputations. Industry discourse frames the resulting personalization as empowering for individuals who are given a hand in crafting their media universe; however, these strategies are more likely to create further disparity between those whom media institutions do and do not view as valuable.
Amber Marks, Ben Bowling, and Colman Keenan
This chapter examines how forensic science and technology are reshaping crime investigation, prosecution, and the administration of criminal justice. It highlights the profound effect of new scientific techniques, data collection devices, and mathematical analysis on the traditional criminal justice system. These developments blur procedural boundaries that have hitherto been central, while automating and procedurally compressing the entire criminal justice process. Technological innovation has also resulted in mass surveillance and eroded ‘double jeopardy’ protections due to scientific advances that enable the revisiting of conclusions reached long ago. These innovations point towards a system of ‘automatic justice’ that minimizes human agency and undercuts traditional due process safeguards that have hitherto been central to the criminal justice model. To rebalance the relationship between state and citizen in a system of automatic criminal justice, we may need to accept the limitations of the existing criminal procedure framework and deploy privacy and data protection law.
This chapter highlights historic and contemporary efforts to engineer artificial intelligence (AI) capable of producing artifacts previously associated with the creative arts. While creativity and artistic origination have historically been tied to art’s social value, AI art has recently begun to sell for high prices, and the promise of automating artistic production heralds new sectors of economic profit. Yet AI creativity also highlights the changing status of human innovation, origination, and newness in ways that deserve careful thought. The chapter then explores how future research in the field of computational creativity might benefit from a more robust appreciation for the uniquely nonhuman qualities of AI’s creativity, rather than its ability to imitate the human. AI’s exponentially intensifying capabilities raise urgent questions of how to relate to a technology that, in many meaningful senses, invents itself. As automation, origination, and creativity cross-pollinate in unknowable ways, an intelligence both truly other and yet still conversant with human categories emerges. Thinking from the perspective of the humanities can help one negotiate this unprecedented challenge.
This chapter assesses the concept of the autonomous AI—AI that is self-governing. Unease regarding autonomous AI is most vividly expressed in the vision of an artificial superintelligence whose self-generated goals and interests diverge radically from those of humankind, and which thus places humankind’s well-being—and maybe even its survival—at risk. As such, the chapter examines the conditions that would need to be met by an intelligent machine in order for that machine to exhibit the kind of autonomy that is operative in this dystopian scenario. However, there is arguably a more pressing concern regarding a different class of AI systems, those that are autonomous in only the milder sense that—in their domains of operation—humans are ceding, or will cede, some significant degree of control to them. Systems of this kind include self-driving cars and autonomous weapons systems. The chapter then evaluates whether these already-in-the-world autonomous AI systems are a genuine cause for concern. A key issue here concerns the properties of so-called deep learning networks.
This chapter discusses contemporary debates regarding the use of artificial intelligence as a vehicle for criminal justice reform. It closely examines two general approaches to what has been widely branded as “algorithmic fairness” in criminal law: the development of formal fairness criteria and accuracy measures that illustrate the trade-offs of different algorithmic interventions; and the development of “best practices” and managerialist standards for maintaining a baseline of accuracy, transparency, and validity in these systems. Attempts to render AI-branded tools more accurate by addressing narrow notions of bias miss the deeper methodological and epistemological issues regarding the fairness of these tools. The key question is whether predictive tools reflect and reinforce punitive practices that drive disparate outcomes, and how data regimes interact with penal ideology to naturalize these practices. The chapter then calls for a radically different understanding of the role and function of the carceral state, as a starting place for re-imagining the role of “AI” as a transformative force in the criminal legal system.
Christophe Geiger and Elena Izyumenko
In the past few years, the practice of enforcing intellectual property rights by ordering internet access providers to block infringing websites has been growing rapidly, especially in Europe. European courts, such as the CJEU and the ECtHR, advance several factors to inform—from the perspective of different fundamental rights—the website-blocking practices for copyright enforcement in Europe. This chapter provides an overview of these factors, starting with the freedom of expression framework for website blocking and the rather revolutionary, at least for the European judiciary, concept of user rights that has been construed under it. It then proceeds to discuss the limits of intermediaries’ involvement in digital enforcement dictated by the EU-specific freedom to conduct a business. The required efficacy of the blocking resulting from the human right to property framework for intellectual property is also examined. Potential effects of the recent EU copyright reform on website-blocking practices are then discussed before concluding.