This article discusses the concept of institution by examining the components of an institution and the ways in which institutionalization can increase or decrease. It considers the place to be given to organizations and to procedures within the definition of institutions. In doing so, it reveals marked differences across the social sciences, and in particular across the political, social, and economic fields, in how institutions and institutionalization are understood.
Steven Rathgeb Smith
Accountability in nonprofits is complicated and multi-faceted. Nonprofits can be sites of vibrant civic engagement and community governance as well as providers of valuable local services. Contemporary accountability regimes, however, emphasize organizational maintenance, competition, entrepreneurship, and sustainability. Civic engagement in the governance and operations of local nonprofits can be time-consuming, albeit very valuable. To achieve both accountability and citizen engagement, nonprofits need to consult their key stakeholders and think comprehensively and strategically about their mission. Government and private funders likewise need to approach accountability broadly and consider the different programmatic and community benefits of nonprofit programs.
This chapter explores the ways in which public standards of accountability are brought to bear on a nominally private institution: the commercial corporation. It considers several classic arguments in favor of widening the set of interests in society that the corporation should serve. These classic positions, it is argued, fail to capture the range of social issues facing the company. A different way of identifying those issues is proposed. This in turn permits one to identify three types of interest that stakeholders have in the company. With these distinctions in place, a map of different types of corporate accountability is drawn, aimed at underpinning policies shaping corporate governance.
Accumulation—as both a process and a concept—has been of central importance to International Political Economy (IPE), especially for those approaches informed by Marxism and critical political economy. Whilst the concept has its roots in some of the key contributions to classical political economy, and especially that of Marx, it remains central to a rich stream of research within the field of IPE. This chapter sets out the importance and meaning of accumulation as a concept within IPE, first tracing its roots within the critical “wing” of the field and then considering the ways in which it informs ongoing research and scholarship. Accumulation, the chapter argues, is necessarily central to any attempt to delineate the scope of IPE, rightly drawing our attention towards a (the?) fundamental mechanism driving the uneven, changing, and contested expansion of global capitalism as it develops over time.
Maxwell McCombs and Sebastián Valenzuela
This chapter discusses contemporary directions of agenda-setting research. It reviews the basic concept of agenda setting, the transfer of salience from the media agenda to the public agenda as a key step in the formation of public opinion, the concept of need for orientation as a determinant of issue salience, the ways people learn the media agenda, attribute agenda setting, and the consequences of agenda setting that result from priming and attribute priming. Across the theoretical areas found in the agenda-setting tradition, future studies can contribute to our understanding of the role of news in media effects by showing how agenda setting evolves in the new and expanding media landscape, as well as by continuing to refine agenda setting’s core concepts.
Global assessments have become central to international debates on a range of key policy issues. They attempt to combine “expert assessment” with processes of “stakeholder consultation” in what are presented as global, participatory assessments on key issues of major international importance. This chapter focuses on the IAASTD—the International Assessment of Agricultural Knowledge, Science and Technology for Development—through a detailed analysis of the underlying knowledge politics involved, centered particularly on the controversy over genetically modified crops. Global assessments contribute to a new landscape of governance in the international arena, offering the potential for links between the local and the global and new ways of articulating citizen engagement with global processes of decision making and policy. The chapter argues that in global assessments the politics of knowledge need to be made more explicit and that negotiations around politics and values must be put center stage. The black-boxing of uncertainty, or the eclipsing of more fundamental clashes over interpretation and meaning, must be avoided for processes of participation and engagement in global assessments to become more meaningful, democratic, and accountable. A critique is thus offered of simplistic forms of deliberative democratic practice and the need to “bring politics back in” is affirmed.
Amelia C. Arsenault and Sarah E. Kreps
In light of contemporary technological advancements in the fields of artificial intelligence (AI) and machine learning, coupled with significant investment in AI research and development by a range of state actors, scholars have begun considering the impact that this technology will have on international and domestic politics. While scholars express divergent views regarding the consequences of AI proliferation, this technology is likely to have a broad range of applications for international politics, from military and defense to trade and diplomacy. Recognizing the increasingly prominent role of AI in global politics, this chapter aims to provide a comprehensive overview of the opportunities and risks that the proliferation of AI technology holds for international politics by examining the factors motivating the global pursuit of this technology and evaluating its effects on authoritarianism and liberal democracy, the global balance of power, and warfare. This chapter argues that AI is best understood as an accelerating and enabling force that is less likely to produce drastic, unforeseen transformations in domestic and international politics than it is to accelerate and exacerbate trends that were already underway. As an increasingly critical tool of contemporary governance, AI is going to play a central role in future relations between international actors; it is therefore incumbent upon scholars, states, and the global community more broadly to begin preparing for international politics in the era of AI.
As has been the case for previous technological revolutions, AI will have economic and informational effects that may impact the nature and stability of democracy. In advanced democracies, AI may lead to economic transformations (such as a growing division between capital and high-skilled labor on the one hand and the rest of labor on the other, wage inequality, etc.) that may result in rising social tensions and democratic instability. However, a rise in productivity and overall growth plus the capacity of democratic governments to respond to those challenges may mitigate the negative effects of AI. AI’s effects are likely to be more strongly de-democratizing in emerging and peripheral economies. With fewer resources to adopt those new technologies, emerging and peripheral economies may be hit by the reshoring of production to advanced economies and fast deindustrialization. That may in turn reduce the kind of economic conditions (development, equality) that nurture democratic stability. AI’s economic effects may be compounded by the informational consequences of AI, which seem to reinforce the monitoring and repressive capabilities of states. After assessing the different channels through which AI could impact democracy, the chapter concludes by discussing a set of policy interventions to reconcile AI and democracy.
Jess Whittlestone and Samuel Clarke
Artificial intelligence is already being applied in and impacting many important sectors in society, including healthcare, finance, and policing. These applications will increase as AI capabilities continue to progress, which has the potential to be highly beneficial for society, or to cause serious harm. The role of AI governance is ultimately to take practical steps to mitigate this risk of harm while enabling the benefits of innovation in AI. This requires answering challenging empirical questions about current and potential risks and benefits of AI: assessing impacts that are often widely distributed and indirect, and making predictions about a highly uncertain future. It also requires thinking through the normative question of what beneficial use of AI in society looks like, which is equally challenging. Though different groups may agree on high-level principles that uses of AI should respect (e.g., privacy, fairness, and autonomy), challenges arise when putting these principles into practice. For example, it is straightforward to say that AI systems must protect individual privacy, but there is presumably some amount or type of privacy that most people would be willing to give up to develop life-saving medical treatments. Despite these challenges, research can make, and has made, progress on these questions. The aim of this chapter is to give readers an understanding of this progress, and of the challenges that remain.
K. Gretchen Greene
This chapter offers reflections and advice on AI ethics and governance from a year spent leading the Partnership on AI’s multi-stakeholder Affective Computing and Ethics project. The project involved more than 200 engineers, scientists, lawyers, privacy and civil rights advocates, bioethicists, managers, executives, journalists, and government officials, mostly in the United States, United Kingdom, and European Union, in discussions about AI related to emotion and affect and its potential impacts on civil and human rights, with the goal of developing industry best practices and better technology policy. The author’s reflections from that year draw lessons on convening, multi-disciplinary collaboration, and affective computing and AI ethics and governance. The chapter offers a blueprint for creating the shared knowledge foundation that non-technical participants need in order to apply their expertise. It presents question exploration as a tool for evaluating ethics risk and, using the question “What is notice good for?”, shows how a well-chosen question can serve as a catalyst for group or individual exploration of the issues, leading to insights and answers. It closes with a curated list of 42 questions: catalysts for AI and ethics discussion for industry, university, news media, and policy teams; for multi-disciplinary, multi-stakeholder AI ethics and governance convenings; and for individual writing and thinking, intended to take scholars and practitioners a step beyond, or to the side of, what they have been thinking about AI and ethics.
Laurin B. Weissinger
Regulating and governing AI will remain a challenge due to the inherent intricacy of how AI is deployed and used in practice. The effectiveness and efficiency of regulation decline as system complexity grows and as objectives become less clear: the more complicated an area is and the harder its objectives are to operationalize, the more difficult it is to regulate and govern. Safety regulations, while often concerned with complex systems like airplanes, benefit from measurable, clear objectives and uniform subsystems. AI has emergent properties and is not just “a technology”; it is interwoven with organizations, people, and the wider social context. Furthermore, objectives like “fairness” are not only difficult to grasp and classify, but their meaning changes case by case. The inherent complexity of AI systems will continue to complicate regulation and governance; however, with appropriate investment, monetary and otherwise, complexity can be tackled successfully. Given the considerable power imbalance between those who use AI and those AI systems are used on, successful regulation may be difficult to create and enforce. As such, AI regulation is more of a political and socio-economic problem than a technical one.
Matthijs M. Maas
How do we regulate a changing technology, with changing uses, in a changing world? This chapter argues that while existing (inter)national AI governance approaches are important, they are often too siloed. Often, technology-centric approaches focus on individual AI applications, while law-centric approaches emphasize AI’s effects on pre-existing legal fields or doctrines. This chapter argues that to foster a more systematic, functional, and effective AI regulatory ecosystem, policy actors should instead complement these approaches with a regulatory perspective that emphasizes how, when, and why AI applications enable patterns of “sociotechnical change.” Drawing on theories from the emerging field of “techlaw,” it explores how this perspective can provide informed, more nuanced, and actionable perspectives on AI regulation. A focus on sociotechnical change can help analyze when and why AI applications actually create a meaningful rationale for new regulation—and how they are consequently best approached as targets for regulatory intervention, considering not just the technology, but also six distinct “problem logics” that accompany AI issues across domains. The chapter concludes by briefly sketching concrete institutional and regulatory actions that can draw on this approach to improve the regulatory triage, tailoring, timing and responsiveness, and design of AI policy.
Richard R. John
This essay traces the long and productive relationship between two genres of historical writing: American political development (or APD) and American political history. It is written primarily for political scientists; a secondary audience is historians who wish to become more familiar with APD. Its focus is on the period between the adoption of the federal Constitution in 1788 and the end of the Second World War in 1945, an epoch that has long been recognized as not only formative, but also distinct from the epochs that preceded and followed it. It is, in addition, an epoch that has spawned a dialogue between APD and political history that has proved to be particularly fruitful.
Joel H. Silbey
This article provides a sweeping analysis of the history of American political parties. It specifically uses the lens of critical election theory to explore the scholarly treatment of the development of parties as institutions, of the relationship between parties and the electorate, of the means that parties have used to communicate and build relationships with the electorate, and of the existence and definition of party systems. The article also traces later developments: the Democrats' administrative state grew during the Second World War and was reinforced and further expanded during the Cold War that followed, and partisan polarization increased in the 1990s as the Republicans regained control of the House of Representatives and vigorously set themselves against a Democratic president.
Historical institutional scholars can analyze politics as it happens, not just developments long past. A powerful theoretical approach should give clear guidance about questions worth asking and pinpoint factors that need to be taken into account to explain current and possible future developments. Historical institutional analysis stresses timing and sequence, institutional contexts, and policy feedbacks – factors that are crucial for deciphering immediately unfolding political transformations. To illustrate the point, this chapter dissects the early Obama presidency, examining why its reformist goals succeeded in some policy areas but fell short in others. In addition, the chapter explores how and why the Tea Party erupted and pushed the Republican Party further to the extreme right during the Obama presidency.
Jeffery A. Jenkins
Rational choice and American political development (APD) both emerged as responses to (perceived) limitations with the dominant behavioral tradition. While their critiques were based on very different research traditions, similarities were also present; in particular, both rational choice and APD approaches focused on the importance of institutions for studying political outcomes. Over time, rational choice and APD research has converged to a significant degree, as scholars in both traditions have increasingly been exposed to different theoretical and methodological perspectives and thus become consumers of each other’s work. This chapter documents how and why rational choice research has moved in an APD direction.
This chapter evaluates the achievements and limitations of the United Nations (including the Conference on Disarmament) in the field of disarmament, emphasizing the UN’s role as part of broader efforts to control arms as a means to achieve international peace and security. It presents an overview of UN disarmament efforts and discusses specific cases where progress was achieved, such as the Nuclear Non-Proliferation Treaty (NPT), the Chemical Weapons Convention (CWC), the Arms Trade Treaty, and efforts to tackle the problems of anti-personnel land mines and small arms and light weapons. Finally, it draws out the implications for international relations of the UN experience with formal multilateral arms control, disarmament and security-building processes by evaluating its role as a negotiating forum, a norm setter, an implementing agency, or an instrument of great power security governance.
This article begins by discussing the four kinds of development that helped change the expectations, objectives, and conduct of modern disarmament diplomacy: (i) transformative advances in networked communications and weapons technologies; (ii) transnational criminals who include sensitive materials and weapons procurement among their trafficking activities; (iii) broader civil society networks linked transnationally and motivated by humanitarian, environmental, and anti-militarist concerns; and (iv) changes in public attitudes towards international security, warfare, and ‘acceptable’ versus ‘unacceptable’ means for achieving national and international policy objectives. This is followed by discussions of humanitarian-centred disarmament and integrative diplomacy, and distributive and integrative tactics in disarmament diplomacy.
Kurt Glaze, Daniel E. Ho, Gerald K. Ray, and Christine Tsang
Despite widespread skepticism of data analytics and artificial intelligence (AI) in adjudication, the Social Security Administration (SSA) pioneered path-breaking AI tools that became embedded in multiple levels of its adjudicatory process. How did this happen? What lessons can we draw from the SSA experience for AI in government? We first discuss how early strategic investments by the SSA in data infrastructure, policy, and personnel laid the groundwork for AI. Second, we document how SSA overcame a wide range of organizational barriers to develop some of the most advanced use cases in adjudication. Third, we spell out important lessons for AI innovation and governance in the public sector. We highlight the importance of leadership to overcome organizational barriers, “blended expertise” spanning technical and domain knowledge, operational data, early piloting, and continuous evaluation. AI should not be conceived of as a one-off IT product, but rather as part of continuous improvement. AI governance is quality assurance.
Nakul Aggarwal, Michael E. Matheny, Carmel Shachar, Samantha X.Y. Wang, and Sonoo Thadaney-Israni
Artificial intelligence (AI) is poised to significantly impact healthcare systems, including clinical diagnosis, healthcare administration and delivery, and public health infrastructures. In the context of the Quintuple Aim of healthcare (patient outcomes, cost reduction, population impact, provider wellness, and equity/inclusion), this chapter discusses the current state of AI in healthcare, focusing on issues that may inform the development of adaptive, efficient, and equitable governance frameworks for AI in healthcare. The chapter introduces prominent examples of clinical AI applications in recent years, highlighting their successes and extant limitations. It emphasizes the processes of clinical AI algorithm development, implementation, and provider adoption, noting important policy considerations for active maintenance and updating of such algorithms. It also focuses on the issue of bias in AI algorithms for healthcare by (1) illustrating how unrepresentative and/or inappropriate datasets can exacerbate health disparities and inequities, and (2) emphasizing the need for diversity, transparency, and accountability in algorithm development. It provides an overview of current national and international regulatory approaches for AI-driven medical devices. It concludes with recommendations of strategic goals for developers, healthcare providers, and governmental agencies to work towards cooperatively in building a productive and equitable future for AI in healthcare.