PRINTED FROM OXFORD HANDBOOKS ONLINE ( © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).

Governance and Learning

Abstract and Keywords

This article discusses learning in the context of governance, and the two-way relationship between them. It sets out four types of learning – reflexive social learning, instrumental learning, political learning, and symbolic learning – and identifies theoretical problems that emerge when learning is used in the context of governance. The article also argues that while governance has influenced and rekindled the discussion on learning, there is also evidence that theoretical and empirical research on learning has potentially useful lessons for those working on governance and governments.

Keywords: learning, governance, reflexive social learning, instrumental learning, political learning, symbolic learning

The governance turn in political science (Peters, Chapter 2, this volume; Bevir 2010) refers, implicitly or explicitly, to the mechanism of learning. Horizontal arrangements and multi-level settings operate using a logic that is different from the traditional hierarchic logic used by government. One theme in the governance turn is that the relatively decentralized network-based interaction of constellations of public and private actors, given certain institutional properties, triggers socialization, problem-solving, reflexivity, and deliberation—and possibly even collective learning processes (Checkel 1998; Scott 2010; Sanderson 2002, 2009). Governance by epistemic communities and knowledge-based actors (Stone, Garnett, and Denham 1998, Schrefler 2010) is also intimately connected to learning (Boswell 2008), specifically in relation to how rationality, science, and experts’ advice bring about change in public policy, and what type of instruments, organizational settings, or institutional devices enable learning to operate.

Turning to learning, recent reviews (Freeman 2006; Grin and Loeber 2007; Dobbin, Simmons, and Garrett 2007) contain only a handful of empirical studies on learning, thus making it difficult to assess what we have collectively and cumulatively learned about this topic. This perhaps explains the frustration expressed by authors such as James and Lodge (2003) and Volden, Ting, and Carpenter (2008) about the overall theoretical leverage of research in this field (although James and Lodge's sharpest criticisms are confined to the subfield of policy transfer). In the meantime, international relations scholars have connected with learning through very different strands of research, including socialization, critical realism, and diffusion. But this has not led to greater conceptual clarification; rather it has opened up a Pandora's box of endless discussions on ontology and epistemology.

In this chapter we discuss learning in the context of governance, and the two-way relationship between them. The next section puts forward four types of learning, namely reflexive social learning, instrumental learning, political learning, and symbolic learning. Then we discuss a series of theoretical problems that emerge when learning is used in the context of governance, and especially the difficulty of moving from the micro to the meso or macro level. The following section considers the challenges that researchers face when studying learning empirically, both qualitatively and quantitatively. Finally, we discuss the relationship between governance and learning, and conclude with the normative implications (Rothstein, Chapter 10, this volume). Perhaps paradoxically, whilst the governance turn was initially concerned with reflexivity and the withering away of government, the theoretical and empirical analysis of learning has brought back hierarchy and the role of governments in political learning. More recently, the field of learning has yet again crossed paths with governance theorists who have rediscovered the role of power and hierarchy, even within network-based governance (Héritier and Rhodes 2011). The challenge for future research is to specify the conditions for different usages of modes of governance (such as deliberation and reflexivity, but also coercion, pressure, and legitimacy) and different types of learning, from learning how to improve on public policy to more political and less benevolent types of learning.

Four types of learning

The literature has drawn attention to four ideal types of learning, namely reflexive social learning, instrumental learning, political learning, and symbolic learning (Grin and Loeber 2007; May 1992; Sanderson 2002).

Reflexive social learning involves society-wide paradigmatic changes affecting not only public policy, but also fundamental social interaction and institutional behavior. By far the most cited article on paradigmatic change is Hall's study on economic policy-making (1993; 488 citations in June 2010, source: Social Science Citation Index). Reflexive learning about governance (Sanderson 2002) results from advanced, sophisticated, participatory usages of governance tools, but also requires dense socialization or other triggers of communicative rationality. Sanderson draws on complexity theory to show how this type of learning emerges, although it is not easy to distinguish between the positive and normative aspects of his research program (Sanderson 2009). Sabel and Zeitlin (2010: ch. 1), instead, draw on theories of experimental constitutionalism and social networks to associate reflexivity with modes of governance that are not based on hierarchy. Their analysis dovetails with the literature on the so-called new modes of governance (Héritier and Rhodes 2011). Although most of these modes are not “new” (e.g. soft law and self-regulation), recent research has drawn attention to the open method of coordination in the European Union—which has similarities with policy benchmarking in international organizations. The open method is a type of international coordination of public policy based on the diffusion of information among social and policy networks at different levels of governance. Its instruments are benchmarking, peer review, common indicators, and iterative appraisal of plans and achievements of the member states. The capacity to learn from local, diffuse innovation and problem-solving is key to the logic of the open method. These are the ambitions and the logic.
However, when we move from “how things should work” to “how they actually work” we find a body of empirical research arguing that the open method has generated much more learning from the top than learning from society as a whole and from the local level of innovation (Radaelli 2008). More fundamentally, the case of open coordination in the European Union shows a more general characteristic of the relationship between modes of governance and learning. In fact, the same mode of governance (in our case the open method) can be used for different objectives: one is reflexive social learning, but evidence shows other objectives, such as creating a pressure to converge, and increasing the effectiveness of hard law by accompanying it with softer modes of regulation (Borrás and Radaelli 2010).

Reflexivity is also found in legal systems, as shown by the theory of responsive regulation and more recent advances in legal scholarship on norms (Lenoble and De Schutter 2010). In turn, the explanation of how norms emerge and how they are interpreted, shared, and contested has made its way onto the research agenda in international relations (Wiener 2008). In policy theory, Campbell (1998) has introduced the distinction among programs, paradigms, frames, and public sentiments. Since reflexivity is also about how “ideas” have an effect on policy, Campbell's typology is useful in order to distinguish between different types of ideas. One problem for reflexive scholars, however, is to explain why an organization or a political system would engage with social learning. A possible answer to this question is a legitimacy crisis that induces a profound reassessment of core beliefs: as sociologists have emphasized, conformity with socially valued practices is an important strategy through which actors can protect themselves from criticism (DiMaggio and Powell 1991). Another is socialization: even if it does not lead to a full internalization, standards of accepted behavior may emerge out of the repeated interaction of policy-makers in intergovernmental organizations and other networks.

The second type is instrumental learning (Radaelli 2009; Gilardi 2010). It is variably informed by (more or less bounded) rational policy-making, evidence-based policy agendas, and less normative theories of the bureaucracy, and it can be characterized as an updating process based either on Bayesian rationality or cognitive shortcuts. From the Bayesian perspective, estimates of a given quantity of interest are reached via a combination of prior beliefs and evidence, which produce so-called posterior beliefs. Because prior beliefs are revised in the light of new information, this process is also called “Bayesian updating.” To illustrate, imagine that policy-makers want to stop an oil spill in the Gulf of Mexico and are considering whether to adopt a specific technique that might work but might also worsen the problem. Based on prior experience, expert advice, and degree of risk aversion, some are quite optimistic about the operation's chances of success, while others are more skeptical, but in the end a series of trials are allowed. After these trials, policy-makers update their beliefs on the effectiveness of the technique and need to decide whether a large-scale operation can be attempted. All policy-makers are exposed to the same evidence, but their posterior beliefs will vary depending on their prior beliefs. Thus, stronger evidence is needed for a skeptical policy-maker to change their mind than for someone who was already relatively confident that the technique could work. Specifically, in Bayesian inference “strong” evidence means that there are many data points (many trials, in this example) and that the variability of results is low. If there are only a few examples and/or if these point to different conclusions, then prior beliefs play an important role and will be updated only marginally.
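The logic of the oil-spill example can be sketched with a simple conjugate (Beta-Binomial) model. The priors, trial numbers, and function name below are purely illustrative and not from the chapter; they merely show how the same evidence yields different posteriors for an optimist and a skeptic:

```python
# Illustrative sketch of Bayesian updating (hypothetical numbers).
# A Beta(a, b) prior over the probability that the containment technique
# works is updated with Binomial evidence from a series of trials.

def posterior(prior_successes, prior_failures, successes, failures):
    """Posterior mean of a Beta prior after observing trial outcomes."""
    a = prior_successes + successes
    b = prior_failures + failures
    return a / (a + b)

trials = (7, 3)  # 7 successful trials, 3 failures (invented data)

optimist = posterior(8, 2, *trials)  # prior mean 0.8
skeptic = posterior(2, 8, *trials)   # prior mean 0.2

print(f"optimist: {optimist:.2f}")  # 0.75
print(f"skeptic:  {skeptic:.2f}")   # 0.45
```

Both actors see the same ten trials, yet the skeptic's posterior (0.45) stays well below the optimist's (0.75); only with many more trials would the two converge, which is what the text means by “strong” evidence dominating prior beliefs.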

An alternative, and equally powerful, take on instrumental learning comes from cognitive psychology, especially the influential work of Kahneman and Tversky (2000) and its application to political science by Weyland (2007). As in the Bayesian approach, actors seek to improve their assessment of some state of the world. Contrary to Bayesian inference, however, actors update their beliefs not in accordance with statistical laws but following “cognitive shortcuts” such as availability and representativeness. First, availability means that not all information carries the same weight. Particularly vivid examples or “success stories” are more influential than equally relevant but less striking cases. Continuing the oil spill example, a major catastrophe, even if it were an isolated accident, can negate many low-key successes by shaping policy-makers’ minds regarding the effectiveness of certain types of drilling. By contrast, in Bayesian updating a single outlier will be properly discounted. Second, representativeness refers to the tendency to draw disproportionate conclusions from a limited empirical basis. Short-term trends or a handful of successful examples are interpreted as conclusive evidence, while in Bayesian updating sample size is a critical parameter. Thus, if initial attempts to contain the oil leak seem to work, policy-makers tend to focus on this restricted basis and to be more optimistic than they would be if they updated their beliefs according to statistical laws.
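The availability shortcut can be made concrete with a toy calculation (the outcome series and the salience weights below are invented for illustration): a single vivid catastrophe, weighted heavily because it is memorable, drags the judged success rate far below the statistical average.

```python
# Hypothetical contrast between statistical averaging and an
# availability-weighted judgment: one dramatic failure (salience 10)
# outweighs nine quiet successes.

outcomes = [1] * 9 + [0]   # 1 = success, 0 = failure; the failure is vivid
salience = [1] * 9 + [10]  # the catastrophe is far more "available"

mean_estimate = sum(outcomes) / len(outcomes)  # statistical average: 0.9
weighted = sum(o * w for o, w in zip(outcomes, salience)) / sum(salience)

print(mean_estimate, round(weighted, 2))  # 0.9 vs roughly 0.47
```

A Bayesian updater would treat the catastrophe as one data point among ten; the availability-driven judgment roughly halves the perceived success rate because of a single salient case.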

At the international level, instrumental learning is concerned with drawing lessons from the experience of others. The rationale for this type of learning is straightforward. Policy-makers learn, by trial and error, from their own experiences. They use their policies as experiments. They make trials, observe the mistakes, and improve. However, by borrowing from others policy-makers do not have to wait for their own errors and crises to show the way ahead. This explains why there is an intense international activity concerned with “lesson-drawing.” Richard Rose is the political scientist who has developed this field (Rose 1991). Others prefer the notion of policy transfer (Dolowitz and Marsh 1996). In lesson-drawing, policy-makers dissatisfied with the status quo scan the practices in effect in other units (such as countries, regions, or cities) and assess to what extent they could successfully implement them in their jurisdiction. Alternative ways of drawing a lesson exist: from copying a policy almost exactly to just taking inspiration from it, with different degrees of adjustment in between. Lesson-drawing has been criticized by some scholars as being too broad and almost indistinguishable from any type of rational decision-making (e.g. James and Lodge 2003). Thus, it may be useful to define the concept more narrowly.

Policy change is often the product of two ingredients: elected politicians’ authority to change legislation and policy, and bureaucratic learning about solutions and their feasibility. To make sense of this distinction, it is useful to go back to a classic article by May (1992), who distinguished between instrumental and political learning, which is our third ideal type of learning. Broadly speaking, the latter refers to the strategies that help policy-makers control a policy domain. If it is difficult to monitor and attribute responsibility for performance, organizations tend to become “political” rather than seek policy improvement (Brunsson 1989). Qualifying May's original insights, political learning can lead to the following usages of knowledge (Boswell 2008): “strategic” (i.e. to increase the control of elected policy-makers on non-elected regulators, or to increase the popularity of the incumbent in election years), “substantiating” (i.e. to support a prefabricated position, for example for or against asylum seekers and migrants), and “symbolic” (i.e. using governance tools to send signals to the business community or for blame-shifting purposes). Politically competitive environments will ensure that organizations learn how to use governance innovation to exercise control, implement broad political trajectories such as deregulation, increase popularity in the polls, and shift power. Thus, Gilardi (2010) showed that governments are more likely to implement unemployment benefits cuts if other governments did not suffer major electoral losses after doing so.

Further, an organization under pressure to become or remain a respectable member of international environments will engage in symbolic learning, which is our fourth ideal type of learning. Organizational theory has long shown that in symbolic usages of learning what matters is legitimacy, not policy performance (DiMaggio and Powell 1991). The distinction between types of learning is important not only analytically but also when it comes to assessing the consequences of learning from a normative perspective, as we discuss below. Different types also suggest variation in micro-foundations and implications for the use of knowledge, as shown in Table 11.1 below.

Theoretical problems

When scholars set out to examine learning in the context of governance, they typically conceptualize learning as a process of updating beliefs. One difference in the literature is between policy studies and the field of “modes” of governance.

In comparative public policy, this updating refers to the classic components of public policy, such as problem definition, the results achieved at home or abroad, goals, and knowledge—either knowledge about institutional/policy performance or about the volitions and beliefs of the actors involved in strategic interaction. In studies of institutional architectures of governance dealing with more abstract notions such as hierarchy, the shadow of the law, facilitated coordination, and voluntarism, updating is essentially about forms of compliance, emulation, and herding, or, on the other hand, dense communication and reflexivity. Public policy and modes of governance are often combined in studies on how the political economy of specific regimes affects certain policy areas, as is shown by the literature on the European Employment Strategies of the European Union (e.g. Heidenreich and Bischoff 2008). In both cases (i.e. policy and modes), updating is grounded in social interaction within the constellation of actors involved either in the policy-making or in the modes of governance.

One problem with this approach is that it takes a concept of updating that is rooted in individual behavior and transposes it to the macro level of policy or forms of governance. However, political scientists working on governance tend to work on macro–macro or meso–meso relations. They observe meso phenomena such as policy change, and explain it in terms of institutional variables. Or they are concerned with the emergence of complex macro architectures—such as network governance in industrial districts (Piore and Sabel 1984) or the open method of coordination in the European Union (EU)—and (for their explanations) look at other macro variables such as the limitations of hierarchy, and slow institutional adjustment to the environment.

Table 11.1 Types of learning, mechanisms, micro-foundations, use of knowledge, and goals of learning

Reflexive social learning. Updating beliefs on the basis of: change of preferences of actors. Micro-foundations: legitimacy crisis; identity transformation; socialization. Use of knowledge: reflexive, broad social usage. Goal of learning: learning how to learn.

Instrumental learning. Updating beliefs on the basis of: evidence about policy, that is, what seems to work at home or elsewhere. Micro-foundations: organizations under pressure to deliver. Use of knowledge: fully or boundedly rational updating of beliefs; organizations focus on analysis in order to improve on policy performance and seek to draw lessons from abroad. Goal of learning: improving policy effectiveness.

Political learning. Updating beliefs on the basis of: evidence and conjectures about the strategies pursued by other actors. Micro-foundations: organizations that compete for control of an organizational field in an environment where it is difficult to measure and attribute responsibility for performance. Use of knowledge: strategic, substantiating, and symbolic. Goal of learning: gaining political advantages, such as winning elections.

Symbolic learning. Updating beliefs on the basis of: what seems to provide legitimacy. Micro-foundations: dense, institutional international environments wherein organizations seek legitimacy. Goal of learning: gaining legitimacy.
One way or the other, this way of using learning does not stand up to rigorous scrutiny. Following Coleman (1990), a macro–macro (or meso–meso) explanation consists of three steps: (a) the micro-foundations of the macro-level independent variable; (b) the micro–micro mechanisms that characterize interaction in the constellation of individual actors; and (c) the micro-to-macro relations of aggregation, that is, the mechanisms that transform individual phenomena and social interaction into macro (or meso) outcomes of the dependent variable. Although Coleman's bathtub has been criticized, it remains a rigorous standard for explanation. However, one would struggle to find research projects on learning that truly match this standard.

Arguably, this is the root of the frustration of political scientists when they reflect on the state of play in the field of learning (James and Lodge 2003; Grin and Loeber 2007). On micro-foundations, we have to consider individuals or organizations, depending on the unit of analysis. At the individual level, there has been an explosion of projects in the area of cognitive psychology, looking at bounded rationality, heuristics, and the interplay between emotions and calculation. An early study of heuristics and their implications for learning, public policy, and governance is that by Schneider and Ingram (1988), which also ties in nicely with the discussion on lesson-drawing.

The study of micro-foundations has even reached an inner level of analysis, with projects on models of the mind and neuro-politics. This is as far as we can go in exploring the origin of learning and biases in learning processes. The key question remains “Why would an organization or an individual want to learn?” Organizational theory provides different answers (Brunsson 1989) depending on whether the actors are appraised on the basis of their performance or not. Many organizations in the public sector are “political” in Brunsson's sense, that is, they are not judged on their performance because they are not responsible for implementation or because it is difficult to attribute final policy outcomes to what the organization did or did not do.

With regard to micro–micro interaction, scholars of diffusion have pointed to pathways in the epidemiology of governance innovations. But we can also go back to Moscovici and other pioneers of social psychology and their insights on how—assuming dense social interaction—individuals respond to the opinion of the majority and the minority in their groups. This strand could usefully tie in with studies on the role of information as symbol, signal, and cognitive device (Levitt and March 1988; March 1981). Finally, the micro–micro exploration of learning has been improved by our understanding of how a constellation of actors gets locked into mechanisms of increasing returns and therefore deviates from its optimal path. One striking observation, however, is that we still do not know much about how communities of social actors—especially policy-makers—learn, as shown by Richard Freeman's comprehensive review of learning in public policy (Freeman 2006). Methods such as participant observation may assist in this direction—a formidable example is Sharon Gilad's study of the ombudsman (Gilad 2009).

Aggregation—the third element of Coleman's explanation—remains a problem. All too often political scientists use the insights of social psychology or the economics of innovations to jump to conclusions about institutions and governance. It is true that individuals think and interact. It is more problematic to show that institutions, like the National Health Service or the World Bank, have learned. On the one hand, there are those who assign sui generis cognitive capabilities to institutions, following social anthropologist Mary Douglas (1986). On the other, governance scholars still find it difficult to prove institutional and organizational learning empirically.

The empirical analysis of learning

As we have argued above, learning has been over-conceptualized and under-researched empirically. One reason is that the analysis of governance informed by learning is riddled with problems and trade-offs in research design. To begin with, if the null hypothesis of no learning is not specified, pretty much everything can become “learning” and the measurement of the dependent variable is biased (Radaelli 2009). Second, the time dimension matters and can bias empirical findings. If one examines learning over a fairly narrow time frame, one may not see the learning buds that are about to blossom. Yet if one takes the long view, it is almost impossible not to find instances of learning. Organizations, political systems, pressure groups, and policy officers must learn, at least if they are to survive in a dynamic environment. Third, when measurement is carried out via qualitative interviewing—which is still a classic tool in comparative policy studies—the officers in charge of policy tend to overestimate the role of learning as opposed to coercion and classic notions of power politics. For an officer, it is easier to justify organizational behavior in terms of learning from experience or best practice, rather than disclosing features of power politics and conflict. There is also a more subtle syndrome. When organizations experience a crisis, such as mad cow disease or the transfusion scandals in Europe, they use the “we have learned” mantra as a way to re-establish their control and legitimacy in policy fields. Learning (and ironically the crisis too) is therefore used to protect an organization from criticism: “we have learned—hence we are fine now, and the case is closed” is a typical response, as argued by Thomas Alam (2007) in his dissertation on the mad cow disease crisis.
If interviewers do not search for the larger picture and “determine” the existence of learning on the basis of their interviews, they may draw grossly biased inferences about governance and power relations.

On the qualitative side, we have recent examples of studies that define their micro-foundations and elaborate on their expectations about evidence for one type of learning or another (Boswell 2008 on migration; Radaelli 2009 on regulatory reform). Schrefler (2010) illustrates the logic behind different types of learning using a knowledge utilization framework. Qualitative studies on hormone growth promoters by Dunlop (2010) and Dunlop and James (2007) connect the debate on epistemic communities mentioned above to principal–agent analysis, thus showing empirically how to hybridize models of ideational politics, often linked to reflexive governance, with the rational choice template of delegation models. In her attempt to document empirically the explanatory power of discursive institutionalism, Vivien Schmidt (2008) has provided a template for the empirical analysis of ideational politics in institutional settings. Empirically, learning is a manifestation of the transformative power of discourse. In his book on the diffusion of health and pension reforms, Weyland (2007) has shown how to test alternative causal mechanisms. The advocacy coalition framework has produced both qualitative and quantitative projects (Weible, Sabatier, and McQueen 2009). Peter Hall's concept of third-order, paradigmatic change has been looked at from different angles of historical neo-institutionalism (Beland 2005). Oliver and Pemberton have re-examined economic policy in Britain. They found that policy learning does not necessarily lead to policy change, due to the capacity of institutions to channel forces of change. A paradigm may fail without being necessarily replaced wholesale (Oliver and Pemberton 2004).

On the quantitative side, scholars have generally proceeded from a narrow definition of learning based on belief updating. The first step is the identification of a relevant and measurable policy outcome that can be used to define “success.” Then, the main hypothesis is that the adoption or level of the policy in a given country is influenced by that of other countries, with more “successful” countries being more influential. This approach can be implemented relatively straightforwardly in a standard regression framework, with the computation of so-called “spatial lags,” or weighted averages of policies elsewhere, in which weights correspond to different degrees of success. Alternatively, information on success can be used in a dyadic framework adapted from the international relations literature. For instance, Volden (2006) studied state children's health insurance policies in the US and found that states were more likely to imitate the programs of other states that had managed to increase the number of insured children, which was one of the main objectives of the policy. Another strategy is to measure the correlation of policies and outcomes and include it as an explanatory variable in the regression. For example, Gilardi, Füglister, and Luyet (2009) showed that specific hospital-financing instruments were more likely to be adopted when they were associated with a slower increase of health-care expenditure abroad. In her study of economic policies, Meseguer (2009) operationalized learning by directly following the Bayesian updating model and showed that, as expected theoretically, both average results abroad and their variability influence the willingness of governments to adopt the same policies. These approaches have contributed to closing the gap between theory and evidence and do not suffer from the looseness of the qualitative approaches discussed above, which is no little achievement; nevertheless they are not always practicable. The identification and measurement of relevant outcomes is often difficult because most policies have multi-dimensional goals, which can be loosely defined, thus complicating the analysis considerably. While this complexity is interesting in itself (Gilardi 2010), practically it makes quantitative analyses of learning feasible only in specific cases. Moreover, quantitative analysis in general has its share of problems, such as a number of simplifying assumptions (e.g. additivity and linearity) that can be at odds both with the theory and the nature of the phenomenon under study, and a limited capacity to identify causal relations (as opposed to correlations) in observational (i.e. not experimental) settings.
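The success-weighted spatial lag described above can be sketched in a few lines. The country labels, policy levels, and success scores below are hypothetical; the point is only the mechanics of weighting other units' policies by their success:

```python
# Illustrative sketch of a success-weighted "spatial lag" (invented data):
# each country's lag is the average of the other countries' policy levels,
# weighted by how successful those policies have been.

policies = {"A": 1.0, "B": 0.4, "C": 0.0}  # policy levels (hypothetical)
success = {"A": 0.9, "B": 0.5, "C": 0.1}   # outcome-based success scores

def spatial_lag(target):
    """Weighted average of other units' policies, weights = success scores."""
    others = [k for k in policies if k != target]
    total_weight = sum(success[k] for k in others)
    return sum(policies[k] * success[k] for k in others) / total_weight

for country in policies:
    print(country, round(spatial_lag(country), 3))
```

In a regression, each unit's spatial lag would then enter as an explanatory variable for its own policy adoption; here, country C's lag is pulled toward A's policy because A is the more successful adopter.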

Conclusions and normative implications

In conclusion, if the governance turn has influenced and rekindled the discussion on learning, there is also evidence that theoretical and empirical research on learning has potentially useful lessons for those working on governance and governments. As shown by the “new” modes of governance (Borrás and Radaelli 2010), whilst governance theorists design modes based on a logic of reflexivity and open, decentralized social learning, learning scholars point out that the same mode can be used both reflexively and in more coercive or hierarchical ways (Héritier and Rhodes 2011). This observation can be taken to a more abstract level: whilst the thrust of the reaction to “government” was an emphasis on private–public relationships and learning in dense networks of interaction, learning studies distinguish between different goals of learning. There are constellations of actors and institutions conducive to genuine improvement and reflexivity, albeit that policy improvement and “better policy” mean different things to different actors (Radaelli 2005). But the literature on learning has also found constellations of actors and institutions leading to more political goals, such as “learning how to gain popularity and win elections.” This way, learning scholars have also reminded governance theorists that governments and elected politicians still matter in public policy. Organizational scholars of learning have drawn attention to legitimate symbolic goals of learning and mechanisms of hope that constrain the potential of different types of governance. In the future, the cross-fertilization between governance and learning will continue, as evidenced by recent studies on “new” governance that stress the role of hierarchy (Héritier and Rhodes 2011) and the tradition of actor-centered institutionalism (Scharpf 1997). 
The emerging research agenda is about setting the conditions for different usages of governance (deliberative but also coercive) in the context of instrumental, political, and symbolic learning.

This leads us to a final observation on the normative implications. Ultimately, this question is crucial for normative assessments of governance structures. Knowing if and under what conditions learning can occur, and to what extent, has sweeping implications for the desirability of alternative forms of governance along a centralized–decentralized (or coordinated–uncoordinated, hierarchical–horizontal) continuum. The literature still has to make significant progress on this issue. A first problem is that learning tends to be unambiguously considered a positive phenomenon. However, we should ask what political actors learn. As Hall (1993: 293) emphasized, “[j]ust as a child can learn bad habits, governments, too, may learn the ‘wrong’ lessons from a given experience.” More than that, political actors may even actively learn how to pursue the “wrong” goal more effectively; terrorists are a case in point (Horowitz 2010).

More realistically, politicians may learn about the policies that help them get re-elected rather than those that solve problems, as discussed above. Elected policy-makers are not primarily interested in truth, reflexivity, and “what works”; they primarily seek power, bureau expansion, popularity, reputation, and other goals. Knowledge can be used to gain legitimacy for ill-planned policy reforms or to justify prefabricated opinions. For this reason, learning may not be beneficial to politics and public policy-making. Some authors have noted that learning leading to genuine policy solutions may still occur, but as an unintended effect: policy-makers seek power and other goals, yet their interaction produces learning nonetheless.

Multi-level governance settings can be equally ambiguous. Majone (2000, 2002) argued that regulatory networks help improve the accountability of independent regulators through peer pressure and reputational incentives that keep them under control. In other words, networks lead to a learning process in which regulators conform to emerging professional norms. While this may result in practices that are desirable from the point of view of democratic principles, it is also possible that networks merely reinforce the autonomy of regulators and their insulation from democratic processes, thereby aggravating the problem of democratic accountability. More generally, networks can fulfill a socialization function and promote the development of shared meanings and values and common definitions of problems and solutions (Börzel and Heard-Lauréote 2009: 142). Again, however, the outcome may or may not be desirable from a normative standpoint. As sociologists have shown, institutional isomorphism can be disconnected from the objective characteristics of practices, so that policies spread regardless of their actual consequences, and even despite their ineffectiveness.

In other words, the literature should overcome the presumption that learning, to the extent that it actually occurs, invariably leads to normatively desirable results. Of course, what counts as desirable is fundamentally contested and varies across political actors and over time. Acknowledging the normative ambiguity of learning does not dilute the concept but, by opening new research perspectives, sharpens it. It may even make learning more palatable to researchers who find the original concept too technocratic.


Claudio M. Radaelli wishes to acknowledge his European Research Council grant on the Analysis of Learning in Regulatory Governance (ALREG). Fabrizio Gilardi acknowledges the support of the Swiss National Science Foundation (NCCR Democracy).


Alam, T. 2007. Quand la vache folle retrouve son champ: Une comparaison transnationale de la remise en ordre d’un secteur d’action publique. PhD dissertation, CERAPS, Lille.

Beland, D. 2005. Ideas and social policy: An institutionalist perspective. Social Policy & Administration 39(1): 1–18.

Bevir, M. (ed.) 2010. The Sage Handbook of Governance. London: Sage.

Borrás, S. and Radaelli, C. M. 2010. Recalibrating the Open Method of Coordination: Towards Diverse and More Effective Usages. Stockholm: Swedish Institute for European Policy Studies, Sieps WP 2010, 7.

Börzel, T. A. and Heard-Lauréote, K. 2009. Networks in EU multi-level governance: Concepts and contributions. Journal of Public Policy 29: 135–151.

Boswell, C. 2008. The political functions of expert knowledge: Knowledge and legitimation in European Union immigration policy. Journal of European Public Policy 15: 471–488.

Brunsson, N. 1989. The Organization of Hypocrisy: Talk, Decisions and Actions in Organizations. Chichester and New York: John Wiley and Sons.

Campbell, J. L. 1998. Institutional analysis and the role of ideas in political economy. Theory and Society 27: 377–409.

Checkel, J. 1998. The constructivist turn in international relations theory. World Politics 50: 324–348.

Coleman, J. S. 1990. Foundations of Social Theory. Cambridge, MA: The Belknap Press of Harvard University Press.

DiMaggio, P. J. and Powell, W. W. 1991. The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. In P. J. DiMaggio and W. W. Powell (eds.), The New Institutionalism in Organizational Analysis. Chicago and London: University of Chicago Press, 63–82.

Dobbin, F., Simmons, B. and Garrett, G. 2007. The global diffusion of public policies: Social construction, coercion, competition, or learning? Annual Review of Sociology 33: 449–472.

Dolowitz, D. and Marsh, D. 1996. Who learns from whom: A review of the policy transfer literature. Political Studies 44: 343–357.

Douglas, M. 1986. How Institutions Think. London: Routledge and Kegan Paul.

Dunlop, C. 2010. Epistemic communities and two goals of delegation: Hormone growth promoters in the European Union. Science and Public Policy 37: 205–217.

Dunlop, C. and James, O. 2007. Principal–agent modelling and learning: The European Commission, experts and agricultural hormone growth promoters. Public Policy and Administration 22: 403–422.

Freeman, R. 2006. Learning in public policy. In M. Moran, M. Rein, and R. E. Goodin (eds.), Oxford Handbook of Public Policy. Oxford: Oxford University Press, 367–388.

Gilad, S. 2009. Juggling conflicting demands: The case of the UK Financial Ombudsman Service. Journal of Public Administration Research and Theory 19: 661–680.

Gilardi, F. 2010. Who learns from what in policy diffusion processes? American Journal of Political Science 54: 650–666.

Gilardi, F., Füglister, K. and Luyet, S. 2009. Learning from others: The diffusion of hospital financing reforms in OECD countries. Comparative Political Studies 42(4): 549–573.

Grin, J. and Loeber, A. 2007. Theories of policy learning: Agency, structure, and change. In F. Fischer, G. J. Miller, and M. S. Sidney (eds.), Handbook of Public Policy Analysis: Theory, Politics, and Methods. Boca Raton, FL: CRC Press, 201–219.

Hall, P. A. 1993. Policy paradigms, social learning, and the state: The case of economic policymaking in Britain. Comparative Politics 25: 275–296.

Heidenreich, M. and Bischoff, G. 2008. The open method of co-ordination: A way to the Europeanization of social and employment policies? Journal of Common Market Studies 46: 497–532.

Héritier, A. and Rhodes, M. (eds.) 2011. New Modes of Governance in Europe: Governing in the Shadow of Hierarchy. Basingstoke: Palgrave Macmillan.

Horowitz, M. C. 2010. Nonstate actors and the diffusion of innovations: The case of suicide terrorism. International Organization 64: 33–64.

James, O. and Lodge, M. 2003. The limitations of “policy transfer” and “lesson drawing” for public policy research. Political Studies Review 1: 179–193.

Kahneman, D. and Tversky, A. (eds.) 2000. Choices, Values, and Frames. Cambridge: Cambridge University Press.

Lenoble, J. and De Schutter, O. (eds.) 2010. Reflexive Governance: Redefining the Public Interest in a Pluralistic World. Oxford: Hart.

Levitt, B. and March, J. G. 1988. Organizational learning. Annual Review of Sociology 14: 319–340.

Majone, G. 2000. The credibility crisis of community regulation. Journal of Common Market Studies 38: 273–302.

Majone, G. 2002. The European Commission: The limits of centralization and the perils of parliamentarization. Governance 15: 375–392.

March, J. G. 1981. Footnotes to organizational change. Administrative Science Quarterly 26: 563–577.

May, P. J. 1992. Policy learning and failure. Journal of Public Policy 12: 331–354.

Meseguer, C. 2009. Learning, Policy Making, and Market Reforms. Cambridge: Cambridge University Press.

Oliver, M. J. and Pemberton, H. 2004. Learning and change in 20th-century British economic policy. Governance 17: 415–441.

Piore, M. J. and Sabel, C. F. 1984. The Second Industrial Divide: Possibilities for Prosperity. New York: Basic Books.

Radaelli, C. M. 2005. Diffusion without convergence: How political context shapes the adoption of regulatory impact assessment. Journal of European Public Policy 12: 924–943.

Radaelli, C. M. 2008. Europeanization, policy learning, and new modes of governance. Journal of Comparative Policy Analysis 10: 239–254.

Radaelli, C. M. 2009. Measuring policy learning: Regulatory impact assessment in Europe. Journal of European Public Policy 16: 1145–1164.

Rose, R. 1991. What is lesson-drawing? Journal of Public Policy 11: 3–30.

Sabel, C. and Zeitlin, J. (eds.) 2010. Experimentalist Governance in the European Union: Towards a New Architecture. Oxford: Oxford University Press.

Sanderson, I. 2002. Evaluation, policy learning and evidence-based policy making. Public Administration 80: 1–22.

Sanderson, I. 2009. Intelligent policy making for a complex world: Pragmatism, evidence and learning. Political Studies 57: 699–719.

Scharpf, F. 1997. Games Real Actors Can Play: Actor-Centered Institutionalism in Policy Research. Boulder, CO: Westview Press.

Schmidt, V. A. 2008. Discursive institutionalism: The explanatory power of ideas and discourse. Annual Review of Political Science 11: 303–326.

Schneider, A. and Ingram, H. 1988. Systematically pinching ideas: A comparative approach to policy design. Journal of Public Policy 8: 61–80.

Schrefler, L. 2010. The usage of scientific knowledge by independent regulatory agencies. Governance 23: 309–330.

Scott, C. 2010. Reflexive governance, regulation, and meta-regulation: Control or learning? In J. Lenoble and O. de Schutter (eds.), Reflexive Governance: Redefining the Public Interest in a Pluralistic World. Oxford: Hart, 43–66.

Stone, D., Garnett, M., and Denham, A. (eds.) 1998. Think Tanks across the World: A Comparative Perspective. Manchester: Manchester University Press.

Volden, C. 2006. States as policy laboratories: Emulating success in the Children's Health Insurance Program. American Journal of Political Science 50: 294–312.

Volden, C., Ting, M. M., and Carpenter, D. P. 2008. A formal model of learning and policy diffusion. American Political Science Review 102: 319–332.

Weible, C. M., Sabatier, P. A., and McQueen, K. 2009. Themes and variations: Taking stock of the advocacy coalitions framework. Policy Studies Journal 37: 121–140.

Weyland, K. 2007. Bounded Rationality and Policy Diffusion: Social Sector Reform in Latin America. Princeton: Princeton University Press.

Wiener, A. 2008. The Invisible Constitution of Politics: Contested Norms and International Encounters. Cambridge: Cambridge University Press.