
The Evaluation of Reemployment Programs: Between Impact Assessment and Theory-Based Approaches

Abstract and Keywords

In spite of a much improved labor market, the outcome of a leading evaluation report on reemployment programs in the Netherlands turned out negative. This result might be due to limitations of the evaluation method used by the researchers, who had to content themselves with a nonexperimental approach. Currently, for many evaluation researchers, the experimental method stands out as the superior design, especially when combined with a meta-analysis over several trials. We show, however, that experimental evaluations do not solve the uncertainties in this field. Meta-analyses of evaluation studies in Europe and the United States produced strikingly mixed results. Efforts to trace their diversity to variations in reemployment programs have not been very successful. This is mainly because of the “black box character” of many experimental evaluations, which offer little information about the content of the programs. Following “realistic evaluation,” we argue for a focus on the theories behind these programs in evaluation research. To this end, reemployment services are characterized in terms of twelve core (mediating) mechanisms.

Keywords: reemployment services, realistic evaluation, experimental versus nonexperimental designs, meta-analysis, intervention strategies, mediating mechanisms, active labor market policy

In previous chapters we have seen how psychological insights have been brought to bear on the design of reemployment services. The JOBS program in particular, described by Price and Vinokur (this volume), can be regarded as an exercise in applied psychology. These and other rigorously tested interventions have repeatedly shown encouraging results. But do these interventions also work on a larger scale and systematically? And how can we establish that? This chapter deals with the issue of evaluation. It is written from a socioeconomic perspective, since the large majority of the evaluations in this field are conducted by economists and econometricians. Yet this approach is not fundamentally different from the testing strategies we encounter in the psychological literature. You will therefore find the present chapter clearly in line with the other parts of this handbook, many of its methodological topics being familiar to psychologists. This holds especially for the application of experimental designs, the use of meta-analysis, and the recognition of explanatory mechanisms. We illustrate our discussion with the evaluation of reemployment services in the Netherlands, where an extensive practice of these services has developed over the past 10 years.

Nice Success or Manifest Failure? The Ambiguities of the Dutch Reemployment Experience

In 2008, the Dutch government sent the Re-employment Policy Review to parliament. The debate that followed can be called memorable for several reasons. First, it is highly unusual for a government to be heavily criticized in spite of the favorable overall outcome of a policy. Second, the debate paved the way for a very substantial reduction of reemployment efforts. Third, it demonstrated the power of scientific evaluation research.

A Brief History

From the crisis of the 1980s onward, the Netherlands had to deal with relatively high unemployment and disability rates and a correspondingly large share of social security expenditures in the national income. Because of this, the country was even called the “sick man of Europe.” Since then, successive governments have strongly promoted a “work work work” strategy, with an increasing focus on reemploying the unemployed, the sick, and the disabled. Around the turn of the century, a quasimarket for commercially operating reemployment companies was created to this end. Ten years on, the Netherlands has the lowest unemployment rate in the European Union and an above-average labor force participation.

This turnaround is described in the Policy Review. The number of unemployment and social assistance benefits sharply decreased after the 2001–2004 recession. The number of disability benefits fell slightly. The share of the long-term unemployed in total unemployment has fallen below the European average. In particular, the decrease in the number of welfare recipients to the lowest level in decades is considered a “historical landmark.” The Dutch government writes that “these positive developments cannot be automatically attributed to the re-employment policy. However, it is likely that the re-employment policy has also contributed to this” (Ministry of Social Affairs and Employment, 2008, p. 40). The Policy Review then presents a table showing that the results of reemployment courses have increased over the years.1 Of those who started a reemployment course in 2002, some 26% had found a job 2 years later. Of those who started a course in 2004, the corresponding figure was 41%. “Therefore, the results have increased by more than half. This means that we are on course towards achieving the objective” (p. 40). To give a glimpse of what is to follow: these figures represent what researchers call the gross effectiveness of a policy.

So Secretary Piet Hein Donner and Assistant Secretary Ahmed Aboutaleb came home with some good macro figures. You would expect that they would look forward to the debate with confidence and that approval would be given to the policy conducted. However, nothing was further from the truth. Members of the “Second Chamber” (i.e., the parliament) clashed vehemently with the government about the disappointing results of the reemployment policy. “Billions are disappearing into thin air, I do not understand that you accept this without any questions asked,” said a spokesman for the leftist opposition. And the spokesman of the ruling party (the secretary’s party) said: “These figures are shocking. Tax money is being wasted. The government is far too optimistic.” Newspaper headlines expressed this mood with equal emphasis: “Reemployment rarely results in a job” and “Helping one unemployed individual find a job costs EUR 537,000.”2 The subsequent (right-wing) cabinet led by Mr. Rutte was therefore of the opinion in 2010 that a large part of the reemployment budget could be cut without any noticeable damage.

At some point in the Policy Review, the members of government were forced to consider the net effectiveness of the programs. The idea is that some of the unemployed would have found a job anyway, without reemployment aid, which means that it comes down to estimating the added value of the reemployment services. What would have happened without them? It was mainly economists who, in recent years, emphasized net effectiveness as the only correct measure of effectiveness (e.g., De Koning, 2003; Koning & Heyma, 2009); these voices were especially influential in a departmental report by the Committee on the Future of Labour Market Policy (2001). The fact that the secretaries of state could not get around this in their review shows that economists enjoy the “power to define.” This in itself is worth noting. Evaluation research has been described, in the words of Wildavsky (1979), as “speaking truth to power.” The turn in the Policy Review apparently shows that it is possible for an evaluation science to acquire so much authority that even a government has to listen.
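To make the gross/net distinction concrete, consider a minimal numerical sketch. The 41% gross figure is the one cited from the Policy Review above; the share who would have found work anyway is invented purely for illustration.

```python
# Hypothetical illustration of gross vs. net effectiveness.
placed = 0.41             # gross: share of participants with a job 2 years after the course
found_work_anyway = 0.35  # invented: share of a comparable group finding work unaided

gross_effectiveness = placed
net_effectiveness = placed - found_work_anyway  # the program's added value

print(f"gross: {gross_effectiveness:.0%}, net: {net_effectiveness:.0%}")  # gross: 41%, net: 6%
```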

Regarding net effectiveness, the message was clearly less positive. The Policy Review pointed to the results of econometric research which allegedly proved that the added value of reemployment courses, on average, was only a few percentage points. It was these results that the newspapers and the political debate focused on. Therefore there is every reason to look more closely at these results and, as a corollary, to discuss the ways in which reemployment programs can be evaluated in greater depth.

We do this by first presenting the three main approaches to impact evaluation in the next section. One of the three, the nonexperimental approach, will subsequently be illustrated by the Dutch case. The difficulties involved in this case and similar cases drive many evaluation researchers toward the experimental approach. However, an international tour, discussed further on, shows that this often merely shifts the problem and leaves many questions unanswered. Whether the third approach—the “realistic evaluation”—provides a solution remains to be seen. As yet, this approach has few exemplary instances to show. Nevertheless we present, as a first in this field, questions that a realistic reemployment evaluation should answer. Our conclusion is that this would bring a welcome reorientation to evaluation research.

Impact Evaluation: Three Principal Approaches

In the field of impact evaluation, the dominant question is: To what extent can a measured effect be attributed to the intervention with certainty? This is the internal validity of the evaluation design. In addition, there is the external validity that concerns generalization: To what extent will the effect also occur at other times, in other places, and in other groups? There is tension between the attention given to internal and external validity, as will become clear.

The design of an evaluation is crucial to be able to determine these impacts. Three approaches can be differentiated:

  1. The experimental design. In the dominant view, this is the ideal design, or the “gold standard” of policy evaluation.

  2. The nonexperimental design. Within this design, an attempt is often made to approach the certainty of the experimental design as closely as possible with the aid of statistical methods. In the majority view, this counts as the second-best alternative.

  3. Theory-based (“realistic”) evaluation. This, too, could be regarded as a second-best alternative to the experimental design, but according to some major proponents (Pawson & Tilley, 1997), it is an even better alternative. Authors who are more inclined to compromise view this approach as complementary to 1 and 2 and argue that it supplements them (e.g., Van der Knaap, Leeuw, Bogaerts, & Nijssen, 2008).

Apart from these, a macroeconomic perspective can be discerned. In this perspective one tries to answer the question of how effects of specific measures add up to the aggregate level. The two polar cases are displacement (i.e., other persons lose their jobs or opportunities) and positive spillovers (i.e., other persons benefit as well). According to this macro framework, the results of a specific program will be overestimated when displacement occurs and underestimated when third parties benefit from the program. Things get even more complicated when one assumes an impact on wages, which affects the overall level of employment. Some economists therefore theorize that micro-level evaluations like experiments can be dead wrong if such equilibrium effects are neglected (cf. Cahuc & Le Barbanchon, 2010). Reemployment policy evaluations only rarely reach this level (but see Blundell, Costa Dias, Meghir, & Van Reenen, 2004; De Koning, 2001); therefore we refrain from it in this review. This perspective makes it understandable, however, why experimental evaluations are less common in macro-oriented fields like economics and sociology than in clinical fields like medicine and psychology. As most readers will acknowledge, the experimental design very much belongs to the core business of psychologists.

Finally, it is important to note that systematic reviews or metaevaluations are becoming increasingly important in the field of policy evaluation. The abundance of studies has created a need for policymakers (and scientists) to take stock of all these studies from time to time. In the past, this took place especially in a narrative sense: a researcher would write a literature overview in which he or she made the choices and set the emphasis. Objections arose against the subjectivity of this approach, as a result of which systematic reviews must now comply with strict rules regarding the search strategy (databases, search terms), the selection of studies, and the weighting of the results. If, in addition, an attempt is made to calculate an average effect across all the selected studies, a meta-analysis is performed. The advance of meta-analysis has further strengthened the dominance of the experimental design. Meta-analysts need a selection criterion for the studies they wish to include in their synthesis. The studies with the “strongest designs” are preferred in this regard. Thus, a hierarchy of evidence has been created, where experimental designs are at the top of the list and many “weaker” designs are not even addressed. These other types of studies are therefore not accepted as evidence and, in scientific terms, remain invisible.
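The pooling step of a meta-analysis can be illustrated with a minimal sketch. The effect sizes and standard errors below are invented; real syntheses add quality weighting, heterogeneity tests, and often random-effects models.

```python
import numpy as np

# Invented per-study impact estimates (e.g., percentage-point effects on
# the job-finding rate) and their standard errors.
effects = np.array([0.04, 0.11, -0.02, 0.07])
se = np.array([0.02, 0.05, 0.03, 0.04])

# Fixed-effect pooling: weight each study by the inverse of its sampling
# variance, so that precise studies count more toward the average effect.
w = 1.0 / se**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```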

1. Experimental research.

The key point of this design is the wish to ensure internal validity by excluding any systematic influence caused by any factor other than the policy. This is done by randomly allocating those who will participate in a program and those who will form the control group of nonparticipants. The standard abbreviation for this approach has therefore become RCT: randomized controlled trial. Designing, implementing, and preserving the trial usually takes considerable effort, but it has proved to be a feasible approach for large social programs as well (Hollister, 2008; Oakley et al., 2003). According to the proponents, a big advantage is that the design can be easily explained to politicians and other outsiders and that the results are more easily accepted than those of other tests (Burtless, 1995). This is particularly important for social programs, because they always take place in a political arena where there are usually parties that have a vested or ideological interest in disputing the results of evaluations. This is undeniably the case in the field of reemployment services. For left-wing critics, this is a surrogate policy that discharges the government from the duty to create real jobs, whereas for conservatives it is an excess of the welfare state that undermines the unemployed person’s own responsibility. Only unequivocal and “incontrovertible” results of experiments would be able to break through political bias and deadlocks.
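The logic of the design can be shown in a few lines of simulation. This is a sketch under invented assumptions, not a report of any actual trial: a latent trait affects job finding, but randomization makes it irrelevant to the impact estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented latent "employability"; random assignment guarantees it is
# unrelated to treatment status, so no statistical correction is needed.
employability = rng.uniform(-0.10, 0.10, n)
treated = rng.integers(0, 2, n).astype(bool)

# Assume the program truly raises the job-finding probability by 5 points.
found_job = rng.random(n) < 0.35 + employability + 0.05 * treated

# Under randomization, the simple difference in means is an unbiased
# estimate of the program's impact.
impact = found_job[treated].mean() - found_job[~treated].mean()
print(f"estimated impact: {impact:.3f}")  # close to 0.05
```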

A full RCT even makes demands with regard to the awareness of the participants. To counteract motivation effects, it is preferred that the participants not know which group they belong to (“double blind”). In medical science, where this design is widely used, it is customary to give the control group a placebo. However, so far nobody has managed to figure out how to get people to participate in a pseudo‒reemployment program (a program without content)—unless you, as some Dutch critics would suggest in view of the results mentioned above, feel that all reemployment programs are actually pseudo.

2. Nonexperimental research.

Usually, in this approach, too, the impact of the policy is measured by the difference in comparison to a control group which, for the sake of clarity, is better called the “comparison group” here. The intention is to make the comparison group as similar as possible to the intervention group and, where this is not possible, to correct the differences by statistical means.3 Equating the groups in advance is known as the matching method. For each participant in a reemployment program, a nonparticipant is sought who is as similar as possible (in terms of age, gender, ethnicity, education, work experience, duration of unemployment, etc.). The seemingly endless list of variables that can be devised here makes matching a task that is almost impossible from the outset, which is why the method has fallen into disuse over the years. More recently, however, it has made a comeback in the form of propensity score matching, whereby the long series of measured variables is first reduced to a distance measure, after which members of both groups are matched based on this distance measure. This distance is usually calculated in relation to the chance of participating in the program. Thus, the likelihood of being included in a reemployment program is a weighted combination of age, sex, education, etc., with the likelihood of some combinations being much greater than that of others. These differences in likelihood probably say much about the underlying differences in labor market opportunities. Through this convenient procedure, the groups are made comparable on the basis of the most relevant measured characteristics.
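A minimal sketch of propensity score matching follows, with invented covariates and an invented selection rule; a real analysis would also check covariate balance and common support.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000

# Invented covariates: age, years of education, unemployment duration (months).
X = np.column_stack([
    rng.uniform(20, 60, n),
    rng.normal(12, 2, n),
    rng.exponential(10, n),
])

# Selective participation: longer unemployment raises the odds of entering
# a reemployment program.
logit = -1.0 + 0.05 * (X[:, 2] - 10)
participant = rng.random(n) < 1 / (1 + np.exp(-logit))

# Step 1: estimate the propensity score, P(participation | covariates).
ps = LogisticRegression(max_iter=1_000).fit(X, participant).predict_proba(X)[:, 1]

# Step 2: match each participant to the nonparticipant with the nearest score.
t_idx = np.where(participant)[0]
c_idx = np.where(~participant)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Outcomes of participants and their matches can now be compared; the groups
# are comparable with respect to the *measured* characteristics only.
print("mean score, participants:        ", ps[t_idx].mean().round(3))
print("mean score, matched comparisons: ", ps[matches].mean().round(3))
```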

However, conducting statistical corrections afterward is the most common nonexperimental method. Suppose that participants of reemployment programs differ from nonparticipants in terms of the duration of their unemployment. By including unemployment duration as a control variable in the impact estimation (usually a regression equation), its influence is neutralized and the “net effect” (added value) of participation remains in the coefficient of the participation variable. All this is of course provided that these effects can be estimated using the linear model of regression analysis, but terms for nonlinear relationships and interaction effects can also be included. The regression models can become highly complex and sophisticated, but the approach is essentially not different from the standard practice of social scientists to correct for all kinds of “confounding variables” in explanatory statistical analysis.
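The regression-adjustment logic can be illustrated with a small simulation (all numbers invented). The naive comparison is contaminated by selective participation on unemployment duration; including duration as a control recovers the assumed program effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Invented data: participants have longer unemployment durations (selection),
# and duration itself lowers reemployment chances (confounding).
duration = rng.exponential(10, n)
participation = (rng.random(n) < 1 / (1 + np.exp(-(duration - 10) / 5))).astype(float)
outcome = 0.5 - 0.02 * duration + 0.05 * participation + rng.normal(0, 0.2, n)

# Naive comparison confounds the program effect with duration.
naive = outcome[participation == 1].mean() - outcome[participation == 0].mean()

# Regression adjustment: with duration as a control variable, the "net effect"
# of participation remains in its coefficient.
X = np.column_stack([np.ones(n), participation, duration])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference: {naive:.3f}")                      # biased downward
print(f"adjusted participation coefficient: {beta[1]:.3f}")  # close to 0.05
```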

A major complication occurs because in reemployment programs events are spread over time. The duration of a reemployment program can be different for each person and the inflow occurs at different times. The same applies to the result of getting a job. In impact evaluations, several outcomes can be used as a success criterion: the finding of a job, the quality of that job, the amount of time it took to find the job, and the income earned through the job. All those criteria need to be observed in some real-time interval. When the observation period is over (i.e., the survey is closed), some participants are employed and others are not. Some of the latter will soon find a job and some never will. However, we do not know this: the data are censored. Grouping all non‒job finders together is a rough procedure and would result in incorrect estimates of the predictors (the independent variables). Therefore it is better to include the differing time periods more directly in the impact estimates. To this end, the observed time period is subdivided into many small time boxes and the conditional probability that a person finds a job is estimated for each of those boxes (“conditional” means: given the person’s characteristics and whether or not he is participating in a reemployment program). This way, more consideration is given to the influence of time, whereby the time dependency can be modeled in several ways (linear, curvilinear, indefinite, etc.). Accordingly, the distorting effect of the observations all ending at one single point in time is reduced. Furthermore, cofactors that change over time (such as the economy) can be connected to the time periods as covariates. It is also possible to make time periods infinitely small (this is called a continuous time model as opposed to discrete time periods). Econometricians who conduct these time-sensitive analyses call them econometric duration models and usually present them together with an impressive series of formulas. But similar models are also applied in other scientific disciplines (biology, sociology, demography, medicine), often under different names (survival analysis, event-history analysis). By comparison, the duration models that econometricians apply in reemployment studies are mostly of a simple variety.
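A minimal sketch of the discrete-time variant described above, with invented data: spells are expanded into person-period records, censoring is respected, and a logistic regression estimates the conditional (monthly) job-finding hazard.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1_000

# Invented spells: a 24-month observation window; spells lasting beyond it
# are censored instead of being lumped together with the non-job finders.
in_program = rng.integers(0, 2, n)
true_hazard = 0.05 + 0.02 * in_program        # monthly job-finding probability
months_to_job = rng.geometric(true_hazard)
observed = np.minimum(months_to_job, 24)
found_job = months_to_job <= 24

# Person-period expansion: one row per person per month at risk ("time box"),
# with an indicator for whether the job was found in that month.
rows = [
    (t, in_program[i], int(found_job[i] and t == observed[i]))
    for i in range(n)
    for t in range(1, observed[i] + 1)
]
pp = pd.DataFrame(rows, columns=["month", "in_program", "event"])

# Logistic regression on the person-period file estimates the conditional
# hazard; time dependency enters through the month term (here simply linear).
model = LogisticRegression().fit(pp[["month", "in_program"]], pp["event"])
print("coefficients (month, in_program):", model.coef_.round(3))
```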

Using the statistical tools of the social sciences, the measured differences between the treatment group and the comparison group can thus be tackled effectively. However, the unmeasured differences are the big issue. Participants of reemployment programs vary in motivation, presentation, language skills, physical and psychological health, social skills, self-control, and a whole series of other characteristics. They probably also differ from nonparticipants in these respects. Participation in a reemployment program is selective and endogenous if there is no random assignment. It results from self-selection or selection by bureaucrats, who want to deliver “tailor-made work” and do not want to include candidates who in their view are certain to fail or can easily find reemployment without help. The “soft” personal characteristics that are linked to this selection are undeniably highly important for the success of reemployment programs. However, they are usually not recorded anywhere, certainly not in administrative files, and are therefore usually not visible to evaluation researchers. In evaluation jargon, this is called unobserved heterogeneity. It is the Achilles’ heel of the nonexperimental approach.

However, the toolkit of nonexperimentalists also includes the surprising claim, made mainly by econometricians, that statistical corrections can be applied to unmeasured characteristics as well. This is where many researchers give up, but the claim is serious. Usually a so-called instrumental variable is sought, which is connected to participation in a program but not to its outcome, and to which the unmeasured differences “stick,” as it were. A more sophisticated version of this, which can be explained convincingly, is Heckman’s two-step correction. In the first step, participation in a reemployment program is estimated through a regression model. It can be reasonably assumed that the unmeasured differences between the unemployed are reflected in the unexplained variance or error term of this selection equation. If the individual error terms (i.e., the residuals or a derivative thereof) can now be included in the substantive equation in which the outcome of the program is estimated, unmeasured differences are still represented (Heckman, 1979) (see, for well-considered discussions, Bushway, Johnson, & Slocum, 2007; Winship & Mare, 1992). This approach is based on a series of assumptions and is often not practicable. Moreover, the results of these corrections appear to be sensitive to the model specification. Nonexperimental results also often differ substantially from the estimates obtained from an experimental design (Glazerman, Levy, & Myers, 2003; Pirog, Buffardi, Chrisinger, Singh, & Briney, 2009). Nevertheless, proponents believe these statistical corrections constitute a powerful weapon. And in the absence of an RCT there is always the ultimate argument of “We have to do something.”
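A stylized sketch of the two-step logic with invented data. For brevity the selection index is taken as known rather than estimated by probit, which is what a real application would do first; the point is only that the Mills-ratio term absorbs the unmeasured selectivity.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 5_000

# Invented model: an unmeasured trait ("motivation") drives both selection
# into the observed sample and the outcome, biasing naive estimates.
z = rng.normal(size=n)              # variable affecting selection only
motivation = rng.normal(size=n)     # unobserved by the researcher
selected = (0.8 * z + motivation + rng.normal(size=n)) > 0
outcome = 1.0 + 0.5 * motivation + rng.normal(size=n)   # true mean is 1.0

# Step 1: selection equation. The composite error (motivation + noise) has
# sd sqrt(2); we standardize the known index instead of running a probit.
index = 0.8 * z / np.sqrt(2.0)
mills = norm.pdf(index) / norm.cdf(index)   # inverse Mills ratio

# Step 2: include the Mills ratio in the outcome equation for the selected
# sample, so the unmeasured selectivity is represented by a regressor.
Xs = np.column_stack([np.ones(selected.sum()), mills[selected]])
beta, *_ = np.linalg.lstsq(Xs, outcome[selected], rcond=None)

print("naive selected-sample mean:", outcome[selected].mean().round(3))  # biased upward
print("corrected intercept:       ", beta[0].round(3))                   # close to 1.0
```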

3. Theory-based (“realistic”) evaluation.

Through the years, supporters of this approach have argued that the one-sided focus on impact assessment leaves too many questions unanswered. Why did the effect occur (or not)? Does this apply in general, or were there special circumstances? And what does this say about the underlying theory of the intervention? Advocates of theory-based evaluation keep pointing out that policies are always based on underlying theoretical ideas. Therefore an evaluation is only instructive if those theoretical ideas or—a somewhat stricter term—the policy theories themselves can be assessed. Being able to explain the obtained results is a necessary condition for this (Chen, 1990; Weiss, 1997). In the chapter by Price and Vinokur, this volume, concerning the JOBS program, we learn about the usefulness of this approach. However, within the “black box” mode of impact evaluation, such an explanation is often impossible to give (cf. Mosley & Sol, 2001). Thus in the evaluation of the Dutch reemployment policy, essential explanatory questions have remained unanswered. Does the alleged ineffectiveness of reemployment programs apply in general or only to a specific approach? Are the underlying assumptions and ideas incorrect or were they just poorly implemented? Does the problem mainly lie with the reemployment companies or were the clients not up to the task? Was the creation of a “market” for reemployment services a good idea after all? And, last but not least, does little added value (“net effectiveness”) mean that reemployment services are unnecessary or ineffective?

British sociologists Pawson and Tilley (1997) gave these criticisms a powerful boost with their book Realistic Evaluation, which can already be characterized as a “modern classic.” They argue that, in spite of the method’s high standing, the results of an RCT are often anticlimactic. All the efforts made to establish and maintain this demanding research design result in little more than a simple difference score. For Pawson and Tilley, evaluating means understanding what has happened, and without an answer to “why” questions this understanding cannot occur. They add, importantly, that usually the confusion only increases if multiple RCTs are performed. Sometimes an experiment shows a positive result; the next time this is curiously lacking. “Most things have been found sometimes to work” (Pawson & Tilley, p. 10). This is because in social policy, whether a measure can be successful or not depends decisively on the conditions. In experimental designs, it is those very conditions that are regarded as disturbing factors that should be neutralized as much as possible. “The explanatory capacity of experimental evaluation rests on an irresolvable paradox. . . . The method . . . seeks to discount in design and evidence precisely that which needs to be addressed in explanation” (Pawson & Tilley, p. 31).

This becomes very clear in the role of facilitating conditions. Pawson and Tilley argue that no policy measure can do without the help of favorable circumstances. In reemployment research, the role of motivation is a striking example of this. Most reemployment professionals will endorse the view that motivation of the client is an indispensable condition for a reemployment program to succeed. However, for conventional evaluation researchers such motivation is a variable that should be controlled for. In a plea for more RCTs in labor market policy, Dutch economist De Beer (2001) exemplified his objection to nonrandom assignment as follows: “If participants are generally more motivated than non-participants, the effect of the instrument is overstated” (p. 74). Pawson and Tilley would find this an absurd position, and in their book it seems that they had already anticipated De Beer’s argument:

Quasi-experimentation’s method of random allocation, or efforts to mimic it as closely as possible, represents an endeavour to cancel out differences, to find out whether a program will work without the added advantage of special conditions liable to enable it to do so. This is absurd. It is an effort to write out what is essential to a program—social conditions favourable to its success.

(Pawson & Tilley, 1997, p. 52)

A “realistic” evaluation should a priori assume that policy measures can only be successful under certain conditions. We may call this the principle of conditional effectiveness. This seems particularly relevant to the field of reemployment programs, because the results here obviously depend on the cooperation of clients and other stakeholders (notably employers and benefits officers). Social policy always involves a change in behavior. Behavioral change results from changes in the resources and reasoning of all concerned. What we need, according to Pawson and Tilley, is a methodology that seeks to understand what a program actually offers to incite a change in behavior of the actors, and why not every situation is suitable to effect that change. Their proposed method of realistic evaluation is centered on so-called CMO chains: configurations of contexts, mechanisms, and outcomes.

The mechanisms are the ways in which a behavioral change can be brought about. In a paper of a later date, Pawson (2003) suggested that these can essentially be divided into material, cognitive, social, and emotional mechanisms (p. 472). Exactly how these mechanisms work is often not visible but is a matter of theoretical assumptions. The policy theories of reemployment are, in other words, arguments about the initiation and development of mechanisms. If the policy theories are complete, they also identify the main conditions under which these mechanisms may or may not occur. This way, the context and the mechanism together ensure that a particular outcome is achieved: context + mechanism = outcome. In analytical terms, the context is therefore to be understood as the moderator of the relationship between a program and the outcome.
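In statistical terms, reading the context as a moderator means that conditional effectiveness shows up as an interaction term. A minimal sketch with an invented context (client motivation):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4_000

# Invented CMO illustration: the program's mechanism only "fires" for
# motivated clients (the context), i.e., a program-by-context interaction.
program = rng.integers(0, 2, n)
motivated = rng.integers(0, 2, n)    # the context, acting as moderator
outcome = 0.2 + 0.15 * program * motivated + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), program, motivated, program * motivated])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# The program's main effect is near zero; the interaction coefficient
# carries the conditional effectiveness (context + mechanism = outcome).
print("program main effect:          ", beta[1].round(3))  # ~0.00
print("program x context interaction:", beta[3].round(3))  # ~0.15
```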

The main question of evaluation can now be easily summarized in the motto “What works for whom under what circumstances?” Below (as in the chapter by Price & Vinokur, this volume), we will see that this key question is not fundamentally incompatible with the RCT approach; in practice, however, this will not be easy. Pawson and Tilley (1997) are even more skeptical about compatibility. They stress that the methodology of internal validity, net effectiveness, and randomization distracts from the key question instead of bringing us closer to it. By demanding absolute certainty that the measured effect can be attributed to the program and nothing but the program, the supporting conditions that should enable the effect are lost from sight. In this sense, a tension exists between the pursuit of internal and external validity. Whoever tries to maximize the former will lose in regard to the latter.

The Evaluation of Dutch Reemployment Policy

How has Dutch reemployment policy been evaluated? In the Netherlands, scholars have for many years complained about the reluctance of politicians to establish or promote experimental evaluations in the field of social security. At last the government agreed to an experiment, which started in 2012. The evaluation of the reemployment programs in the Policy Review is therefore not based on an experimental design.

The unfavorable judgment of the added value of reemployment programs is mainly based on nonexperimental impact evaluations carried out by econometricians. This discipline has clearly come to dominate the authoritative evaluations in this field in the Netherlands. The primary study on which the Policy Review is based was conducted by the Foundation for Economic Research (SEO), commissioned by the Council for Work and Income (Groot et al., 2008). A few other studies on which the picture was based are mentioned as well, but it is reasonable and clarifying to zoom in on this dominant study. The report is illustrative of the problems nonexperimental evaluations run into.

The study is based on administrative data. Combining various benefits registrations has created an impressive database. It includes everyone who received a social benefit in the period from 1999 to 2006. Information is added about the use of reemployment services and (if applicable) the reason for no longer receiving the benefit. The longitudinal nature of the data made it possible to use an econometric duration model (see above) as a basis for the estimates of the impact of the reemployment services.

No matter how impressive and large the records and no matter how sophisticated the statistical analysis, these are not experimental data. The main problem for the researchers is that a true control group of unemployed people who do not participate in a reemployment program is lacking. This is further complicated by the fact that, over time, most of the unemployed persons are eligible for some form of reemployment counseling and, as a result, comparison to nonparticipants often boils down to a comparison with later participants. The researchers are compelled to resort to this, and their analysis therefore essentially relies on the fact that some of the unemployed joined a reemployment course earlier than others (Groot et al., 2008, appendix). Since exact dates are available, technically a model can still be estimated of the use made of reemployment services (at some point in time) in relation to the duration of the benefit. However, the researchers themselves know perfectly well that the time of entrance into a reemployment course is selective (Groot et al., p. 4).

We would like to highlight this point, because the implication for policy evaluation is fundamental. Evaluation researchers generally assume that selectivity can work both ways: overestimating or underestimating the results. In regard to the former, program officers may indeed prefer to select the most promising individuals in order to make their program a success. In that case the results seem better than they would have been in a larger group. The second possibility, however, is also realistic and puts evaluation research in a somewhat paradoxical situation. Program officers are generally motivated and encouraged to deliver “customized solutions.” They are considered to use their tools where and when they are needed most. Thus, officers do not offer reemployment programs if they feel the unemployed person has a good chance of finding a job him/herself. Suppose that officers can deliver a perfectly customized solution and use tools in such a way that the differences in opportunities between the unemployed are canceled out. The result can only be that the bivariate relationship between the tools and the outcome disappears (cf. Wotschack, Glebbeek, & Wittek, 2013). We do not intend to suggest that this tailor-made work is always provided in actual practice or that it always must have this result, but the degree to which such selectivity plays a role is usually not known. Moreover, we dare to say that equalizing opportunities is the ideal or guideline of policy implementation in very many cases. Therefore the irony is that the better the officers do their job, the harder it is for researchers to prove an impact. We can identify this as the ideal of zero correlation: if the use of a policy instrument neutralizes starting conditions perfectly, the correlation between intervention and outcome will tend to be zero.
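The zero-correlation ideal is easy to demonstrate in a simulation (all numbers invented): officers target the program at the weakest job seekers, and the program fully lifts them to the level nonparticipants reach on average.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000

# Invented baseline job-finding chances; officers give the program to the
# weakest job seekers ("help where it is needed most").
job_chance = rng.uniform(0.2, 0.8, n)
gets_program = job_chance < 0.5

# Assume perfect tailoring: participants are lifted to 0.65, the average
# level of the nonparticipants, so starting conditions are neutralized.
effect = np.where(gets_program, 0.65 - job_chance, 0.0)
found_job = rng.random(n) < job_chance + effect

# Both groups now succeed at the same rate: the bivariate program-outcome
# correlation tends to zero although the program works exactly as intended.
print("job-finding rate, participants:   ", found_job[gets_program].mean().round(3))
print("job-finding rate, nonparticipants:", found_job[~gets_program].mean().round(3))
print("correlation:", np.corrcoef(gets_program, found_job)[0, 1].round(3))
```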

This places the low impact estimates of the reemployment evaluation in a different light. Obviously selectivity can be partly eliminated from the data by controlling statistically for characteristics and prospects of the unemployed. And, of course, this is what happened in the studies we are discussing here. “By taking into account the selective use of reemployment instruments in the model, a pure effect can still be calculated,” the report said (Groot et al., 2008, p. 101). And: the selectivity “is at least in part corrected . . . by relating the probability of the resumption of work to the probability of the use of reemployment tools” (p. 102). The latter clearly refers to an application of the Heckman correction that we mentioned above. This passage in the methodological annex, however, was probably copied from another report, because the main text states that this did not occur (p. 4). Corrections were made for a whole series of measured characteristics of the unemployed, like age, sex, education, region, ethnic group, benefit duration, and “distance to the labor market” (an administrative category). All practitioners, however, will point to the importance of the unmeasured characteristics: motivation, presentation, self-confidence, and health. What’s more, these “soft characteristics” are also emphasized in a literature review enclosed with the evaluation (Gelderblom & De Koning, 2007). In short, despite all statistical efforts, the uneasy feeling remains that selectivity in policy implementation has affected the impact estimates in unknown ways. This selectivity is therefore a severe bias for researchers indeed—but, as we argued, for policy practitioners it is the very core of their business and professionalism. The sobering point we want to make is that evaluation research generally works against the grain, because time and again researchers need to seek corrections for what policy practice pursues as an ideal.

All this is not intended as criticism of the researchers—they “had to do something”—but it does indicate that the seemingly destructive image that has been created around reemployment services is not based on strong empirical grounds. This has also been emphasized by fellow researchers in the discussion following the Policy Review. For instance, Dutch labor economist Borghans (2008) writes in a response: “Econometricians have put a great deal of effort into developing techniques for comparing people in cases where no randomisation had taken place. These methods often use unintentional coincidences that occur in reemployment practice. Because these coincidences are often very minor, the power of these methods is not very great” (p. 8).

This conclusion leads Borghans to make a direct plea for RCTs. Only if participants and nonparticipants are made truly comparable through randomization can solid conclusions be drawn. This completes the circle: nonexperimental methods were developed to overcome the practical difficulties of RCTs, but their deficiencies push researchers back to the experimental design. To this conclusion Borghans adds a firm rebuke to policymakers: “Therefore, the unwillingness to let participation in reemployment programs be partly determined by chance means for benefit recipients that the whims of the politicians decide whether they can/should participate in a program nobody knows the effectiveness of” (Borghans, 2008, p. 9).

We could agree with this conclusion were it not that another problem immediately arises. Borghans’s formulation suggests that we will know once and for all after an RCT has been conducted. Let us organize a trial and determine indisputably whether reemployment services work or not. We believe this is an illusion. Besides the internal validity, one also has to account for the external validity of evaluation research. And in this regard, we cannot but conclude that there is a great deal of variation among seemingly similar policies. Reemployment services do not constitute a homogeneous intervention but in practice represent a wide variety of approaches and instruments (e.g., Groothoff et al., 2008; Sol et al., 2011). In addition to these varying approaches, the groups of participants and—highly essential to this policy—labor market conditions vary. Therefore new RCTs will constantly be needed to determine the efficacy of specific approaches for specific target groups under specific circumstances. Without the guidance of more substantive ideas or theories, we would get lost in this forest of RCTs as well. Let us examine this by looking across the Dutch border and considering the results of studies that are indeed based on the experimental method.

Evaluation of Reemployment Programs: A Global View

The story so far could create the impression that experiments are rare and that scholars everywhere insist on them in vain. However, this is not true if we look at other countries. In the United States especially, experimental evaluations have often been conducted, including in the fields of social policy and labor market policy. This is partly due to a greater skepticism about government interference in the United States, which makes politicians more often insist that it be unequivocally demonstrated that new programs “actually work.” Accordingly, large-scale experiments have been conducted with training programs (Friedlander, Greenberg, & Robins, 1997), job search assistance and reemployment bonuses (Meyer, 1995), and welfare-to-work programs. The latter are more comparable to the Dutch reemployment services than the traditional training programs, since they also tend to focus on the shortest route to a job and include many similar elements (orientation, activation, help with finding jobs, and enhancing job readiness).

In a series of meta-analyses, stock has been taken of these welfare-to-work experiments (Ashworth, Cebulla, Greenberg, & Walker, 2004; Greenberg & Cebulla, 2008; Greenberg, Deitch, & Hamilton, 2010). For instance, Greenberg and Cebulla (2008) used as a basis 27 evaluation studies of a total of 71 mandatory programs conducted between 1982 and 1996 in the United States, whereas Greenberg et al. (2010) zoom in on 28 of these programs that were evaluated by one single institution (MDRC) with an identical approach. These 28 programs ran in 11 states and two Canadian provinces and included over 100,000 participants. “All the studies used random assignment research designs, resulting in probably the most extensive and most reliable database of findings about welfare-to-work programs ever assembled” (Greenberg et al., p. 2). The individual studies were carried forward to the stage of cost-benefit analysis—a stage that is rarely reached in this field (cf. Card, Kluve, & Weber, 2010, p. 476)—making them exemplary from an evaluation point of view. The authors consider these costs and benefits from three different viewpoints: the participants, the government budget, and society at large.4 In this regard they establish that just examining whether the unemployed have found jobs or have improved their income, while ignoring the administrative costs of the policy (“which is often the case”), easily leads to erroneous conclusions about the return for government and society (Greenberg & Cebulla, p. 136).

With this warning in mind, it should come as no surprise that the general conclusion on the cost-effectiveness of the programs is not one to arouse jubilance. “The net benefits from a typical welfare-to-work intervention, accumulated over several years, are fairly modest for the program group, even smaller from the perspectives of society as a whole” (Greenberg & Cebulla, 2008, p. 122). This corresponds with the familiar picture from the earlier evaluations that positive effects of such programs on earnings and welfare dependency range from minimal to modest (e.g. Friedlander et al., 1997). The authors state, however, that all this pertains to an average program. “Many welfare-to-work programs are cost-beneficial, some highly so” (Greenberg & Cebulla, p. 139).

In reading these meta-analyses, Pawson and Tilley’s (1997) characterization inevitably comes to mind: most things have been found sometimes to work. Some programs had positive results, some negative, and others insignificant, and all of this seen from each of the different viewpoints. Greenberg et al. (2010) divided their 28 programs into six types, and within each of them they made up the cost-benefit ratio from the perspectives of participants, government, and society. In 13 of the 18 cells, programs with positive and with negative results both occur (Greenberg et al., Table 4). A similar variation in outcomes is observed if we use not the cost-benefit ratio but the direct effects for the participants as a basis (income improvement, getting off welfare). Here, too, the results of the experiments appear to be “widely dispersed” (Ashworth et al., 2004, pp. 202–204). Greenberg and Cebulla (2008) report that, from a societal point of view, 58% of the programs had a positive and 42% a negative result. Their conclusion is, therefore, that “the variation in findings across the 50 cost-benefit analyses in our sample is enormous” (Greenberg & Cebulla, p. 122, our emphasis)—thereby crushing the hope of Borghans and many others that RCTs would determine the effectiveness of reemployment programs once and for all.

Obviously one can try to figure out if this variation can be attributed to certain features of the programs or their environment, and that is what is done in these meta-analyses. The welfare-to-work experiments have a reputation for providing clear indications in this regard and are therefore considered by some to be the “best experiences” of the use of RCTs in social policy (Hollister, 2008). This is largely due to their contrasting of “work first” (cf. Lindsay, this volume) with “human capital” elements (e.g., schooling, training), “showing that the work-first approach was more effective” (Hollister, p. 406). This is also the conclusion of Ashworth et al. (2004) in their meta-analysis—“the relative success of the ‘tough love’ approach” (p. 209)—but they immediately add a whole series of qualifications to this result. Greenberg et al. (2010) go a step further in weakening the argument by pointing out that work-first programs are generally beneficial to the government but often do not really increase the incomes of the participants. Mixed programs, which include training, perform a little better in their analysis (pp. 14–15, 21). Our reading of the meta-analyses is that they have not uncovered many consistent patterns in the studies. In general, they offer few concrete reference points for the selection or improvement of reemployment programs; the findings have a high degree of “it could be that.” Obviously this has everything to do with the large variation in outcomes, but also with the fact that the programs are divided only into broad categories and that the active mechanisms are not tested or assessed. After 15 years of experimenting, Ashworth et al. therefore admit: “To conclude, the search for a best-practice model of welfare-to-work has only just begun” (p. 211).

Although welfare-to-work programs in the United States are still widespread, the enthusiasm for evaluating them using RCTs has decreased (Greenberg & Cebulla, 2008, p. 116). This is perhaps significant, but we do not wish to speculate about it here. In recent years the use of experiments has found acceptance in Europe, albeit on a smaller scale than in the United States. Therefore, when we shift our view to European reemployment programs, we will examine experimental and nonexperimental evaluations jointly.

Kluve (2010) offers the most comprehensive overview of evaluation research on European “active labor market programs.” Note that these programs are broader than the welfare-to-work programs discussed above—they also include training and subsidized employment—but reemployment courses are reasonably well distinguished. For his meta-analysis, Kluve used 137 evaluations from 19 European countries. The outcome measures of the studies are not cost-benefit ratios but individual job opportunities, which Kluve simply classifies as “positive,” “negative,” and “insignificant” owing to a lack of comparable effect sizes. His study provides the familiar picture of mixed results again. Of the evaluations, 55% estimated a positive impact, 21% a negative effect, and 24% an insignificant effect (p. 907). This holds even if we look at the category of “services and sanctions” (see below), which is closest to what happens in reemployment courses. Here, positive and insignificant results alternate, although it seems at first sight that the positive results are predominant (p. 908, Table 29.2). The studies on which the Dutch reemployment Policy Review was based (see above) are not included in the database.

The purpose of Kluve’s meta-analysis is to trace the variation found in outcomes back to (1) the program type, (2) the evaluation design, (3) institutional features of labor markets, and (4) the economic situation in the country at the time of the study. His findings are more pronounced than those of the American meta-evaluations, which may be due to the greater diversity of policies compared with the welfare-to-work programs.

The program type stands out as an explanatory factor. Kluve distinguishes four types: (1) services and sanctions, (2) training, (3) private sector incentives, and (4) public sector employment. Of these, the subsidized jobs in the public sector score worst, followed by traditional training programs. The results of incentives for the private sector (i.e., wage cost subsidies and grants to start one’s own business) are significantly better. The same holds for the “services and sanctions” category, which comprises all measures aimed at enhancing job-search efficiency. These include job-search courses, job clubs, vocational guidance, counseling and monitoring, and sanctions in the case of noncompliance with job-search requirements. Work-first programs (see Lindsay, this volume) are not mentioned explicitly by Kluve but undoubtedly belong to the same category. It is important to note that the effects of these program types are assessed by a much narrower measure (i.e., individual employment probabilities) than the cost-benefit ratios for the various stakeholders in the American studies. However, this limitation does not prevent Kluve from drawing a powerful conclusion: “This implies that modern private sector incentive schemes are the ones that work, and that modern types of ‘Services and Sanctions’ are particularly effective. This is certainly good news for the public employment services” (Kluve, 2010, p. 916). It would also have been news for the Dutch politicians referred to at the beginning of this chapter.

Finally, Card, Kluve, and Weber (2010) merged European and American studies in a meta-analysis. They compiled a database of 97 evaluations from 26 countries, the vast majority of recent date (the year 2000 or later). Because different types of programs and groups of participants can be distinguished within the studies, the researchers were eventually able to use 199 different impact estimates as their basis. Apart from that, the approach is the same as that of Kluve (2010) and the results are broadly the same. The effects of the active labor market programs vary widely, but within them, subsidized jobs seem to do worse and approaches focused on direct reemployment (job-search assistance and sanctions) seem to do better.

A puzzling result is the finding that the outcome measure matters. These measures vary among the studies in the database, ranging from unemployment duration to employment chances and average quarterly earnings. “Evaluations . . . that measure outcomes based on time in registered unemployment appear to show more positive short-term results than evaluations based on employment and earnings” (Card et al., 2010, p. F475). The authors suggest in a footnote (p. F467) that assignment to a program may induce people to leave the benefit system without moving to a job. This reminds us of a much-cited article by Black, Smith, Berger, and Noel (2003), who found that reemployment programs may also have a deterrent effect, a result that was later replicated by Graversen and Van Ours (2008).

However, an important addition to the previous meta-analyses is that, in a subset of studies, the researchers could also examine the effects in the medium and long term. This results in a significant shift of the estimated outcomes in a positive direction. “Indeed, it appears that many programs with insignificant or even negative impacts after only a year have significantly positive impact estimates after 2 or 3 years. Classroom and on-the-job training programs appear to be particularly likely to yield more favourable medium-term than short-term impact estimates” (Card et al., 2010, p. F475). Therefore some interventions seem to need time to reach their potential. This observation corresponds with a recent evaluation of welfare-to-work programs by Dyke, Heinrich, Mueser, Troske, and Jeon (2006), who monitored more than 130,000 welfare mothers in two American states for a period of 4 years on the basis of administrative data. It also corresponds with the evaluation of the long-term effects of California’s famous Greater Avenues to Independence (GAIN) program (Hotz, Imbens, & Klerman, 2006). Both studies found that more intensive training programs yield greater and more lasting results in the long term than short-term work-first approaches. “These results suggest that the current emphasis on work-first activities is misplaced and argue for a greater emphasis on training activities designed to enhance participants’ human capital” (Dyke et al., p. 569). This is in clear contrast with the conclusion of Kluve and the message of the old welfare-to-work studies. What is the cause of this difference? Is it indeed the longer observation period? Can it lie in the quality and implementation of specific programs? Does the fact that the criterion is not job opportunity but earned income play a role? Or are the results of Dyke et al. still the effect of a selectivity bias in the data, which in this case could only be combated with a nonexperimental estimation method? No answers can be given. Therefore we can at least say that after all these impact evaluations, much uncertainty about key questions of labor market policy remains.

In view of the above, what is the balance? Here we feel a certain reluctance, because a critical assessment should not be interpreted as a negative judgment of the work conducted by researchers. On the contrary, the methodological quality of many studies is undeniably high. Impact evaluations and the meta-analyses based on them rely increasingly on an impressive amount of data and are commendable for the quality and accuracy of the analysis. Still, we think it is fair to say that the result of all these efforts is disappointing. The outcomes of evaluations vary widely and only a few of them have been successfully related to the design of the programs. This is no wonder, since these programs have been described too superficially to achieve this end. Besides, quality differences in performance are ignored and it remains particularly unclear which mechanisms can and cannot be activated by the programs. In the impact evaluations we examined, the lack of explanations for the results is striking. In cases where effects are established, their causes remain unclear. The average impact evaluation really does not provide any indication as to how the outcomes should be interpreted in terms of the intervention that took place. The usual composition of the studies is a detailed description of the data, the evaluation design, and the details of the statistical analysis, followed by a relatively brief account of the results and a minimal interpretation. It is therefore not unreasonable to say that the cost-benefit ratio of this evaluation method is unsatisfactory.

Toward More Realistic Evaluations?

We are not the first to plead for a more substantive focus in the research into the effectiveness of reemployment programs. Traditional evaluation researchers, too, recognize the importance of more focus on the content of the programs (e.g., Friedlander et al., 1997). Ashworth et al. (2004) even believe that this stage has arrived with the advance of meta-analyses, because “meta-analysis changes the form of the evaluation question from the narrow ‘Does it work?’ to the more useful: ‘What works best, when and where, and for whom?’” (p. 211). We will not take a position on whether this optimism is justified, but we welcome the formulation. Incidentally, we have noticed that the realistic evaluation question of “What works for whom under what circumstances?” is now widely accepted among reemployment researchers as the relevant question to ask (Koning & Heyma, 2009; Koning, 2012).

Let us return to the Dutch example of reemployment policy. As we saw, this policy has been condemned as ineffective on the basis of econometric impact evaluations. Even if we, with all the methodological reservations we have, accept the results of these evaluations, important questions remain unanswered. Where did the reemployment strategy fail? Was it not possible to provide the unemployed with the required skills? Or did this succeed, but were they unwilling to act on these skills? Or—another possibility—did the social and personal circumstances of the unemployed form a barrier to finding work? These are just some of the problems that can play a role on the supply side. Obviously there are also factors on the demand side. Are employers reluctant to hire the long-term unemployed in spite of their improved skills? Or do they want to, but are there simply too many competitors for the positions (and therefore, by implication, not enough jobs)? Or is the basic problem, classically, the inability to match supply and demand in the labor market?

We believe that without including explanatory questions of this type in the evaluations, reemployment research will not improve. Whether this “opening of the black box” can take place in combination with experimental methods (as advocated by Ashworth et al.) or whether these approaches are incompatible (as argued by Pawson and Tilley) is not our primary concern. It is important to move toward the explanatory mechanisms and demonstrate that this will result in feasible and informative evaluations. Psychologists have shown that it is entirely possible to combine the experimental design with a test of the mediating psychological and behavioral mechanisms like increased self-efficacy and intensified job search (e.g., Eden & Aviram, 1993; Van Hooft & Noordzij, 2009; Van Ryn & Vinokur, 1992; see also Latham, Mawritz, & Locke, this volume; Price & Vinokur, this volume). Whether such a combination is also feasible for social and economic mechanisms and for the role of “contextual” factors remains to be seen. For even though traditional evaluation research has come under heavy criticism, the “realistic” alternative has yet to prove itself.5
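As a sketch of what such a mediation test involves, consider a simulated RCT in which the program works entirely through self-efficacy (all numbers invented; the product-of-coefficients approach shown here is one common way to quantify an indirect effect):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3_000

# Invented mediation structure: program -> self-efficacy -> job search.
program = rng.integers(0, 2, n)
self_efficacy = 0.4 * program + rng.normal(size=n)
job_search = 0.5 * self_efficacy + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients for design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)
a = ols(np.column_stack([ones, program]), self_efficacy)[1]              # program -> mediator
b = ols(np.column_stack([ones, program, self_efficacy]), job_search)[2]  # mediator -> outcome
total = ols(np.column_stack([ones, program]), job_search)[1]

# Indirect (mediated) effect as the product of coefficients; because the
# program works only through self-efficacy here, a*b approximates the total.
print(f"indirect effect a*b = {a * b:.3f}, total effect = {total:.3f}")
```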

In this chapter we can only outline a framework for such a realistic evaluation of reemployment policy. To this end, we particularly want to put forward the substantive ideas and mechanisms underlying reemployment policy. Empirical research conducted on the basis of this framework is expected to be completed in 2014 (cf. Sol et al., 2011).

Based on the principles of realistic evaluation, we have started a research project in collaboration with four reemployment companies in the Netherlands, with the aim of further opening the black box. These four are all commercial (for-profit) companies that must win contracts from local authorities or social security offices to help unemployed people find jobs. The policy theory behind this "market" for reemployment services is that companies paid fully ("no cure, no pay") or partly ("no cure, less pay") by results will make the unemployed try harder (and employers less reluctant), and will develop more effective and innovative methods to achieve this (Struyven & Steurs, 2005). Three of the four are leading companies in this reemployment market; the fourth is a small niche player known for its innovative approach.

We have thoroughly mapped the reemployment counseling strategies used by these companies on the basis of documents, interviews, and observations. The researchers' "field visit days," in particular, proved an important way to get genuinely in touch with the subject. We subsequently ordered our findings using the teacher-learner cycle recommended by Pawson and Tilley (1997; cf. also Nanninga & Glebbeek, 2011). We consistently fed our reconstructions back to the companies, asking whether we had accurately described their approach and their concerns: Is this how you think it works? Are these the thoughts behind your routines? Do these convey the main reemployment mechanisms?

The research has resulted in a conceptual framework for opening the reemployment services black box, consisting of 38 key problems, 34 basic tools, 9 intermediate goals, and 12 core mechanisms (see Tables 28.1 and 28.2). This framework essentially describes reemployment services and their conceptual basis. We then started an empirical study of the four companies, in which over 1,000 clients in their reemployment courses are carefully monitored for two years. Consultants of the companies record on a weekly basis what has happened with their clients and how they assess their progress, or lack thereof, on a range of important issues (e.g., skills, presentation, motivation, health, and behavior). The conceptual framework was converted into a web-based research tool in which these data are recorded in a standardized way. The starting condition (at intake) and the end result of the courses are accurately recorded as well. It has already become clear that there is wide variation in the progress and outcomes of the reemployment courses. This variation is exactly what the researchers need in order to assess the validity and scope of the ideas underlying reemployment services. After all, there is no "control group" in this research. Realistic evaluation exploits the within-program variation in results to detect the determinants of success and failure, and to test the underlying theories represented by the mechanisms and the ensuing CMO (context-mechanism-outcome) configurations (Pawson & Tilley, 1997, pp. 43, 113).
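The logic of such a monitoring design can be illustrated with a small sketch. The rating dimensions, scales, and effect sizes below are hypothetical stand-ins, not the actual fields of the web-based tool; the point is only that, absent a control group, variation in consultants' progress ratings can be related to placement outcomes.

```python
# Hypothetical sketch: relating within-program variation in consultant
# ratings to placement, without a control group.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Stand-ins for aggregated weekly ratings per client (0-10 scales, invented)
skills     = rng.uniform(0, 10, n)
motivation = rng.uniform(0, 10, n)
health     = rng.uniform(0, 10, n)

# Simulated placement, more likely for clients rated as progressing
latent = -4 + 0.25 * skills + 0.20 * motivation + 0.15 * health
placed = rng.binomial(1, 1 / (1 + np.exp(-latent)))

X = sm.add_constant(np.column_stack([skills, motivation, health]))
model = sm.Logit(placed, X).fit(disp=0)
print(model.params)   # which rated dimensions track success and failure?
```

In a realistic evaluation, significant (or conspicuously absent) associations of this kind are what allow the underlying program theories to be corroborated, refined, or rejected.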

The twelve core mechanisms constitute the essence of the reemployment policy theories. Each of these mechanisms is linked to an intervention claim, which succinctly expresses why it is thought that participation in a reemployment course can be beneficial for an unemployed person. The mechanisms are all focused on solving a problem that prevents the unemployed from finding work—or at least on making this problem manageable. This is not always a sufficient condition for returning to a job, but it is often a necessary condition. In other words, the mechanisms are often directly focused on achieving an intermediate goal. Table 28.2 provides an overview of the identified mechanisms and their respective intervention claims.

To arrive at these twelve mechanisms, we combined the inductively gathered materials from the reemployment companies (Table 28.1) with the conceptual framework suggested by Pawson (2003). In his view, policy instruments can be classified according to the level at which they seek to elicit and maintain behavioral change: material, cognitive, social, or emotional. We elaborated on Pawson's suggestion by proposing a trade-off between the feasibility and the durability of the intervention. That is, it can be fairly easy to change a person's behavior by offering a material incentive, but the behavior may relapse as soon as the incentive disappears. This is less likely when the intervention has changed the person's cognitive patterns, and less likely still when his social network or personal identity has changed. In these last instances we can have more confidence in the durability of the intervention, although this level is clearly more difficult to attain. Table 28.3 depicts this trade-off and arranges the interventions by their appropriate level. Using this conceptual tool, it became apparent how the reemployment instruments could be reduced to a more basic set of core mechanisms.
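As a purely illustrative device, this classification can be written down as a small data structure. The level assignments below follow only those the surrounding paragraphs make explicit; the full arrangement in Table 28.3, which survives only as an image in the source, is not reproduced here, and every name in the code is our own hypothetical label.

```python
# Hypothetical encoding of the feasibility/durability trade-off behind
# Table 28.3. Only mechanisms whose level the text states are included.
LEVELS = ["material", "cognitive", "social", "emotional"]  # ordered by durability

MECHANISM_LEVEL = {
    "facilitation":     "material",   # practical solutions, incentives
    "sanctioning":      "material",
    "information":      "material",   # treated as a material incentive in the text
    "activation":       "cognitive",
    "goal_orientation": "cognitive",
    "social_approval":  "social",
    "self_efficacy":    "emotional",  # bracketed, cautious inclusion in Table 28.3
}

def durability_rank(mechanism: str) -> int:
    """Higher rank = expected to be more durable, but harder to attain."""
    return LEVELS.index(MECHANISM_LEVEL[mechanism])

assert durability_rank("self_efficacy") > durability_rank("facilitation")
```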

At the material level, finding practical solutions to a host of practical problems stands out as a major activity that reemployment staff conduct on behalf of their unemployed clients. Such facilitation can be offered to employers as well, for instance when they are relieved of formal procedures and paperwork. Clearly, financial incentives for employers are an important instrument at this level. (Financial compensation can be offered to the unemployed as well, but in the Netherlands this is uncommon.) The threat of sanctions is another (though mostly implicit) option in this range. Information about vacancies and workers should also be placed in this sphere. Although information affects what people know, this is not what psychologists understand by cognitive processes. Providing labor market information works more like a material incentive: it reduces search costs and reveals the gains to be had from entering into a contract.

Cognition refers to the mental processes involved in perceiving, attending to, remembering, thinking about, and making sense of oneself and others (Moskowitz, 2005). A great many reemployment activities are directed at this level. On the surface, this means trying to foster new habits (and shake off old ones) by engaging the unemployed in all kinds of activities, work and nonwork alike. Such activation is meant to improve physical and mental condition and to reaccustom the unemployed to daily routines and a rhythm of work. One step deeper, a course may try to change the unemployed person's ways of thinking and customary interpretations. This holds the promise that their job search will become more self-directed and resistant to the inevitable setbacks of economic life. The self-efficacy principle (see Kanfer & Bufton, this volume; Latham et al., this volume; Price & Vinokur, this volume) stands out as a prime example of such a cognitive mechanism, one that will mediate (and moderate) the major outcomes of reemployment counseling. The claim that such counseling must seek to change these cognitive processes is at the heart of psychology's involvement in this policy domain.6

Table 28.1 Key Issues, Intermediate Goals, and Basic Tools of Reemployment Services

Focus: Supply side (employees)

Problem type A: Personal characteristics unfavorable to the labor market
Key problems: 1. Age (usually too old); 2. Gender (usually female); 3. Ethnicity (non-Western, non-Dutch background); 4. Occupational disability; 5. Past detention; 6. Welfare dependency

Problem type B: Personal and social barriers
Key problems: 7. Unstable housing situation; 8. Disrupted family; 9. Financial debts; 10. Addiction problems; 11. Child-care needs; 12. Poor integration; 13. Poor mental state; 14. Sickness (nonoccupational disability)

Intermediate goal: Eliminating practical barriers. Basic tools: 1. Practical support; 2. Debt settlement; 3. Help with housing; 4. Arranging child care; 5. Help with transport
Intermediate goal: Health promotion. Basic tools: 6. Specialist help; 7. Help with personal care; 8. Fitness

Problem type C: Poor personal fitness
Key problems: 15. Behavioral problems; 16. Self-confidence; 17. Self-esteem; 18. Self-efficacy; 19. Motivation (unwillingness); 20. Self-care/hygiene; 21. Physical condition

Intermediate goal: Increasing motivation. Basic tools: 9. Financial sanctions; 10. Financial incentives; 11. Self-esteem/self-confidence; 12. Social approval; 13. Sense of purpose
Intermediate goal: Reorientation. Basic tools: 14. Career orientation; 15. Career test; 16. Competence profile

Problem type D: Lack of labor (market) skills
Key problems: 22. Work experience; 23. Employee skills; 24. Rhythm of work; 25. Language deficiency; 26. Application skills; 27. Education; 28. Social skills; 29. Presentation; 30. Small network; 31. Insufficient insight into talents, skills, desires

Intermediate goal: Improving job search. Basic tools: 17. Search training; 18. Application training; 19. Presentation training; 20. Network training
Intermediate goal: Enhancing employee skills. Basic tools: 21. "Work First"; 22. Traineeship; 23. Work placement
Intermediate goal: Improving knowledge. Basic tools: 24. Schooling/training; 25. Language course; 26. Work training program

Focus: Demand side (employers)

Problem type E: Negative image of the unemployed
Key problems: 32. Statistical discrimination by employers; 33. Prejudices against people on welfare

Intermediate goal: Compensation. Basic tool: 27. Subsidies

Problem type F: Lack of adaptability
Key problems: 34. Insufficient capacity to adjust the work organization; 35. Insufficient knowledge or skills to deal with people with disabilities and/or abnormal behavior

Intermediate goal: Adaptation. Basic tools: 28. Workplace adjustment; 29. Adjustment of work processes; 30. Job coaching

Focus: Matching supply and demand

Problem type G: Information problems
Key problems: 36. Employers and unemployed cannot find each other; 37. Employers have no insight into the skills, knowledge, and talents of the unemployed; 38. The unemployed lack understanding of the fit between their own wishes/skills and the needs of the employer

Intermediate goal: Finding job openings. Basic tools: 31. Networking by staff; 32. Job hunting by staff
Intermediate goal: Matching/counseling. Basic tools: 33. Matching tools; 34. (Intensive) counseling

Sociologists point to the social rewards that reemployment courses should try to offer their clients. Obtaining social approval can be regarded as a fundamental human desire, on an equal footing with achieving economic goals (Brennan & Pettit, 2004; Lindenberg, 2001). Changing the sources and content of social approval can therefore be seen as a powerful but challenging course of action. To be sure, many reemployment courses make use of their social context and try to create an atmosphere in which participants support and stimulate each other and are motivated by social rewards. It is far more difficult, however, to maintain this influence outside the course, as participants fall back on their existing social networks. A more enduring effect would be achieved if these existing networks could be permanently affected too, but this is only seldom within the reach of reemployment services. (Possibly, coaching on the job qualifies as such an influence.)

The most durable effect is obtained when the emotional level is reached, that is, when the desired behavior is firmly rooted in a person's identity or personality. Usually this can be secured only by socialization processes, such as those that take place in childhood or on entering an occupation. It may be too ambitious for reemployment counseling to aim for this level, and we might accordingly leave this cell empty. Reading the chapter by Price and Vinokur (this volume), we nevertheless presume that self-efficacy training may qualify as a candidate. Thus in Table 28.3 we have provided for its cautious inclusion within brackets.

It is unnecessary to discuss all these mechanisms at length here. Three illustrations will suffice to show why it is instructive to place these mechanisms at the heart of evaluation efforts. Only by considering the workings of such mechanisms will we be able to understand the limits and possibilities of reemployment policies.

Take the facilitation mechanism. Unemployed persons who are at a great distance from the labor market are often confronted with multiple problems, such as a combination of addiction, debts, and a chaotic domestic life. These can obstruct finding a job. The barriers are reinforced by the fact that the infrastructure that provides services in these areas is fragmented, confusing, and inaccessible to the unemployed. Many reemployment professionals work from the assumption that these barriers must be removed before entry into a job can be an option. Thus we find these professionals busy arranging all sorts of practical matters for their clients. An interesting question for evaluation poses itself almost immediately: Can professionals really succeed in removing these barriers? And if so, does this indeed improve the job chances of the unemployed?

Table 28.2 Twelve Core Mechanisms in Reemployment with Their Corresponding Intervention Claims

Reference point: Supply side

Facilitation mechanism: "Frontline staff are often able to provide solutions for practical problems that obstruct the reemployment of an unemployed person."

Sanctioning mechanism ("stick"): "Reemployment programs are essential to track and control the unemployed who are unwilling to work, so that their cost-benefit balance tips to the side of accepting work."

Information mechanism: "Obtaining vacancies and revealing individual skills are necessary, because the long-term unemployed and their suitable employers are often unable to find each other."

Job-search skills mechanism: "Many unemployed are unfamiliar with the job search process and have to learn how to look for jobs, how to apply for them, and how to present themselves acceptably to an employer."

Activation mechanism: "Practical activities not directly focused on a job are an effective and often necessary intermediate step to help the unemployed regain the daily rhythm and energy level required for paid work."

Goal-orientation mechanism: "It is possible to change the negative or unreal expectations and convictions of the unemployed concerning their chances to find a job, and redirect them towards the necessary steps for obtaining reemployment."

Coaching mechanism: "By assisting the unemployed person (and his employer and colleagues) in the workplace, adaptation and habituation can occur, making it possible to bridge a gap that cannot be bridged in one step."

Social approval mechanism: "Since unemployed persons who participate in a reemployment course receive positive feedback from colleagues and supervisors for the first time in years, they (re)discover that going to work is an indispensable way to fulfil their social needs."

Self-efficacy mechanism: "It is possible to break the negative self-image and low self-esteem of the unemployed person, and foster a sense of competence that guards against setbacks and helps to persist in the job search process."

Reference point: Demand side

Compensation mechanism ("carrot"): "The long-term unemployed are hampered by a lack of productivity that can be compensated by providing a financial subsidy to the employer."

Trust mechanism: "Reemployment consultants know how to build a good relationship with employers by providing them with expert and honest advice, making them receptive to their advocacy for the unemployed."

Reference point: Matching supply and demand

Matching mechanism: "Establishing direct links between employers and the unemployed is an effective way to adjust their preferences and to remove their mutual ignorance and fear."

Second, consider the job-search-skills mechanism. Although reemployment programs are not training programs,7 almost all of them try to offer learning experiences to their participants, because the programs seek to effect a change in behavior. Their aim is to teach the unemployed certain behaviors and to make them refrain from others. This change in behavior has a cognitive component (self-awareness, reorientation, new knowledge and skills) and a routine component (breaking old habits and forming new ones). Many reemployment companies believe that group training programs provide an appropriate context for this. By working in groups, the unemployed learn (again) how to keep appointments and cooperate with others. Getting and giving feedback is also an important component of the learning mechanism. Until now, evaluations have given us little insight into whether such learning really takes place in reemployment courses. This is all the more important because learning goals should be distinguished from performance goals (cf. Latham et al., this volume; Van Hooft, this volume). Van Hooft and Noordzij (2009) claim that in the standard practice of reemployment counseling, performance goals and results-oriented guiding techniques come to override learning goals (p. 1588). If this claim is correct, it may offer an interesting explanation for the alleged lack of success of Dutch reemployment courses.

Table 28.3 Focus and Level of Intervention of the Twelve Core Mechanisms for Reemployment

[Table 28.3 is reproduced only as an image in the source. It arranges the twelve core mechanisms by focus (supply side, demand side, matching) and by level of intervention (material, cognitive, social, emotional).]

As the tables show, the majority of the mechanisms concern the supply side of the labor market. That is because most reemployment activities try to enhance the employability of the unemployed. However, reemployment activities can be directed toward the demand side as well. The trust mechanism is an instance of this. Employers have difficulty judging the individual qualities of unemployed persons and will therefore resort to prejudices or statistical discrimination. This is not surprising, since they do not like taking risks with their personnel and want to feel comfortable about whom they take on. The long-term unemployed are often unable to shed this stigma on their own. They need someone to mediate and put in a good word for them. When reemployment professionals gain the confidence of employers, they may be able to perform this classic intermediary role.

Taken together, the twelve mechanisms of Table 28.2 constitute the claim of reemployment policy. By making them explicit, we have opened up the black box of these policies to a considerable degree. The mechanisms may provide a checklist for practitioners, such as the management and staff of reemployment companies, to evaluate their own practices and identify the strong and weak points in their courses (see also Box 1). To scientific evaluators they convey the message that we would like to know if and when these mechanisms occur, in order to be able to explain the successes and failures of reemployment programs.

The fact that most mechanisms focus on intermediate goals has important implications for the evaluation of their effectiveness. For example, a client can be made employable (intermediate goal) yet fail to find employment (ultimate goal) simply because there are no vacancies. Then, despite the disappointing outcome, a number of mechanisms still appear to have worked in accordance with the theory. Thus "no result" does not mean that the policy theory should automatically be rejected, but rather that one or more critical conditions for achieving the ultimate goal were lacking (Glebbeek, 2005; Pawson & Tilley, 1997). In other words, both the relationships between intervention and intermediate goals and those between intermediate and ultimate goals are context-dependent. Thus, to find the appropriate CMO configurations that specify and improve the policy theory, intermediate goals must be taken into account. Impact evaluations that look only at the ultimate goal and ignore the intermediate process teach us little about whether or not mechanisms work and do not further our knowledge of reemployment policies. It is our claim and expectation that an evaluation of the mechanisms identified above will provide the necessary progress.
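A small simulation can show why an impact-only evaluation is uninformative here. All numbers are invented: a course that reliably raises employability (the intermediate goal) produces far fewer placements where vacancies are scarce, so a pooled impact estimate smears together contexts in which the same mechanism succeeds and fails to reach the ultimate goal.

```python
# Hypothetical sketch: a working mechanism (M) whose ultimate outcome (O)
# depends on context (C) -- here, the local vacancy rate.
import numpy as np

rng = np.random.default_rng(2)
n = 4000

treated      = rng.integers(0, 2, n)
vacancy_rate = rng.choice([0.02, 0.10], n)             # context: few vs many jobs
employability = 0.8 * treated + rng.normal(0, 1, n)    # mechanism works as theorized
p_placed = np.clip(vacancy_rate * (1 + np.clip(employability, 0, None)), 0, 1)
placed = rng.binomial(1, p_placed)

pooled = placed[treated == 1].mean() - placed[treated == 0].mean()
print(f"pooled impact estimate: {pooled:.3f}")

for v in (0.02, 0.10):
    c = vacancy_rate == v
    effect = placed[c & (treated == 1)].mean() - placed[c & (treated == 0)].mean()
    print(f"vacancy rate {v:.2f}: effect {effect:.3f}")
```

Reading off the effect within contexts, as a CMO configuration requires, preserves the information that the intermediate step did occur even where the ultimate goal was out of reach.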

In the end, not only the workings of the mechanisms but also their necessary and supportive conditions should be part of the intervention theory. The "C" for "context" in realistic evaluation is thus identical to what psychologists call moderators. Latham, Mawritz, and Locke (this volume) identified five moderators for the goal-orientation mechanism: ability, commitment, feedback, complexity, and situational variables (including the condition of the economy). Accordingly, all twelve core mechanisms will have their specific moderators, that is, conditions that can be discovered only when evaluation research is directed at their functioning.

One condition stands out, however: there must be jobs around to fill. We have already noted that reemployment services are mainly oriented toward the supply side of the labor market. This accords with the dominant policy orientation since the 1980s, which entailed a shift from macroeconomic demand management to the behavior and institutions of the supply side. For critics of this policy, the shift marks the transition from "full employment" to "full employability" (Mitchell & Muysken, 2008). The latter is described as a digression "forcing unemployed individuals into a relentless succession of training programs designed to address deficiencies in skills and character" (p. 4). To these critics, the alleged failure of the Dutch reemployment policy will therefore come as no surprise: "Most welfare-to-work schemes are little more than a cruel joke, precisely because there is no job for most welfare leavers" (Mitchell & Muysken, p. 20).8

Conclusion

Reemployment counseling has definitively become a field of applied psychology. Many courses offered to the unemployed make use of insights from cognitive psychology and social learning theory, with varying degrees of professionalism. This certainly holds true for the Netherlands (cf. Groothoff et al., 2008), where, since the 1990s, an extensive practice of reemployment services has developed.

Dutch reemployment policy has been judged to be ineffective on the basis of evaluation studies. The added value (“net effectiveness”) was alleged to be small. This national experience is not only interesting in itself but also shows that scientific evaluations matter. Based on the negative verdict, funding for reemployment services in the Netherlands has been reduced.

These evaluations had therefore better be right. In this chapter, we have discussed the main evaluation study in more detail and we can say that technically speaking it was conducted in a state-of-the-art fashion. The econometric arsenal of nonexperimental research was widely and judiciously placed in position. Nevertheless, uncertainties are associated with this study and similar ones. To find a comparison group, coincidences in the data must be used, and the correction for selectivity requires disputable assumptions. The prevailing opinion in the field is that nonexperimental estimates can sometimes be way off the mark. As a consequence, we never know whether or not we are dealing with such a case. Moreover, the irony of the situation remains: The Dutch labor market and benefit dependency developed favorably in the period involved. As a colleague of ours once put it: “So, we must have done something right!?”
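The kind of error at stake can be made tangible with a toy simulation, entirely of our own construction: an unobserved trait (here called motivation) drives both program entry and job finding, so the naive nonexperimental comparison dwarfs the true effect, which a randomized design recovers.

```python
# Hypothetical sketch: selection on an unobserved trait biases the
# nonexperimental estimate; randomization recovers the true effect.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
true_effect = 0.05                                  # on the probability scale

motivation = rng.normal(0, 1, n)                    # unobserved by the evaluator
base = 1 / (1 + np.exp(-(-1 + 0.8 * motivation)))   # job chances without help

# Nonexperimental world: the motivated select into the program
in_program = rng.random(n) < 1 / (1 + np.exp(-motivation))
placed = rng.random(n) < np.clip(base + true_effect * in_program, 0, 1)
naive = placed[in_program].mean() - placed[~in_program].mean()

# Experimental world: access is randomized in the same population
assigned = rng.random(n) < 0.5
placed_rct = rng.random(n) < np.clip(base + true_effect * assigned, 0, 1)
rct = placed_rct[assigned].mean() - placed_rct[~assigned].mean()

print(f"naive estimate: {naive:.3f}   randomized: {rct:.3f}   truth: {true_effect}")
```

Matching and selectivity corrections try to close this gap using observables and assumptions; whether they succeed in any given case is exactly the uncertainty noted above.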

This chapter has provided a picture of the three main approaches to impact evaluation, and the reader should now have a good idea of what these approaches are about. When it comes to determining added value alone in an undisputed manner, the experimental approach has the best credentials: no one disputes that a randomized controlled trial is superior in terms of internal validity. But this chapter has sought to make clear that this in itself is not sufficient. We cannot be satisfied with the results of experimental evaluations. The meta-analyses show that the experiments have only shifted the problem: the uncertainty now resides at the level of conflicting studies. Moreover, they teach us little about the content and quality of reemployment programs, and as a result provide few suggestions for improvement. We believe that continuing along this path will not take us much further.

In the spirit of realistic evaluation, we have emphasized that it is vital to include the guiding ideas behind reemployment services in the evaluations. We certainly do not rule out that this can be combined with an experimental approach; with sufficient time, resources, and inspiration, a "best of both worlds" can probably be achieved (cf. Van der Knaap et al., 2008). The priority, however, is an approach that enables an explanation of the findings. For if we accept, for the sake of argument, the Dutch evaluation and conclude that reemployment policy has led to nothing, we need to know where in the chain it went wrong. Did reemployment programs fail because of a difficult labor market, or is it an illusion to think that you can change people's behavior effectively through a course? Have we been unable to gain the trust of employers, or are the motivation and discipline of the unemployed demonstrably inadequate? And which barriers are most likely to occur under what labor market conditions?

In short, what is needed is to test the underlying ideas (mechanisms) of a policy. This way, evaluation will become more like explanatory research that is concerned with scientific theories (Vaessen & Leeuw, 2010). In the last section we have shown what kind of ideas are at stake in this field, and we have proposed a set of mechanisms by which reemployment can be characterized. We realize that this makes our contribution highly programmatic in nature. Time and experience will tell whether “breaking open the black box” of reemployment services leads to valuable insights.

References

Ashworth, K., Cebulla, A., Greenberg, D., & Walker, R. (2004). Meta-evaluation: Discovering what works best in welfare provision. Evaluation, 10(2), 193–216.

Black, D. A., Smith, J. A., Berger, M. C., & Noel, B. J. (2003). Is the threat of reemployment services more effective than the services themselves? Evidence from random assignment in the UI system. American Economic Review, 93(4), 1313–1327.

Blundell, R., Costa Dias, M., Meghir, C., & Van Reenen, J. (2004). Evaluating the employment impact of a mandatory job search program. Journal of the European Economic Association, 2(4), 569–606.

Borghans, L. (2008). Tijd voor maatwerk in arbeidsmarktbeleid [It's time for customisation in labour market policy]. Economisch Statistische Berichten, 93(4533S), 4–9.

Brennan, G., & Pettit, P. (2004). The economy of esteem: An essay on civil and political society. Oxford, UK: Oxford University Press.

Burtless, G. (1995). The case for randomized field trials in economic and policy research. Journal of Economic Perspectives, 9(2), 63–84.

Bushway, S., Johnson, B. D., & Slocum, L. A. (2007). Is the magic still there? The use of the Heckman two-step correction for selection bias in criminology. Journal of Quantitative Criminology, 23(2), 151–178.

Cahuc, P., & Le Barbanchon, T. (2010). Labor market policy evaluation in equilibrium: Some lessons of the job search and matching model. Labour Economics, 17(1), 196–205.

Card, D., Kluve, J., & Weber, A. (2010). Active labour market policy evaluations: A meta-analysis. The Economic Journal, 120(548), F452–F477.

Chen, H. T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.

Committee on the Future of Labour Market Policy. (2001). Aan de slag. Eindrapport van de werkgroep Toekomst van het arbeidsmarktbeleid [Final report of the committee on the future of labour market policy]. The Hague, The Netherlands: Ministry of Social Affairs and Employment (SZW).

Coryn, C. L. S., Noakes, L. A., Westine, C. D., & Schröter, D. C. (2011). A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation, 32(2), 199–226.

De Beer, P. T. (2001). Beoordeling van evaluatie-onderzoeken [An assessment of evaluation studies]. Appendix 4. In Committee on the Future of Labour Market Policy. The Hague, The Netherlands: Ministry of Social Affairs and Employment (SZW).

De Koning, J. (2001). Aggregate impact analysis of active labour market policy: A literature review. International Journal of Manpower, 22(8), 707–735.

De Koning, J. (2003). Wat niet weet, wat niet deert: over de decentralisatie en uitbesteding van het arbeidsmarktbeleid [If you don't know, it does not hurt: About the decentralization and outsourcing of labour market policy]. Inaugural lecture, Erasmus University. Rotterdam: SEOR.

Dyke, A., Heinrich, C. J., Mueser, P. R., Troske, K. R., & Jeon, K. S. (2006). The effects of welfare-to-work program activities on labor market outcomes. Journal of Labor Economics, 24(3), 567–607.

Eden, D., & Aviram, A. (1993). Self-efficacy training to speed reemployment: Helping people to help themselves. Journal of Applied Psychology, 78(3), 352–360.

Friedlander, D., Greenberg, D. H., & Robins, P. K. (1997). Evaluating government training programs for the economically disadvantaged. Journal of Economic Literature, 35(4), 1809–1855.

Gelderblom, A., & De Koning, J. (2007). Effecten van "zachte" kenmerken op de reïntegratie van de WWB, WW en AO populatie: een literatuurstudie [Effects of "soft" characteristics on the re-employment chances of persons on unemployment, welfare and disability benefits: A literature review]. Rotterdam: SEOR.

Glazerman, S., Levy, D. M., & Myers, D. (2003). Nonexperimental versus experimental estimates of earnings impacts. Annals of the American Academy of Political and Social Science, 589, 63–93.

Glebbeek, A. C. (2005). De onrealistische evaluatie van arbeidsmarktbeleid [The unrealistic evaluation of labour market policy]. Tijdschrift voor Arbeidsvraagstukken, 21(1), 38–48.

Graversen, B. K., & Van Ours, J. C. (2008). How to help unemployed find jobs quickly: Experimental evidence from a mandatory activation program. Journal of Public Economics, 92(10–11), 2020–2035.

Greenberg, D., & Cebulla, A. (2008). The cost-effectiveness of welfare-to-work programs: A meta-analysis. Public Budgeting & Finance, 28(2), 112–145.

Greenberg, D., Deitch, V., & Hamilton, G. (2010). A synthesis of random assignment benefit-cost studies of welfare-to-work programs. Journal of Benefit-Cost Analysis, 1(1), 1–28.

Groot, I., De Graaf-Zijl, M., Hop, P., Kok, L., Fermin, B., Ooms, D., & Zwinkels, W. (2008). De lange weg naar werk. Beleid voor langdurig uitkeringsgerechtigden in de WW en de WWB [It's a long road to a job: Dutch policies for long-term unemployment and welfare beneficiaries]. The Hague: The Council for Work and Income (RWI).

Groothoff, J. W., Brouwer, S., Bakker, R. H., Overweg, K., Schellekens, J., Abma, F., . . . Pierik, B. (2008). BIMRA: Beoordelen van interventies en meetinstrumenten bij re-integratie naar arbeid, eindrapportage [BIMRA: Assessment of interventions and measuring instruments for re-employment counselling]. Groningen: University of Groningen/UMCG.

Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47(1), 153–161.

Hollister, R. G. (2008). The role of random assignment in social policy research. Journal of Policy Analysis and Management, 27(2), 402–409.

Hotz, V. J., Imbens, G. W., & Klerman, J. A. (2006). Evaluating the differential effects of alternative welfare-to-work training components: A reanalysis of the California GAIN program. Journal of Labor Economics, 24(3), 521–566.

Kluve, J. (2010). The effectiveness of European active labor market programs. Labour Economics, 17(6), 904–918.

Koning, P. (2012). Leren re-integreren [Learning how to re-employ]. TPEdigitaal, 6(2), 28–43.

Koning, P., & Heyma, A. (2009). Aansturing van klantmanagers voor een effectief re-integratiebeleid [Caseworkers and the effectiveness of active labour market policies]. Tijdschrift voor Arbeidsvraagstukken, 25(4), 440–455.

Lindenberg, S. M. (2001). Intrinsic motivation in a new light. Kyklos, 54(2–3), 317–342.

Meyer, B. D. (1995). Lessons from the U.S. unemployment insurance experiments. Journal of Economic Literature, 33(1), 91–131.

Ministry of Social Affairs and Employment. (2008). Beleidsdoorlichting re-integratie [Re-employment policy review]. The Hague: Ministry of Social Affairs and Employment (SZW).

Mitchell, W., & Muysken, J. (2008). Full employment abandoned: Shifting sands and policy failures. Cheltenham, UK: Edward Elgar.

Moskowitz, G. B. (2005). Social cognition: Understanding self and others. New York: Guilford Press.

Mosley, H., & Sol, E. (2001). Evaluation of active labour market policies and trends in implementation regimes. In J. de Koning & H. Mosley (Eds.), Active labour market policy and unemployment (pp. 163–178). Cheltenham, UK: Edward Elgar.

Nanninga, M., & Glebbeek, A. C. (2011). Employing the teacher-learner cycle in realistic evaluation: A case study of the social benefits of young people's playing fields. Evaluation, 17(1), 73–87.

Oakley, A., Strange, V., Toroyan, T., Wiggins, M., Roberts, I., & Stephenson, J. (2003). Using random allocation to evaluate social interventions: Three recent U.K. examples. Annals of the American Academy of Political and Social Science, 589, 170–189.

Pawson, R. (2003). Nothing as practical as a good theory. Evaluation, 9(4), 471–490.

Pawson, R., & Manzano-Santaella, A. (2012). A realist diagnostic workshop. Evaluation, 18(2), 176–191.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London, UK: Sage.

Pirog, M. A., Buffardi, A. L., Chrisinger, C. K., Singh, P., & Briney, J. (2009). Are the alternatives to randomized assignment nearly as good? Statistical corrections to nonrandomized evaluations. Journal of Policy Analysis and Management, 28(1), 169–172.

Sol, C. C. A. M., Glebbeek, A. C., Edzes, A. J. E., Busschers, I., De Bok, H. I., Engelsman, J. S., & Nysten, C. E. R. (2011). "Fit or unfit": Naar expliciete re-integratietheorieën ["Fit or unfit": Towards explicit re-employment theories]. RVO-5. Amsterdam: University of Amsterdam.

Struyven, L., & Steurs, G. (2005). Design and redesign of a quasi-market for the reintegration of jobseekers: Empirical evidence from Australia and the Netherlands. Journal of European Social Policy, 15(3), 211–229.

Vaessen, J., & Leeuw, F. L. (Eds.). (2010). Mind the gap: Perspectives on policy evaluation and the social sciences. New Brunswick, NJ: Transaction.

Van der Knaap, L. M., Leeuw, F. L., Bogaerts, S., & Nijssen, L. T. J. (2008). Combining Campbell standards and the realist evaluation approach: The best of two worlds? American Journal of Evaluation, 29(1), 48–57.

Van Hooft, E. A. J., & Noordzij, G. (2009). The effects of goal orientation on job search and reemployment: A field experiment among unemployed job seekers. Journal of Applied Psychology, 94(6), 1581–1590.

Van Ryn, M., & Vinokur, A. M. (1992). How did it work? An examination of the mechanisms through which an intervention for the unemployed promoted job-search behavior. American Journal of Community Psychology, 20(5), 577–597.

Weiss, C. H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21(4), 501–524.

Wildavsky, A. B. (1979). Speaking truth to power: The art and craft of policy analysis. Boston: Little, Brown.

Winship, C., & Mare, R. D. (1992). Models for sample selection bias. Annual Review of Sociology, 18, 327–350.

Wotschack, P., Glebbeek, A. C., & Wittek, R. P. M. (2013). Strong boundary control, weak boundary control, and tailor-made solutions: The role of household governance structures in work-family time allocation and mismatch. Manuscript under review. Berlin: Social Science Research Center (WZB).

                                                                                                              Notes:

(1.) In the Netherlands reemployment programs generally take the form of "trajectories" or "courses" consisting of several phases: intake, diagnosis, orientation, preparation of a personal action plan, teaching employability skills, job-search assistance and interview training, and, when successful, placement and job coaching.

                                                                                                              (2.) Quotations are from De Volkskrant, January 30, April 3, and November 20, 2008.

                                                                                                              (3.) This is sometimes called the quasiexperimental design, but because of the need to somehow correct for nonrandomization, it definitively belongs to the nonexperimental category.

                                                                                                              (4.) Note that this last viewpoint accounts for the macro-level effects we recognized in the second section of this chapter.

                                                                                                              (5.) See Pawson and Manzano-Santaella (2012) for a critical appraisal of allegedly “realistic” evaluations. For theory-based evaluations in general, a preliminary balance was drawn by Weiss (1997) and more recently by Coryn, Noakes, Westine, and Schröter (2011).

                                                                                                              (6.) This claim was made very explicit by Eden and Aviram (1993): “There is much that applied psychologists can do for the unemployed. . . . Helping people to regain their GSE [= general self-efficacy] is help of the noblest kind and is ultimately the most effective, because it truly helps people to help themselves” (p. 359).

                                                                                                              (7.) Following established practice, we distinguish labor market schooling and training from reemployment services. However, they are part of the wider category of active labor market programs (cf. Kluve, 2010).

                                                                                                              (8.) The negative evaluation of reemployment policy has not changed the overall commitment of the Dutch government to supply-side measures. In the proposed reform on dismissal law (2013), a large role is again reserved for reemployment services.