The Intelligence Analysis Crisis
Abstract and Keywords
The U.S. intelligence community is in crisis. Contrary to popular opinion, this crisis did not start in recent years; rather, it dates back to the 1960s, when U.S. intelligence failed on several occasions to provide sound national intelligence estimates. These failures are due to two factors: common psychological traits that produce unmotivated biases in information processing, which lead to systematically mistaken estimates in analysis; and the political environment of the intelligence system, which has produced the notion that the intelligence product is not merely a means to achieve foreign policy goals but also a political commodity that can be used to advance political and bureaucratic interests. These biased outcomes have resulted in bureaucratically incentivized and personally motivated manipulation in the production and use of intelligence analysis. This article discusses American intelligence failures within the broader context of American intelligence culture. It outlines eight specific aspects of this culture in order to determine the specific domains in which such intelligence estimation failures are likely to emerge. Particular focus is given to the two domains of failure: motivated and unmotivated biases. Suggestions on how to limit the impact of these biases on intelligence estimates are also discussed.
The US analytical intelligence community is in crisis. However, contrary to much popular opinion, this crisis did not start in recent years. In fact, evidence for fundamental weaknesses within the system can be traced back at least to the 1960s, including such dramatic examples as the Bay of Pigs and the Cuban Missile Crisis, through the 1970s, surrounding such events as the Khomeini revolution in Iran and the Soviet invasion of Afghanistan, into the 1980s with the unforeseen collapse of the USSR, and reaching beyond the 1990s failures to anticipate Iraq's invasion of Kuwait or the Indian and Pakistani nuclear tests until today. In this sense, recent major intelligence failures such as the lack of proper warning prior to the 9/11 terrorist attacks, the entirely mistaken estimate of Iraq's WMD capabilities, and the dubious claim that Iran had “halted its nuclear weapons program” merely represent recent peaks in a long historical valley of failed national intelligence estimates. Indeed, during this time of an ongoing war on terrorism, when intelligence is in many ways far more important than absolute levels of military power in determining outcomes, the performance of the American intelligence community is probably the poorest since its establishment in 1947. Here we investigate some of the fundamental explanations for this situation.
The causes of these failures have been the subject of numerous studies, starting with Wohlstetter's (1962) classic study of Pearl Harbor. Since then, the research in this field has yielded excellent general as well as more specific explanations for failed intelligence estimates, along with a vast body of proposals on how to fix the defects in the intelligence process that produce them (for a recent review, see Bar-Joseph and McDermott 2008). To date, this research has not shown how the impact of obstacles to high-quality analysis is shaped by a specific intelligence environment, or why certain kinds of organizations are more likely to fall victim to specific obstacles while others remain less susceptible. In order to tackle these questions, we suggest another approach to the problem. Instead of the more commonly used inductive approach, we propose a more deductive one that frames the study of American intelligence failures within the broader context of the American intelligence culture. Such an approach can provide not only a better understanding of the cracks in the foundation that have led to such failures, but also a more effective structure upon which to reformulate the system, primarily by helping reformers to accept the things that cannot be changed, change the things that can and need to be changed, and effectively distinguish between the two.
A first cut at explaining the sources of pervasive intelligence failure reveals two main factors that account for such consistently poor performance. First, the personal qualities of the analysts themselves contribute to an inability to see what is there, a tendency to see what is not there, and a fundamental restriction in meta-cognitive perspective, which does not allow each individual to properly interrogate his or her own inferential processes, strategies, and beliefs, or to ask how these dynamics might influence assessments based on them. In other words, common psychological traits result in unmotivated biases in information processing which lead to systematically mistaken estimates in analysis. These factors can be seen as internally determined.
A second factor remains more externally driven. The political environment affects both intelligence consumers' and producers' views that the intelligence product is not only a means to achieve foreign policy goals effectively but also a political commodity that can be used to advance political and bureaucratic interests. Biased outcomes in this instance result from bureaucratically incentivized or personally motivated manipulations in the production and use of intelligence analysis. Obviously, these internal and external processes often go hand in glove, and each can exacerbate the effect of the other. Since they commonly result in biases which move in the same direction, such effects do not tend to cancel each other out; rather, their combined effect aggregates the error across analysts who share the same perspectives and incentives, further accentuating both the nature of the error itself and the confidence associated with the mistaken estimate, as each participant gains greater assurance from the endorsement of others in the process.
In seeking to explain the specific ways in which such effects manifest within the context of the American intelligence culture in particular, we outline several specific aspects of this culture in order to locate the specific domains in which such effects are most likely to emerge, and the ways in which such biases remain particularly pernicious.
Accordingly, the first part of this chapter discusses eight important characteristics of American intelligence culture. On this basis, we identify and analyze two main sources of intelligence estimate failures: unmotivated and motivated biases. Each of these elements will be discussed in the following sections. In summarizing this discussion, we suggest a number of mechanisms that may limit the impact of these biases on intelligence estimates.
2. The American Intelligence Culture
2.1. The Concept of Intelligence Culture
While the concept of “strategic culture” has been widely used since the 1980s to explain the different ways in which nations formulate their national-security strategies, the systematic study of “intelligence culture” is relatively new. The idea was used in the 1970s to describe the general atmosphere within the CIA (for example, Marchetti and Marks 1974). In addition, in the early 1990s, the role of general cultural barriers in restricting effective intelligence forecasts was demonstrated (Bathurst 1993). Nevertheless, a more in-depth investigation into the impact of intelligence culture itself on the behavior of specific intelligence organizations has started only in recent years (for example, Hastedt 1996; Turner 2004).
We define “intelligence culture” here in a manner consistent with the established definition of “strategic culture” used elsewhere (Johnston 1995; Katzenstein 1996; Gray 2006), as encompassing modes of thought and action derived from perceptions of national historical experience, aspirations for self-characterization, and distinctive state experiences, with respect to the role of intelligence information and analysis in shaping foreign policy. As such, intelligence culture interacts with pre-existing psychological obstacles to effective information processing to reduce or enhance their impact on the quality of the intelligence product.
2.2. The Culture of the CIA
A preliminary study of the CIA suggests eight specific cultural traits that represent pervasive aspects of the intelligence culture we describe:
• Ethnocentricity and cultural insensitivity: The product of geostrategic isolation and a “lack of history,” combined with the first pilgrims' ethos of “a City upon a Hill,” this prominent characteristic of American culture is highly relevant to the analytical community. Since intelligence analysis requires deep understanding of foreign countries, cultures, languages, mindsets, and operational codes, ethnocentricity is, perhaps, the intelligence community's most important source of weakness. It is manifested in two main forms:
a. Foreign language deficiency that hampers the effective collection and use of available sources of information. Of the two hundred case officers sent to Korea in 1950, none spoke Korean, a limitation which became a leading factor in the CIA's complete failure to penetrate North Korea. The lack of Farsi-speaking personnel in the embassy in Teheran and in the US intelligence community at home limited the American ability to understand firsthand the inner process that led to the collapse of the Shah's regime in Iran in 1979 (see discussion below). An insufficient number of Arabic-speaking personnel in the community was, and still is, a major cause of ineffectiveness in the current war on terrorism as well. This pervasive and enduring lack of sufficiently skilled linguists speaks not only to the endemic ethnocentrism and lack of cultural sensitivity in the American intelligence culture, but also demonstrates its inability to adapt quickly and learn from repeated mistakes over time.
b. The interaction between ethnocentricity, a lack of understanding of and apathy toward different societies, and obstacles to information processing, primarily unmotivated biases, is likely to exacerbate pathologies such as “mirror-imaging.” Mirror imaging results when leaders, high-level groups, or even the citizenry of one country assume that others are just like themselves, and thus fail to fully understand the differing motives and incentives that may drive others to behave in different ways. This pathology played an important role, for example, in facilitating intelligence blunders such as the failure to forecast the Indian nuclear tests in 1998.
• Club mentality: Until recent years, the veil of secrecy under which intelligence agencies operated limited their ability to recruit workers openly. As a result, agencies like the British Secret Intelligence Service (MI6), which served as a model for the creation of the CIA, tended to recruit manpower on the basis of the “one member brings another” principle, and the CIA copied this system. This “old boys' network” recruitment strategy produced, from the 1940s on, a homogenous, largely white male Ivy League CIA that did not really begin to change until the early 1980s, with the massive recruitment of non–Ivy League university graduates. Since this early strategy gave heavy, indeed almost exclusive, preference to the recruitment of American-born Caucasians, it turned the CIA into an organization that American citizens from the Soviet and former Soviet bloc, or from Third World countries, especially Muslims, had a slim chance of joining. The result was the enhancement of ethnocentricity and cultural insensitivity, the weakening of HUMINT capabilities, and a growing dependency on foreign (and occasionally unreliable) intelligence services.
• Mass production: For almost a hundred years now, capital-intensive economics, mass production, and large-scale assembly lines have dominated American industry. This mode of production, which proved its effectiveness in WWII, had a major impact on the way the CIA built its own intelligence production lines, emphasizing quantity over quality. The result was waves of massive, indiscriminate recruitment of case officers, analysts, and agents whenever the need arose or political circumstances allowed, and a low level of professionalism and expertise within the CIA's ranks. Thousands of immigrants from Eastern Europe were recruited by the CIA at the beginning of the Cold War in order to build anti-communist undergrounds in the Soviet satellites (none became operational); two hundred case officers were sent to Korea in 1950 and recruited thousands of Korean and Chinese “agents”; a successful small-scale covert operation in Laos was turned into a far less successful large-scale operation in 1965; and two thousand new analysts and case officers joined the agency in mass recruitments when the Reagan administration came to power in the early 1980s and again in the post-9/11 period. As these examples illustrate, it is not clear that having more people for their own sake, without the experience and tradecraft gained in years of intelligence training and practice, enhances the quality of the intelligence received; rather, such growth may just as often obscure useful information in the midst of increasingly internecine bureaucratic networks and procedures. Especially without a clear strategy for the effective implementation and integration of new hires, less can in fact be more in such contexts, particularly if secrecy concerns remain paramount.
• Money can buy anything: The combination of a free-market ideology and the vast financial resources of the CIA yields a tendency to use money to compensate for the agency's weaknesses in other domains. The result is a preference for buying HUMINT sources and political influence, rather than cultivating them by other means, first and foremost ideology. For example, in building up the anti-communist networks in Eastern Europe in the late 1940s, an effort based on experience gained in WWII, the CIA used payments rather than ideology to carry out the mission. And unlike the Soviet services, which used ideology as the main tool to recruit excellent sources during the 1930s and 1940s, the CIA's main tool of recruitment was money. Experience shows that ideologically motivated sources are more effective than financially motivated ones, and certainly more likely to sustain their services once payoffs or surveillance ceases.
The same is true with regard to gaining political influence. The CIA bought the Italian elections in 1948, financed the demonstrations that led to the collapse of Mussadeq in Iran in 1953, bribed Japanese politicians in 1950, had King Hussein of Jordan on its payroll, controlled the Laotian Parliament in the mid-1960s, and financed, without direct involvement, the anti-Soviet Mujahidin in Afghanistan. The current strategy of buying the allegiance of the local sheiks in Al-Anbar province in Iraq represents a continuation of just such a short-sighted strategy. While financial incentives can work for a period of time, and certainly in the presence of a supporting military occupation, they are unlikely to produce the kind of internal shift of hearts and minds that ideological capture can create, which can then survive in the absence of supporting infrastructure.
• Public relations and sales promotion: American culture, which highlights salesmanship as a central means of promoting value, combined with aggressive bureaucratic competition (see below) and the fact that the CIA was the “new kid on the block” in the national-security apparatus (lagging behind the Pentagon and the State Department), led the CIA to emphasize public relations as a means to improve its public image and status within the administration. Unlike any other secret agency, the CIA had an Office of Public Affairs from its inception in 1947, and it invested heavily in describing failure as success, exaggerating the value of potential covert operations, and hiding the “family jewels.” This proclivity, combined with a tame media (at least until the early 1970s), enabled the CIA to portray humiliating failures, such as its covert operations in the Korean War or in Indonesia in 1958, as major successes. Such a strategy also helped the CIA to limit, at least somewhat, the public damage from humiliating failures such as the failed Bay of Pigs operation in 1961 or the completely mistaken assessment of Iraq's WMD capabilities in 2002. The end result was that in many cases the CIA invested far more in covering up its blunders than in fixing the weaknesses that caused them. This tendency can also be seen in the CIA's attempts to hide the spies that emerged in its own midst, most notably Aldrich Ames, and its inability to uncover those operatives while they were in charge of sensitive information within the agency.
• Technology as panacea: The legacy of American triumphs over geographic and other natural obstacles, and its pioneering technological developments (for example, the Manhattan project), have made preference for problem solving by technological means another dominant trait of the American intelligence culture. Combined with the impact of ethnocentricity, this characteristic yields a strong preference within the intelligence community to address problems through the extensive use of sophisticated technology over more traditional human intelligence means. Generally, this propensity helps to explain the US intelligence community's superiority in collection by technical means (TECHINT) and weakness in collection by human sources (HUMINT). At the analytical level, it explains the preference for the use of quantitative techniques over more traditional qualitative methods of assessment. In some areas, such as economic intelligence, it may produce high-quality estimates. In other, more subtle arenas requiring greater political or military acumen or sensitivity, the use of mechanical means might enhance the detrimental impact of unmotivated biases, by enabling inexperienced and culturally insensitive analysts to ignore complicated realities and contradictory information. A typical example of this problem can be seen in the method of assessing the strength of the Viet-Cong by means of body-counting that, to a large extent, turned the issue of who was winning the war into an exercise in bookkeeping, without any clear understanding of the degree of motivation in the underlying population regardless of casualties.
By focusing on body counts, assessments failed to understand the ways in which local actors might deliberately recategorize natural deaths in order to appear more successful or to gain more resources, while simultaneously refusing to recognize the ways in which large numbers of deaths actually helped generate the blowback effects which fueled the larger political insurgency.
• Emphasis on operations over analysis: The CIA's preference for operations stemmed, it seems, from an interest in changing a reality that it failed to understand. It was enhanced by the American “can do” mentality, the Cold War reality, and a bureaucratic interest in showing policymakers the agency's value as a problem solver. The result was a greater emphasis on covert operations than on intelligence collection and analysis. Dulles, for example, had little interest in strategic analysis but was very enthusiastic about covert action. The Bay of Pigs operation was carried out, despite the odds against it, to show Kennedy the CIA's ability to get rid of a major problem in the shape of Castro. And even during the 1962 Cuban missile crisis, covert operations against Castro took place, despite the fact that they could have led to an escalation of a nuclear crisis. At the same time, the CIA's Directorate of Intelligence received less attention, primarily because it lacked the “glory” of covert action.
• Aggressive bureaucratic competition: Although bureaucratic competition is a universal pattern in organizational behavior, the competition within the US intelligence community appears more intensive than in other intelligence communities. This is due to the impact of two factors: the capitalist heritage, which promotes competitive, achievement-oriented, society-concerned assertiveness; and the confederated structure of the community, which results from the fear of a centralized intelligence apparatus that might threaten basic individual freedoms. High-level bureaucratic politics interacts primarily with motivated biases and produces two main patterns of behavior:
a. Lack of cooperation, which occurs first and foremost in the domain of information sharing. While compartmentalization may justify this type of behavior on professional grounds, political competition seems to be the main cause of the ineffective distribution of critical information. Examples of the destructive impact of this lack of cooperation on effective intelligence analysis include the FBI's refusal to deliver critical information to other consumers prior to Pearl Harbor, and the instruction of the Naval Chief of War Plans, Admiral Turner, to avoid distribution of Magic material to the Navy command in Pearl Harbor. Lack of cooperation also contributed to other intelligence failures, including the inability to provide warning prior to the 9/11 terrorist attacks.
b. Intelligence to please. This pattern of behavior, which many intelligence makers in the US (but not in Israel or Britain, for example) regard as fairly normative, is motivated by the desire to gain influence in the policymaking process by providing the kind of information a leader wants to hear, at the cost of a less objective and accurate intelligence product. The American presidential system seems to make political pressures on intelligence makers more problematic than in parliamentary democracies. The most recent example of such political pressures involves the 2002–3 estimate of Iraq's WMD.
3. Unmotivated Biases and Intelligence Failure
The study of obstacles to high-quality information processing has yielded a large number of mechanisms that appear to systematically hinder this process. Here we focus on three of them:
a. Belief perseverance, which overlaps with or is closely related to mechanisms such as the confirmation bias or the polarization effect, is a process by which individuals assimilate new information into preexisting theories in biased ways. In particular, individuals typically accept at face value information which accords with their beliefs without subjecting it to strict inferential tests, interpret mixed or ambiguous information as consistent with preexisting beliefs, and tend to dismiss evidence that runs contrary to those theories, or at the very least subject such data to more exacting standards of credibility than they would data which confirms previous beliefs. In this way, analysts will tend to interpret evidence in the ways most likely to support their preexisting beliefs and least likely to change their views. This can make it extremely difficult for even a preponderance of evidence which runs contrary to established beliefs to penetrate an analyst's viewpoint (Lord, Ross, and Lepper 1979).
b. Judgmental heuristics are biases that can unconsciously lead to systematic errors in prediction, especially under conditions of uncertainty, by affecting estimates of frequency and probability on the basis of factors such as representativeness, availability, and anchoring, which often fail to track objective probabilities (Kahneman, Slovic, and Tversky 1982). In representativeness, observers are more likely to judge a person or event as representative of a larger category, such as “terrorist,” to the extent that such a person is similar on stereotypic features to others drawn from that category. Availability bias occurs when individuals estimate events to be more likely to the extent that they are easy to remember or access; for example, after the attacks of 9/11, domestic security focused on the protection of planes and airports because that was the most salient reference, rather than on how the enemy might shift strategy to another area precisely because increased security made such air-traffic attacks more challenging. Finally, anchoring reflects the fact that estimates and judgments often change less quickly than might be objectively warranted. Once made, such calculations become anchored, and adjustments take place slowly and incrementally, even when the environment demands more radical shifts in understanding. It thus proved difficult for the CIA to understand, for example, that Gorbachev represented authentic change in the Soviet system and was not just trying to lull the Americans into a false sense of security so as to catch them unawares later.
c. Groupthink, unlike the previous two mechanisms, is a syndrome that affects the dynamics within small groups and can create mutually reinforcing and impenetrable cycles of false belief, as each member of a group perpetuates the collective consensus in pursuit of personal appreciation (Janis 1982). In these dynamics, the critical factors remain the personal self-esteem, social support, status, and camaraderie which each member derives from group membership, which render challenges to the group particularly threatening from a psychological perspective. Members prefer to keep their doubts and objections to themselves rather than risk social sanction, isolation, or rejection from the group.
It can be assumed that a number of the CIA's cultural traits enhance the impact of each of these specific mechanisms more than others. Given, for example, that the estimation process is more demanding for analysts who lack first-hand knowledge of their subject or the necessary linguistic skills, they are likely to resort more often to various forms of heuristic judgment as a means to simplify the process. Similarly, club mentality is likely to increase the impact of groupthink, since individuals from a similar background tend to think similarly more than individuals from different backgrounds, and the likelihood of dissenting voices in such closely knit groups is likely to be lower.
A typical example of how certain patterns within CIA culture were channeled through unmotivated biases to create intelligence estimation failures is the mistaken estimate of the stability of the Iranian Shah's regime prior to the Khomeini revolution in 1979. Clearly, social revolutions are major events, and their accurate assessment necessitates not only the use of secret sources but also a clear comprehension of the social, political, and psychological mechanisms that precipitate, encourage, and support them. In 1978, the CIA lacked both the sources and the comprehension.
In August 1978, the CIA estimated that there was no threat to the Shah's regime (Weiner 2007, 369). On September 28, a Defense Intelligence Agency (DIA) paper predicted that the Shah “is expected to remain actively in power over the next ten years” (Bill 1988, 258). The CIA assessed on October 27 that “the political situation is unlikely to be clarified at least until late next year when the shah, cabinet, and the new parliament . . . begin to interact on the political scene” (Kurzman 2004, 1). Less than a hundred days later the Shah left Iran.
There are a number of explanations for the CIA's estimation failure. Some of them focus mainly on American political commitments to the Shah, which created a major obstacle to high-quality collection and analysis (for example, Conlin 1993). But Robert Jervis, who as a consultant for the CIA conducted the most thorough investigation into this fiasco, concluded that the agency lacked the elementary tools to grasp the nature of events as the revolution unfolded. To start with, the American embassy in Teheran was a typical example of Lederer and Burdick's classic concept of American missions abroad as S.I.G.G., or “Social Incest in the Golden Ghetto” (Lederer and Burdick 1958). Its members lacked the necessary linguistic skills (almost none of them spoke Farsi) and were isolated from nongovernmental segments of Iranian society, having few connections with the secular opposition and none at all with the pro-Khomeini segments that incited the revolution. A typical result of this isolation was that, despite requests by analysts, the CIA station in Teheran failed to obtain the cassette tapes of Khomeini that circulated freely in the streets (Jervis 2006, 16).
At the time, the CIA's Directorate of Intelligence employed only two political analysts in its Iran section, and they did not understand the essence of Iranian politics, society, or culture. Iran, moreover, was not a subject of intelligence analysis at the DIA or the State Department's Bureau of Intelligence and Research (INR), and the CIA's analysts did not use academic expertise to compensate for their intrinsic limitations (Jervis 2006, 21–22). The end result was that the CIA erred in understanding the causes of the Shah's reluctance to use his power against the opposition, estimated him to be stronger than he was, remained unaware of the serious nature of his cancer and the ways in which it both shifted his emphasis and diminished his abilities, completely misunderstood the role of religion and Khomeini, and failed to see that since the Shah was perceived by the nationalists as an American puppet, he (and not the USA) had become the main target of national frustration (Jervis 2006, 23–25).
Lacking detailed information about the dynamics behind the making of the CIA's Iran estimates in the fall of 1978, we cannot point to specific cultural traits, beyond ethnocentricity and cultural insensitivity, that were channeled through unmotivated biases to create this failure. But the role of these cultural factors becomes clear when the American intelligence estimate is compared to the Israeli one. Unlike the American diplomatic, military, and intelligence personnel, the staff of the Israeli embassy in Teheran included a number of people, including senior ones, who had been born in Iran and immigrated to Israel. Consequently, they knew the country, its politics, and its culture very well and had the necessary linguistic skills to communicate with people on the ground. The former Israeli military attaché in Iran, Yaacov Nimrody, who was born in Iran, was doing private business there in 1978. In his memoirs he described how, following a visit to the Island of Kish, which had become the Iranian elite's luxurious resort, he reached the conclusion that corruption on the one hand and popular frustration on the other posed a considerable threat to the regime's stability. His estimate was reinforced by visits to the bazaar in Teheran, talks with merchants, and other contacts. Toward the end of the year, after he saw that the Shah's picture had been removed from the wall in an office he visited, he concluded that the end of the Shah's regime had come. With this type of information and analysis one can understand why, in September 1978, Israeli officials reached the conclusion that the situation in Iran “was not good” and recommended putting an end to investments there and pulling Israeli assets out of the country (Nimrody 2003). Recall that in this same month, the American intelligence estimate expected the Shah's regime to stay in power throughout the next decade.
4. Motivated Biases
Motivated biases typically refer to those which derive from strong personal incentives. The literature has often discussed these biases in terms of strongly negative emotional feelings such as guilt, shame, or rage (Janis and Mann 1977). Occasionally these biases are discussed in terms of “wishful thinking,” in that analysts tend to believe things that they wish were so, for personal or professional reasons (Levy 2003; Jervis 1976).
But motivation can come in many forms, and motivated biases often overlap with political and professional incentive structures. Individuals who know that they will receive a raise or promotion if they produce evidence consistent with the preexisting desires or beliefs of policymakers have an additional reason to search for such information, and to fail to challenge its credibility, because they desire the political perquisites that will follow from giving their sponsors the information they want.
Individuals may or may not be aware of the impact of these personal and professional incentives on their behavior. Certainly, most people would be more aware of their personal or professional motivations than of their unmotivated inferential processes. However, affected individuals may not agree that such influences necessarily represent a problem, especially if policymakers are more likely to use information which aligns with their plans. Many analysts may have their own preexisting motivated ideological reasons for wanting to support such policy positions themselves. Indeed, these people may have self-selected into working in these environments for this very reason: they wanted to work in support of causes they believe are important. Yet once in place, their strongly held beliefs may prove a hindrance to the more objective analysis and interpretation of information which runs contrary to those beliefs.
The discussion of the role of motivated biases in intelligence practice usually focuses on the politicization of the intelligence product, also known as “intelligence to please” (i.e., the intentional tailoring of intelligence estimates to accord with the political preferences of consumers). This, indeed, is the primary problem with intelligence policy making in the United States. But motivated biases can also lead to the opposite result. When senior intelligence officers believe that they know better than their political bosses where the national interest lies and how to achieve it, they sometimes pursue a policy of their own. This was evidenced, for example, in the 1920 military-intelligence plot against British Prime Minister David Lloyd George, which aimed to end his rapprochement policy with the Soviet Union; the 1924 intelligence plot against the Labour party (the “Zinoviev Letter” affair); Israel's “Unfortunate Business” of 1954, which was intended to create a crisis in Egypt's relations with the West in order to prevent the British forces' evacuation of the Suez Canal, and was carried out without the knowledge, and against the policy, of the prime minister; and the behavior of Israel's Military Intelligence chief on the eve of the 1973 Yom Kippur War, who, certain that he knew the situation better than his superiors, misled them with regard to critical information concerning the imminent threat (Bar-Joseph 1995, 2005; Bennett 2006).
Intelligence estimates are also influenced by bureaucratic interests. Because of the direct correlation between the magnitude of a threat and the budget allocated to counter it, naval intelligence is more likely to highlight the threat of a rival navy, just as air force intelligence is likely to emphasize the menace of the opponent's air force. The bomber-gap and missile-gap debates of the 1950s are two examples of the way such motivated biases shaped intelligence estimates in the United States.
History shows that while in other countries motivated biases can be channeled into “intelligence to please,” bureaucratic competition, and even anti-governmental action, the specific nature of US intelligence culture makes the last pattern quite rare. Indeed, the only significant case in which CIA chiefs acted contrary to presidential preferences was the Bay of Pigs episode, in which they presented the operation's chances of success as being far higher than their actual estimate (Kornbluh 1998). Far more common is the CIA's tendency to tailor its reports to the political needs of the White House.
The CIA's politicization of intelligence represents an acquired cultural trait. Until the mid-1960s the agency was largely free of it, just as the Second World War OSS reports were a fine example of objective analysis. The legacy that dominated the CIA during that period was that of Sherman Kent, “the father of American intelligence analysis,” who regarded the separation of analysts from policymakers as the best means to preserve high-quality intelligence products (Kent 1949, 200). Indeed, one can hardly find any traces of “intelligence to please” in the agency's reports to President Eisenhower, who—being an experienced intelligence consumer—was himself aware of the limits of intelligence and the need to keep it objective. But the Johnson administration's need for positive estimates on the Vietnam War started changing this situation. After 1965, CIA reports from Saigon began to be colored in more optimistic tones (Allen 2001, 188–93), and under considerable political pressure DCI Richard Helms—probably the best DCI in the agency's history—had to accept the Pentagon's optimistic estimate of the Viet Cong order of battle for fear that failing to do so would damage his agency's relations with the White House (Weiner 2007, 267–69). DCI George Bush's readiness to allow a politically biased scrutiny of the CIA's estimate of Soviet military power, which led to the Team A–Team B debate, added another dimension to the politicization of the agency's reports (Prados 1986; Prados 1993).
The politicization of the CIA's estimates reached its peak during the first years of the Reagan administration. Under William Casey as DCI and Robert Gates as DDI, CIA analysts were pressured to produce estimates that portrayed the USSR as the mighty “evil empire” and the master of international terrorism, presented a number of states, such as Mexico and Iran, as being on the verge of turning communist, and minimized the estimate of Iran's involvement in terrorism to suit the administration's policy. The intentional exaggeration of the Soviet menace was one of the major causes of the CIA's failure to anticipate the collapse of the USSR (Goodman 2008).
In comparison to the CIA under Casey and Gates, Tenet's CIA estimates with regard to Iraqi WMD capabilities and links with al Qaida seem almost sincere, although it is obvious by now that the agency systematically corrupted the intelligence process in order to provide the administration with products that would justify the invasion of Iraq (e.g., Pillar 2006). What seems even more striking than the CIA's yielding to political pressure is the Pentagon's creation in 2002 of a special intelligence unit, the Office of Special Plans (OSP), under Undersecretary of Defense for Policy Douglas Feith, whose sole task was to produce intelligence reports confirming the administration's public accusations against Saddam Hussein. That the most senior officials in the administration regarded intelligence estimates as merely a political commodity, and that none of the intelligence chiefs came forward to protest this perception, constitutes vivid testimony to the level that “intelligence to please” had reached in the United States. As far as is known, this feature of intelligence culture is unique to America and does not exist in other liberal democracies.
This claim is also validated by the political and academic debate that has been taking place in the United States since the 1980s (roughly since Casey became DCI) about how politicized the intelligence product should be. This is a unique debate about values and norms that rarely if ever exists anywhere else in the world—in itself a clear indicator of the legitimacy that the politicization of intelligence has gained within American intelligence culture. It involves distinguished academic scholars such as Richard Betts, who maintains that a strict separation between intelligence and policy “may preserve the [intelligence] purity at the price of irrelevance” (1980, 109), as well as intelligence chiefs such as DCI Robert Gates, who regarded the CIA's “unprecedented access to the Reagan administration” as a major achievement and a key to “a dynamic, healthy relationship” (Gates 1987/8, 225–26). On the other side stand professionals such as Pillar (2006) and Goodman (2008), who follow the tradition of the forefathers of intelligence analysis in the United States, such as William Langer, who ran the OSS's Office of Reports during the Second World War, and Sherman Kent, who succeeded Langer and later headed the CIA's Office of National Estimates.
Without going into the details of this debate, a number of points should be made. First, none of the advocates of close relations between intelligence and policymakers publicly supports political pressure on intelligence or the motivated tailoring of intelligence reports to the political needs of the administration. Second, non-American participants in the debate such as Jones (1989), Harkabi (1984), Handel (1987), and Bar-Joseph (1995) systematically endorse the more traditional stand and regard close relations between intelligence analysts and policymakers as a major potential threat to the quality of the intelligence product. Third, although the main justification for a close relationship between intelligence and politics is the interest in making intelligence more relevant to policy making, recent experience shows that cooking intelligence can weaken the agency's status both within the governmental apparatus and in the public eye. Fourth, a corrupted intelligence product that yields political outcomes, such as the October 2002 NIE on Iraq's WMD capabilities, can have a far more negative impact on American national security interests than an uncorrupted and less influential product.
In order to improve the performance of intelligence analysts, academic students of the subject should look more deeply into the causes of American intelligence culture and its consequences for analytic performance. It may also prove beneficial to compare American intelligence culture with the cultures of other intelligence organizations in order to generate suggestions as to how to better manage, shift, or exploit the American culture so that it nurtures a more successful and accurate analytical environment.
Changing any culture is very difficult, and the process is often quite slow. Instigating such change demands that politicians and intelligence chiefs work together to mitigate those aspects of American intelligence culture that impede the production of objective analysis. One way to achieve this laudable goal is to establish a more open and transparent recruitment policy, a major means of overcoming ethnocentrism and club mentality. In particular, emphasis needs to center on recruiting United States citizens with Third World backgrounds or language skills.
In addition, the agency needs to work harder to recruit and promote “open-minded” personnel, especially for analytical positions. Professional incentives should be structured to reward accuracy, even when assessments run contrary to the established political wisdom or the stated desires of intelligence consumers. At the very least, analysts should know that their jobs will not be at stake should they offer analysis which runs contrary to political pressures. In addition, less frequent rotation and longer-term service in the same analytical positions can serve as an effective means by which individuals can gain a sustained and intimate understanding of their area of research.
Finally, serious attempts to diminish the impact of political pressure on the production of intelligence can help reduce the likelihood of pernicious errors in estimation and analysis. For example, passing a law that criminalizes political pressure designed to change intelligence estimates, or that punishes those who force intentional change for political reasons, would constitute a good first step in changing the incentive structure which encourages this kind of behavior. Such a shift is comparable to the normative change in attitudes toward sexual harassment which followed the public institutional sanctioning of such behavior in the workplace.
Each aspect of the American political culture that we identified above has the potential to interact with the politicization of intelligence in a way which renders estimates biased. Such assessments, while possibly more useful to a policymaker already bent on a particular plan, do a disservice to the longer-term security needs of the state. Fostering a culture in which contrary information and analysis is welcomed, and in which politicization is kept to a minimum, at least at the organizational and institutional level, can go some distance toward mitigating some of the more extreme biases which result from the interaction of encapsulated environments operating under institutional political pressure.
Allen, G. W. 2001. None So Blind: A Personal Account of the Intelligence Failure in Vietnam. Chicago: Ivan R. Dee.
Bar-Joseph, U. 1995. Intelligence Intervention in the Politics of Democratic States: The USA, Britain, and Israel. University Park: Pennsylvania State University Press.
———. 2005. The Watchman Fell Asleep: The Surprise of Yom Kippur and Its Sources. New York: State University of New York Press.
———, and R. McDermott. 2008. Change the Analyst and Not the System: A Different Approach to Intelligence Reforms. Foreign Policy Analysis 4, no. 2 (April): 6–44.
Bathurst, R. B. 1993. Intelligence and the Mirror: On Creating an Enemy. London: Sage.
Bennett, G. 2006. Churchill's Man of Mystery: Desmond Morton and the World of Intelligence. London: Routledge.
Betts, R. 1980. Intelligence for Policymaking. The Washington Quarterly 3 (Summer): 118–29.
———. 2007. Enemies of Intelligence: Knowledge and Power in American National Security. New York: Columbia University Press.
Bill, J. A. 1988. The Eagle and the Lion: The Tragedy of American-Iranian Relations. New Haven, Conn.: Yale University Press.
Conlin, M. 1993. Failures in Analysis: U.S. Intelligence and the Iranian Revolution. Intelligence and National Security 8 (January): 44–59.
Gates, R. 1987/88. The CIA and Foreign Policy. Foreign Affairs 66 (Winter): 215–30.
Goodman, M. 2008. Failure of Intelligence: The Decline and Fall of the CIA. Lanham, Md.: Rowman and Littlefield.
Gray, C. 2006. Irregular Enemies and the Essence of Strategy: Can the American Way of War Adapt? Carlisle: US Army War College.
Handel, M. I. 1987. The Politics of Intelligence. Intelligence and National Security 2, no. 4: 5–46.
Harkabi, Y. 1984. The Intelligence-Policymaker Tangle. The Jerusalem Quarterly, 125–31.
Hastedt, G. 1996. CIA's Organizational Culture and the Problem of Reform. International Journal of Intelligence and Counterintelligence 9, no. 3 (Fall): 249–69.
Janis, I. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascos. New York: Houghton Mifflin.
———, and L. Mann. 1977. Decision Making. New York: Free Press.
Jervis, R. 1976. Perception and Misperception in International Politics. Princeton: Princeton University Press.
———. 2006. The Failure to See That the Shah Might Fall: The Jervis Post-Mortem for the CIA in Retrospect. Prepared for delivery at the 2006 Annual Meeting of the APSA, August 30–September 3, 2006.
Johnston, A. I. 1995. Thinking about Strategic Culture. International Security 19, no. 4: 32–64.
Jones, R. V. 1989. Reflections on Intelligence. London: Heinemann.
Kahneman, D., P. Slovic, and A. Tversky. 1982. Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Katzenstein, P. J., ed. 1996. The Culture of National Security: Norms and Identity in World Politics. New York: Columbia University Press.
Kent, S. 1949. Strategic Intelligence for American World Policy. Princeton, N.J.: Princeton University Press.
Kornbluh, P. 1998. Bay of Pigs Declassified: The Secret CIA Report on the Invasion of Cuba. New York: New Press.
Kurzman, C. 2004. The Unthinkable Revolution in Iran. Cambridge, Mass.: Harvard University Press.
Lederer, W. J., and E. Burdick. 1958. The Ugly American. New York: Norton.
Levy, J. 2003. Political Psychology and Foreign Policy. In Oxford Handbook of Political Psychology, ed. D. Sears, L. Huddy, and R. Jervis. New York: Oxford University Press.
Lord, C., L. Ross, and M. Lepper. 1979. Biased Assimilation and Attitude Polarization: The Effect of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology 37, no. 11: 2098–109.
Marchetti, V., and J. D. Marks. 1974. The CIA and the Cult of Intelligence. New York: Dell Publishing.
Nimrody, Y. 2003. My Life's Journey. Tel Aviv: Maariv (Hebrew).
Pillar, P. 2006. Intelligence, Politics and the War in Iraq. Foreign Affairs 85 (March–April): 17–25.
Prados, J. 1986. The Soviet Estimate: US Intelligence Analysis and Russian Military Strength. Princeton, N.J.: Princeton University Press.
———. 1993. Team B: The Trillion Dollar Experiment. Bulletin of the Atomic Scientists 49 (April): 23–31.
Turner, M. L. 2004. A Distinctive U.S. Intelligence Identity. International Journal of Intelligence and Counterintelligence 17, no. 1 (Spring): 42–61.
Weiner, T. 2007. Legacy of Ashes: The History of the CIA. New York: Doubleday.
Wohlstetter, R. 1962. Pearl Harbor: Warning and Decision. Stanford: Stanford University Press.