
Competitive Analysis: Techniques for Better Gauging Enemy Political Intentions and Military Capabilities

Abstract and Keywords

This article reviews the calls for competitive analysis that have emerged from outside reviews of American intelligence performance. It evaluates the major competitive analytic techniques and some of the American intelligence community's efforts to put them into practice. The article also discusses the difficult bureaucratic, intellectual, cultural, and human collection hurdles that constrain effective competitive analysis in the American intelligence community. It concludes with recommendations for doing better competitive analysis under the auspices of the Director of National Intelligence (DNI).

Keywords: competitive analysis, American intelligence performances, competitive analytic techniques, American intelligence community, collection hurdles, effective competitive analysis, recommendations, DNI

1. Introduction

One of the most important, and yet daunting, tasks for intelligence analysts is to gauge enemy political intentions and military capabilities. Analysts need to marry political-intention and military-capability assessments to form a threat assessment for policymakers. As Richard Betts explains, “a threat consists of capabilities multiplied by intentions; if either one is zero, the threat is zero” (Betts 1998, 30). The failure of intelligence analysts to accurately gauge political intentions, military capabilities, or both can result in strategic intelligence catastrophes. If policymakers are not given warning of impending war, they are denied windows of opportunity to work diplomatically to head off a crisis before the first shots are fired.
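Betts's dictum lends itself to a schematic formalization (an illustrative rendering of the quoted sentence, not an equation Betts himself writes out):

\[ \text{Threat} = \text{Capabilities} \times \text{Intentions}, \qquad \text{Capabilities} = 0 \ \text{or} \ \text{Intentions} = 0 \implies \text{Threat} = 0. \]

The multiplicative, rather than additive, form is what makes both halves of the analytic task indispensable: a heavily armed but benign state and a hostile but disarmed one are equally non-threats.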

Analysts are responsible for informing policymakers about the military capabilities of foes. These capabilities can come in the form of tanks, armored personnel carriers, artillery, helicopters, and aircraft for waging conventional military operations. They might also come in the form of weapons of mass destruction (WMD)—nuclear, chemical, and biological weapons—in warheads sitting on top of ballistic missiles for waging unconventional warfare at a higher end of the conflict spectrum. Or military capabilities can come in the form of small arms, ammunition, explosives, and rockets wielded by terrorist, militia, and insurgent forces for waging unconventional warfare at the lower end of the conflict spectrum. But it is equally important in the assessment of military capabilities not to concentrate on the measurable and quantifiable to the neglect of the less precise, non-material capabilities such as the quality of morale, military doctrine, leadership, intelligence, logistics, and training (Handel 2003, 12).

Analysts too need to gauge political intentions, or the willingness to use military capabilities to achieve political ends. As Clausewitz teaches, military force and war are extensions of politics by other means (Clausewitz 1989, 87). Gauging political intentions is often more difficult than gauging military capabilities, because the latter entail armed forces that can be numbered and tallied to form a rough baseline or order-of-battle assessment. If many military capabilities are “hardware” that can be seen and touched, political intentions are “software” buried inside the heads of enemy leaders that—more often than not—is imperceptible to intelligence analysts.

How might intelligence analysts strengthen their assessments of enemy political intentions and military capabilities to avoid strategic intelligence debacles? Some intelligence professionals, scholars, and critics argue that a basket of analytic techniques collectively called “alternative analysis” or “competitive analysis” offers strong prospects for strengthening future intelligence performances. This chapter briefly reviews the calls for competitive analysis from outside reviews of American intelligence performances. It examines major competitive analytic techniques and some efforts by the American intelligence community to put them into practice. The discussion then turns to the formidable bureaucratic, cultural, intellectual, and human collection hurdles that will inhibit effective competitive analysis practices in the American intelligence community. The chapter concludes with recommendations for doing better competitive analysis under the auspices of the Director of National Intelligence (DNI).

2. Calls for Competitive Analysis after Intelligence Failures

The United States has experienced over the past decade some of the most catastrophic intelligence failures since the founding of the intelligence community with the National Security Act of 1947. The Central Intelligence Agency, which had been the lead American intelligence agency for assessing threats to the United States, blundered in gauging the military capabilities and political intentions of two enemies: Iraq and al-Qaeda.


The CIA's assessments of Iraqi military capabilities in the form of weapons of mass destruction were wildly inaccurate. In the run-up to the American- and British-led war in 1991 to liberate Kuwait from occupying Iraqi forces, the CIA grossly underestimated the sophistication of Iraq's biological and nuclear weapons programs. More than a decade later, in the run-up to the American and British invasion of Iraq in 2003, the CIA assessed that Iraq had robust weapons of mass destruction programs, when in fact Iraq's WMD programs had been dilapidated since 1995 (Russell 2007, 76–85).

American intelligence also badly blundered in gauging al-Qaeda's unconventional capabilities to wage war against the United States prior to the September 11 attacks, although it more accurately gauged al-Qaeda's political intentions. The CIA, to its credit, provided strategic warning to President George W. Bush in the summer of 2001 that al-Qaeda was planning a large attack against American interests. But the CIA lacked specific intelligence pointing to the 9/11 conspiracy, and the FBI failed to recognize the significance of information its field agents had acquired about al-Qaeda members' pilot training inside the United States and to share that information broadly in the intelligence community (Russell 2007, 71–76).

The scope and magnitude of these failures brought about a slew of outside investigations of the intelligence community. President George W. Bush, for example, appointed a commission to examine the American intelligence community's capabilities to assess foreign WMD capabilities. The WMD Commission report was one of the most insightful and thoughtful investigations of the intelligence community's performance. Its findings, unfortunately, were overshadowed by the public limelight grabbed by the 9/11 Commission, whose report was a national best-selling book, a notable achievement for a government investigation. The 9/11 Commission's report, however, was long on personal and political drama but not nearly as insightful or strategic in its assessment of the intelligence community as the lower-profile WMD Commission report. The WMD Commission recommended that “The DNI should encourage diverse and independent analysis throughout the intelligence community by encouraging alternative hypothesis generation as part of the analytic process by forming offices dedicated to independent study” (WMD Report 2005, 405).

The critique that the intelligence community lacked competitive or alternative analysis echoed those of earlier outside investigations of intelligence-community performances. The Jeremiah report, for example, found that the failure of the intelligence community to warn of India's detonation of nuclear weapons in 1998 stemmed in part from a prevalent mindset that India would not test nuclear weapons and risk negative international reaction, as well as from an inability to conduct effective devil's advocate analysis to counter the prevailing, and profoundly wrong, conventional wisdom at the CIA (Pincus 1998, A18). The Rumsfeld Commission in 1998 similarly concluded that the intelligence community did not have the analytic depth or methods to accurately assess the global proliferation of ballistic missiles (Goldberg 2003).

3. Competitive Analysis Techniques

The analytic task for competitive, or as some observers prefer, alternative analysis is to penetrate, critique, challenge, and develop analyses that run counter to the prevailing conventional wisdom, worldviews, and mindsets embedded as orthodoxy in intelligence assessments of enemy military capabilities and political intentions. Worldviews or mindsets are sets of expectations through which analysts see the world: events consistent with these beliefs are embraced as valid and accepted, while those that conflict with them are discarded (George 2004, 312). As Richard Heuer observes, analysts perceive—as do policymakers, statesmen, lawmakers, and everyone else, for that matter—the world through a “lens or screen that channels and focuses and thereby may distort the images that are seen” (Heuer 1999, 4). Roger George rightly warns, “While mindsets can be helpful in sorting through incoming data, they become an Achilles' heel to professional strategists or intelligence analysts when they become out of touch with new international dynamics. Knowing when a mindset is becoming obsolete and in need of revision can test the mettle of the best expert” (George 2004, 312).

Common wisdom and mindsets are nurtured by group discussions and social pressures to conform to consensus thinking. Irving Janis brilliantly captured this phenomenon in foreign-policy decision making and coined the term “groupthink.” In decision-making groups, “members tend to evolve informal norms to preserve friendly intragroup relations and these become part of the hidden agenda at their meetings” (Janis 1982, 7). Janis used “groupthink” as shorthand for group dynamics in which members' striving for unanimity overrides their motivation and ability to realistically assess alternative judgments (Janis 1982, 9). Janis focused on policy making, but his observations apply equally to the making of intelligence assessments. Many commentators have speculated that groupthink was at work in the abysmal intelligence assessments that Iraq had robust WMD capabilities just before the 2003 war.

Competitive or alternative analysis is a basket of various analytic techniques for steering away from the groupthink or common wisdom that sends strategic intelligence assessments over intellectual cliffs. Alternative analysis is the term often applied to a range of analytic techniques used to challenge conventional thinking on an intelligence problem (George and Bruce 2008, 309). Competitive analysis is similar and refers to the use of competing sets of analysts or analytic units to uncover different assumptions, evidence, and alternative perspectives and to illuminate an intelligence problem better than conventional wisdom (George and Bruce 2008, 310). There is a vast array of competitive analysis methodologies. By one account, there are more than two hundred analytic methods that intelligence analysts might exploit (Johnston 2003, 9).

Some of these methodologies take their bearings from the rational-choice theory that increasingly dominates the academic disciplines of political science and international relations. Rational-choice approaches consist of methodologies that leverage statistics, or large-N studies, to quantify political phenomena in order to build mathematical and computer models that try to predict future behavior. But these approaches teeter on the cusp of irrelevance because the explanatory power of the theories generated by these methodologies is often inconsequential, and not even interpretable, for policymakers. As one thorough review of the rational-choice security-studies literature assessed, “The growing technical complexity of recent formal work has not been matched by a corresponding increase in insight, and as a result, recent formal work has relatively little to say about contemporary security issues” (Walt 1999, 8).
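To make concrete the genre of analysis being critiqued here, consider a minimal sketch of a large-N model: a logistic regression fit to synthetic country-year data to “predict” conflict onset. Everything in it (variable names, data, coefficients) is hypothetical, offered only to illustrate what such quantitative modeling looks like in practice, not to reproduce any specific study in the literature.

```python
# Minimal sketch of a "large-N" rational-choice-style model: a logistic
# regression predicting conflict onset from quantified covariates.
# All data and variable names are synthetic/hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic country-year observations: capability ratio, shared border,
# and trade interdependence.
n = 5000
X = np.column_stack([
    rng.lognormal(0.0, 0.5, n),   # military capability ratio
    rng.integers(0, 2, n),        # contiguity dummy (shared border)
    rng.uniform(0.0, 1.0, n),     # trade interdependence
])
true_beta = np.array([0.8, 1.2, -2.0])          # "true" effects (invented)
logits = X @ true_beta - 2.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Fit by simple gradient ascent on the logistic log-likelihood.
Xb = np.column_stack([np.ones(n), X])            # add intercept column
beta = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xb @ beta)))       # predicted probabilities
    beta += 0.1 * Xb.T @ (y - p) / n             # gradient step

print("estimated coefficients:", np.round(beta, 2))
```

The Walt critique quoted above is precisely that the estimated coefficients of such a model, however statistically tidy, rarely answer the questions policymakers actually ask.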

On the other hand, qualitative case-study analysis, which often taps history for lessons learned, is more readily consumed and appreciated by policymakers and implicitly dominates the day-to-day production of intelligence analysis in the intelligence community. Of the qualitative competitive analytic techniques, several loom large as potentially effective tools for better gauging enemy military capabilities and political intentions: key assumptions checks, devil's advocacy, Team A and Team B exercises, red cell exercises, contingency analysis, high-impact/low-probability analysis, and scenario building (George 2004, 318–21). Jack Davis rightly calls these techniques “challenge analysis,” which could be undertaken after analysts have “reached a strong consensus and are in danger of becoming complacent with their interpretative and forecasting judgments” (Davis 2008, 168). These techniques are not necessarily mutually exclusive. Some sophisticated competitive or alternative analyses might mix, match, and blend them to challenge conventional wisdom and mindsets and so avoid poorly assessing enemy military capabilities and political intentions.

Key assumptions checks press analysts to explicitly identify the foundational assumptions and factors, or “drivers,” on which the conclusions of their analyses are based (George 2004, 318). With the benefit of twenty-twenty hindsight, it is easy to see that a key assumptions check in the run-up to the 2003 war with Iraq would have been useful for the assessment that Saddam Hussein was actively reconstituting his WMD programs. A key assumptions check would also have been useful in the summer of 1990, when analysts assessed that Iraqi Republican Guard forces building up along the border with Kuwait aimed at politically coercing the Kuwaiti royal family and perhaps mounting only a limited military border incursion; it would have highlighted some “fast and loose” assumptions that Saddam would risk too much by invading all of Kuwait (Russell 2002, 194–97).

The goal of a devil's advocate is to robustly critique conventional analytic wisdom and to make a persuasive argument, using the same data, that an alternative conclusion is the best assessment. As Robert Jervis explains, devil's advocacy increases the chances that analysts “will consider alternative explanations of specific bits of data and think more carefully about the beliefs and images that underlie” their judgments (Jervis 1976, 416). A devil's advocate analysis of Iraq's WMD capabilities in the run-up to the 2003 war would have been invaluable. The devil's thesis could have been that, contrary to the common wisdom in the intelligence community and the CIA, Iraq did not have active WMD capabilities. Such a devil's advocate argument could have used the debriefings of a key Saddam loyalist and former head of Iraq's WMD program, Hussein Kamil, who told the United States in 1995 that Iraq's WMD programs were mothballed. Hussein Kamil's reports were dismissed at the time, in part because of doubts about his reliability and the failure of his information to conform to the conventional mindset (Russell 2007, 81–82).

Team A and Team B exercises involve intelligence community analysts making an assessment from intelligence data, sharing that same data with a group of scholars and practitioners outside the intelligence community, and tasking the outsiders to make an alternative assessment. The insider and outsider assessments are then compared and examined to determine their strengths, weaknesses, and ultimate persuasiveness. Such exercises are exceptional rather than the norm. When one was famously run in 1976 on Cold War assessments of Soviet strategic nuclear weapons forces, it caused a huge controversy inside the intelligence community, which resented outside intrusion into its domain. The outside Team B “raised important questions about Soviet doctrine and objectives but did not provide an answer with any sophistication,” concluded strategist Lawrence Freedman (Freedman 1986, 138).

The National Intelligence Council (NIC) under the Clinton administration orchestrated some Team A and Team B–like exercises. It commissioned private think tanks to write “parallel estimates” to those being written inside the intelligence community (Treverton 2003, 133). Parallel estimates could be used to probe the strengths, weaknesses, and gaps of National Intelligence Estimates (NIEs). Such efforts might well be undertaken as the norm, rather than the exception, in the NIE process, which should focus only on those issues of the greatest strategic consequence to American security.

Multiple advocacy is another competitive analytic technique. Political scientist Alexander George proposed this technique for policy decision making, but it can be used for intelligence assessments as well. Multiple advocacy encourages competitive analysis and forces analysts to take “partisan” analytic positions to evaluate against the assessments made by other analytic partisans to ensure a full airing and hearing of all aspects of an intelligence problem (George 1980, 201). A multiple-advocacy exercise could be undertaken, for example, using Arab and Iranian scholars outside of the intelligence community to debate strategic perspectives, especially military capabilities and political intentions, of the United Arab Emirates and Iran regarding several islands in the Persian Gulf over which both states claim sovereignty.

Red cell exercises entail assembling groups of analysts to role-play foreign leaders and military commanders and to develop policies and actions against American interests. Red team analysts try to escape from American strategic mindsets and to act and behave in the same manner as foreign enemies. Red cells are often composed of country experts (George 2004, 320). The technique is also called “red teaming,” which simulates how potential enemies might threaten American interests or respond to U.S. actions and policies aimed against them (Treverton 2003, 38). A red cell or team, for example, could be assembled to represent Iran's clerics, president, intelligence services, and Revolutionary Guard and military commanders and tasked with developing a strategic campaign to drive American political, economic, and military presences out of the Persian Gulf to achieve Iranian hegemony.

Contingency analysis challenges conventional analyses, which generally assess the most likely outcome or scenario in international events, by posing the question “what if?” Contingency analysis asks what the causes and consequences would be if an unlikely event—sometimes called a “wild card”—occurred (George 2004, 320). Conventional wisdom on the China-Taiwan dispute, for example, is that China would not want to undertake the political and military risks of invading the island. Much analysis starts with this premise and then builds an argument around why the Chinese would not or could not invade Taiwan. A competitive contingency analysis would start with the question: if the Chinese military were one day tasked by political authorities to invade, how would it do it? (Russell 2001b, 77). As another example, conventional wisdom holds that Iran is years away from acquiring nuclear weapons that could be delivered by Iranian ballistic missiles. A contingency analysis might look at what “wild cards” or shortcuts the Iranians might take to get nuclear weapons capabilities much sooner, such as the outright purchase of nuclear weapons from the cash-strapped nuclear weapons states of Pakistan and North Korea.

High-impact/low-probability analysis is a closely related competitive analysis technique that focuses on what is conventionally assessed to be an unlikely future event but one that, if it were to occur, would have enormous negative consequences for American security interests (George 2004, 321). This technique is particularly well suited for strategic warning because analysts put aside projections of what they think is likely to happen and focus on how the trends most damaging to American interests could come about (Davis 2007, 182). One high-impact, but perhaps low-probability, scenario would be al-Qaeda's theft or capture of nuclear weapons from Pakistan. Analysts would have to speculate on how al-Qaeda would politically leverage nuclear weapons or even use them to attack American cities.

Scenario building tasks analysts to “brainstorm” and envision a broad array of scenarios for a potential military conflict. Analysts take the conventional wisdom about the potential conflict and the assumptions on which it is based and explicitly identify all of the uncertainties embedded in those assumptions. From the areas of uncertainty identified, analysts develop a matrix of possible alternative scenarios to the conventional wisdom (George 2004, 321–22). The NIC during the Clinton administration, for example, “brainstormed” major NIEs in unclassified discussions with experts from outside the intelligence community (Treverton 2003, 133). A conventional-wisdom assessment might be, for example, that Pakistan fully controls its nuclear weapons inventory. Analysts could pick apart the assumptions on which this assessment rests and then envision scenarios in which these assumptions could unravel, such as massive Pakistani civilian unrest, protests, riots, or civil war between military factions; a concerted al-Qaeda attack on specific nuclear-weapons depots; or a military coup dominated by militant Islamists who could give nuclear weapons to al-Qaeda.
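The mechanical core of the scenario-matrix step, crossing each identified uncertainty against the others to enumerate alternative futures, can be sketched as a cartesian product. The dimensions and values below are hypothetical, loosely drawn from the Pakistan example above, and serve only to illustrate the enumeration.

```python
# A minimal sketch of the scenario-matrix step in scenario building:
# cross every identified uncertainty against the others to enumerate
# candidate alternative futures. Dimensions and values are hypothetical.
from itertools import product

uncertainties = {
    "civilian stability": ["stable", "mass unrest", "civil war"],
    "army cohesion": ["unified", "factional split", "Islamist coup"],
    "nuclear custody": ["intact", "depot attacked", "weapon diverted"],
}

# Build one scenario per combination of uncertainty values.
scenarios = [
    dict(zip(uncertainties, combo))
    for combo in product(*uncertainties.values())
]

print(f"{len(scenarios)} candidate scenarios")  # 3 x 3 x 3 = 27
for s in scenarios[:3]:                         # show a sample
    print(s)
</code>
```

Analysts would then prune implausible combinations and concentrate on the high-impact cells; the value of the exercise is that the brute enumeration surfaces combinations a single conventional-wisdom narrative never considers.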

4. Efforts to Implement Competitive Analysis Practices

The DNI, a post created with the intelligence reforms instituted in 2004, is taking up outside calls for implementing competitive analysis efforts inside the intelligence community. The former DNI Admiral Mike McConnell claimed that the intelligence community was addressing the analytic problems identified by the 9/11 Commission and the WMD Commission with the formation of “‘devil's advocate’ and alternative analyses, examining, for example, whether avian influenza can be weaponized and how webcams could aid in terrorist planning” (McConnell 2007, 55).

The NIC under the DNI's direction also is incorporating competitive analysis into NIEs. The former NIC chairman Thomas Fingar said his job was to ensure that there was as much competitive analysis as possible before intelligence reports were completed, commenting that “The interesting thing is not when analysts agree. It's when they disagree” (Mazzetti 2007, 5). Former NIC vice chair Gregory Treverton echoes this sentiment: “If intelligence doesn't challenge prevailing mind-sets, what good is it?” (Treverton 2003, 5). The use of competitive analysis in NIEs is noteworthy because the NIE on Iraq's WMD programs written in October 2002—which was used for Secretary of State Colin Powell's February 2003 presentation to the United Nations Security Council justifying war to oust Saddam Hussein's regime—was deeply flawed.

The CIA claims to be doing more competitive analysis as well. The CIA's Directorate of Intelligence, for example, has set up a unit that conducts red cell analytic exercises that are speculative in nature and sometimes take a position that is at odds with the conventional wisdom (WMD Report 2005, 406; George 2004, 320). The agency also has infused competitive analysis techniques into its training programs for newly hired intelligence analysts (Marrin 2003, 619).

5. Hurdles to Effective Competitive Analysis

It is easy for outside observers to call for competitive analysis. It is easy, too, for high-level intelligence-community bureaucrats to insert competitive-analysis slides into their PowerPoint briefings to appease outside critics. But the real and effective implementation and practice of competitive analysis demands skills and an intellectual environment that do not sit well inside the American intelligence community.

Notwithstanding the siren calls from some quarters that competitive and alternative analysis is the “answer” to American intelligence failures in gauging enemy capabilities and intentions, it is important not to lose sight of the reality that many times common wisdom and mindsets in the intelligence community are right. As historian Ernest May reminds us, the ability of analysts “to interpret other peoples' politics is always limited. Their easiest course is to assume that another government will do tomorrow what it did yesterday, and ninety times out of a hundred events justify such an assumption” (May 1984, 537).

That caveat aside, shortages of analytic talent will hamper competitive analysis in the intelligence community. As the WMD Commission found, the predominantly inexperienced analysts in the intelligence community “have a difficult time stating their assumptions up front, explicitly explaining their logic, and, in the end, identifying unambiguously for policymakers what they do not know [italics in original]. In sum, we found that many of the most basic processes and functions for producing accurate and reliable intelligence are broken or underutilized” (WMD Report 2005, 389).

The unquenchable thirst for current intelligence production, moreover, is a huge barrier to effective competitive or alternative analyses at the working level of the intelligence community, especially at the CIA. Political-military analysts working on conflicts often are peppered with daily and even hourly taskings for the production of current intelligence. They simply do not have the luxury of time needed to sit back, read, and think more broadly about strategic intelligence problems to develop even a common wisdom, much less alternative analyses. The immediate always takes precedence over the “nice to have,” the category into which competitive and alternative analysis falls among working-level analysts.

Those analysts who might have a purview for conducting alternative analysis often focus on the methodologies but lack the substantive expertise on a target country or region to fill in the inputs. The creation of permanent teams or offices responsible only for competitive analysis methodologies, divorced from the regional or political-military expertise needed for gauging enemy capabilities and intentions, is not an especially productive route to better analysis. Illustrative of this problem, I recall working under crushing analytic workloads on Iraq in the mid-1990s as a political-military analyst. I was visited one day by a colleague from an office devoted to foreign “denial and deception.” Denial refers to efforts by adversaries to prevent their activities from being seen or heard by American intelligence, while deception refers to practices that feed American intelligence misleading information or misdirect attention away from clandestine activities (Bruce and Bennett 2008, 122–23). After a polite exchange of pleasantries, my colleague appeared rather proud of himself when he shared his assessment that Saddam Hussein's regime was active in “denial and deception” to hide its WMD capabilities. I thanked him for his “unique” insight and hustled him out of the office as fast as I could to get back to meeting my current intelligence deadlines. At the end of the day, competitive or alternative analysis demands expert analysts for the intellectual input. No matter how sophisticated or sexy a methodology is, its results can only be as good as the analytic and intelligence input. As the old adage has it, “garbage in, garbage out.”


The CIA is short on its own substantive experts, however. The agency is deluged with new analysts from a hiring binge undertaken after 9/11; about half of today's analysts have less than five years of experience in the intelligence business (Mazzetti 2007, 5). Inexperienced analysts might do just fine summarizing or gisting the latest cables with readouts from human sources, satellite imagery, or intercepted communications, but they are ill suited for effective red-teaming exercises that require thinking from the strategic perspective of a potential enemy. Inexperienced analysts, moreover, might look at very narrow and specialized intelligence topics and be intellectually unable to step away from tactical minutiae to focus on the operational, strategic, or grand strategic levels of analysis of foreign rivals.

The CIA too is short on the production of human intelligence that sheds light on the political intentions of adversaries. The CIA, in fact, failed to produce high-level human sources in the war councils of the enemies the United States faced in the Soviet Union throughout the Cold War and during the Korean and Vietnam wars, and most recently in the wars against al-Qaeda and Iraq (Russell 2007, 97). Ultimately, some human intelligence is essential input for competitive or alternative analysis techniques. Accurate human intelligence is needed to form a critical mass of actual empirical evidence to lend more weight to one analytic assessment over others. Otherwise, competitive analysis risks becoming an academic exercise that marshals speculation against more speculation and gives little “value added” insight to harried policymakers.

The institutionalization of permanent competitive analysis poses other problems. As Richard Betts points out, institutionalizing devil's advocacy would be akin to institutionalizing the “crying wolf problem”: the group or individual would be bureaucratically indulged and disregarded (Betts 2007, 42). Mark Lowenthal echoes these reservations: “one of the prerequisites for alternative analysis is that it provide a fresh look at an issue,” but as soon as alternative analysis is “institutionalized and made a regular part of the process, it loses the originality and vitality that were sought in the first place” (Lowenthal 2006, 132). Former senior CIA official Douglas MacEachin rightly cautions that a permanent office for alternative analysis runs risks of irrelevance: “Because the job is to produce ‘out-of-the-box’ ideas, the product is too often received with a predisposition to see it as the product of an assignment to ‘come up with crazy ideas that have little to do with the real world’ ” (MacEachin 2005, 129).

Effective devil's advocacy, or real competitive and alternative analysis, demands an intellectually open environment to flourish. Alas, the working environment in the CIA resembles a medieval feudal-lord system in which managers are loath to surrender working-level analysts to till other intellectual fields absent orders transmitted from higher levels in the managerial command. Those orders most often come to staff task forces working the crisis du jour and writing current intelligence, not for doing longer-term strategic analysis and warning of potential conflicts that loom over the horizon.


I remember from my working-level intelligence-analyst days in the mid-1990s that I was intellectually uneasy with conventional assessments that Saudi Arabia lacked strategic interest in nuclear weapons. Saudi Arabia was not in my direct line of analytic responsibilities, so no one in the CIA's management chain gave my concerns any notice. I ended up researching and writing, as a scholar, my own devil's advocate analysis arguing that Saudi Arabia had both the political intentions and the means to develop a nuclear weapons capability. I eventually published the article in a security studies journal, which had a keen interest in the topic that the CIA lacked (Russell 2001a, 69–79).

Analysts who have strongly divergent views and are deeply troubled by the conventional-wisdom assessments percolating in the intelligence community's hallways ought to be provided with, and indeed rewarded with, an intellectual refuge somewhere in the intelligence community. They need a shelter where they can escape their daily current intelligence responsibilities and take up substantive and intensive research and analysis to mount a devil's advocate or competitive analysis challenging persistent mindsets.

6. Clearing Obstacles to Competitive Analysis

Given all of these bureaucratic and intellectual hurdles to competitive analysis in the bowels of the CIA, the best place to do effective and substantive competitive analysis of war-and-peace challenges probably would be the NIC under the DNI's wing. The NIC as it is configured today, however, is too small and lean. It traditionally has served more as a clearinghouse for analysis coming up from the bowels of the intelligence community than as a center that creates its own original strategic analysis. The creation of the DNI's office, moreover, has further burdened the NIC: it now has to handle more current intelligence and staffing responsibilities, such as briefing books, talking points, and testimonies for the DNI, who naturally turned to the NIC for staff support. In the aftermath of the 2004 intelligence reforms and the creation of the DNI, the NIC arguably is less capable than it was in the past of expanding its strategic research agenda.

The NIC ought to have a research and analysis unit sufficiently funded and resourced to accommodate rotations by working-level analysts seeking refuge from the daily chores of current intelligence, where they could serve as real devil's advocates, researching and writing provocative strategic assessments that challenge the intelligence community's conventional wisdom on potentially high-impact issues. The prestige of working in the NIC for a tour might help analysts overcome the ire of line and office managers who are loath to surrender their “bodies” to other offices.

The NIC staff also should be beefed up with outside scholars and experts who could come in for limited tours, not entire careers, to infuse the top of the intelligence community with substantive expertise that the community fails to develop on its own. The Brown Commission wisely made this recommendation more than a decade ago, but its call fell on deaf ears. It recommended transforming the NIC into a “National Assessments Center,” which would be a more open and broadly focused analytic entity than the working levels of intelligence community analysts, most of whom today are amateurs, and would more aggressively exploit linkages to scholarly and outside expertise not found inside the intelligence community (Intelligence Community Report, ch. 8, 91).

Bringing in top scholarly talent too would carry the gravitas needed to call serious attention to alternative and competitive analysis, which, if carried out only at the working levels, will more likely fill burn bags of classified trash than capture the attention of National Security Council senior directors and assistant secretaries at the departments of state and defense. The NIC too could draw in additional expertise from the outside to focus competitive analysis on the most strategically dangerous issues, the likes of the political stability of countries with nuclear weapons inventories or on the cusp of acquiring these capabilities.

The DNI should look to supplement NIC research capabilities with an independent think tank for the intelligence community analogous to the independent research think tanks that work principally for the military services and the Department of Defense. The RAND Corporation, the Center for Naval Analyses, and the Institute for Defense Analyses are Federally Funded Research and Development Centers with long and distinguished traditions of conducting strategic research for the military. A new center for the intelligence community could be modeled after the Pentagon's independent centers. The WMD Commission called—again, to deaf ears—for a not-for-profit “sponsored research institute” for the intelligence community that could reach out to expertise in the private sector, conduct strategic research unencumbered by the tyranny of current intelligence production inside the intelligence community, and serve as a focal point for a robust external alternative analysis program (WMD Report 2005, 399).

A new independent center should be given a broader research charter than the rest of the intelligence community. The intelligence community looks only at foreign military capabilities and intentions and is blinkered when it comes to assessing American policy and military capabilities. And yet a sophisticated understanding of American military strengths and weaknesses is critical for doing effective competitive analysis such as red teaming. Foreign military leaders in Tehran, Beijing, and Moscow, for example, study the American military closely, looking for chinks in its armor and for ways to best attack it in future contingencies. It is an ironic twist that, as things stand today, foreign adversaries have more expertise on how the American military fights wars than the American intelligence analysts who would fill the ranks of red teams.

References

Betts, R. K. 1998. Intelligence Warning: Old Problems, New Agendas. Parameters 28, no. 1 (Spring): 26–35.

———. 2007. Enemies of Intelligence: Knowledge and Power in American National Security. New York: Columbia University Press.

Bruce, J. B., and M. Bennett. 2008. Foreign Denial and Deception: Analytical Imperatives. In Analyzing Intelligence: Origins, Obstacles, and Innovations, ed. R. Z. George and J. B. Bruce, ch. 8. Washington, D.C.: Georgetown University Press.

Clausewitz, C. von. 1989. On War. Ed. and trans. M. Howard and P. Paret. Princeton, N.J.: Princeton University Press.

Davis, J. 2007. Strategic Warning: Intelligence Support in a World of Uncertainty and Surprise. In Handbook of Intelligence Studies, ed. L. K. Johnson, ch. 13. London and New York: Routledge.

———. 2008. Why Bad Things Happen to Good Analysts. In Analyzing Intelligence: Origins, Obstacles, and Innovations, ed. R. Z. George and J. B. Bruce, ch. 10. Washington, D.C.: Georgetown University Press.

Freedman, L. 1986. The CIA and the Soviet Threat: The Politicization of Estimates, 1966–1977. Intelligence and National Security 12, no. 1 (January).

George, A. L. 1980. Presidential Decisionmaking in Foreign Policy: The Effective Use of Information and Advice. Boulder, Colo.: Westview Press.

George, R. Z. 2004. Fixing the Problem of Analytical Mindsets: Alternative Analysis. In Intelligence and the National Security Strategist: Enduring Issues and Challenges, ed. R. Z. George and R. D. Kline, ch. 25. Washington, D.C.: National Defense University Press.

George, R. Z., and J. B. Bruce, eds. 2008. Analyzing Intelligence: Origins, Obstacles, and Innovations. Washington, D.C.: Georgetown University Press.

Goldberg, J. 2003. The Unknown. The New Yorker 78, no. 46.

Handel, M. I. 2003. Intelligence and the Problem of Strategic Surprise. In Paradoxes of Strategic Intelligence: Essays in Honor of Michael I. Handel, ed. R. K. Betts and T. G. Mahnken, ch. 1. London and Portland, Ore.: Frank Cass.

Heuer, R. J., Jr. 1999. Psychology of Intelligence Analysis. Washington, D.C.: Center for the Study of Intelligence, Central Intelligence Agency.

Intelligence Community Report. See Report of the Commission on the Roles and Capabilities of the United States Intelligence Community.

Janis, I. L. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. 2nd ed. Boston, Mass.: Houghton Mifflin Company.

Jervis, R. 1976. Perception and Misperception in International Politics. Princeton, N.J.: Princeton University Press.

Johnston, R. 2003. Integrating Methodologists into Teams of Substantive Experts. Studies in Intelligence 47, no. 1.

Lowenthal, M. M. 2006. Intelligence: From Secrets to Policy. 3rd ed. Washington, D.C.: Congressional Quarterly Press.

MacEachin, D. 2005. Analysis and Estimates: Professional Practices in Intelligence Production. In Transforming U.S. Intelligence, ed. J. E. Sims and B. Gerber, ch. 7. Washington, D.C.: Georgetown University Press.

Marrin, S. 2003. CIA's Kent School: Improving Training for New Analysts. International Journal of Intelligence and CounterIntelligence 16, no. 4 (October): 609–637.

May, E. R., ed. 1984. Knowing One's Enemies: Intelligence Assessment before the Two World Wars. Princeton, N.J.: Princeton University Press.

Mazzetti, M. 2007. U.S. Spies Now Admit They Don't Know It All. International Herald Tribune (March 3).

McConnell, M. 2007. Overhauling Intelligence. Foreign Affairs 86, no. 4 (July/August): 49–59.

Pincus, W. 1998. Spy Agencies Faulted for Missing Indian Tests. Washington Post (June 3).

Report of the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction. 2005. Report to the President. Washington, D.C.: Government Printing Office. Available at http://www.wmd.gov/report/index.html. Cited as WMD Report.

Report of the Commission on the Roles and Capabilities of the United States Intelligence Community. 1996. Preparing for the 21st Century: An Appraisal of U.S. Intelligence. Washington, D.C.: Government Printing Office. Available at http://www.gpoaccess.gov/int/report.html. Cited as Intelligence Community Report.

Russell, R. L. 2001a. A Saudi Nuclear Option? Survival 43, no. 2 (Summer): 69–79.

———. 2001b. What If . . . “China Attacks Taiwan!” Parameters 31, no. 3 (Autumn): 76–91.

———. 2002. CIA's Strategic Intelligence in Iraq. Political Science Quarterly 117, no. 2 (Summer): 191–207.

———. 2007. Sharpening Strategic Intelligence: Why the CIA Gets It Wrong and What Needs to Be Done to Get It Right. New York and London: Cambridge University Press.

Treverton, G. F. 2003. Reshaping National Intelligence for an Age of Information. New York: Cambridge University Press.

Walt, S. M. 1999. Rigor or Rigor Mortis? Rational Choice and Security Studies. International Security 23, no. 4 (Spring): 5–48.

WMD Report. See Report of the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction.