Foreword

We have a very long way to go to make management practice evidence-based. A few years ago, while serving on the compensation committee of a publicly traded NASDAQ company, we were considering what to do about the CEO’s stock options and our stock-option program in general. Just that day, articles had appeared in the mainstream business press on Donald Hambrick’s research showing that stock options led to risky behavior (Sanders & Hambrick, 2007). That research added to the growing body of evidence demonstrating that many executive pay practices not only failed to enhance company performance (Dalton, Certo, & Roengpitya, 2003) but also led to misreporting of financial results (Burns & Kedia, 2006). The finding that options led to riskier actions is logical: once an option is out of the money, there is no further economic downside for executives, so there is every incentive for managers to take big risks in the hope that the stock price will go up, thereby giving those previously worthless options economic value.

At the meeting, I asked the vice president from Aon Consulting who was advising the compensation committee, first, whether he knew about this research and, second, whether he was interested in my sending him the original articles or other information about the extensive research on stock options and their effects. He replied, “No,” without any hesitation or embarrassment. What is particularly telling is that many people from other compensation consulting firms to whom I have related this story have said it could have been their firm, too: the perspective it reflects is typical. Meanwhile, my colleagues on the compensation committee seemed to believe that the fact that our professional advisor was both unknowledgeable about and, even worse, uninterested in sound empirical research that might inform our compensation policy decisions should not in any way affect our continued reliance on his “advice.” This consulting example is all too typical of the perspective of numerous executives, particularly in the United States, many of whom value their “experience” over data, even though much research suggests that learning from experience is quite difficult and that much of such presumed learning is simply wrong (e.g., Denrell, 2003; Mark & Mellor, 1991; Schkade & Kilbourne, 2004).

Yes, it is true that there is much recent interest in investing in “big data” start-ups and technology and in building software to comb through large data sets for analytical insights, such as how to improve margins through pricing optimization. However, as Jim Goodnight, the CEO of SAS Institute, a business analytics and business intelligence software company, recently told me, that analytic work is mostly done by professionals fairly far down in the company hierarchy, who sometimes have trouble getting the CEO interested in implementing their results. Few companies or their leaders seem to see that information and data-driven business insight can be a competitive advantage. Although it may seem ironic that companies invest in analytical capabilities and software that they don’t fully use, such a circumstance is far from unusual. David Larcker, a well-known cost-accounting professor, has said that few companies turn all the data they have collected in their enterprise resource-planning systems into business intelligence, instead using most of that information simply for purposes of control. Purveyors of marketing research tell a similar tale: companies purchase their services but then don’t act on the results when the recommendations conflict with the preexisting views of senior management.

The problems that confront building evidence-based practice in management are scarcely unique to this domain. It has taken more than 200 years for evidence-based medicine to become a normative standard for professional practice. The objections to applying standards of practice (protocols) in medicine are eerily similar to those encountered in advocating evidence-based management: that each situation is different; that practitioners’ wisdom, experience, and insight are more valuable than aggregated data-based information; and that statistical evidence pertains to what happens “on average,” whereas the particular individual speaking is, of course, much above average and not, therefore, bound by the results of aggregate information, a phenomenon reliably observed in psychology and known as the above-average effect (e.g., Williams & Gilovich, 2008). Although there are evidence-based movements in policy domains ranging from criminology to education, resistance to implementing science, particularly when that science conflicts with belief and ideology, looms large.

Failure to adhere to clinical guidelines in medicine, unfortunately still quite common, costs lives and money (e.g., Berry, Murdoch, & McMurray, 2001). Pressures for cost containment and documented instances of literally hundreds of thousands of lives lost to preventable medical errors have seemingly turned the tide, so that evidence-based medical practice now appears inevitable. However, the absence of similar pressures in management means that the quest to bring evidence-based practice into organizations still faces a long and tortuous journey. In the meantime, damage occurs to both people and companies. According to Conference Board surveys, job satisfaction is at an all-time low; most studies of employee engagement and distrust of management portray a dismal picture of the typical workplace; and the failure to heed the evidence about the physiological effects of workplace stress, economic insecurity, and long working hours contributes to employee mortality and morbidity (Pfeffer, 2010). Companies suffer from disengaged workforces and excessive turnover even as they neglect the science and information that could improve their operations and profitability (e.g., Burchell & Robin, 2011).

Simply put, there is a profound “doing-knowing” problem in management practice: many managers make decisions and take actions with little or no consideration of the knowledge base that might inform those decisions. The Oxford Handbook of Evidence-Based Management intends to change this situation. It contains more than just a review of the literature on evidence-based management (EBMgt). The Handbook recognizes the need to provide information on how to teach EBMgt in classes and executive programs, as well as the need to provide role models—illustrations of practitioners who have embraced an evidence-based approach—so that practicing leaders can see how evidence-based principles might be applied in real organizations. Because EBMgt is relevant in both the for-profit and nonprofit worlds, the Handbook includes examples from both. With its focus on changing the practice of management, this work examines the important topic of how to get research findings implemented in practice and the barriers to connecting science with practice. The Oxford Handbook of Evidence-Based Management provides a comprehensive overview of the information required to understand what EBMgt is, how to teach it, how to apply it, and how to understand and overcome the barriers that stand in the way of basing management practice on the best relevant research. It represents an important step on the road to building an EBMgt movement.

It is scarcely news that management is not a profession, even though it might and should be. Two elements define a profession, one of which is the development of, and adherence to, specialized bodies of knowledge. Sometimes specialized knowledge is reflected in licensing examinations. However, licensing is not the only path to ensuring that people know and implement knowledge and standards of professional practice. What may be even more important are social norms, public expectations, and the sanctions for violating those expectations. As the opening example—and hundreds of others—make clear, there is, at the moment, neither the expectation that professionally educated practitioners will know relevant management research nor any sanction that punishes their ignorance. Consequently, management, and indeed much of the popular management literature, is beset with myths, dangerous half-truths (Pfeffer & Sutton, 2006), and rules of thumb often based on little more than publicity, repetition, and belief. Organizations and their leaders do a profound disservice to society and even to themselves by not being more committed—not just in rhetoric but in their behavior—to implementing management science.

In this project of professionalization, the role of educational institutions looms large, which is why it is important that the Handbook includes so much material on teaching EBMgt and on linking science with practice. At the moment, however, there is little to suggest that much attention is paid to EBMgt even in business-school classrooms. Business schools are increasingly staffed by part-time and adjunct faculty who are under no requirement to know, let alone contribute to, the science of management. Courses often rely on case examples as the primary pedagogic device, and relatively few business schools impart the critical thinking, the evaluation of ideas, and the skills necessary to separate good management science from quackery. A recent review of more than 800 syllabi of required courses from some 333 programs found that only about one-quarter utilized scientific evidence in any form (Charlier, Brown, & Rynes, 2011). The fact that even business-school instructors often inhabit the world of folk wisdom rather than the domain of science has drawn notice and commentary (e.g., Pearce, 2004).

If EBMgt is to become a reality in professional practice, business schools at both the undergraduate and MBA levels will need to play an important role. That is because it is in school that students not only begin to learn about the relevant research and theory but also come to understand the standards of science, what constitutes evidence, and the sound basis for making decisions. It is interesting that business schools are currently evaluated in many ways—the extent to which they raise their graduates’ salaries, how “satisfied” they make their students and the companies that recruit them, their adherence to guidelines that specify what they must teach and the proportion of their faculty with terminal degrees, their reputations as perceived by their peers, and various other criteria. What is not measured is the extent to which they impart the science of management and critical-thinking skills to their graduates, or the extent to which those graduates actually practice EBMgt. If we are serious about implementing a science of management, we need to measure and evaluate both schools and their graduates, at least to some degree, using these metrics.

We live in an organizational world. Decisions made in and by organizations profoundly affect the working lives and economic well-being of people all over the globe. As the financial crisis of 2007–2008 well illustrates, many of those decisions are all too frequently characterized not just by venality but also by profound incompetence (Lewis, 2010). As scholars and as practitioners whose work affects so many lives in so many ways, we have a sacred obligation and responsibility to develop and use the best knowledge possible to make the world a better place. Only by embarking on the goal of building an evidence-based body of knowledge linked to professional practice can this obligation be fulfilled. In that regard, The Oxford Handbook of Evidence-Based Management both makes a profoundly important contribution and lays out a statement of intent to make EBMgt a reality.

—Jeffrey Pfeffer

References

Berry, C., Murdoch, D. R., & McMurray, J. J. V. (2001). Economics of chronic heart failure. European Journal of Heart Failure, 3, 283–291.

Burchell, M., & Robin, J. (2011). The great workplace: How to build it, how to keep it, and why it matters. San Francisco, CA: Jossey-Bass.

Burns, N., & Kedia, S. (2006). The impact of performance-based compensation on misreporting. Journal of Financial Economics, 79, 35–67.

Charlier, S. D., Brown, K. G., & Rynes, S. L. (2011). Teaching evidence-based management in MBA programs: What evidence is there? Academy of Management Learning and Education, 10, 222–236.

Dalton, D. R., Certo, S. T., & Roengpitya, R. (2003). Meta-analyses of financial performance: Fusion or confusion? Academy of Management Journal, 46, 13–28.

Denrell, J. (2003). Vicarious learning, undersampling of failure, and the myths of management. Organization Science, 14, 227–243.

Lewis, M. (2010). The big short: Inside the doomsday machine. New York: W. W. Norton.

Mark, M. M., & Mellor, S. (1991). Effect of self-relevance on hindsight bias: The foreseeability of a layoff. Journal of Applied Psychology, 76, 569–577.

Pearce, J. (2004). What do we know and how do we really know it? Academy of Management Review, 29, 175–179.

Pfeffer, J. (2010). Building sustainable organizations: The human factor. Academy of Management Perspectives, 24, 34–45.

Pfeffer, J., & Sutton, R. I. (2006). Hard facts, dangerous half-truths, and total nonsense: Profiting from evidence-based management. Boston: Harvard Business School Press.

Sanders, W. M., & Hambrick, D. C. (2007). Swinging for the fences: The effects of CEO stock options on company risk-taking and performance. Academy of Management Journal, 50, 1055–1078.

Schkade, D. A., & Kilbourne, L. M. (2004). Expectation-outcome consistency and hindsight bias. Organizational Behavior and Human Decision Processes, 49, 105–123.

Williams, E. F., & Gilovich, T. (2008). Do people really believe they are above average? Journal of Experimental Social Psychology, 44, 1121–1128.