Change The World: Teach Evidence-Based Practice!


In September 2014 the journal Academy of Management Learning and Education (AMLE) published a special edition on teaching evidence-based practice. Guest editors were Denise Rousseau and Sara Rynes, both members of CEBMa’s academic board, and Eric Barends, CEBMa’s Managing Director.


The special issue contains research articles, essays, book and resource reviews, and interviews. In their introduction, with the compelling title “Change The World: Teach Evidence-Based Practice!”, the guest editors provide a detailed overview of all contributions. You can read the introduction below.


To read or download all contributions discussed in this article, please visit AMLE's website.




From the Guest Editors: Change the World: Teach Evidence-Based Practice!


“What I take from this article is the true career- and life-changing value of training in evidence-based practice, which is fostering an inquiring mind that appreciates the difference between trustworthy and less trustworthy evidence and is committed to making the world a better place.”

—Anonymous reviewer (Barends & Briner’s interview of Guyatt & Burls, this issue)


This AMLE special issue is dedicated to furthering the practice of evidence-based management, and does so by focusing on the hows, whats, and wherefores of teaching it. Evidence-based management (EBMgt) is a professional form of managerial practice. It is about making decisions through the conscientious, explicit, and judicious use of the best available evidence from multiple sources to help managers choose effective ways to manage people and structure organizations (Briner, Denyer, & Rousseau, 2009). EBMgt has grown out of the movement across many professions toward the uptake of evidence-based practice and away from reliance on tradition and unchallenged authority. Fittingly, the evidence-based movement first began in the field of medicine as a teaching method led by Gordon Guyatt and colleagues (Evidence-Based Medicine Working Group, 1992). In Guyatt’s words, “There was no question that what we were doing was challenging authority, challenging paradigms, challenging educational approaches, challenging what you had to know, challenging how you do practice ultimately” (this issue).


Since its origins in medicine over 2 decades ago, the systematic use of scientific evidence—along with evidence from other sources—is on its way to becoming an interdisciplinary professional standard. At the 2003 EBHC International Joint Conference, the term EB medicine was dropped in favor of evidence-based practice (EBP), emphasizing the fact that from medicine to policing, nursing, management and other professions, the basic principles of EB practice are shared (Dawes et al., 2005). At the same time, so are many of the challenges.


Foremost among the challenges of systematically using evidence in one’s professional practice is that people need to acquire the content knowledge and skills in research and critical thinking to do so. EBP in management, as in medicine, begins with education and training. In fostering this training, management educators will be challenged to structure instructional experiences and develop new materials. This special issue provides insights into how to organize and structure learning experiences consistent with both research-based principles of human learning (Goodman & O’Brien, 2012) and practical teaching experience (e.g., Erez & Grant, 2014; Walshe & Briner, this issue).


Teaching EBP has been described as having two sides—push and pull. “Push” refers to courses, textbooks, and other educational media and experiences that help learners acquire research-based content knowledge in substantive areas such as organizational behavior (OB) or strategic management. In this special issue, Steve Charlier’s interview of Gary Latham, Robert Dipboye’s review of Pearce’s (2013) textbook, and Tamara Giluk’s review of Latham’s (2009) all provide insights into the push side of teaching EBP.


The “pull” side of teaching EBP, in contrast, helps students and managers access and interpret research findings on their own. In a sense, the distinction between push and pull is one of content versus process. While the push side is used primarily to bring research evidence into the teaching of a particular content area, the pull side helps learners acquire skills they can use, whatever the topic, to access practice-related research and principles over the course of their careers. Most articles in this special issue focus on the pull side of teaching EBP (e.g., Gamble & Jelley; Goodman, Gary, & Wood; Trank; and Walshe & Briner). Another article (see Dietz, Antonakis, Hoffrage, Krings, Marewski, & Zender) also exemplifies the pull side, but focuses on helping students and practitioners produce their own research evidence (what Rousseau, 2006, termed “local” or “little e” evidence) rather than the more common approach of teaching how to locate already-existing (“Big E”) research evidence.


Other articles in this special issue address neither push nor pull issues in EBP teaching, but instead focus on different matters. For example, Glaub, Frese, Fischer, and Hoppe describe how they systematically applied research and theory on personal initiative to design and undertake a practice-oriented intervention that yielded important effects on the business success of Ugandan entrepreneurs. Kepes, Bennett, and McDaniel outline how the prevailing bias in academic journals toward publishing studies with positive (rather than null or negative) results sometimes motivates researchers to pursue ethically questionable practices and set overly optimistic expectations of successfully applying research findings among practitioners. Trank uses rhetorical analysis (particularly reader response theory) and examples from her experiences in teaching EBP to take a critical look at academic management texts and ways in which evidence-based language (such as the “what works” movement in education) can unjustifiably privilege certain forms of research over other valuable types of studies. We now introduce the articles, followed by a summary of the emerging themes and implications they suggest for EBP educators.




Research & Reviews

This special issue includes two Research & Reviews articles. The first is a review-plus-pilot experiment regarding the characteristics and effects of bibliographic search training, and the other, a field test of an “action guide” (i.e., a protocol to guide practice that is derived from basic and applied research) for applying (1) research based on the linkages between proactive personality and entrepreneurial business success, and (2) research findings more generally.


In “Bibliographic Search Training for Evidence-Based Management Education: A Review of Relevant Literature,” Jodi Goodman, M. Shayne Gary, and Robert Wood drill down into one of the critical steps in the “pull” side of teaching (and practicing) EBMgt—searching the scholarly literature for the best available scientific evidence (Jelley, Carroll, & Rousseau, 2012). The authors combine a review of the research on electronic search, mainly in the context of evidence-based medicine (EBMed), with principles from evidence-based teaching (Goodman & O’Brien, 2012). The product is a set of actionable instructional strategies for teaching bibliographic search.


Goodman and colleagues provide a wealth of information useful not only to instructors in EBMgt, but also to any academic researcher who seeks to improve bibliographic search methods. First, the authors motivate the reader by providing evidence that trained searchers are more effective than untrained ones, with the latter tending to generate too few or far too many references (they explain, e.g., why simply plugging a few key words into Google Scholar is likely to produce an ineffective search with far too many irrelevant references). Second, they provide structure to conceptualizing the bibliographic search process in terms of its depth, breadth, and sequencing (Debowski, Wood, & Bandura, 2001). These dimensions are useful both for describing training strategies and assessing search success. Third, they illustrate how dramatically search results can differ depending on the search engine chosen (e.g., ProQuest ABI/INFORM vs. Google Scholar) and the mode of search used. Using a specific research question—“What effect does job satisfaction have on creativity and innovation?”—they report that one of their search methods yielded only 33 records, while another yielded 588,000! Fourth, they describe research suggesting that experienced, professionally trained librarians tend to search in different ways than do novices and to be more effective and efficient in their searches. The same may well be true for professionally trained evidence-based managers.


Goodman and her colleagues then turn to the important questions of whether, and what kinds of, training improves bibliographic search outcomes. Based on a review of the literature, they conclude that search training is generally effective for medical students and residents, as well as for undergraduate business students. Their preliminary results suggest that certain aspects of training may be responsible for its positive results. Guided exploration appears to be particularly important, where students work through practice searches using prescribed steps for effective search, with the trainer stopping them and modeling correct behaviors when they make errors. Other actions that contribute to positive outcomes are giving learners feedback regarding the strategy they used and providing an exemplary solution.


They conclude with recommendations for teaching search skills. These include addressing learners’ misconceptions, using group learning, setting nonspecific goals, spacing student practice, and providing already-worked examples and multiple, varied practice experiences. They also recommend designing guided exploration, feedback interventions, and other instructional supports that offer the minimum guidance necessary, leaving learners with the primary responsibility for information processing and task exploration.


In “Increasing Personal Initiative in Small Business Managers or Owners Leads to Entrepreneurial Success: A Theory-Based Controlled Randomized Field Intervention for Evidence-Based Management,” Matthias Glaub, Michael Frese, Sebastian Fischer, and Maria Hoppe exemplify the innovation that great thinkers on scholarship and education from Dewey (1958) to Boyer (1990) have called for—conceiving of scholarship, education, and practice as a continuum to be spanned. Glaub and colleagues take up this societally important task by systematically building a bridge from theory and research on the concept of personal initiative to the practice of successful entrepreneurship. They showcase a systematic method for taking well-supported research findings and transforming them into a practice-oriented intervention that yields important real-world results.


Glaub and colleagues begin by detailing the basic tenets, supported by years of research and meta-analyses, of the dynamics underlying personal initiative (Frese & Fay, 2001). These tenets include formulating actionable goals, trying new behaviors, and overcoming obstacles. These key behaviors are recognized in entrepreneurship research as critical to starting and sustaining a new business (Frese, 2009). Based on the convergence of the bodies of evidence on proactivity and entrepreneurship, Glaub and colleagues develop evidence-based guidelines for training and developing entrepreneurs. These guidelines, termed “action guides,” form the basis of the training program these researchers implemented in the developing country of Uganda.


Equally important, in addition to developing an action guide for training entrepreneurs, Glaub and colleagues also built a parallel and more general framework for constructing action guides based on evidence regarding how cognitions translate into behavior. Action regulation theory and research directly address the psychological processes through which the knowledge individuals possess translates into actual behavior. Cognitions do not always lead to action. Barriers real or imagined can keep people from acting on the knowledge they possess. Thus an effective action guide needs to address how individuals can overcome obstacles and manage the environment in ways that put their knowledge to work.


Action guides once developed need to be evaluated. Using a randomized controlled trial design (RCT), Glaub and colleagues recruited local Ugandan business owners into the training program. Several classes of randomly assigned participants were created to allow the comparison of trained participants’ business success with that of randomly assigned but not-yet-trained counterparts. Findings revealed large differences between the two groups in both manifest personal initiative behaviors and objective measures of business success. The resultant validation of the evidence-based guidelines provides a much-needed exemplar for turning research findings into actionable evidence. In doing so, Glaub and colleagues offer a positive reply to Reay, Berta, and Kohn’s (2009) provocative question, “What’s the evidence for evidence-based management?”



Essays

This special issue includes four essays, two of which exemplify the “pull” side of teaching EBMgt, where students learn to obtain and make sense of research findings for themselves. The first, by Jörg Dietz, John Antonakis, Ulrich Hoffrage, Franciska Krings, Julian Marewski, and Christian Zender, emphasizes helping students to create their own local evidence and the reasons why this is a valuable activity. The second essay, by Neil Walshe and Rob Briner, focuses on teaching students to perform an abbreviated version of a systematic review—that is, a rapid evidence assessment (REA). The third essay moves outside the management curriculum to one of the most important co-curricular activities—case competitions. Specifically, Edward Gamble and Blake Jelley propose general principles for developing EBMgt case competitions that offer the advantages of (1) integrating the case study method with research findings and EBMgt principles, (2) updating case competition protocols to more closely simulate real problem-solving (e.g., allowing access to the Internet and research databases), and (3) changing case evaluation criteria to elevate the use of evidence over more traditional criteria such as use of conventional strategy tools and selling solutions. The fourth essay, by Sven Kepes, Andrew Bennett, and Michael McDaniel, elevates the discussion to the “management field” level of analysis by raising the following important questions: (1) How trustworthy are the main sources of research evidence in management, (2) What are the implications of untrustworthy evidence, (3) How can we teach students and managers to critically appraise research evidence, and (4) What field-level changes are needed to make research evidence more trustworthy?


In “Teaching Evidence-Based Management With a Focus on Producing Local Evidence,” Dietz and colleagues present a problem-based teaching approach to EBMgt that emphasizes generating local evidence, that is, “causally interpretable data, collected on-site in companies to address a specific problem” (this issue). This approach to generating organizational data is a variant of problem-based learning in which students learn to apply the scientific method to ill-defined and ill-structured business problems. Focusing on the challenging task of producing causally interpretable evidence in the context of solving a business problem helps to make the “intricate craft” of EBMgt more tangible and increases students’ intrinsic motivation. An additional bonus is that in the process of learning how to produce causally relevant evidence, students also learn how to evaluate the quality of research done by others. This article by Dietz and colleagues is the first systematic treatment we know of for developing student skills in gathering and interpreting local organizational evidence in the context of EBMgt practice.


The production of local evidence is especially helpful in cases where decision makers are uncertain whether general research findings can be applied locally, or where skeptics insist that generic evidence-based results “won’t work here.” For example, when Google wanted to “sell” its engineers on the value of management, it collected local evidence on managerial behaviors and subsequent employee outcomes such as satisfaction with career development, work–family balance, and turnover. Given the uniqueness of Google’s culture and its extraordinary level of success, local evidence had a much better chance of being accepted “because it was based on Google data. The attributes were about us, by us, and for us” (Garvin, 2013: 77).


Dietz and colleagues’ approach uses business cases and a local evidence-generating course project to teach students a four-stage evidence-based problem-solving cycle. Using this cycle, students (1) define the business problem, (2) locate and evaluate existing evidence in terms of causal interpretability and likely local relevance, (3) design and execute experimental tests of proposed solutions, and (4) evaluate results and make recommendations. Project results are presented on the last day of class.


The course is broken into two sections: an introductory phase that lays the foundations for EBMgt, and a main segment that involves practicing each step of the EBMgt problem-solving cycle by way of cases. At the end of the course, students have worked on 10 business cases, thus developing EBMgt schema through repeated practice. Two cases (The Bicycle Messenger Case and the Towel Reuse Case) are detailed in the essay to demonstrate how students apply the problem-solving cycle to case analysis. By using this cycle to analyze cases, Dietz and colleagues’ approach avoids the usual separation (or even opposition) of classroom case analysis and academic research (e.g., Mesny, 2013; Shugan, 2006).


In “From Passively Received Wisdom to Actively Constructed Knowledge: Teaching Systematic Review Skills as a Foundation of Evidence-Based Management,” Walshe and Briner focus on the role of systematic reviews in EBMgt and describe a course designed to teach master’s students how to conduct rapid systematic reviews. This course can be transformative in that it “begins to move students from being relatively passive recipients of received management wisdom to becoming active and critical users of research evidence” (this issue).


They begin by defining systematic reviews as “a replicable, scientific and transparent process, in other words, a detailed technology that aims to minimize bias through exhaustive literature searches . . . and by providing an audit trail of the reviewers’ decisions, procedures, and conclusions” (Tranfield, Denyer, & Smart, 2003: 209). Walshe and Briner contrast the systematic review process with that of traditional narrative reviews, for which researchers typically “do not adopt anything approaching the same level of methodological rigor they use to conduct primary research” (this issue). Also in contrast, systematic reviews seek to answer specific practice questions, while traditional literature reviews tend to be less clearly targeted, attempting instead to review a “body” of literature with less transparent search methods.


The bulk of Walshe and Briner’s article focuses on describing a course designed to teach how to conduct REAs, which are an abbreviated form of systematic review more appropriate to the short periods of time typical of formal courses. Although the authors have taught systematic review courses to a wide variety of audiences, in this article they focus on one particular course. Walshe developed that course for students with little work experience, as part of a 1-year full-time master’s program in Human Resource Management and Consulting. The four main skills the course developed are critical thinking and reasoning; identifying and gathering the best available evidence; performing critical appraisal of different forms of evidence; and applying evidence of different types to decision making.


Because the course objectives are broader than simply learning how to conduct systematic reviews, the first four sessions focus on a broad introduction to EBMgt, covering its main concepts, why it is needed, and its use of other forms of evidence besides the results of formal research. The next session introduces the principles and logic behind systematic reviews, followed by five sessions during which students work on their selected REA, consult with the instructor, and receive mini tutorials on various topics related to REAs. In the final session, students prepare short presentations outlining their research questions and why they were chosen, what types of studies were considered relevant and why, what search strategy was used and why, the preliminary results, and problems or pleasant surprises uncovered to date.


The authors describe the role of the teacher in this course as one of adviser rather than lecturer or teacher. Various analogies are used to help students absorb the essence of EBMgt, such as the analogy that while traditional courses give students “fish” (i.e., knowledge) for a day, EBMgt teaches them how to fish for knowledge over a lifetime. The teacher also has to deal with various typical student reactions to the course, such as surprise that EBMgt isn’t already happening, agreement with course principles but doubts about EBMgt’s feasibility, and disquiet as students find out that their professors and textbooks may not be as objective, neutral, and well-founded as they believed. In general, the authors conclude that teachers of EBMgt have to be more neutral than they generally are in content-oriented classes (such as motivation or organizational change), and more humble about what is—and is not—known about various topics.


In “The Case for Competition: Learning About Evidence-Based Management Through Case Competition,” Gamble and Jelley make an intriguing case for hastening the spread of EBP by supplementing evidence-based classroom instruction with case competitions whose case selection, team preparation processes, and judging rubrics are all designed to reinforce the principles of EBMgt.


In building their argument for EBMgt case competitions, Gamble and Jelley note that the ubiquitous “case method” as typically taught in business schools is generally at odds with several of the most important principles of EBMgt (see also Dietz et al., this issue). For example, although EBMgt focuses on the usefulness of evidence-supported general principles and both local and general empirical evidence, typical case instruction in management programs portrays “management as a complex, multifaceted practice that is highly dependent on context, which cannot be reduced to general principles or theories, and which is unreservedly value-laden and subjective” (Mesny, 2013: 64). However, Gamble and Jelley point out that although the case method is also widely used in legal and medical education, those professions routinely apply cases as an opportunity for students to make use of library resources (such as research results and legal precedents) in conducting their analyses (see also Goodman et al., this issue). Gamble and Jelley advocate that management educators make a similar shift to adopt evidence-focused case instruction. Further, they argue that the likelihood that management students will become evidence-based practitioners upon graduation can be further escalated through the implementation of EBMgt-based case competitions.


The EBMgt case competitions Gamble and Jelley propose would differ from traditional case competitions in a variety of ways. First, the purpose of EBMgt case competitions would be to encourage students to (1) ask relevant managerial questions, (2) search for the best available evidence, (3) critically appraise the acquired information, and (4) apply relevant information to case issues. In contrast, typical case competitions isolate students in preparation rooms and do not allow access to libraries, databases, or the Internet. Second, case content and context would change. For example, case protagonists would reflect on local and general research evidence while pondering their dilemmas, might summarize their previous managerial experience or prior local experiments with the problem in question, and would consider ethical issues and multiple stakeholder views. Third, judging and evaluation rubrics would change. Current case competitions generally reward the use of well-known tools, whether evidence-based or not (e.g., SWOT analysis, Porter’s Five Forces, cf. ten Have & Stevens, 2003) and the “selling” of solutions. In contrast, in keeping with principles of EBMgt (Briner et al., 2009), the evaluation rubric for EBMgt case competitions would emphasize “ethics and stakeholder concerns, practitioner judgment and expertise, local data and experimentation, use of evidence-based decision practices, and principles derived through formal research.” In addition, judges would be carefully trained according to the rubric in order to enhance inter-rater reliability and ensure the provision of EBMgt-consistent feedback to competing teams. Finally, the authors recommend a careful sequencing of the competition, as well as a pre-competition training phase.


We believe that Gamble and Jelley’s observations and recommendations will be useful not only to anyone wishing to start an EBMgt case competition, but also to educators wishing to start or modify case competitions in various substantive areas (e.g., sustainability, social entrepreneurship, or strategy). Applying EBMgt principles to case competitions in specific substantive areas will have the joint benefits of making case competitions more similar to actual management problem solving (e.g., where managers have access to the Internet) and increasing the likelihood of transferring both substantive content and EBMgt skills to practice.


In “Evidence-Based Management and the Trustworthiness of Our Cumulative Scientific Knowledge: Implications for Teaching, Research, and Practice,” Kepes, Bennett, and McDaniel focus on a matter mentioned by several other contributors (e.g., Barends & Briner; Trank; and Walshe & Briner) as well: the fact that sometimes the “best available evidence” in management (or medicine) is not very strong. Although most commentators on this topic tend to discuss it in terms of certain research topics being better researched than others, Kepes and colleagues focus at a higher level of analysis, arguing that the whole body of management evidence (as well as evidence in a variety of other fields, including medicine and psychology) is biased in favor of statistically significant, new, and “interesting” findings, and against replications and small or non-significant results. They argue that this bias toward presenting mostly new or positive results contributes to overpromising what management interventions can do. The consequence is faddishness in management and the risk of disappointing results in practice.


In addition to creating problems with practice, Kepes and colleagues argue that lack of research trustworthiness also poses problems for teaching. Given the problems they identify with management research, the authors argue, “teaching from an evidence-based perspective requires being honest and transparent about the shortcomings of our research and the threats to its trustworthiness” (this issue). Beyond that, they recommend adopting EBMed’s five-step “Ask, Acquire, Appraise, Apply, and Analyze/Adjust” sequence (e.g., Jelley et al., 2012), placing particular emphasis on the “appraisal” stage. Applying a pull rather than push method, the authors suggest that students perform a critically appraised topic (CAT). A CAT is a reduced version of an REA—a concise summary of the critically appraised best available evidence on a problem with very short, bottom-line recommendations. In helping students to critically appraise the evidence, the authors suggest using a hierarchy of evidence (such as the one provided in their Figure 1), as well as a summary of some of the key points from formal standards for research such as meta-analyses and primary studies (e.g., the American Psychological Association’s Meta-Analysis Reporting Standards, or general best-practice research synthesis recommendations; see their Table 1).


Finally, Kepes and colleagues suggest some fairly dramatic changes to journal reviewing processes, with the goal of reducing publication biases against replications and non-significant results and preventing practices such as “hypothesizing after the results are known” (i.e., HARKing). One suggestion is to require authors to report power analyses and conduct more robustness checks. A more dramatic recommendation is to separate the journal review process into two stages, such that the first stage—encompassing the Introduction, Theory, and Methods sections—would be submitted before knowledge (at least on the part of reviewers) of the study’s results. Then, if the manuscript survives the first stage, it is resubmitted with the Results and Discussion sections, and evaluated on only three aspects: Did the author carry out the research process in the manner promised during the first submission, are the results described accurately, and are the conclusions in the Discussion section actionable and accurate? An even more dramatic departure would be a multistage review process where, after initial screening by an editor, submitted manuscripts are posted online as discussion papers for general comment. After several weeks, the paper then goes through a rather traditional review process. The authors hypothesize that such a process would lead to faster dissemination of a manuscript’s results and higher perceived trustworthiness because of the extra transparency such a review process creates.


Exemplary Contribution

In the special issue’s Exemplary Contribution, “Reading Evidence-Based Management: The Possibilities of Interpretation,” Christine Quinn Trank proposes that “if, as researchers and teachers, we are interested in connecting our work to practice we should more deliberately study the reading of academic texts so that we can better understand their use as well as their effects” (this issue). Using rhetorical criticism, Trank discusses how her students (some of whom are very experienced professionals) react to the academic research that is either assigned in her classes, or located by students during the process of doing their semester-long EBP (evidence-based practice) projects.


Trank starts from the dual assumptions that (1) all scholarship is inherently rhetorical and values-based, and (2) readers do not necessarily interpret academic texts in the same way that authors (or other readers, or themselves at other times, for that matter) do. As such, it becomes very important to understand how practitioners and students interpret academic texts because that interpretation determines whether, and how, research will be used.


Trank applies reader response theory to examine the reactions of students to academic readings in her various EBP courses. She has observed three main types of reactions. One set occurs when students are engaged in efferent reading—that is, reading for the purpose of finding needed information for their semester-long projects. Consistent with the observations of Walshe and Briner, these reactions are frequently negative, as students experience various frustrations in trying to find useful research related to their chosen problems. These frustrations include paucity of relevant studies, missing statistics, weak designs, and multiplicity of constructs that seem to measure the same thing but make commensurability difficult.


The second set of reactions concerns students’ responses to an article’s aesthetics (Burke, 1950). In Trank’s experience, certain articles—particularly qualitative and mixed-method readings that include contextual information and are linked to meaningful theories—tend to generate excitement, enthusiasm, positive moods, and lively discussion. Other readings, however, such as those that focus mainly on methodological issues or the size of relationships between constructs with little contextual background, do not. Readings that were aesthetically pleasing “seemed to satisfy a need in the students to express their own competence and creativity,” and were perceived as having “meaningful unity among parts.” They also combined data about “what happened” with theoretical explanations of why it may have happened. These aesthetic reactions tend to corroborate the idea that the most influential and persuasive texts involve elements of storytelling (Brown, 2005).


The third set of reactions stemmed from students’ perceptions that academic research is frequently condescending and disparaging with respect to practitioners: In Trank’s words, “the implicit and explicit construction of the practitioner inscribed in these readings clearly was felt as dismissing the practitioner—it was felt as hierarchical” (this issue). The result was that students often identified with practitioners and against academics—an identification that was frequently associated with rejection of academic arguments. This is almost certainly not the reaction academics have in mind as they are writing up their research.


A related issue Trank raises is her concern that the evidence-based movement may produce “neutral technical experts” no longer committed to “values transcending the immediate and the practical” (Freidson, 2001: 209). At least in theory, this should not happen because EBP visualizes practitioners as professionals who use personal experience, knowledge of the local context, and stakeholder preferences—in addition to research findings—in making decisions. However, in practice, incorporating these other factors may be difficult. Citing Morrell (2008), Trank notes that ethics and judgment are not readily codified or quantified, making their status uncertain among the multiple considerations managers are supposed to weigh in EBP. She also argues that the words “evidence-based” and “what works” are used in public policy as a “rhetorical buffer” (Morrell, 2012: 13), hiding politicians’ involvement in “the interpretation and construction of evidence” to reflect their ideologies and biases. Trank proposes increased rhetorical inquiry and dialogue about research texts (e.g., journal articles, textbooks) to make their underlying assumptions more transparent and debatable.



In “Incorporating Evidence-Based Management into Management Curricula: A Conversation With Gary Latham,” Steve Charlier extracts pearls of wisdom about teaching EBMgt from a winner of the Academy of Management’s Scholar-Practitioner Lifetime Achievement Award and the only recipient of both the Distinguished Contributions to Science and the Distinguished Contributions to Practice awards from the Society for Industrial and Organizational Psychology. Latham has long been known for his ability to explain scientific findings and research principles in engaging, interactive, and humorous ways. He credits this ability to a number of early mentors: his father; a manager at Weyerhaeuser who urged him to communicate in a more conversational style; an excellent doctoral instructor, Ken Wexley, who insisted his graduate students engage in “plain speaking” while trying to explain each week’s doctoral readings to a very confused “Vice President Wexley from B.F. Goodrich”; and Ed Lawler, whose model of holding Academy audiences in thrall by having conversations with them made a deep impression on Latham’s own teaching style.


To Latham, the essence of teaching EBMgt is thinking about your audience and engaging them so that they are ready and motivated to think in terms of evidence-based principles. He gets students involved by raising familiar topics from their work or personal lives to introduce theories and evidence-based principles. He thrives on the use of analogies and metaphors to make his points, analogizing cooking recipes to the “research methods” of academic studies, drawing on metaphors from medicine and medical research, and explicitly introducing “bilingualism” (of English and Research Methods) to class discussions. He uses graphs and pictures rather than complicated equations to show research findings, and raises questions like “why do some people who smoke and have high-stress jobs live to a ripe old age while others don’t?” to introduce the concept of moderator variables. He tells students that they are going to “love” theory because theories are going to help them “predict, explain, and influence the behavior of others” (this issue). He encourages students to “try out” new theories or evidence-based principles at work or home, and then report back on how it worked. Extra time is dedicated to discussing failures in an attempt to figure out why something failed.


In contrast to most teaching-related articles in this special issue, Professor Latham focuses on the “push” side of teaching EBMgt. This is because he is describing how he uses, and teaches, research methods and results in substantive courses such as Organizational Behavior and Human Resource Management, rather than stand-alone courses in EBMgt. Still, like Dietz and colleagues, he uses real-world examples to teach research principles and sometimes generates “local evidence” to resolve student questions and debates (see, e.g., Latham & Brown, 2006; Latham & Seijts, 1997).


In closing, Latham says that he feels there is a danger in trying to put too much structure on “the way” of teaching EBMgt, when doing so may not be necessary: “From my perspective, I think that the whole evidence-based idea is taking tried-and-true concepts and explaining them in words that people can relate to. And then getting managers and MBA students to continually think about how they can apply these principles in their personal lives and on the job” (this issue). Latham clearly does a great job of this and provides many ideas for us to emulate.


In “Teaching Evidence-Based Practice: Lessons From the Pioneers,” Eric Barends and Rob Briner interview two founders of evidence-based medicine—Gordon Guyatt and Amanda Burls—to learn what lessons EBMed might hold for EBMgt. In 1992, Guyatt, along with 30 other members of the Evidence-Based Medicine Working Group (EBMWG), published the transformative paper, “Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine” in the Journal of the American Medical Association (EBMWG, 1992). Guyatt coined the term “evidence-based medicine” and developed methods to teach it in his role as head of the residency program at McMaster University. Amanda Burls has been a senior fellow of the Centre for Evidence-Based Medicine since 1996 and is currently the director of Postgraduate Programs in Evidence-Based Healthcare at the University of Oxford.


Guyatt and Burls describe the history of EBMed as it evolved from an initial attempt to teach how to critically appraise medical research evidence in the classroom to an approach that moved the assessment of evidence directly into clinical practice at the patient’s bedside. The first label that Guyatt attached to this movement was “scientific medicine,” but that term generated vehement resistance because it implied that then-practicing academic doctors weren’t scientific. So Guyatt went back to the drawing board and came up with “evidence-based medicine,” which somehow managed to stick. One thing Guyatt believes operated in favor of the acceptance of EBMed was that there was a general readiness in the medical community at that time to shift its thinking. Another factor was that there was a whole cadre of people who were not only philosophically behind the movement, but also willing and able to teach it.


Once Guyatt and the rest of the EBMWG had developed methods for teaching EBMed in clinical practice, they then had to push to get it adopted in medical curricula. Curriculum coverage increased when medical board exams began to include tests of physicians’ critical appraisal skills and when accreditation programs began to require EBMed instruction. Given that management (unlike medicine) does not require managers or graduating MBAs to pass standardized exams, the medical experience suggests that the most practical ways of increasing the coverage of EBMgt in business schools may be to (1) build a larger cadre of instructors willing and able to teach it (a goal of this special issue), and (2) push for its inclusion in accreditation criteria.


Guyatt and Burls explain that over time the emphasis in EBMed has shifted, first from critical appraisal to EBMed, and then from EBMed to EB health care, or even just EB practice (EBP). The first shift—from critical appraisal to EBMed—happened when instructors realized that simply having research appraisal skills did not automatically translate into knowing how to use research. So over time, EBMed materials shifted from being primarily readers’ guides to being users’ guides, with a greater focus on understanding and applying research results. In Guyatt’s words, “A big part of it . . . is that whether it’s good or not so good, if it’s the best available evidence you’ve got to use it” (this issue). The second shift—from EBMed to EB health care or EBP—emphasizes that EB practitioners of any profession may hold more attitudes in common with other EB practitioners than with colleagues in their own profession who do not embrace EBP. In a memorable illustration, Burls talks about one of her EB health care student-practitioners who “was having a conversation with her spouse . . . who’s a conservationist, and she was asking, ‘How can you not be using these techniques?’ So the next thing, they sent a bunch of conservationists to our course, and they set up the Centre for Evidence-Based Conservation” (this issue).


Guyatt and Burls also explain how they “sell” EBP and why they think health care professionals accept it. Guyatt says, “You sell it by showing how people have made grievous errors. . . . I used to get this, ‘What is the evidence behind evidence-based medicine?’ So now I tell three stories of people screwing up and I don’t get the questions anymore. To sell it to your management students you’ll need to find similar stories” (this issue). Burls adds, “But you can borrow examples; borrow examples from medicine” (this issue; for excellent examples of applying both pieces of advice, see Charlier’s interview with Latham, this issue). Guyatt also indicates that professionals accept EBP because it deals directly with the issues they are facing and helps them address their concerns: “Everybody has anxieties; everyone has questions about what they’re doing. . . . Even if you find out, ‘Actually, we don’t know the answer to this,’ that’s a huge relief. You’re agonizing over whether it should be X or Y, but lo and behold, we don’t know whether it should be X or Y. And that’s a huge relief” (this issue).


When asked about the future of EBMed, Guyatt and Burls see more emphasis on pushing pre-screened evidence to practitioners, and less on having doctors search for (pull) and critique the evidence themselves. In addition, they see increased emphasis on providing guidelines for how to apply evidence in practice. Guyatt says, “[f]rom the very beginning, when we started teaching critical appraisal, we realized ‘Who’s got the time for this?’ It is very demanding . . . so now we are developing what we call ‘push services,’ where as a clinician you sign up and define your area of interest” (this issue). From there, services are in development to send out a small number of important articles in your area that have been prescreened and critically appraised, resulting in a dramatic reduction in research “noise.” Guyatt and Burls’ closing thoughts remind us of Latham’s. In the same way that Latham does not obsess over putting a lot of structure on how he teaches EBMgt, Guyatt emphasizes that his main objective is to “Inspire them and teach them an attitude. And the way you teach them an attitude is to tell them stories and tell them how to appraise things. Get them to have an idea of the basic principles” (this issue). Adds Burls, “I discovered that what we are doing is transformative learning, which means that it is not just about changing a person’s skills and knowledge, it’s giving them a totally different perspective on the world.”


Book & Resource Reviews

This section begins with Barbara Rau’s review of four chapters on teaching evidence-based management from The Oxford Handbook of Evidence-Based Management (Rousseau, 2012). She begins with Jodi Goodman and James O’Brien’s chapter on how to teach using evidence-based principles. In other words, Goodman and O’Brien focus on principles of “evidence-based teaching,” as opposed to “teaching evidence-based management.”


Goodman and O’Brien begin their review with an overview of cognitive load theory, as well as research on expert performance, adaptive expertise, and advanced learning. They then discuss common teaching methods and how they stack up relative to learning research. Goodman and O’Brien’s approach is not to talk about “best ways” of teaching, but rather to help the reader understand when various techniques are likely to be most appropriate. They also focus quite a bit on cases and provide examples that are well suited to applying evidence-based teaching and learning principles.


Rau applauds the authors for their succinct summaries of key concepts that might be missed by teachers not familiar with research linking teaching methods and learning outcomes. She also appreciates that they wrap up the chapter with a “nifty little present in the form of an 8-page Appendix summarizing instructional strategies; describing the mechanisms by which they should work (based on evidence) and ideas for successfully implementing each. This table is a great resource for those looking to incorporate new teaching techniques or refresh old ones” (this issue).


The second chapter, by Jelley, Carroll, and Rousseau, discusses the hands-on application of evidence-based teaching and learning principles while teaching EBMgt. It includes detailed descriptions of two approaches to teaching EBMgt: the course approach (at Carnegie Mellon University) and the integrated program approach (the Executive MBA at the University of Prince Edward Island; UPEI). At Carnegie Mellon, students take an elective class that focuses on gathering evidence, making decisions in different contexts, and converting evidence into action plans. In the integrated approach at UPEI, students are introduced to the broad principles of EBMgt during orientation and then guided in applying these principles to workplace problems from the first course onward. The program includes an early research methods course that builds skills needed for subsequent courses, as well as a “signature project” in the form of a business plan, applied research project, or systematic review that is required of each student. The chapter includes many examples of engaging ways to build basic awareness of EBMgt, formulate good questions, search for evidence, and build EBMgt into decisions.


In the third chapter, Paul Salipante and Ann Kowal Smith focus on increasing EBMgt through doctoral programs for managers (as opposed to aspiring academics). They argue that managers and other practitioners are skeptical of traditional doctoral programs, “fearing that (their) theoretical focus supplants practical, demonstrably usable suggestions in a world too fast-paced for debate”. As a remedy, they suggest providing alternative programs that provide doctoral education to practitioner-scholars—an idea also proposed by Grant (2014). The rest of their chapter provides a case study of such a program (the doctor of management, or DM) at Case Western Reserve University’s Weatherhead School of Management.


In contrast to “traditional” doctoral programs, the Weatherhead DM is a 3-year, multidisciplinary program (although the focus appears to be mostly on management) that educates students to conduct qualitative and quantitative research that will help solve a work-related problem of their own choosing. In this sense, much of Salipante and Smith’s chapter is reminiscent of the discussions by Dietz and colleagues (this issue) and Latham (this issue) regarding producing local evidence. Although Rau believes that some distinctions the authors draw between traditional and alternative doctoral programs are overblown, particularly with respect to applied fields such as human resource management and marketing, she does “appreciate the description of philosophy and training at the Doctorate of Management program” (this issue). She also believes that many ideas offered in the chapter (such as conceptual reductionism, interdisciplinary research integration, and synthesizing concepts near and far) can be fruitfully applied in any doctoral program.


Finally, Rau reviews Pearce’s (2012) chapter on the weak state of most organizational behavior (OB) textbooks with respect to how well their claims are supported (or not) by research findings. Pearce argues that most OB textbooks suffer from a number of evidence-based deficiencies, such as presenting claims that are not backed by research findings, misinterpreting or failing to include relevant research, and continuing to include theories that have long been discredited (similar problems have been noted in strategic management texts; Stambaugh & Trank, 2010). These deficiencies led Pearce to develop and self-publish her own OB textbook, now in its third edition (Pearce, 2006, 2013). Rau indicates that the larger message of Pearce’s chapter is that the field of management needs to elevate its research standards for textbooks.


Apropos this theme, the next two articles review textbooks that are explicitly marketed as evidence-based. In the first of these, Robert Dipboye reviews Pearce’s above-mentioned text, Organizational Behavior: Real Research for Real Managers (2013, 3rd ed.). He finds much to admire about the book. Channeling the Aristotelian principles of theoria, praxis, and poiesis, he argues that Pearce (2013) does three things highly useful to helping managers develop EBPs. First, she shakes managers’ confidence in their personal theories by presenting them with boxed questions or statements that can be answered as “true–false” or “yes–no,” and then summarizes research evidence that often disagrees with managers’ commonsense notions. Second, she communicates in practice-relevant language: “The practicing or aspiring manager will appreciate the short, simple descriptions of research and theory and the definitive recommendations for managerial practice” (this issue). Third, she provides tools (in the form of application boxes) in which she suggests specific tactics for applying principles and research relevant to each chapter’s contents. Dipboye also praises the fact that after the first two introductory chapters—“Why OB?” which focuses on the importance of people to organizational success, and “Why Managers?” which focuses on the nature of managerial work and some of the erroneous preconceptions managers hold—the chapters all focus on specific managerial tasks. These include how to hire, making sense of feelings at work, managing performance, managing incentives, understanding culture, mastering power, leading others, and how to fire and retain.


Dipboye’s only reservation (which he offers in the spirit of “tough love”) concerns his feeling that the book falls short of always providing the “best and most recent scholarship” on certain topics (this issue). By way of illustration, he discusses three issues—the usefulness of selection interviews, the relationship between need for achievement and managerial performance, and the effectiveness of individual pay incentives—where the cited evidence is more than 25 years old and where more recent evidence (often in the form of meta-analyses) suggests different conclusions. Nevertheless, he concludes, “Despite these three examples, I agree with most of the author’s conclusions in this text, but the substantiation in the footnotes is somewhat thin even for those assertions that I endorse” (this issue).


Dipboye ends his review by pondering the broader question of whether academic researchers and textbook writers should be aiming to “bridge the gap” between theory or research and practice, or whether a new metaphor might be more useful. Dipboye’s view is that practice-based and research-based knowledge not only are distinct, but also are unbridgeable. He says, “In attempting to use (research) knowledge, the manager will encounter paradox, dilemma, and change. In the face of complexity the successful application of science requires something more than bridging the gap. . . . In the attempt to deal with paradox and provide integrative solutions, scientific reasoning may fall short compared to dialectical thinking” (this issue).


In place of bridging the gap, Dipboye offers the metaphor of “mapping a journey through an unexplored and dangerous land” (cf. Andriessen & Van Den Bloom, 2009). In this metaphor, the text becomes a map for the manager, “but like any map, a textbook is an abstraction that can never completely match the reality of personal experience. In short, the gap between manager and scientist is never completely bridged . . . . the construction and modification of these maps will require not just a conduit for the funneling of science to the manager, but instead a dialectic among managers, among organizational scientists, and between managers and organizational scientists. This dialectic (e.g., conversation) will allow managers to use what science has produced to develop and refine their personal theories” (this issue). This view aligns with Briner and colleagues’ (2009: 19) definition of EBMgt as consisting not only of using research evidence, but also “practitioner expertise and judgment, evidence from the local context, and the perspectives of those people who might be affected by the decision.” It is also in sync with other recent discussions of academic–practice relationships as being characterized by tensions, dialectics, and paradoxes that cannot be reconciled, but rather that benefit from actively engaging and exploring them (e.g., Bartunek & Rynes, 2014; Trank, this issue).


In the next article, Tamara Giluk reviews Gary Latham’s (2009) The Evidence-Based Manager: Making the Science of Management Work for You. The book contains six key evidence-supported management lessons designed to promote job performance and satisfaction: using the right tools to hire high-performing employees; inspiring employees to effectively execute strategy; developing and training employees to create high-performing teams; motivating employees to become high performers; instilling resiliency in the face of setbacks; and coaching employees to become high performers.


Giluk, a former human resources manager, is highly complimentary about the book’s accessibility and usefulness: “Latham presents the material in an engaging manner, offering myriad examples to illustrate his recommendations, including two cases of evidence-based managers in action” (one in a logging operation in the United States and the other in a technology development center in the Middle East; this issue). She notes that Latham clearly defines terms that might be unfamiliar to managers and provides concrete implementation tools (e.g., 10-step guides).


As with Dipboye’s review of Pearce’s (2013) book, Giluk also spends considerable time talking about Latham’s (2009) use of evidence. In many ways, she finds it to be admirable. For example, approximately one third of the 225 studies he cites are from 2000 onward, and many of the topics addressed are buttressed with either meta-analytic evidence or multiple primary studies. She also cites a number of areas where she believes Latham does a good job of discussing conflicting researcher views (e.g., the usefulness of personality indicators as predictors of performance).


At the same time, she notes areas where she would have liked to see more balanced coverage (e.g., the importance of intelligence as well as diversity in selection); and topics where she would have liked additional research-based citations (e.g., use of affective [emotionally rousing] vision statements and the ineffectiveness of unstructured interviews as selection devices). Also, as a former HR manager, she mentions two areas where she believes Latham’s advice might prove risky in the United States court system (asking employees whether there are any personal or medical reasons for a particular behavior when acting as a coach, and not documenting every coaching session). Despite these concerns, Giluk concludes that Latham’s book “is an excellent primer for fundamental management practices” and that Latham “does an admirable job of presenting research-consistent recommendations and providing adequate evidence to support them” (this issue).


In the final Book & Resource Review, Angeline Lim, Dong Chen Jia Qian, and Alison Eyring describe the current state of on-line evidence-based management resources. Of the websites devoted either primarily or secondarily to EBMgt, Lim and colleagues identify the Center for Evidence-Based Management’s (CEBMa) website as the most up-to-date and informative for both educators and practitioners. This site includes a large repository of articles and book chapters on EBMgt, YouTube videos, and links to EBMgt sites and evidence-based materials in other disciplines such as health care and education. For educators, the site provides sample course outlines, exercises, questionnaires, presentation slides, and references. Another recommended site is Pfeffer and Sutton’s www.evidence-basedmanagement.com, which functions primarily as a conduit to other relevant resources such as cases, books, article summaries, EBMgt quotes, EBP news stories, guest columns by academics and practitioners, and RSS feeds from Pfeffer and Sutton’s blogs. Lim and colleagues also briefly review the major on-line EBMgt communities or forums, open access resources, and Twitter accounts.


In terms of evaluating the current state of web resources, Lim and colleagues conclude that EBMgt lags considerably behind EBMed and EBEduc. They argue that the field needs a centralized website such as the Cochrane Collaboration in medicine or the Campbell Collaboration in education. Relative to these two sites, current EBMgt websites tend to provide summaries of individual research articles rather than systematic reviews (which are generally regarded as more useful to both academics and practitioners); lack in-depth coverage of management subfields such as HR, entrepreneurship, or leadership; and do not have transparent criteria for deciding what articles will or will not be posted or summarized. However, EBP in management is less mature than in medicine or education, and CEBMa is currently in the process of constructing a free access, on-line database of evidence summaries that will include meta-analyses, systematic reviews, critically appraised topics (CATs), and rapid evidence assessments (REAs).



One theme that emerges from the papers in this special issue is the shift in thinking from a narrow focus on evidence-based disciplines (e.g., medicine, health care, education, or management) to a broader focus on evidence-based practice across disciplines. This shift, mentioned in the articles by Barends and Briner, Lim and colleagues, and Trank and noted earlier in other disciplines (e.g., Clegg, 2005; Dawes et al., 2005), recognizes that the basic principles of EBP are fairly generalizable across disciplines—so much so that evidence-based practitioners from different disciplines often have more in common with one another than do evidence-based and non-evidence-based practitioners in the same discipline. The extent to which evidence-based thinking is taking hold across disciplines is perhaps best exemplified by the incorporation of evidence-based logic into the construction and scoring of the recently revamped Scholastic Aptitude Test, the second-most widely used college entrance exam in the United States (Balf, 2014):

Whenever a question really matters in college or career, it is not enough just to give an answer, Coleman said. The crucial next step is to support your answer with evidence, which allows insight into what the student actually knows. And this change (to the SAT) means a lot for the work students do to prepare for the exam. No longer will it be good enough to focus on tricks and trying to eliminate answer choices. We are not interested in students just picking an answer, but justifying their answers.


A second theme concerns the two complementary sides—push and pull—of teaching EBP. Although EB medicine began by pushing research evidence to medical students and teaching them to critically appraise it, the pull side emerged shortly thereafter when EBMed moved from the classroom to the patient’s bedside. This same trend has been mirrored in management, where the earliest writings tended to push research findings in particular areas (e.g., Latham, 2009; Pearce, 2006; Pfeffer & Sutton, 2006), but where the articles in this special issue (as well as those in The Oxford Handbook of Evidence-Based Management, Rousseau, 2012) are a mix of push, pull, and both at the same time. Moreover, teaching on the pull side in this issue includes both looking for already-existing evidence as well as creating local evidence (Rousseau, 2006).


Looking toward the future, Guyatt believes that more emphasis will be placed on the push side. He says:

What we now recognize much more is that practitioners need pre-appraised evidence . . . from the very beginning, when we started teaching critical appraisal, we realized, “Who’s got the time for this?” It is very demanding, so people like Brian Haynes started developing evidence-based resources for people . . . now we are developing what we call “push services,” where as a clinician you sign up and define your areas of interest. So when I’m a specialist in internal medicine, I get what’s new in internal medicine appearing on my e-mail or my smartphone. . . . We have estimated that most physicians don’t need more than 20 articles per year, which is a 99.96% noise reduction (Guyatt, this issue).


This is the type of service that, with time and additional resources, might be provided in the future by CEBMa for managers. In addition, we can probably expect to see future teaching articles addressing not only Big E and little e, but also locating or producing “Big D” (i.e., big data).


A third theme noted in several of the articles here is that when students or practitioners look to the research base to answer practical questions, they often find that much of our research evidence isn’t all that impressive. According to accounts by Guyatt and Burls, Trank, and Walshe and Briner, students in their classes often find that there is little research relevant to their chosen problem and that the evidence that does exist is weak (e.g., inadequate sample sizes, missing statistics, causal ambiguity, and questionable construct validity). One implication is that when teaching EBP from the pull (vs. push) side, instructors must be prepared to be more circumspect about our research base:

In content-based courses, we typically emphasize how much management researchers know and communicate narratives of unstoppable scientific progress and steady knowledge accumulation. When, as is quite often the case, students are surprised by the limited quantity, quality, and relevance of the research they find, it is easy to become somewhat defensive rather than acknowledge that, in many instances, we do not have particularly good evidence to answer their questions and a lot of published research may be of poor quality (Walshe & Briner, this issue).


In addition to these fairly obvious research deficiencies, articles in this special issue also highlight some less obvious ones. For example, Kepes and colleagues discuss how biases toward studies with positive findings in journal review processes lead to questionable ethical practices on the part of researchers (see also O’Boyle, Banks, & Gonzalez-Mule, in press) and possible disappointment on the part of practitioners who implement practices that meta-analyses and other systematic reviews suggest “should” work, but don’t always. Trank shows how the tone of academic writing sometimes causes students and practitioners to categorize researchers as “other” and reject their messages—an outcome that is probably not what most researchers envisage when they are writing up their findings. Drawing on critical studies in education, sociology, and management, Trank also argues that the term “evidence-based” is often used to shut down (rather than encourage) discussions of research by privileging some types of research over others, and efficiency concerns over ethical or moral ones.


These criticisms bring up an issue that is not much discussed in the articles in this special issue (with the exception of Trank’s), but widely discussed elsewhere: epistemological conflicts over so-called hierarchies of research quality. For example, traditional hierarchies of research quality in medicine tend to place randomized controlled (experimental) trials (RCTs) at the top of the hierarchy (Berwick, 2008). However, top scholars in evidence-based health care argue that the appropriate hierarchy of evidence depends on the type of question being asked (Berwick, 2008; Pawson & Tilley, 1997; Petticrew & Roberts, 2003). For example, while RCTs are superior for evaluating clinical evidence (e.g., which of two medicines is more effective for a particular ailment), they are inferior for evaluating complex, multicomponent, nonlinear social change processes such as the deployment of rapid response teams in hospitals. In such situations, Berwick (2008: 1183) argues:

Many assessment techniques developed in engineering and used in quality improvement—statistical process control, time series analysis, simulations, and factorial experiments—have more power to inform about mechanisms and contexts than do RCTs, as do ethnography, anthropology, and other qualitative methods. For these specific applications, these methods are not compromises in learning how to improve; they are superior.

Similar arguments have been made in EB education (Biesta, 2007; Clegg, 2005), as well as by sociologists and critical management scholars with respect to the presumed superiority of meta-analysis and other forms of systematic review (e.g., Morrell, 2008; Pawson, 2006).


Another theme that emerges is that academics who choose to describe their work (or the work of others) as “evidence-based” should be prepared to defend the basis for that claim. For example, reviewers for this special issue were appropriately critical about how EBMgt textbook writers use evidence, raising questions about the recency and comprehensiveness of the evidence presented in the two textbooks reviewed here (Latham, 2009, and Pearce, 2013). Similarly, reviewers of the article on electronic search methods (by Goodman et al.) also pushed the authors to make the criteria for article inclusion in their review more systematic and transparent. We expect that this “raising the bar” in terms of systematic reviews, rhetorical analysis, and bias-reducing strategies will continue and most likely accelerate.


Authors in this special issue also reinforced recent conclusions by Erez and Grant (2014) that, contrary to many educators’ fears, incorporating research into management classes often has a positive, energizing, and even transformative effect on students. In addition, the types of philosophies, strategies, and tactics used to teach EBMgt continue to become clearer. For example, teachers of EBP frequently (1) begin their classes with some sort of exercise that causes students to question their current assumptions and “open them up” for a research-based approach (e.g., Guyatt and Burls; Latham; Pearce); (2) use analogies and examples from medicine (for which management students generally have a high degree of respect and about which they rarely consider themselves experts; Erez & Grant; Latham); (3) use cases, but in a way that deviates from the traditional case method by incorporating research evidence or the search for evidence (Dietz et al.; Gamble & Jelley); (4) invite research librarians to help students improve their Internet search capabilities (e.g., Goodman et al.; Trank); and (5) encourage dialectic and dialogic pedagogies. However, authors (and textbook writers) in this special issue were divided on the use of technical research-based terminology, with Guyatt and Burls, Latham, and Pearce deemphasizing it, while Kepes and colleagues and Trank did not.



This special issue appears at a time of growing emphasis on EBP in management education from the undergraduate to the executive doctorate level (Erez & Grant, 2014; Salipante & Smith, 2012). As the attention to and demand for evidence-based management education expands, we must approach teaching with due humility in recognition of the poor quality of a good deal of organizational research evidence. We must grapple as educators and students with how little we really know in many areas of important management and employee concerns, as well as with biases and power dynamics that may cause us to think we know more than we do or to use inappropriate research methods for the question at hand (Kepes et al.; Trank; Walshe & Briner, this issue).


The advancement of EBP in management will be supported by greater development and sharing of summaries of research evidence on questions of importance to practitioners. As in medicine (Guyatt, 1991), the accumulation of rigorously conducted evidence summaries starts with small steps. At present, CEBMa is in the process of building a searchable database of critically appraised meta-analyses and systematic reviews on management questions. New and updated summaries are critical as research in management continues to expand. Also important are increased communications between academics and practitioners as well as among “camps” of academics with different epistemological and ideological views (Bartunek & Rynes, 2014; Gulati, 2007; Trank, this issue).


The authors, reviewers, and editors of this special issue have pursued the aspiration of EBP—that is, to provide useful knowledge to educators and, through them, to their students. This special issue extends an invitation to its readers to help better realize these aims. To this end, we draw attention to the words of Amanda Burls and Gordon Guyatt, two of the pioneers who started this movement now taken up by so many professional disciplines: Teaching EBP “is not just about changing a person’s skills and knowledge, it’s giving them a totally different perspective on the world. . . . It’s . . . an activist thing. . . . We want them to get out of our classroom and change the world” (this issue).


Decisions made by managers have a profound impact on the lives and wellbeing of people all over the world. As Mintzberg (1990: 17) said, “No job is more vital to our society than that of a manager. It is the manager who determines whether our social institutions serve us well or whether they squander our talents and resources.” When evidence is ignored, billions of dollars are spent on ineffective management practices, to the detriment of employees and their families, communities, and society at large (Rousseau, 2006). As teachers of the next generation of managers, we have a moral obligation to change this situation. We can do this by helping future managers acquire content knowledge based upon a solid and extensive body of research, teaching them how to find the best available evidence and then critically appraise its trustworthiness, and encouraging critical thinking and dialogue about academic (and other) texts and their underlying assumptions. So, let’s go into the classroom and make a change in the world by teaching EBP.


Academy of Management Learning & Education September 2014


Sara L. Rynes Tippie College of Business, University of Iowa

Denise M. Rousseau Tepper School of Business & Heinz College of Public Policy, Carnegie Mellon University

Eric Barends Center for Evidence-Based Management