
PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).

date: 12 December 2019

Theoretical and Practical Issues: Research Needs

Abstract and Keywords

In this chapter, we summarize the major issues in each of the areas addressed in this volume. Identification of these issues as "major" is obviously a judgment on our part; many more important questions and challenging research issues are highlighted throughout the book. Some of the ideas mentioned in this chapter are central to the discussions of the individual chapter authors; others are concerns that occurred to us as we read the chapters. We believe that these questions are ones that should be addressed by researchers and considered by practitioners as they develop and implement selection interventions. Concerns about the construct validity and meaning of our assessments should be paramount, but the reactions of client organizations to these measures, their practical utility in assisting with decision making, and the impact of their use on groups and individuals are of obvious importance as well. Although we have described areas in which more information would be helpful, we want to emphasize our conviction that organizational psychologists have developed a large body of knowledge about human ability and job performance, as is well documented throughout the book.

Keywords: selection research, ability–performance relationships, ability measurement, performance and outcomes research, selection context

In this Handbook, 40 different groups of authors summarize the literature on a wide variety of topics. In most cases, they noted areas in which research results were not available to answer critical theoretical and practical questions on the topic(s) they were addressing. We have read all of these chapters, some of them several times, and thought our best contribution to this volume would be to highlight the research issues in each area that we believe are most significant and would contribute most productively to selection research and practice. Many of these ideas come directly from the chapter authors; others occurred to us as we read each chapter and represent our own contributions. In organizing these research questions, we thought it would be helpful to follow the organization of the chapters in this volume.

Organization of the Handbook

Because most readers will likely not pay much attention to the organization of a volume and simply read the chapters they think will be most relevant to their research efforts, we provide a brief summary of the volume's organization and the rationale for it. Part I contains an introduction and overview. Part II begins with several chapters designed to provide the background against which selection research is conducted in this, the second century of effort in the area. The historical (Vinchur and Koppes Bryan) and social (Ployhart and Schneider) context and the individual differences that comprise the human talent available to organizations (Murphy) are part of this section. The manner in which this talent is attracted to organizations (i.e., recruitment) is described by Breaugh.

Given this background, Part III of the Handbook examines the research strategies that members of our discipline employ, including the nature and meaning of validity and validation strategies (Sackett, Putka, and McCloy) and job analyses (Brannick, Cadle, and Levine) designed to identify what tasks are required of job incumbents and the attributes necessary to perform those tasks. Hausknecht and Wright provide an analysis of how selection might contribute to organizational strategy; they note that organizational theorists treat selection differently than do those whose focus is on individual performance. The manner in which data from validation research are summarized in meta-analyses is described by Banks and McDaniel.

In Part IV of the Handbook, the authors address the nature of the individual difference constructs typically considered important in job performance, including cognitive ability (Ones, Dilchert, and Viswesvaran), personality (Barrick and Mount), fit (Ostroff and Zhan), and physical ability (Baker and Gebhardt). The implications of using a combination of measures of these constructs are considered by Hattrup. Given the nature of these individual difference domains, Part V comprises a discussion of methods of assessing these constructs. These chapters include treatments of interviews (Dipboye, Macan, and Shahani-Denning), biodata (Mumford, Barrett, and Hester), simulations (Lievens and De Soete), individual assessments (McPhail and Jeanneret), self-report inventories (Spector), attempts to minimize subgroup differences in the use of these measures (Kuncel and Klieger), and web-based assessments (Scott and Lezotte). Parts IV and V were deliberately separated to reflect our view that methods should be separated from the constructs measured when we discuss validity or subgroup differences. It is common to see meta-analyses of the validity of interviews, biodata, situational judgment tests, or assessment centers, for example, when all of these methods can address different individual difference constructs, and validity and subgroup differences will vary as a function of the targeted construct more than of the method of measurement.

In Part VI, the authors discuss the outcomes that we hope to predict with our individual difference measures. Woehr and Roch provide a review of supervisory ratings, probably the most frequently used outcome measure, whereas Borman and Smith discuss more “objective” measures of employee performance outcomes. Organizational citizenship and counterproductive behavior and the prediction of these outcomes are described by Hoffman and Dilchert. Woo and Maertz provide a review of turnover/attendance, Pulakos, Mueller-Hanson, and Nelson present an evaluation of our efforts to measure adaptive performance, and Wallace, Paul, Landis, and Vodanovich describe the measurement and prediction of safe and unsafe work behavior and the resultant outcomes.

Various societal and organizational constraints influence the manner in which selection procedures are implemented as well as their value to the organizations and people who use them. In Part VII, chapter authors consider a broad range of these issues, ranging from examinee reactions (Gilliland and Steiner), legal issues (Gutman), and concerns for workforce diversity (Ryan and Powers) to concerns about team performance (Morgeson, Humphrey, and Reeder) and levels of analysis (Ployhart). Less conventional topics include how time influences the impact of selection practices (Beier and Ackerman), how we estimate the value of human resource efforts including selection (chapters by Sturman and Boudreau), and how selection practices vary across cultures (Steiner). Selection out of an organization is discussed by Feldman and Ng, and unique issues related to the selection of temporary and contingent workers are reviewed by Bauer, Truxillo, Mansfield, and Erdogan. Part VIII of the Handbook is devoted to a discussion of the implementation (Tippins) and sustainability of selection systems (Kehoe, Brown, and Hoffman). In Part IX, the book concludes with a discussion of the implications of the various chapter discussions for research (Schmitt and Ott-Holland). As stated above, our discussion of the issues important to those interested in selection follows the same organization as the book itself.

Background Issues

Although the chapter on history by Vinchur and Koppes Bryan is an excellent description of the development of our field and the significant events that influenced it (e.g., World War II, the civil rights movement), we believe that future historians could try to interpret our history in more detail. They could point to events that have produced stagnation as well as advances in our discipline. They could show in more detail how ideas extant in society at a given point in time were or were not reflected in the way organizational psychologists approached selection. Probably most importantly, they could consider the "lessons" of history and project the future of the field. Based on history, what will likely be of most concern to researchers and practitioners over the next decade? Are there developments in our current historical milieu that we should seek to avoid or enhance as we try to influence the development of our field?

The presentation of the Big-Five as the general taxonomic model of personality (see the chapter by Murphy) has represented a huge advance, but it has raised some major research issues that remain unresolved and about which more research data should be collected. First, the use of facets of the Big-Five directed specifically to an area of job performance often seems sound conceptually and has occasionally proven to produce better predictor–outcome relationships than the use of more general measures of the Big-Five. Conversely, composites of Big-Five measures or aspects of different Big-Five dimensions, such as integrity tests, core self-evaluations, and customer service orientation, have often proven quite useful, and more so than broad Big-Five measures. Both types of research (more molecular and more macro in terms of the constructs measured) should be useful, but we believe that long-term progress in understanding the personality correlates of performance will be greatest if all researchers attend carefully to the constructs being measured in the various combinations that make up predictor batteries or composite measures. A second major issue raised by Murphy in his chapter is the appropriate use of predictor measures relative to the realization of various outcomes. Organizational psychologists have been primarily concerned with productivity and, in the past several decades, equity for various demographic groups. It would be useful to expand this set of outcomes to consider individual health and satisfaction, the strength of a local community or societal resources as they relate to employment (or unemployment), as well as other outcomes considered in this volume (e.g., organizational citizenship behavior, counterproductivity, length of employment, and safety). Relating various individual and organizational attributes to these or other outcomes in multiattribute utility models could be very exciting research with very different implications for personnel selection.

The chapter by Ployhart and Schneider as well as a later chapter by Ployhart point to the need to consider individual difference–performance relationships at multiple levels, including the work group, the organization, and even the culture (they point to the possible importance of the cultural context in considering how situational judgment measures might operate). We do have literature on the validity of tests in a large portion of the Western world, but a real dearth of information as to whether measures of individual difference constructs used in African, Asian, or even South American countries produce similar results.

Recruitment is certainly an essential component of selection; without an adequate number of applicants for positions, selection is a moot issue. However, there remain significant questions about aspects of recruitment. What outcomes (performance, turnover, absenteeism, diversity, etc.) are associated with external recruits versus internal recruits, and are there any differences in predictor–outcome relationships across these two groups? The use of the Internet produces a host of new questions. For example, what is the validity of information about job applicants obtained from various web-based and social network sources, and what are the ethical/legal issues associated with the use of these data in decision making? Recruitment of individuals to work as expatriates in another country and the effectiveness of different recruiting methods across cultures are likely to be increasingly relevant, yet we have little solid information as to what might be most useful in different cultures. We do have some information on the prediction of expatriate success (e.g., Mendenhall, Dunbar, & Oddou, 2006), and the various types of measures used across the world are summarized in the chapter by Steiner in this volume.

Research Strategies

The chapter by Sackett, Putka, and McCloy does not lend itself to new research questions, but it indicates in very clear fashion the types of evidence that support the inferences researchers seek to draw from validation research. It should serve as a framework for subsequent test validation work.

Job analysis is a very old topic for organizational psychologists, but the chapter by Brannick, Cadle, and Levine raised a couple of issues for us. Work in many organizations has become electronic and we rely on electronic communication for coordinating work. What constitutes effective communication in this context? What provides for clarity, timeliness, tact, or insight and many other aspects of effective communication? Brannick et al. mention that the collection of critical incidents is highly labor intensive, but that it provides a wealth of information about what and how job tasks are done. Would it be possible to develop a bank of critical incidents that has generalizability across jobs or across the knowledge, skill, and ability lists in O*NET?

Hausknecht and Wright, as do other authors in this volume (e.g., Ployhart; Ostroff and Zhan), point to the need to study the staffing problem from a multilevel perspective and review the thinking and literature on matching organizational strategy and staffing policies. However, they take this idea further when they suggest creating flexible staffing strategies and practices that can accommodate shifts in strategic direction and provide greater responsiveness to changing environmental demands. How such a staffing strategy would be constituted, much less evaluated, will require a consideration of changes in organizational environments and strategies and corresponding adaptations in staffing policies. This would also necessitate an understanding of how human talent or resources flow through organizations. To our knowledge, at least, these are new questions that will necessitate the development of new research designs and the careful delineation of what changes we expect to occur when. Timing of data collection is treated in a later chapter by Beier and Ackerman; it should be an important factor in all validation research, but will certainly be important in evaluating the role of variables discussed in the chapter by Hausknecht and Wright.

The chapter by Banks and McDaniel describes the method by which personnel selection research is cumulated to provide generalizable conclusions that represent superior estimates of the population validity associated with various predictor constructs. Use of meta-analysis has contributed greatly to our knowledge in many areas of psychology, including personnel selection. However, we believe that meta-analysts may overlook the quality of the primary studies in an area of research (see also Schmitt & Sinha, 2010, for similar arguments). Meta-analysts may argue that aspects of the primary database about which there are concerns can be coded and that moderator analyses can be used to detect the degree to which questionable research practices affect parameter estimates. There are three possible problems with this defense. One is that in many areas there are not enough primary studies to allow meaningful moderator analyses. A second and more important limitation is that problems present in all primary studies may either inflate or deflate the estimate of the relationship of interest; if relatively constant across studies, such problems may also convince the meta-analyst that there is less variability across situations in the relationship of primary interest than is actually the case. Corrections for the variability due to artifacts should no longer be based on assumed artifact distributions. It has now been close to four decades since meta-analysts first urged primary researchers and editors of peer-reviewed journals to report more fully on the characteristics of their participant samples and the measures they used. Future meta-analytic efforts should correct parameters and the variability of those parameters based on actual data from the primary studies involved, not on some assumed distribution. See Schmitt (2008) for reservations regarding the claims made based on the corrections used to estimate population validities.

Third, in some areas, particularly in validity studies, the primary studies were likely done decades or even close to a century ago. Even if the relationship studied has not changed, our methods of doing research have certainly improved in that time. Examination of the quality of primary studies and the impact quality has on meta-analytic estimates should be pursued by personnel selection researchers. It is also the case that methods of meta-analysis are being refined constantly (e.g., Le & Schmidt, 2006); research of this type on the methods of meta-analysis should continue. Finally, Banks and McDaniel point to a common problem with the meta-analyses done in personnel selection, namely, the confounding of method and construct in summaries of bodies of research.
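The artifact corrections at issue can be made concrete with two classical psychometric formulas: disattenuation for unreliability and Thorndike's Case II correction for range restriction. The following is a minimal Python sketch; the observed validity, reliability, and range restriction ratio are hypothetical values chosen for illustration, and actual meta-analytic procedures involve additional steps and sampling-error considerations.

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Classical correction for attenuation: divide the observed correlation
    by the square root of the product of the two reliabilities."""
    return r_obs / math.sqrt(rel_x * rel_y)

def correct_range_restriction(r_obs, u):
    """Thorndike Case II correction for direct range restriction on the
    predictor; u = SD_unrestricted / SD_restricted (u > 1 when restricted)."""
    return (r_obs * u) / math.sqrt(1 - r_obs**2 + (r_obs * u) ** 2)

# Hypothetical study: observed validity .25, predictor SD restricted by a
# factor of 1.5, criterion reliability .70 (predictor left uncorrected,
# as is conventional when the operational test is used as-is).
r = 0.25
r = correct_range_restriction(r, u=1.5)
r = disattenuate(r, rel_x=1.0, rel_y=0.70)
print(round(r, 3))
```

The point of the passage above is that the inputs to these formulas (reliabilities and restriction ratios) should come from the primary studies themselves rather than from assumed artifact distributions.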

Individual Difference Domains and Validity

Certainly, the construct about which we have the most validity evidence and for which validity estimates are the greatest is cognitive ability. Ones, Dilchert, and Viswesvaran provide meta-analyses of the relationships between cognitive ability and a variety of performance constructs, for some of which no clear rationale exists. They provide a number of hypotheses as to why cognitive ability relates to organizational citizenship behaviors (OCBs) or counterproductive work behaviors (CWBs) as well as other performance constructs in addition to overall or task performance, for which we have the largest amount of data. Evaluation of these hypotheses would provide potentially useful and theoretically important results. Ones et al. do not mention this, but the majority of the studies of cognitive ability–task performance relationships were conducted many decades ago. We believe that it would be useful to reevaluate this relationship using "modern" operationalizations of cognitive ability and better research designs, as well as actual data on artifacts. We strongly suspect that these improvements would provide better estimates of this relationship and that those estimates would be larger in magnitude.

Research in the personality area has mushroomed since the highly influential Barrick and Mount (1991) meta-analysis, but there remain important areas in which further theory and research would be highly valuable. Both Murphy in his chapter and Barrick and Mount in their chapter point to the need for a taxonomy of lower level personality traits and research on the adequacy of such a taxonomy. Research on moderators and mediators of the personality–performance relationships should continue (e.g., Barrick, Parks, & Mount, 2005; Barrick, Stewart, & Piotrowski, 2002) and will likely contribute to the important concerns as to when and how personality affects job performance. Related to this point, an analysis of personality profiles and interactive effects between different dimensions of personality may very well prove to increase our understanding of personality–performance relationships. Although compound personality traits (e.g., core self-evaluation, customer service indices) have displayed impressive validity in a variety of situations, from a theoretical standpoint, we think computation of a compound personality measure with little attention to its component parts and their relative reliability and validity represents a step backward in attaining a better understanding of underlying construct relationships. Personality measurement almost always involves the collection of self-reports with the attendant problem of response distortion. Attention to alternate modes (e.g., observation, physiological measures, simulations) and sources (e.g., peers, parents, co-workers) of personality measurement may provide a rich source of information that is not so susceptible to distortion.

If anyone believed that fit was a relatively intuitive and simple concept, a reading of the chapter by Ostroff and Zhan should convince them otherwise. The complexity of the various aspects of fit and its measurement may contribute to the relatively low validity of fit measures as selection devices (see Kristof, 1996; Kristof-Brown, Zimmerman, & Johnson, 2005). Use of an integrated theory such as that presented in Figure 12.1 of the Ostroff and Zhan chapter as a guide to future research endeavors might provide more significant advances in this area of research. In addition, as Ostroff and Zhan indicate, it is important to consider the type of fit that might be most likely to affect individual and organizational outcomes if you hope to use such measures effectively in a selection context. Perhaps the most significant problem for fit researchers is that they have poor theories and measures of situations. If they have no clear understanding of what they are fitting individuals to, it is difficult to assess fit. Consequently, research on determinants and outcomes of fit is likely to produce a confusing array of results.

An interesting question about physical ability is the degree to which psychological capabilities may partially or totally compensate for a lack of physical ability. Certainly for police jobs, the notion is frequently expressed that it is not necessary to physically overwhelm a potential wrongdoer or quell the beginning of a fight if you are smart enough to avoid these instances, head them off before they become physically confrontational, or handle them in nonviolent ways when they do escalate. To our knowledge, there is nothing more than anecdote to support this hypothesis or to assess when or to what degree it is true. If it is, we should find that cognitive ability is related in some way to the successful handling of potentially violent confrontations. There must be other jobs or instances as well in which psychological attributes (i.e., personality, intellectual ability) compensate for a physical liability.

Baker and Gebhardt mention the use of preparation programs for physical ability tests and provide very reasonable suggestions as to how they should be designed and conducted. From a research perspective, we question how these preparation programs change the validity of a physical test. Research on obesity suggests that people often experience a yo-yo effect, with weight gains or losses over time as they take up diet and exercise programs and then abandon them. If a preparation program serves to condition us physically, but we then lapse in the physical exercise that produced changes in physical strength or agility, then we would expect the validity of any physical ability test to suffer as a result. Similar to our comment on personality profiles above, an interesting question in the physical ability realm might be the degree to which ergonomic, physiological, and biomechanical indices of physical ability compensate for or complement each other when determining how we deal with physically challenging work. This will certainly be a function of the type of work in which we are involved, but there may be some generalizable guides.

In the final chapter of Part IV, Hattrup considers how the variables personnel selection researchers use to make decisions can be combined in ways that maximize expected individual-level performance and minimize adverse impact. These examinations have produced surprising results in many instances. Hattrup, like Murphy in an earlier chapter in this volume and earlier publications (e.g., Roth & Bobko, 1997), suggests considering a broader set of outcomes in multiattribute utility models. To date, we have seen very little use of such models to inform judgments about selection decisions, and we think they deserve more attention.
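The trade-off examined in this line of work can be sketched numerically. The Python fragment below forms a weighted composite of two hypothetical standardized predictors, applies a top-down selection rule, and computes the resulting adverse impact ratio (the selection-rate ratio that the four-fifths rule compares with 0.80). All scores, group labels, and weights are invented for illustration; they are not drawn from any study discussed here.

```python
def composite_scores(cog, noncog, w):
    """Weighted composite of two standardized predictor score lists."""
    return [w * c + (1 - w) * n for c, n in zip(cog, noncog)]

def adverse_impact_ratio(selected, group, minority, majority):
    """Selection-rate ratio; the four-fifths rule flags values below 0.80."""
    def rate(g):
        members = [s for s, grp in zip(selected, group) if grp == g]
        return sum(members) / len(members)
    return rate(minority) / rate(majority)

# Hypothetical pool of 10 applicants: the cognitive predictor shows a group
# mean difference, the noncognitive predictor does not.
cog    = [1.2, 0.8, 0.5, 0.3, 0.2, -0.1, -0.3, -0.6, -0.8, -1.2]
noncog = [0.1, -0.4, 0.9, 0.2, -0.2, 0.8, 0.5, -0.1, 0.6, 0.3]
group  = ["maj", "maj", "maj", "maj", "maj", "min", "min", "min", "min", "min"]

comp = composite_scores(cog, noncog, w=0.5)
cutoff = sorted(comp, reverse=True)[4]        # select the top 5 of 10
selected = [s >= cutoff for s in comp]
print(round(adverse_impact_ratio(selected, group, "min", "maj"), 2))
```

Varying the weight `w` and recomputing performance expectations and the adverse impact ratio reproduces, in miniature, the kind of analysis Hattrup describes; a multiattribute utility model would add further outcome dimensions with their own weights.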

Methods of Assessment

Personnel selection researchers have used a variety of methods to assess the individual difference constructs described in the chapters in Part IV of this handbook. These methods are almost always used to measure several different constructs, although the precise constructs targeted by the method of measurement are frequently not specified. In fact, we often refer to the validity of the interview, biodata, simulations, individual assessments, and so on (e.g., Schmidt & Hunter, 1998), when it would be more accurate and useful to refer to, for example, the validity of the interview in assessing interpersonal skills. The latter is so unusual that in an attempt to review the literature on subgroup differences in ability as measured by different methods, Schmitt, Clause, and Pulakos (1996) could find very few instances in which such comparisons were possible. This neglect of the construct being measured is becoming less prevalent, as is evident in the chapters in Part V.

There are, however, measurement concerns that are peculiar to each of the methods reviewed. There are also research areas that deserve additional attention in each case as well. It would likely be safe to say that there are thousands of papers on the interview, but the role of self-promotion (is it faking?) in the interview has rarely been addressed. Is impression management in the interview (see Van Iddekinge, McFarland, & Raymark, 2007) the same as faking on a self-report personality measure or biodata instrument? Does impression management increase or decrease the validity of the constructs measured in the interview? Van Iddekinge et al. (2007) suggest this may be more of a problem when a behavioral-based interview is employed because interviewees inflate their role in previous achievements. The opposite appears to be the case in biodata as Mael (1991) suggests biodata that are objective and verifiable are less inflated.

Dipboye, Macan, and Shahani-Denning raise the question of the use of social networking sites as places at which interviewers can obtain information about interviewees prior to the interview. Although not integral to the interview per se, the use or knowledge of such information raises important new questions for practitioners and researchers. How does this prior information influence the type of questions asked, the manner in which they are delivered, and the attributions derived from applicants? What is the impact on validity when interview information is supplemented with information from social network sites? It is also plausible that the type of information available on a Facebook site may lead to different attributions for men and women or people of different groups or backgrounds (see Karl & Peluchette, 2007).

Another question that has not been addressed to our knowledge is the degree to which situational interview content may have different implications for members of different cultures. Situational judgment items may also be subject to similar cultural context effects, although we are unaware of any examination of these effects. The emphasis in this chapter on the interplay between interviewee and interviewer certainly emphasizes the need to investigate the questions raised here as well as many more regarding the interview context, which includes the situation, the constructs being assessed, as well as the characteristics and motivation of the parties involved.

In the biodata realm, we think it is somewhat surprising that more attention has not been directed to the assessment of interests using background data. It could be argued that biodata are inherently a measure of interest since they reflect what a person has done in the past. However, there has been no effort to link biodata measures to the Holland constructs (Holland, 1997). We also believe that a reexamination of early work on the pattern of life experiences (Owens & Schoenfeldt, 1979) along with other indices of constructs not addressed by biodata would prove useful in identifying subgroups of people whose likelihood of success in different activities will vary (see Schmitt, Oswald, Kim, Imus, Drzakowski, & Shivpuri, 2007). Biodata may also be particularly helpful in hiring individuals of postretirement age since most are likely to have a relatively well-established set of life experiences that should predict what they will find interesting and what they will do well.

In their purest form, simulations are designed to be samples of actual work behavior, the performance of which is evaluated and used as input to a hiring decision. Consequently, there is no need to consider the ability or construct being measured by the simulation: it is work performance. Some simulations, however, are not such direct replications of a job situation and often involve the evaluation of a construct (e.g., problem-solving ability or leadership in an assessment center). Lievens and De Soete compare these high- and low-fidelity simulations on a number of dimensions, and we endorse their final conclusion that examining the stimulus, response, and scoring features of simulations, both individually and comparatively, will provide a better understanding of what a simulation measures and of the psychological/ability requirements of jobs themselves. Research on the nature of the constructs measured or the validity of these measures has most often focused on the nature of the item stimulus, not the nature of the response required. The latter certainly deserves more research attention (e.g., De Soete, Lievens, & Westerveld, 2011).

The chapter by McPhail and Jeanneret provides a very good understanding of the nature and complexity of individual assessment and the problems associated with the evaluation of these methods as selection tools using our normal criterion-related validation paradigm. The reader should also soon realize that there is no one individual assessment; assessments differ in terms of the method of data collection, how they are scored (if they are), by whom they are used, and for what purpose (i.e., growth versus development). The authors certainly suggest many potential areas of research, including the notion that qualitative methods may be useful in human resource decision making. Like the authors of this chapter, we also think that examination of the use of individual assessments in other cultures can be a valuable source of information about leadership models in those cultures. The notion that an assessor should have a model of job performance when doing assessments is obviously important. Ascertainment of what these models are across cultures should be helpful in understanding how leadership is viewed cross-culturally, at least by those evaluating persons for relatively senior-level leadership positions. It also seems possible that enough assessments are being done that a criterion-related study using multiple nontraditional criteria such as those suggested by McPhail and Jeanneret could be designed and conducted. Note that many of the criteria they suggest may be relevant in all organizations; hence, a large sample drawn from a single organization may not be necessary. There have been many calls for more research on individual assessments (Ryan & Sackett, 1987, 1998), but no systematic efforts have been undertaken.

Spector had a somewhat difficult task in that he addressed self-reports as a method of collecting data on individual differences. Self-reports come in many different forms, such as interviews, many personality measures, biodata, situational judgment inventories, and perhaps others. The major concern in discussions of all self-report inventories is that responses of applicants are inflated (Spector, 2006) because of the usual high-stakes situation. Attempts to identify and correct for such inflation, as well as to assess its impact on validity and subgroup hiring, have a very long history in organizational psychology (Kirchner, 1961, 1962; Zavala, 1965). Despite a half century of effort in this area, it does not seem we have an acceptable solution or even that we agree on the nature and severity of the issue (Goffin & Christiansen, 2003; Ones, 1998). Successful efforts in this area would be a major advance in the use of self-report indices, particularly as many of them are now delivered online, sometimes in unsupervised settings. Spector also mentioned the problem of common method variance as a source of covariance between variables measured with a single method and pointed out that any degree of inflated variance may also be a function of the trait assessed. To that caveat, we add the notion that there are probably differential effects across various forms of self-report measures. To our knowledge, this last question has not been addressed. We think the examination of the existence and meaning of “ideal points” on rating scales (Stark, Chernyshenko, Drasgow, & Williams, 2006) is very interesting since it implies a curvilinear relationship between these measures and criteria. Aside from obvious measurement implications (p. 946) of such models, use of ideal point methods may provide a greater understanding of the constructs measured as well as the criteria to which they are related.
Finally, we think a broader exploration of formative as opposed to reflective models of item–construct relationships can be very useful, particularly in instances in which not all items are equally representative of some construct (e.g., the stealing item on a measure of counterproductive work behavior mentioned by Spector or a heart attack on a measure of health indices). The use and meaning of formative assessments remain controversial; the interested reader should consult Edwards (2011).

Web-based assessments have proliferated so rapidly in the past two decades that a visit to conference displays of test publishers would lead to the belief that no one is using a paper-and-pencil measure any longer. The advantages of web-based testing in terms of administrative ease, the possibility for immediate feedback, and the easy linkage to applicant tracking systems are perhaps obvious. Scott and Lezotte point to some of the concerns with the use of unproctored web-based testing. In this context, the use of proctored follow-ups to an initial estimate of a person's standing on some measure is becoming commonplace. This practice provides a unique opportunity to investigate the role of cheating and who is (is not) likely to cheat under these circumstances. Very sophisticated models to estimate level on a construct are being developed to efficiently use examinee testing time (Nye, Do, Drasgow, & Fine, 2008) and to assess the confidence with which decisions are made based on unproctored web-based testing. The question remains, however, as to whether proctored verification tests following unproctored Internet tests significantly improve the criterion-related validity of the assessment, as compared to an unproctored Internet test without verification testing. Even if that were not the case, the follow-up would detect and remove dishonest job applicants.

How to evaluate the degree to which assessments represent fair evaluations of the capabilities of members of different subgroups has been addressed repeatedly over the past four decades and remains a highly charged and controversial issue (see Aguinis, Culpepper, & Pierce, 2010). We believe the chapter by Kuncel and Klieger provides a balanced and authoritative review of this topic along with a number of excellent suggestions as to how to evaluate various open questions. Subgroup differences on measures of cognitive constructs remain relatively large; early reflections of such differences are represented by the achievement gap in elementary and secondary schools. Organizational psychologists work mostly with adults when such differences (and the resulting differences in predicted performance) are relatively stable. It may be most productive societally and organizationally if we focused on developmental experiences and interventions that serve to minimize these differences. This issue seems to have been considered the domain of educational experts and developmental psychologists; greater familiarity, understanding, and communication of a cross-disciplinary nature may be helpful in dealing appropriately with these subgroup differences.

Not necessarily within the scope of any chapter in this section of the Handbook, we think personnel selection researchers and practitioners are probably not taking full advantage of technology to measure new constructs or to measure familiar constructs in different forms. As examples, online simulations (Olson-Buchanan, Drasgow, Moberg, Mead, & Keenan, 1998), in-baskets, and situational judgment inventories (Olson-Buchanan & Drasgow, 2006) can all be delivered in ways that at least appear more face-valid than a paper-and-pencil test. Likewise, we should be able to measure spatial perception and related constructs (perhaps medical knowledge or skills) that are not accessible when we use a flat paper surface.

Performance and Outcome Assessment

Chapters by Woehr and Roch (supervisory performance ratings), Borman and Smith (objective indices of performance), Hoffman and Dilchert (counterproductive work behavior and organizational citizenship behavior), Pulakos, Mueller-Hanson, and Nelson (adaptability), Woo and Maertz (turnover), and Wallace, Paul, Landis, and Vodanovich (accidents and safety behavior) reflect the multidimensional nature of performance models that have developed over the past 30 years. The first serious attempt to develop a multidimensional model of performance and then use it to guide research is represented by the work of the Project A researchers (Campbell, Ford, Rumsey, Pulakos, Borman, Felker, de Vera, & Riegelhaupt, 1990; Campbell, McCloy, Oppler, & Sager, 1993). These notions have produced a radical change in the measurement of the criteria against which personnel selection measures are validated. Whereas an overall measure of performance (sometimes a single item and sometimes a composite of several highly intercorrelated items), turnover, or training success was considered (p. 947) adequate in the past, today we often consider multiple outcome measures, and we find that they have very different individual difference correlates. One macrolevel research project that would seem useful is to compare these various outcomes to overall organizational performance. One of our authors is now working with an organization that has up to 300% turnover in a given year; for this organization, keeping a person long enough to make an effective contribution would likely be most related to organizational longevity and effectiveness. If we consider positions such as that of a Transportation Security Administration (TSA) officer, vigilance in the detection of rare happenings would be important. For other organizations, probably most other organizations, multiple dimensions of performance are likely related to overall organizational effectiveness, but to differing degrees.
Information about the degree to which various performance dimensions are related to overall organizational functioning would be practically useful and scientifically interesting in understanding the impact of individual functioning on organizational effectiveness.

The chapters by Woehr and Roch and by Borman and Smith are devoted to subjective (i.e., ratings) and objective indices of performance. In most cases, these indices measure what has been termed task performance. There are a number of issues concerning these measures that deserve more attention. In terms of ratings, we have some studies of the differences in ratings derived from different sources, but it would still be useful to understand when such differences are due to observational opportunity, rater biases, differences in the organizational level of the rater, or perhaps other factors, some of which would be job relevant and others perhaps contaminants. We also have a reasonable body of literature documenting that objective and subjective performance indices do not correlate highly (Ford, Kraiger, & Schechtman, 1986). It may be useful to know which of these indices correlate with overall organizational functioning. Likewise, a very important assumption underlying much of our utility analyses is that individual performance indices can be aggregated directly to the organizational level. To our knowledge, this assumption has not been evaluated and is almost certainly not the case. The Campbell model of performance that guided the Project A effort had a great deal of influence on our literature; it would now be useful to determine the degree to which the body of literature on performance since that model was first proposed supports the hypothesized dimensionality of work performance across organizations. Finally, the advent of latent growth models and hierarchical linear models allows us to study performance changes and the correlates of those changes. This literature is burgeoning, and we expect and encourage many more studies of performance change in the near future.

It also seems that we have much to do in understanding the construct validity of some outcome measures. The chapter by Hoffman and Dilchert discusses the research on the dimensionality of counterproductive work behavior (CWB) and organizational citizenship behavior (OCB), and their conclusions in both cases are still qualified. Perhaps the most important research need they recognize is the need for information on these outcome constructs from sources other than the employees themselves. This is very difficult, particularly for CWBs, but a more complete understanding of these behaviors requires that there be more than only self-reports of these outcomes. The nature of these two sets of behaviors would seem to suggest that personality (will-do) measures would be better predictors than ability (can-do) measures. Although this seems to be the case, the chapter by Ones, Dilchert, and Viswesvaran suggests that cognitive ability displays a modest correlation with these two behaviors, particularly CWB. Hypotheses for these relationships are presented by Ones et al.; these hypotheses and other potential explanations should be explored.

Turnover may seem simple, but we still need a better understanding of what constitutes functional turnover in given circumstances and when turnover is really voluntary. In this context, researchers might more often make use of the content of exit interviews and consider ways in which they can more rigorously test ideas and predictions suggested by Lee and Mitchell's (1994) unfolding model. Also, researchers still frequently use simple linear regression analyses when alternative methods such as logistic regression are more appropriate. When the outcome is simply a dichotomy, logistic analyses should be used, and when the outcome is length of tenure in an organization, survival analyses will be more appropriate and informative.
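To make the contrast concrete, the brief sketch below (our own illustration, not drawn from any chapter in this volume) simulates a dichotomous stay/leave outcome from a hypothetical "commitment" predictor and recovers the logistic coefficients by Newton-Raphson. Every variable name and parameter value here is an assumption introduced purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical predictor: a standardized "commitment" score for each employee.
commitment = rng.normal(size=n)

# Simulated dichotomous outcome: 1 = left the organization, 0 = stayed.
# Assumed true model: lower commitment raises the odds of leaving.
logit = -0.5 - 1.2 * commitment
p_leave = 1.0 / (1.0 + np.exp(-logit))
left = rng.binomial(1, p_leave)

# Fit a logistic model (intercept + slope) by Newton-Raphson.
X = np.column_stack([np.ones(n), commitment])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))      # predicted leave probabilities
    W = p * (1 - p)                           # observation weights
    grad = X.T @ (left - p)                   # gradient of the log-likelihood
    hess = X.T @ (X * W[:, None])             # (negative) Hessian
    beta = beta + np.linalg.solve(hess, grad)

print(beta)  # estimates should fall near the assumed true values (-0.5, -1.2)
```

Fitting the same 0/1 outcome with ordinary least squares can produce predicted "probabilities" outside the 0 to 1 range; the logistic link avoids that by construction, which is the substance of the recommendation above.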

Accidents and safety violations are still commonly used as criteria when our interest is in predicting accidents, but we have long heard arguments that we should be analyzing safe (or unsafe) behaviors, because actual accidents are rare and often the result of behavior that has been engaged in repeatedly (witness the disabling of warning devices for methane (p. 948) concentrations in the West Virginia coal mining disaster). If we use accidents themselves as criteria, we might also consider survival analysis as an analytic tool (i.e., what predicts the length of time between accidents?). With the increasing cost of health care and health insurance benefits, it may also be useful to understand which employees will participate in effective health maintenance behaviors. To our knowledge, organizational psychologists have not addressed this concern.
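As a sketch of the survival-analysis suggestion (again our own illustration, using entirely hypothetical data rather than anything reported in these chapters), the following code computes a Kaplan-Meier estimate of the probability of remaining accident-free over time from simulated, right-censored time-to-first-accident data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Hypothetical data: months until an employee's first accident, right-censored
# at 24 months for those with no accident during the observation window.
true_times = rng.exponential(scale=18.0, size=n)
observed = np.minimum(true_times, 24.0)
event = true_times <= 24.0  # True = accident observed, False = censored

# Kaplan-Meier estimate of the survival function S(t) = P(no accident by t).
order = np.argsort(observed)
times, events = observed[order], event[order]
at_risk = n - np.arange(n)                       # still under observation
factors = np.where(events, 1.0 - 1.0 / at_risk, 1.0)
survival = np.cumprod(factors)

# Estimated probability of remaining accident-free at roughly 12 months.
s12 = survival[np.searchsorted(times, 12.0)]
print(round(s12, 2))  # theoretical value under this simulation is exp(-12/18)
```

Unlike a simple accident count, this estimator uses the information in censored cases (employees observed for a while without an accident) rather than discarding it, which is precisely what makes survival methods more informative for rare events.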

In the case of using adaptability as an outcome variable, we think the degree to which this construct is independent of task performance remains a question. Certainly it is conceptually distinct, but we should also assess the degree to which measures of adaptability are correlated with measures of task performance and overall performance, as well as other aspects of performance (Pulakos, Arad, Donovan, & Plamondon, 2000; Pulakos, Schmitt, Dorsey, & Arad, 2002). Additional research on the determinants and consequences of adaptability is needed, including research on the health and satisfaction of individuals who are being forced to “adapt” repeatedly.

Multidimensional performance models and the discussion of in situ performance (Boje, 2008) also suggest that we consider interactionist models of performance. Studies of in situ performance, usually with qualitative observational techniques, reflect performance in a more realistic and generalizable manner than other characterizations of performance. Cascio and Aguinis (2007) define in situ performance as the “specification of the broad range of effects—situational, contextual, strategic, and environmental—that may affect individual, team, or organizational performance.” More detailed qualitative observations of performance (perhaps more time spent observing workers perform during the job analysis phase of our projects) and the way work is performed may yield rich information and hypotheses concerning determinants of performance and alternate methods of achieving satisfactory or exceptional performance. Such studies may also provide information as to how individuals adapt to unusual or novel job demands.

At several points in this chapter, we have considered the importance of time in the collection of variables (also see the chapter by Beier and Ackerman in this volume). This issue has certainly been investigated and raised before (e.g., Ackerman, 1989; Fleishman & Hempel, 1955; Henry & Hulin, 1987, 1989), but the possibility that the mean and variance of performance change with time, along with the correlates of performance change, should be investigated. For example, some authors have suggested that measures of performance taken early in one's tenure on a job are likely correlated with task knowledge and ability, whereas measures taken later are more a function of motivation. As mentioned above, new analytic techniques such as hierarchical linear modeling and latent growth curve modeling have made it possible to test hypotheses about growth that were largely inaccessible to previous researchers. In considering time as a variable, we need to be aware that the timing of data collection is very important. For example, we should have a theory that indicates the point in time at which knowledge/ability determinants of performance are likely to become less important and motivational determinants more important, and then time our collection of data to when that change is likely to take place (Helmreich, Sawin, & Carsrud, 1986; Kanfer & Ackerman, 1989; Murphy, 1989; Zyphur, Bradley, Landis, & Thoresen, 2008). Similarly, if we have a theory about the mechanism that links any two variables (e.g., motivation and counterproductive work behavior), we should know when that mechanism is likely to impact the relationship if we are to assess its impact at the appropriate time. So it is not enough to collect longitudinal data; we must collect it at the right time.
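The point about timing can be made concrete with a small simulation, which is ours and not a result from the studies cited: if the weighting of ability and motivation in determining performance shifts over time, the observed validity of each predictor depends heavily on when the criterion measure is collected. All quantities below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical latent traits, standardized and independent for simplicity.
ability = rng.normal(size=n)
motivation = rng.normal(size=n)

def performance(t, noise_sd=0.5):
    """Performance at time t in [0, 1]: ability-driven early, motivation-driven late."""
    w = t  # assumed weight shifts linearly from ability (t=0) toward motivation (t=1)
    return (1 - w) * ability + w * motivation + rng.normal(scale=noise_sd, size=n)

early = performance(0.1)   # criterion collected early in job tenure
late = performance(0.9)    # criterion collected much later

r_ability_early = np.corrcoef(ability, early)[0, 1]
r_ability_late = np.corrcoef(ability, late)[0, 1]
r_motiv_early = np.corrcoef(motivation, early)[0, 1]
r_motiv_late = np.corrcoef(motivation, late)[0, 1]

print(r_ability_early, r_ability_late)  # ability validity falls over time
print(r_motiv_early, r_motiv_late)      # motivation validity rises over time
```

A single validation study run at either time point would conclude that one predictor is nearly useless, which is exactly why theory about when determinants shift must guide the timing of criterion collection.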

Context of Selection and Special Issues

Including this last section in the Handbook is a recognition that selection does not occur in a vacuum. Many special situations and factors impact the nature of knowledge, skills, abilities, and other characteristics (KSAO)–performance relationships and the utility of the selection instruments we use. The authors of previous chapters certainly mentioned many of these issues, but this last set of chapters brings them into focus.

Perhaps the most obvious of these constraints is the applicants themselves. Selection researchers have long recognized the important role of recruiting a qualified pool of applicants (see the chapter by Breaugh in this volume). However, the study of applicant reactions to the selection process reflects an understanding that applicants are not passive actors in the selection situation; they form attitudes and perceptions that influence their subsequent behavior. Gilliland and Steiner review an impressive body of research that is theoretically grounded and provides evidence that reactions are related to prehire attitudes, self-perceptions, and, in some (p. 949) instances, to intent to recommend that others apply to the organization, participation in subsequent steps of the hiring process, and intent to accept a position if offered. However, no research to this point has examined behavior that might result from these attitudes other than continued participation in the selection process. Do applicants who report a negative reaction to the procedures employed subsequently refuse a job offer more frequently than those who had positive reactions? Are the behaviors (e.g., organizational citizenship behavior or counterproductive work behavior, length of time spent with the organization) of the people selected any different as a function of their perceptions of the hiring process? Do those who are rejected refuse to buy the organization's products? We would also like to emphasize some of the research questions suggested at the end of their chapter. First, in a study at the organizational level, it would be interesting to document that an organization's reputation in the hiring process area is related to the number and quality of the applicants it receives for open positions.
Second, multiorganizational studies that simply document the mean and variance of practices and reactions to those practices might better inform future researchers as to what might be the impact of variables at the organization level.

Perhaps the external event that has most influenced selection practices in the United States has been civil rights legislation and the involvement of the court system in deciding what is a fair and valid practice. Similar legislation has been enacted in various parts of the world, particularly in the European community, though there are important differences. This pressure has also stimulated much of the research in our discipline. Obviously there has been substantial research on subgroup differences on a variety of the instruments used, as well as research on the adverse impact that results from the use of a test(s) to make numerous employment decisions (e.g., hiring, promotion, salary raises, termination). In addition, there have been numerous studies of the presence and meaning of bias in ratings and interviews, and of phenomena such as stereotype threat (see also the Kuncel and Klieger chapter in this volume) that influence the scores of groups of people about whom society has developed certain stereotypes (e.g., that one group cannot perform well on mechanical ability tests). Gutman provides an analysis of case and statutory law related to many human resource practices, including those related to selection issues. Although this may be the domain of public policy analysts, an interesting question relates to the impact this legal effort has had on a variety of organizational indices. Could we ascertain whether there are differences in human resource practices, representation of members of minority groups, or performance and turnover between organizations that have been prosecuted in an Equal Employment Opportunity (EEO) case and those that have not had that experience? Or would a time series analysis of archival data on various indices before and after such legal involvement be informative? Of course, there are the usual problems associated with the fact that a host of other things happened to organizations and society simultaneously.

Issues related to time and the need for longitudinal research were mentioned previously. However, we would like to highlight one item mentioned by Beier and Ackerman as needing research attention. We need to know more about age changes in ability, motivation, and performance. People are living longer as healthy individuals and many want to continue to work or feel obligated to work because of economic reasons. Time series analyses and growth curve modeling along with collection of data (or use of archival data) will need to be employed. It may also be useful to develop literature that specifies the types of volunteer work that retirees find rewarding and that contribute to their sense of self-efficacy and health.

In his chapter, Steiner asserts that “few studies on personnel selection internationally have systematically studied cultural variables associated with their effective application and few cultural variables are represented in the studies conducted.” Given the almost universal globalization of large (and many small) organizations, this represents an obvious area in which organizational psychologists should devote more effort. What are the constructs that are relevant in different cultures (see Ones & Viswesvaran, 1997)? Are they the same ones as mentioned throughout this book (see the overview of individual differences by Murphy)? If we are interested in similar constructs, then do we see evidence for measurement equivalence when Western measures are translated or adapted to another culture? Are particular methods of assessment or even a formal process of selection acceptable in various cultures? These and the four questions highlighted at the end of Steiner's chapter reflect the need for multiple programs of research directed to answering these questions.

Chapters by Boudreau and by Sturman both address the manner in which the validation data we have collected on various constructs can be (p. 950) communicated to managers in ways they will understand and be motivated to use in their decision making. Sturman's proposal is to develop a multidimensional employee worth construct and connect measures of that construct to strategic human resource objectives. Boudreau claims managers do not relate to our validation research paradigm and proposes that we begin to frame our research in terms of performance tolerances, supply-chain, and portfolio theory. In this way, it is hoped that managers will have more adequate mental models within which to assess validation claims. These are both extensions of the concern with the utility of selection measures. Both deserve research attention and Sturman and Boudreau provide numerous ideas as to how that research might progress. Communication with the various constituencies we hope to serve is essential for both practice and research communities so these proposals should be operationalized and evaluated.

Related to the issue of communicating the utility of selection interventions is the manner in which a selection intervention is implemented in an organization (see the chapter by Tippins) and its sustainability (see the chapter by Kehoe, Brown, and Hoffman).

Ryan and Powers review the impact of a variety of strategies designed to increase the diversity of a workforce and provide a set of useful questions by which to monitor an organization's recruitment and selection strategies. One issue that likely deserves more attention is the long-term impact of the use of various strategies. If an approach to workforce diversification results in the hiring of people who subsequently leave for employment elsewhere, the organization may actually find itself in a worse situation than previously. Organizational members may become disillusioned with the effectiveness of their efforts, and the climate for diversity may actually become less favorable. Like so many other issues in this volume, we should also more frequently address the impact of recruitment and selection procedures at the organizational level on the diversity of the workforce and the impact on subsequent organizational measures such as organizational citizenship behavior, counterproductive work behavior, turnover, and absenteeism, as well as indices of productivity. Sacco and Schmitt (2005) present an example linking diversity measures to profitability and turnover only; in that study, diversity was not linked to any specific human resource practice and was likely not the result of those practices. As Ryan and Powers mention, the composition of the workforce is changing rapidly, and the issues they and others addressed in this chapter may need to be readdressed given different contexts in the future.

Despite a rapid increase in interest in research on the selection of teams, Morgeson, Humphrey, and Reeder identify a large number of unresolved issues. Perhaps the one that seems least often addressed is how to select teams as teams rather than selecting individuals who will work in teams. The authors of this chapter provide a number of ways in which this can be done (e.g., team members make the selection, or the selection is made to provide expertise in a needed area), but we have few or no data on the outcomes associated with these various approaches to team selection or with a simple focus on identifying individuals with the skills associated with working in teams. Again, careful consideration of multilevel and cross-level inferences and how they are evaluated is mandatory.

The chapter by Feldman and Ng presents a completely different problem from the rest of the chapters; that is, they consider how organizations decide which people to lay off or retire when downsizing becomes necessary. In this case, too, it is rare that organizations use selection procedures or tests to help make these decisions. The criticality of the position an employee holds as well as previous performance are likely far more important criteria, though one of the authors of this chapter is aware of at least one instance in which an organization used formal assessments of employee KSAOs to determine who would be terminated. An interesting question in this area is whether the characteristics of individuals who remain with the organization (their personality, organizational involvement, and OCBs) impact subsequent morale and unit performance. It could be hypothesized that if well-liked persons are let go, there would be a greater negative impact on the unit than would be the case if less-liked or even disliked persons are laid off. The perceived justice of any layoffs will also certainly impact the fallout of these decisions, but more attention could be directed to a determination of what influences employee perceptions of justice in these situations. Longitudinal research on the impact of layoffs on organizational reputation and subsequent ability to recruit and select effectively would also be helpful in this arena.

As companies begin to climb out of the recent recession, as in other recessions, they have hired contingent and temporary workers before committing (p. 951) to “permanent” hiring decisions. Bauer, Truxillo, Mansfield, and Erdogan point to two major issues to which researchers should devote attention. The first is the degree to which a variety of personality characteristics may be related to the motivation and ability of contingent workers to adapt to their contingent status and do well in these jobs. They provide a variety of possibilities, but research on their hypotheses is largely nonexistent. They also report no literature on the validation of any selection procedures used to predict the performance of contingent and temporary workers, although one reason may be that most agencies or organizations that hire these individuals have little choice (i.e., their challenge is recruiting people to these jobs) and may view the contingent assignment as a “selection” device for those they hope to hire permanently. A third area that needs research attention is the impact of contingent hiring on the morale and behavior of the permanent workforce. Certainly the successful hiring of contingent workers must send messages to the remainder of the workforce about their own value to the organization. The question is whether such effects have any impact on turnover and performance, particularly in the areas of organizational citizenship behavior and counterproductive work behavior.

Implementation and Sustainability of Selection Systems

The chapters on implementation (Tippins) and sustainability (Kehoe, Brown, & Hoffman) are excellent and underscore the need to pay more attention to these concerns. The work of implementing and sustaining a valid selection program does not seem to have received much attention from the research community, but these two chapters demonstrate that much can go wrong after the researcher has established the validity of a set of tests and described the manner in which the tests should be used to make hiring decisions.

Tippins ably documents that the work of test development and validation is really only the beginning (and perhaps a minor part of the total effort) of a successful introduction of a selection system. She describes the multiple factors that must be considered during implementation—the organizational culture, the administrative systems, and managerial and applicant expectations, to name a few. Any of these factors, as well as the decisions that result from their consideration, can impact the utility of the selection system dramatically. For example, the use of top-down selection as opposed to some minimal cutoff score will usually have an impact on expected performance levels and often on the resultant demographic diversity of those hired. The importance of continued monitoring of these systems was emphasized for one of our authors when introducing a selection system for entry-level manufacturing personnel. Having provided scoring instructions and a suggested score cutoff in raw scores, the author was chagrined to find a year later that one manager had converted the raw scores to percentages before applying the score cutoff, as he had always done with a previous system! There is a literature on the key components of successful interventions in other areas; attempts to replicate or learn from that literature might better inform our efforts in selection (e.g., Austin & Bartunek, 2003).

Kehoe et al.'s analysis of the stakeholders in these interventions, as well as of the metrics associated with the effectiveness of selection systems, should convince readers of the complexity of these situations. Their analysis is primarily born of experience with the introduction of selection systems in large organizations rather than of an extensive research literature. The whole issue of the determinants and consequences of the sustainability and adaptive change of selection systems has been largely neglected by researchers. To pursue such research, we might want to consider the literature and the experience of our colleagues working in the organizational change and development areas.

Much of the content of these chapters is born of wisdom gleaned by the authors from decades of experience, yet several issues should be of research interest. What is the most effective means of monitoring interventions to make sure they are used as intended? What is the impact of various alternative methods of implementing selection procedures on their acceptability and on the effectiveness and sustainability of these interventions? Simply recognizing selection systems as organizational interventions may suggest a literature from which to draw ideas. What are the determinants and consequences of the sustainability and successful adaptation of interventions?

Summary

We hope that this volume provides useful information to practitioners in deciding how to proceed to develop, implement, and evaluate human resource practices that improve the quality of their workforce, the effectiveness of their organizations, and the welfare of their employees. We also hope that this chapter and the whole volume are effective in stimulating further research on questions related to the recruitment, selection, performance, and retention of employees. Toward the latter goal, we have summarized the major questions highlighted in this chapter and by the various authors in Table 41.1. Finally, it was one of our goals to convince researchers and students, both graduate and undergraduate, that many useful and interesting questions and programs of research remain unexplored and await their attention.

Table 41.1 Major Research Questions Highlighted in this Volume.

Relevant Chapter

Questions

Part II. Historical and Social Context of Selection and the Nature of Individual Differences

Vinchur and Koppes Bryan Historical Context of Selection

What trends and patterns can we glean from the history of organizational selection? How might these trends progress in the future? What lessons have we learned based on our understanding of the history of selection?

Murphy Individual Differences

How can we identify and measure more specific facets of individual differences in efforts to increase our understanding of specific predictor–criterion relationships?

How might individual differences relate to more expansive, nontraditional criteria, including individual well-being, the strength of the surrounding community, or societal resources associated with employment? Can we develop multiattribute utility models of these relationships?

Ployhart and Schneider Social Context of Selection

How can we best measure and treat traditional predictor–criterion selection data at multiple levels of analysis, including team, organizational, and cultural levels?

Are the individual difference measures utilized in Western contexts as effective when used in a more global context?

Breaugh Recruitment

Do predictor–outcome relationships differ depending on whether applicants were recruited internally or externally?

Do web-based and social networking sites provide valid, job-related information to recruiters? What are the ethical and legal issues surrounding the use of web-based data in employment decision making? In what ways might this type of information influence recruiter behavior and decision making, both explicitly and implicitly?

How do recruitment methods differ cross-culturally, and how might our understanding of these differences better inform recruiting within transnational organizations?

Part III. Research Strategies

Sackett, Putka, and McCloy Validity

This chapter should be used as a guide to the design and evaluation of the inferences we hope to derive from our assessments.

Brannick, Cadle, and Levine Job Analysis

How should we define effective communication in a web-based context, and how might the behaviors involved provide a more comprehensive model of job performance?

How can we make the process of collecting critical incidents less labor intensive?

Would it be possible to develop a public bank of critical incidents for different jobs and corresponding required KSAOs?

Hausknecht and Wright Organizational Strategy and Staffing

How might staffing strategies be constructed with greater flexibility and responsiveness to ongoing changes in environmental demands?

How do talent and resources transition over time within organizations? What methodology can we use to explore this topic in research?

Banks and McDaniel Meta-Analysis

When have meta-analysts utilized low-quality primary studies in their research and with what impact on the results? What can be done to rectify this problem?

Where have meta-analysts confounded method and construct in their research? How can we use this distinction to conduct better meta-analyses in the future?

Part IV. Individual Difference Constructs

Ones, Dilchert, and Viswesvaran Cognitive Ability

Can we develop new measures of cognitive ability and validate them with “modern” research methods with better results?

What would be the impact of using actual values of artifacts (e.g., criterion unreliability and range restriction) in estimating parameters in meta-analytic research?

Barrick and Mount Personality

How might we identify and categorize lower level trait facets into a taxonomy?

What variables mediate and moderate personality–criterion relationships? Are any of these moderator and mediator effects generalizable?

How might personality profiles and interactive effects between traits help us more thoroughly understand personality–criterion relationships?

How can personality research advance beyond the use of individual self-reports through the use of different raters and methods of assessment?

Ostroff and Zhan Fit

What type of fit is most appropriate when considering a given individual or organizational outcome?

How might we improve our theoretical understanding and measurement of situations?

Baker and Gebhardt Physical Ability

How might psychological capabilities compensate for inadequacies in physical ability?

Do physical preparation training programs impact the validity of physical assessment programs?

How do different types of specific physical abilities complement or compensate for each other when individuals engage in challenging physical labor?

Hattrup Composite Measures

How might multiattribute utility models be used to predict broader outcomes?

Part V. Measures of Predictor Constructs

Dipboye, Macan, and Shahani-Denning Interviews

Is impression management in the interview context comparable to faking on a self-report or biodata measure? How does impression management impact the validity of interview data?

How might prior exposure to applicant data gathered from web searches and social networking websites influence the interaction between an interviewer and applicant? Are the attributions derived from social networking sites valid indicators of job-related criteria? How might the attributions formed from social networking sites differentially impact men and women or members of various subgroups?

To what degree does the context of situational interview items differentially impact members of different cultures and subgroups?

Mumford, Barrett, and Hester Biodata

How do biodata relate to measures of interest, such as the Holland constructs?

Could we develop a more work-related interest measure using biodata?

What might the early literature on life experiences (e.g., Owens & Schoenfeldt, 1979) add to our current understanding of biodata, especially in terms of subgroups formed on the basis of biodata?

How can biodata be effectively used to assess the interests and skills of postretirement applicants?

Lievens and De Soete Simulations

How can the conceptual framework of the stimulus, response, and scoring features help better identify the constructs measured in simulations and those required to perform job duties?

McPhail and Jeanneret Individual Assessment

How are individual assessments performed (or are they?) in different cultures? How is leadership conceptualized across cultures?

Could a criterion-related validity study be conducted that combines data from numerous individual assessments to examine nontraditional criteria? Can those conducting individual assessments articulate a model of performance that guides their assessments?

Spector Self-Reports

How does social desirability impact self-reports? Is social desirability problematic in predicting various criteria, and if so, to what extent?

Do different forms of self-report measures and the targeted constructs lead to differential amounts of common method variance?

How can ideal point methods provide a greater understanding of construct–criterion relationships?

Can formative models of item–construct relationships be utilized to create more meaningful representations of broad constructs, and if so, how?

Kuncel and Klieger

What is the level of bias in measures other than cognitive ability and personality tests, such as letters of recommendation, personal statements, and information gleaned from social networking sites?

Instead of focusing solely on the bias in test scores, can we also examine the bias in the manner in which test scores are combined and used to make selection decisions? What are the potential “omitted variables” that influence our assessments of bias?

Scott and Lezotte Web-Based Assessments

Does a proctored verification test following an unproctored Internet test (UIT) significantly improve the criterion-related validity of the assessment, as compared to a UIT without verification testing?

What alternative forms of testing could be more effectively delivered in an online format? What new testing types can be offered in a computer-based setting that might not otherwise be possible?

Do web-based assessments affect applicant reactions, and with what impact on applicants' behavior?

Part VI. Performance and Outcomes Assessment

Woehr and Roch Supervisory Ratings; Borman and Smith Objective Indices of Performance

Can we better understand the nature of differences in ratings across sources?

When do we find substantive differences between objective and subjective measures of performance, and what are the implications for organizational effectiveness?

How do individual performance indices relate to overall organizational effectiveness?

Can we use observational and qualitative research more effectively to understand the nature of performance?

Does the performance model provided in Project A research generalize across organizations and jobs?

What are the important individual difference correlates of performance trajectories?

Hoffman and Dilchert Organizational Citizenship Behaviors (OCBs) and Counterproductive Work Behaviors (CWBs)

Can we develop sources of information on OCBs and CWBs other than the employee? Will those sources provide information similar to that provided by the target employee?

What are the correlates of the trajectory of performance on these measures? Can performance on these measures tell us anything about other employee outcomes such as safety behavior or future turnover?

Woo and Maertz Turnover

What is functional turnover in given circumstances? When is turnover truly voluntary?

When are outcomes such as length of tenure more appropriate to use than logistic analyses of dichotomous or categorical indices of turnover?

Pulakos, Mueller-Hanson, and Nelson Adaptability and Trainability

To what extent is adaptability independent of task performance?

How might well-being and satisfaction levels be incorporated into our understanding of the antecedents and consequences of adaptability?

Wallace, Paul, Landis, and Vodanovich Occupational Safety

How might we conduct research and design practices that treat safety behaviors, and not simply the occurrence of accidents, as the outcome?

Can we predict employee involvement in health practices or exercise programs? Should this be another aspect of our performance models?

Part VII. Societal and Organizational Constraints on Selection

Gilliland and Steiner Applicant Reactions

What types of behaviors result from different applicant reactions? How does the hiring process impact the workplace perceptions and attributions made by applicants who are eventually hired? How does it impact the behaviors toward the organization for individuals who are not hired?

How is an organization's reputation related to the number and quality of job applicants it receives and with what implications?

What effects do applicant reactions have at the organizational level?

Ployhart Levels of Analysis

What are the predictor–criterion relationships at different levels of analysis, as well as in cases in which cross-level relationships are hypothesized?

Gutman Legal Issues in Selection

How might the occurrence of legal issues surrounding an organization's selection practices impact various organizational indices? How might legal issues impact organizational indices over time?

Beier and Ackerman Time as a Variable

In what ways do correlates of performance change over time? At what specific point in time do shifts in predictability occur between different psychological constructs?

What is the appropriate time frame for collecting longitudinal data?

Steiner Culture in Selection

What psychological constructs are relevant in different cultures?

Are Western psychological constructs and measures relevant and equivalent when translated or modified for different cultural contexts?

Are traditional Western assessment and selection practices acceptable across different cultures?

Sturman Utility Analysis

How effective are the suggested methods of linking utility analysis to strategic human resource management?

Boudreau Evidence-Based Selection and Validation

Do the suggested frameworks for validity make validation more understandable to managers? What other frameworks can be used to couch I/O topics in more traditional management language?

Ryan and Powers Diversity in Selection

What is the long-term impact of diversification selection strategies?

What is the impact of organization-level diversity initiatives on the performance of individuals?

Morgeson, Humphrey, and Reeder Teams

How can we select teams as collectives, and not as simply individuals who will work in teams?

How do the suggested approaches to team selection compare to individual-level selection of team members?

Feldman and Ng Downsizing

How do the characteristics of the individuals selected to leave organizations impact the subsequent morale and performance of the remaining employees?

What factors impact employee perceptions of justice when individuals are “selected out” of an organization?

What are the effects of layoffs on an organization's reputation over time? How do these effects impact its subsequent ability to recruit and select effectively?

Bauer, Truxillo, Mansfield, and Erdogan Temporary and Contingent Employees

How do personality characteristics relate to the motivation and ability of contingent workers to adapt to their contingent status and perform well in their jobs? What is the validity of the selection procedures used to predict contingent worker performance?

How does contingent hiring impact the morale and behavior of the current workforce? Do contingent hires impact the turnover and performance of permanent workers, and if so, how?

Part VIII. Implementation and Sustainability of Selection Systems

Tippins Implementation Issues

How might selection interventions be better monitored for continued utility? What is the most effective means of monitoring interventions to make sure they are used as intended?

What impact do various alternative methods of implementing selection procedures have on their acceptability and the effectiveness and sustainability of these interventions?

Kehoe, Brown, and Hoffman Longevity of Selection Systems

What are the determinants and consequences of the sustainability and adaptability of selection systems?

What are the determinants and consequences of the sustainability and successful adaptations of interventions?

References

Ackerman, P. L. (1989). Within-task intercorrelations of skilled performance: Implications for predicting individual differences. Journal of Applied Psychology, 74, 360–364.

Aguinis, H., Culpepper, S. A., & Pierce, C. A. (2010). Revival of test bias research in preemployment testing. Journal of Applied Psychology, 95, 648–680.

Austin, J. R., & Bartunek, J. M. (2003). Theories and practices of organizational development. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology: Industrial and organizational psychology (Vol. 12, pp. 309–312). Hoboken, NJ: John Wiley.

Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance. Personnel Psychology, 44, 1–26.

Barrick, M. R., Parks, L., & Mount, M. K. (2005). Self-monitoring as a moderator of the relationships between personality traits and performance. Personnel Psychology, 58, 745–767.

Barrick, M. R., Stewart, G. L., & Piotrowski, M. (2002). Personality and job performance: Test of the mediating effects of motivation among sales representatives. Journal of Applied Psychology, 87, 43–51.

Boje, D. M. (2008). Storytelling. In S. R. Clegg & J. R. Bailey (Eds.), International encyclopedia of organization studies (Vol. 4, pp. 1454–1458). London: Sage.

Campbell, C. H., Ford, P., Rumsey, M. G., Pulakos, E. D., Borman, W. C., Felker, D. B., de Vera, M. V., & Riegelhaupt, B. J. (1990). Development of multiple job performance measures in a representative sample of jobs. Personnel Psychology, 43, 277–300.

Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 35–70). San Francisco, CA: Jossey-Bass.

Cascio, W. F., & Aguinis, H. (2007). Staffing twenty-first century organizations. The Academy of Management Annals, 2, 133–165.

De Soete, B., Lievens, F., & Westerveld, L. (2011). Higher level response fidelity effects on SJT performance and validity. Paper presented at the 26th Annual Conference of the Society for Industrial and Organizational Psychology, Chicago, IL.

Edwards, J. R. (2011). The fallacy of formative measurement. Organizational Research Methods, 14, 370–388.

Fleishman, E. A., & Hempel, W. E., Jr. (1955). The relationship between abilities and improvement with practice in a visual discrimination task. Journal of Applied Psychology, 49, 301–312.

Ford, J. K., Kraiger, K., & Schechtman, S. L. (1986). A study of race effects in objective indices and subjective evaluations of performance: A meta-analysis of performance criteria. Psychological Bulletin, 99, 330–337.

Goffin, R. D., & Christiansen, N. D. (2003). Correcting personality tests for faking: A review of popular personality tests and initial survey of researchers. International Journal of Selection and Assessment, 11, 340–344.

Helmreich, R. L., Sawin, L. L., & Carsrud, A. L. (1986). The honeymoon effect in job performance: Temporal increases in the predictive power of achievement motivation. Journal of Applied Psychology, 71, 185–188.

Henry, R. A., & Hulin, C. L. (1987). Stability of skilled performance across time: Some generalizations and limitations on utilities. Journal of Applied Psychology, 72, 457–462.

Holland, J. L. (1997). Making vocational choices: A theory of personalities and work environments (3rd ed.). Odessa, FL: PAR.

Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology, 74, 657–690.

Karl, K. A., & Peluchette, J. V. (2007). Facebook follies: Who suffers the most? Paper presented at the Annual Conference of the Midwest Academy of Management, Kansas City, MO.

Kirchner, W. K. (1961). 'Real-life' faking on the Strong Vocational Interest Blank by sales applicants. Journal of Applied Psychology, 45, 273–276.

Kirchner, W. K. (1962). 'Real-life' faking on the Edwards Personal Preference Schedule by sales applicants. Journal of Applied Psychology, 46, 128–130.

Kristof, A. L. (1996). Person-organization fit: An integrative review of its conceptualizations, measurement, and implications. Personnel Psychology, 49, 1–49.

Kristof-Brown, A., Zimmerman, R. D., & Johnson, E. C. (2005). Consequences of individuals' fit at work: A meta-analysis of person-job, person-organization, person-group, and person-supervisor fit. Personnel Psychology, 58, 281–342.

Le, H., & Schmidt, F. L. (2006). Correcting for indirect restriction of range in meta-analysis: Testing a new meta-analytic procedure. Psychological Methods, 11, 416–438.

Lee, T. W., & Mitchell, T. R. (1994). An alternative approach: The unfolding model of voluntary employee turnover. Academy of Management Review, 19, 51–89.

Mael, F. A. (1991). A conceptual rationale for the domain and attributes of biodata items. Personnel Psychology, 44, 763–792.

Mendenhall, M. E., Dunbar, E., & Oddou, G. R. (2006). Expatriate selection, training, and career-pathing: Review and critique. Human Resource Management, 26, 331–345.

Murphy, K. R. (1989). Is the relationship between cognitive ability and job performance stable over time? Human Performance, 2, 183–200.

Nye, C. D., Do, B. R., Drasgow, F., & Fine, S. (2008). Two-step testing in employment selection: Is score inflation a problem? International Journal of Selection and Assessment, 16, 112–120.

Olson-Buchanan, J., & Drasgow, F. (2006). Multimedia situational judgment tests: The medium creates the message. In J. A. Weekley & R. E. Ployhart (Eds.), Situational judgment tests: Theory, measurement, and application (pp. 253–278). Mahwah, NJ: Lawrence Erlbaum Associates.

Olson-Buchanan, J., Drasgow, F., Moberg, P. J., Mead, A. D., Keenan, P., & Donovan, M. A. (1998). Interactive video assessment of conflict resolution skills. Personnel Psychology, 51, 1–24.

Ones, D. S. (1998). The effects of social desirability and faking on personality and integrity assessment for personnel selection. Human Performance, 11, 245–269.

Ones, D. S., & Viswesvaran, C. (1997). Personality determinants in the prediction of aspects of expatriate job success. In Z. Aycan (Ed.), New approaches to employee management, Vol. 4: Expatriate management: Theory and research (pp. 63–92). Greenwich, CT: Elsevier Science/JAI Press.

Owens, W. A., & Schoenfeldt, L. F. (1979). Toward a classification of persons. Journal of Applied Psychology, 64, 569–607.

Pulakos, E. D., Arad, S., Donovan, M. A., & Plamondon, K. E. (2000). Adaptability in the workplace: Development of a taxonomy of adaptive performance. Journal of Applied Psychology, 85, 612–624.

Pulakos, E. D., Schmitt, N., Dorsey, D. W., Arad, S., Hedge, J. W., & Borman, W. C. (2002). Predicting adaptive performance: Further tests of a model of adaptability. Human Performance, 15, 299–323.

Roth, P. L., & Bobko, P. (1997). A research agenda for multi-attribute utility analysis in human resource management. Human Resource Management Review, 7, 341–368.

Ryan, A. M., & Sackett, P. R. (1987). A survey of individual assessment practices by I/O psychologists. Personnel Psychology, 40, 455–488.

Ryan, A. M., & Sackett, P. R. (1998). The scope of individual assessment practice. In R. P. Jeanneret & R. Silzer (Eds.), Individual assessment: Predicting behavior in organizational settings (pp. 54–87). San Francisco, CA: Jossey-Bass.

Sacco, J. M., & Schmitt, N. (2005). A multilevel longitudinal investigation of demographic misfit and diversity effects on turnover and profitability in quick-service restaurants. Journal of Applied Psychology, 90, 203–231.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.

Schmitt, N. (2008). The value of personnel selection: Reflections on some remarkable claims. Academy of Management Perspectives, 21, 19–23.

Schmitt, N., Clause, C., & Pulakos, E. D. (1996). Subgroup differences in ability as assessed by different methods. In C. L. Cooper & I. Robertson (Eds.), International review of industrial and organizational psychology (pp. 115–140). New York: John Wiley.

Schmitt, N., Oswald, F. L., Kim, B. H., Imus, A., Drzakowski, S., Friede, A., & Shivpuri, S. (2007). The use of background and ability profiles to predict college student outcomes. Journal of Applied Psychology, 92, 165–179.

Schmitt, N., & Sinha, R. (2010). Validation strategies for personnel selection systems. In S. Zedeck (Ed.), APA handbook of industrial and organizational psychology (pp. 399–420). Washington, DC: American Psychological Association.

Spector, P. E. (2006). Method variance in organizational research: Truth or urban legend? Organizational Research Methods, 9, 221–232.

Stark, S., Chernyshenko, O. S., Drasgow, F., & Williams, B. A. (2006). Examining assumptions about item responding in personality assessment: Should ideal point methods be considered for scale development and scoring? Journal of Applied Psychology, 91, 25–39.

Van Iddekinge, C. H., McFarland, L. A., & Raymark, P. H. (2007). Antecedents of impression management use and effectiveness in a structured interview. Journal of Management, 33, 752–773.

Zavala, A. (1965). Development of the forced-choice rating scale technique. Psychological Bulletin, 63, 117–124.

Zyphur, M. J., Bradley, J. C., Landis, R. S., & Thoresen, C. J. (2008). The effects of cognitive ability and conscientiousness on performance over time: A censored latent growth model. Human Performance, 21, 1–27.