Selecting Graduate Students: Doctoral Program and Internship Admissions
Abstract and Keywords
Selecting students for psychology doctoral programs and doctoral internships is a challenging process because the costs for doctoral students, academic and internship programs, the profession, and the public can be high. This chapter reviews the literature examining predictors of doctoral student selection by academic and doctoral internship programs. Although there is limited research specifically examining counseling/clinical academic program selection factors, there is some support indicating that GRE scores are predictive of academic performance but not of clinical performance. Structured interview procedures are better than less structured interviews at differentiating between doctoral applicants. Other methods of assessment, such as letters of recommendation, have little value in the prediction of doctoral student performance. New methods for selection of doctoral students are also discussed.
Selecting students for doctoral academic programs and doctoral internships can be an arduous and challenging process, with many factors influencing the ultimate decision. Although programs differ in their training philosophies, clinical foci, and resources, many commonalities still remain in the admission process. The focus of this chapter is to examine the literature on student selection for doctoral programs in clinical/counseling psychology, as well as for doctoral internships. Our ultimate goal is to provide implications and recommendations for the selection of students and trainees.
Student Selection in Doctoral Programs
From the vantage point of faculty in doctoral programs, the doctoral-student-selection process generally entails two phases: (1) review of applicants’ admission materials (e.g., Graduate Record Examination (GRE) scores, grade point average (GPA)/transcripts, letter of intent, personal essay, vita, and letters of recommendation) and (2) interviews of applicants who have passed the initial screening phase. Although all this information, to varying degrees, weighs into the final decision (King et al., 1986), it is unclear how useful any of these criteria are in the selection of doctoral students. The answer to this question likely rests with the ultimate training goals for any given program. Doctoral programs vary in their training philosophy (e.g., clinical-scientist vs. scientist-practitioner vs. practitioner-scholar), and within these training philosophies the relative emphasis on research and practice varies considerably. Nonetheless, there are two overarching themes in doctoral training: (1) the functional aspects of being a psychologist, essentially the clinical work, accented by the requirement of the doctoral internship, and (2) engagement with, and appreciation for, the empirical basis of foundational psychological and clinical research. These aspects of doctoral training have been described in several ways over the years, and most recently they have been categorized within functional and foundational competencies (see Fouad et al., 2009; Kaslow et al., 2004; and Rodolfa et al., 2005 for comprehensive reviews of competencies within counseling/clinical psychology).
Prior to discussing the common selection factors typically examined in research, we believe that the issue of ‘fit’ between the applicant and program, the financial needs of universities, and universities’ accreditation requirements (e.g., the need to graduate X number of doctoral students per year) merit attention. The issue of fit is difficult to fully operationalize, but at its core is the degree to which the applicant’s goals for training/professional pursuits are compatible with the program’s training goals (and vice versa). For instance, an applicant who would like to pursue a career as a faculty member at a research-intensive university would likely fit better with programs that place a strong emphasis on the production of research. Alternatively, a doctoral program might select an applicant whose research interests are a closer match with faculty over an applicant who scored higher on the GRE or had a higher GPA. It is likely that fit shapes both applicants’ and programs’ selection processes (Norcross, Evans, & Ellis, 2010).
There are also financial and university requirement issues when it comes to selecting doctoral students. For instance, for-profit universities depend heavily on student enrollment for maintaining budgetary operations. Thus, the degree to which programs might be more or less liberal with acceptance criteria can vary as a function of need. For instance, PsyD programs typically accept more students with lower GRE scores as compared to PhD programs (Norcross, Ellis, & Sayette, 2010). Alternatively, programs can face pressure to graduate a number of doctoral students to assist with requirements for maintaining Carnegie status (e.g., research intensive). These practical issues in doctoral-student selection are seldom discussed in public forums, but merit more conversation. As universities make decisions to act more like a business than a pillar of academic excellence, the fields of counseling and clinical psychology may ultimately feel the impact.
Beyond relative fit and the pragmatic issues discussed earlier, the larger issue in doctoral-student selection rests with this relatively simple question: How will faculty in clinical and counseling psychology doctoral programs know whether their selection criteria are indeed useful? Or do faculty, aided by various criteria, rely on their intuition when making selection decisions (Highhouse, 2008)? The selection process for entry into doctoral programs, described earlier with some variants, is an established practice with decades of precedent. Thus, we will examine some of the pros and cons of these selection criteria.
Graduate Record Examination (GRE)
The GRE is one of the most commonly utilized selection criteria, and for many programs it is weighted heavily in the decision-making process (Chernyshenko & Ones, 1999; Norcross, Kohout, & Wicherski, 2005). For instance, many programs use the GRE as a screening tool and have cutoff scores that students must exceed before faculty consider other forms of information, such as interviews or personal statements (see Rem, Oren, & Childrey, 1987).
The utility of the GRE in predicting graduate grades and comprehensive exams has been called into question. Chernyshenko and Ones (1999) statistically corrected for the restriction of range in GRE scores and found that GRE scores accounted for approximately 13% to 49% of the variance in graduate comprehensive exams and graduate GPA. Kuncel et al. (2010) conducted a meta-analysis of approximately 100 studies and found that GRE scores (both verbal and quantitative) accounted for approximately 7% to 8% of the variance in the GPAs of doctoral students. Further, GRE scores accounted for approximately 9% to 10% of the variance in faculty ratings of doctoral student performance. These findings are consistent with prior meta-analyses (see Goldberg & Alliger, 1992; Morrison & Morrison, 1995). The degree to which these findings are promising or problematic likely rests within programs and how they utilize the GRE in their selection process.
Although there have been statistical attempts to correct for the restriction of range in GRE scores, these issues cannot be fully reconciled (e.g., graduate GPAs are also restricted). First, we only know about the predictive validity of the GRE for students who were admitted into graduate programs; thus, the logic of utilizing such empirical support for the GRE is flawed. Simply, we do not know whether GRE scores for those who were not admitted would predict their graduate GPAs. Second, academic or clinically based outcomes in most student-selection studies lack any meaningful indicators of validity (and at times reliability). For instance, we do not know of any studies that have examined the association between student selection criteria and actual clinical outcomes (e.g., pre-/post-changes in clients’ psychological distress). Rather, most academic and clinical outcomes are based on professors’ ratings of students’ performance or GPAs. GPAs are generally restricted in range, as many doctoral programs use a grading system ranging from A to C, with a C indicating a failing grade. Professors’ ratings of students’ abilities have intuitive appeal, but they are generally not supported by any measure of reliability or of validity against other outcomes.
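The range-restriction problem can be made concrete. A common statistical adjustment is Thorndike's Case II correction, which estimates what an observed validity coefficient would be had the full applicant pool, rather than only admitted students, been observed. The sketch below uses illustrative numbers; the correlation and the ratio of applicant-pool to admitted-student standard deviations are assumptions for the example, not figures from any cited study.

```python
def correct_range_restriction(r_observed, sd_ratio):
    """Thorndike's Case II correction for direct range restriction.

    r_observed: validity coefficient in the restricted (admitted) sample.
    sd_ratio:   SD of the predictor in the full applicant pool divided by
                its SD among admitted students (>= 1 under selection).
    """
    numerator = r_observed * sd_ratio
    denominator = (1 - r_observed**2 + (r_observed * sd_ratio) ** 2) ** 0.5
    return numerator / denominator

# Illustrative: a GRE-GPA correlation of .30 among admitted students,
# where selection cut the GRE spread by a third (sd_ratio = 1.5).
r_corrected = correct_range_restriction(0.30, 1.5)
print(round(r_corrected, 2))     # corrected correlation (~.43)
print(round(r_corrected**2, 2))  # variance explained (~.18)
```

Even with the correction, the caveat above stands: the adjustment assumes the same predictor-criterion relationship holds among rejected applicants, which is precisely what cannot be observed.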
For clinical and counseling programs, the question of whether the GRE will predict clinical abilities is paramount. The empirical literature on the ability of student selection criteria to predict clinical abilities and/or research acumen is significantly limited, with mixed results (e.g., Allred & Briggs, 1988; Market & Monke, 1990). For example, King et al. (1986) found that doctoral students’ GRE verbal scores were negatively related to their GPAs for clinically based courses. They also found no significant association between GRE quantitative scores and GPAs for quantitative courses. Piercy et al. (1995) found no significant associations between GRE scores and academic, clinical, and research ratings from professors. Consistently, previous research has found little association between GRE scores and clinical abilities (Hines, 1986). Consequently, GRE scores may be useful, in part, for understanding how graduate students will perform overall in graduate education, but they appear to provide limited information regarding how students will learn and, in turn, practice the profession of psychology.
Letters of Recommendation
Letters of recommendation are another source of information commonly utilized in doctoral-student selection, and they have been rated as important by selection committees and training directors (e.g., Lopez, Oehlert, & Moberly, 1996; Rodolfa et al., 1999). Letters of recommendation are intended to give a sense of an applicant’s experience, character, and conscientiousness. However, studies have found that they may not provide the intended information. Written by supervisors, faculty, advisers, and others who can attest to the applicant’s character and performance, these letters often contain an abundance of praise with limited indication of any weaknesses or shortcomings (Stedman, Hatch, & Schoenfeld, 2009; Grote, Robiner, & Haut, 2001). There is a recognition that letters of recommendation should be balanced in their comments about the student; however, in the current high-stakes, extremely competitive system, could one bad letter (or one that mentions applicant limitations) have deleterious effects for an applicant? Simply, as Stedman et al. (2009) pointed out, letters of recommendation do not successfully differentiate applicants. However, they may play a different role in the selection process. For instance, Puplampu, Lewis, and Hogan (2003) found that letters of recommendation are used as a way to verify other applicant information.
In reality, admission committees typically encounter many dilemmas when examining letters of recommendation. For instance, how can committee members understand the value of any given letter? At this point, it is common knowledge that nearly all applicants are in the “top 10%” (Miller & Van Rybroek, 1988). Additionally, many letters of recommendation include statements such as “in the past [X] years of being a professor, [name of student] is one of the very best I have mentored.” Statements like these are so commonplace that it is unclear what meaning they hold for selection committees. More confusion arises from other aspects of letters of recommendation. For instance, there are often variations in the length of letters and the depth of the description of the applicant. Typically, letters from academic faculty are significantly longer than letters from practitioners, and letters from psychologists are longer than letters from professionals in other specialties. Are differences in letter length and depth reflective of the applicant, the writer, or both? Is the omission of certain information truly telling?
All applicants have weaknesses and shortcomings. However, the competitive and high-stakes nature of the process seems to have suppressed the willingness of letter writers to offer any constructive criticism. As a result, many letters sing the sterilized praises of each applicant, forcing interviewers to make assumptions about nuances in word choice. Ironically, simply knowing the limitations of letters of recommendation may be the most beneficial thing for interviewers as they make the important decision of who will be a good fit for their program.
Although requests to change how letters of recommendation are written and pleas to limit their positive bias are present in the literature (Stedman et al., 2009), no change appears on the horizon. Given the limited utility of letters of recommendation, programs may consider eliminating this requirement, which would certainly free up the countless hours that taxed, overworked faculty and other professionals spend writing these letters. However, the likelihood of eliminating letters of recommendation is minimal; as a result, programs may wish to use letters with caution or, even better, might consider requiring a structure for letters of recommendation that includes both strengths and growth areas of the applicant. This would provide programs with important information that they can explore during an applicant’s interview.
Interviews
Interviews are a frequently utilized method of selecting doctoral students. Although interviews of potential doctoral students vary in structure and content, the utility of these methods has been called into question. Highly structured interviews have more support for their utility than unstructured interviews (Conway, Jako, & Goodman, 1995). For example, structured interviews across various disciplines accounted for an additional 6% of the variance in training success beyond cognitive abilities (Ziegler, MacCann, & Roberts, 2011). However, Berry et al. (2007) found that the association between interview ratings and cognitive abilities was higher when the interviewer was aware of the interviewee’s cognitive abilities, a situation that is common in the selection of doctoral students (e.g., awareness of GRE scores). Highly structured interviews, in contrast, resulted in a lower association between interview ratings and cognitive abilities. Thus, although doctoral interviews might be confounded with applicants’ cognitive abilities, a highly structured interview format may help buffer these effects.
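One practical advantage of a highly structured format is that it yields ratings that can be checked quantitatively. The sketch below is purely illustrative: the rubric dimensions, rating scale, and scores are hypothetical, not drawn from any cited protocol. It averages each applicant's ratings across fixed dimensions and uses a Pearson correlation between two raters' totals as a rough inter-rater agreement check.

```python
from statistics import mean

# Hypothetical structured-interview rubric: every applicant is scored
# 1-5 on the same fixed dimensions by every interviewer.
DIMENSIONS = ("research fit", "communication", "openness to feedback")

def applicant_total(ratings_by_dimension):
    """Mean rating across the rubric's dimensions for one rater."""
    return mean(ratings_by_dimension)

def pearson(x, y):
    """Pearson correlation, used here as a rough inter-rater check."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two raters score the same three applicants on the three dimensions.
rater_a = [[4, 5, 4], [3, 3, 2], [5, 4, 5]]
rater_b = [[4, 4, 3], [3, 4, 3], [5, 3, 4]]

totals_a = [applicant_total(r) for r in rater_a]
totals_b = [applicant_total(r) for r in rater_b]

# High agreement suggests the rubric is applied consistently;
# low agreement flags dimensions that need clearer anchors.
agreement = pearson(totals_a, totals_b)
print(round(agreement, 2))
```

A fixed rubric like this does not remove interviewer bias, but it makes disagreement between raters visible, which an unstructured "overall impression" never does.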
The degree to which findings in other disciplines will translate to clinical and counseling psychology is unknown. For example, King et al. (1986) found a positive relationship between more favorable interview ratings and the number of incomplete courses. Moreover, interview ratings of applicants were not significantly related to professors’ ratings of their academic abilities during the program or to their GPA in courses. Nonetheless, one lesson can be gleaned from the work examining interviews: highly structured interviews are advantageous.
Faculty and applicants are both looking for a match between personal goals and interests, a professional fit. Beyond assessing the fit of the professional goals and interests of applicants, faculty/interviewers commonly describe a “feeling” they get for an applicant, swaying them toward a desire to make an offer to a specific applicant. During interviews, the degree of fit can be influenced by personality characteristics, professional demeanor, and interview disposition. These personality expressions are significant because they suggest how applicants might carry themselves in professional settings and within clinical roles. For example, sensing that a person is too abrasive, arrogant, or lacking in self-awareness might suggest to an interviewer that these characteristics would also manifest within the therapy room and within professional relationships.
However, faculty also must be aware of their own biases during interviews and within their ratings of applicants (see Huffcutt, Van Iddekinge, & Roth, 2011). For instance, individuals have been known to make relatively quick judgments based on limited information and subsequently search for information that confirms these initial judgments (Dawes, 1994; Gambrill, 2005; Owen, 2008). The common tendency to rely on easily accessible information, the availability heuristic (Faust, 1986; Gambrill, 2005), may be particularly salient in the interview process because there is limited time to gather information. Indeed, psychologists are not immune to forming quick impressions based on limited information, sometimes within three minutes (cf. Sandifer, Hordern, & Green, 1970; also see Ambady, Bernieri, & Richeson, 2000, and Ambady & Rosenthal, 1992 for reviews). Making quick judgments based on unbalanced information may not be problematic when those judgments are accurate; however, this is not typically the case. These initial impressions can, and likely do, guide future interactions and evaluations, even in the face of disconfirmatory information (Chapman & Johnson, 2002; Owen, 2008). For example, Parmley (2007) found that therapists did not adjust their judgments of clients even when they were given direct evidence to disconfirm their beliefs. It is important to recognize that these initial impressions, formed primarily on limited, cursory information, may shape how students’ narratives about their relational or cultural history are received. Although these biases will likely not be fully overridden, it is important to have structures in place to mitigate their influence, especially as interviews are generally weighted more heavily than the other information in the selection process (King et al., 1986). Consequently, when it comes to interviews, it is crucial to recognize biases, actively work to gather balanced information, and be humble enough to accept other sources of information.
Notwithstanding the biases that can occur in the interview process, in-person interviews may be well suited to identify students’ relative ability to form and engage in relationships that are genuine, collaborative, and empathic. When students and faculty engage with an applicant, many develop a kind of feeling about the candidate, encoding and interpreting the way the person carries him- or herself and interacts with others. Although it is often difficult to articulate this feeling, it is often one of the most critical factors in forming an opinion about accepting or rejecting the student. This feeling or gut reaction needs to be better elucidated and articulated. It is more than a feeling: it is a synthesis of perceived behaviors, nonverbal cues, interpersonal dynamics, and personality expressions. It makes sense that many professions wish to discard or diminish this feeling because it does not seem grounded in fair and concrete standards and criteria. We propose that these reactions need to be better understood, defined, and linked to key professional roles and outcomes. However, we caution against utilizing a gut reaction without checking biases and identifying (operationalizing) the sources of these feelings, because inherent biases might plague the process (e.g., stereotyping, discrimination, etc.). The biases just described may be experienced or described under the guise of gut reactions or intuition but may be nothing more than a series of confirmatory biases that have been reinforced over the years (also see Ambady et al., 2000, and Gladwell, 2005 on the value of intuition). Therapists necessarily bring who they are into the therapy office and into the therapy relationship. Paying attention to, and better understanding, the applicant as a person is wise.
Interactions between applicants and current students in the program, faculty, and staff serve as rich illustrations of who the candidate might become as a student, colleague, researcher, and therapist.
The following case example illustrates how multiple small expressions of personality can form a general feeling about a candidate—rooted in specific behaviors—as well as what that feeling might suggest about professional dynamics.
Following the review of applications, the faculty of a counseling psychology doctoral program invites a group of students to an in-person interview. All students are invited to attend a student-hosted welcome dinner the night before the interview. Applicants Jessica and Rebecca both agree to attend, arrive on time, appropriately dressed, and engage with current graduate students while eating dinner. However, at the end of the night, students report notable differences in their experiences and perceptions of each applicant. Students recalled Jessica excitedly asking questions about the program, admitting to being nervous about the process, and inquiring about the surrounding area and city. With Jessica, students said they really got a feel for who she was and could easily perceive her excitement about the prospect of entering the program. Her questions about the city and her acknowledgment of being nervous felt real, and students were able to relate to her. At dinner, she made efforts to find commonality between herself and those around her, and the conversation flowed easily. When asked about her experiences in school, she, of course, talked about her strengths and positive experiences, but also acknowledged the truth of how hard it can be to master scientific writing and to keep up with classes.
Descriptions of interactions with Rebecca were a bit different; students described difficulty in keeping up a conversation with her, and found it hard to really get a sense of who she was. Rebecca chatted some about her hometown and her recent vacation, while apparently not making much of an effort to really connect with others. Students were struck by her lack of questions about the program and her seeming disengagement, and were even more surprised when she reported not being nervous at all about the process. The seemingly stoic front that Rebecca exhibited made it hard for others to really gauge her interest, personality, and desires or fears. They felt shut out, and perceived no effort on Rebecca’s part to bridge that gap.
Interestingly, the faculty had similar reactions to Jessica and Rebecca through the structured interview process. However, they did not fully recognize some of the interpersonal issues that the students identified. Thus, the student feedback was valuable in making selection decisions.
These case examples have real and meaningful connections to professional dynamics. The behaviors of each applicant can be thought to represent their own relational style and say a bit about their personality. Jessica sought to form connections, found commonalities between herself and others, and allowed herself to be real about her own struggles and fears. It is easy to imagine how this might be exhibited in the therapy office, with Jessica potentially having an easier time forming an alliance and a real relationship with a client. Her readily expressed excitement for the program also seemed to suggest that she possessed the motivation and drive to succeed in a demanding program. She was able to effectively use herself to form connections with others in a real way, beyond a cookie-cutter interviewee performance, which made others trust they were seeing who she really was.
On the other hand, Rebecca seemingly struggled to form connections, for whatever reason. Perhaps she was too anxious about the process, perhaps too preoccupied with finding the right questions and answers, or unwilling or unable to let down her guard to be able to allow others to get to know her. These potential obstacles could easily be expressed in therapy or within other professional relationships. To learn and grow in a training program, one must be able to admit fears and weaknesses to be able to work on them. In therapy, one must be able to be real with a client, use appropriate self-disclosure, and express real reactions or empathic feelings when fitting. A therapist must be able to recognize when mistakes have been made and should be willing to address them to repair any potential rupture that has occurred. Beyond working from a manualized intervention, therapists must be able to form a real and productive relationship with clients, as well as with colleagues and supervisors.
Student Selection in Doctoral Internships
The ratio between the number of internship positions available and the number of students applying is admittedly discrepant and has been a source of concern for decades (Baker, McCutcheon, & Keilin, 2007; Hatcher, 2011). One outcome of this imbalance is an ever-increasing level of competitiveness in attaining an internship placement. This can be seen as a positive, as it motivates students to strive to be better, more competent, and more prepared, and to seek out ways to bolster their abilities. Unfortunately, this inequity also increases students’ anxiety and hypervigilance about their prospects of finding an internship.
Students currently applying for internship present increasingly high numbers of clinical hours, publications, and research experiences (Rodolfa, Owen, & Clark, 2007). However, this competitiveness also muddies the waters of what these increases really mean and obscures the difference between genuine enhancement of professional competencies and mere inflation of numbers. At the end of the day, training directors’ selection processes seek to identify students who will effectively utilize training experiences to become competent clinicians and/or academics in the field. Yet there is currently no reliable and consistent way to accomplish this, the closest approximation being the APPIC Application for Psychology Internship essays, letters of recommendation, grades, scores, hours, and the in-person interview.
Over the past 20 years, empirical studies have begun to disentangle factors involved in the application process, seeking to identify salient and superfluous factors. These studies have found mixed results. For example, Rodolfa and colleagues (1999) surveyed 249 accredited internship training directors, who indicated which criteria are most and least important in selecting students to fill internship slots. Overall, there was strong agreement on important inclusion criteria such as career goal fit, clinical experience, the interview, letters of recommendation, and personal insight. In addition, these authors also found some agreement on exclusion criteria, such as the lack of APA accreditation of the applicant’s program of study, incomplete coursework and/or comprehensive exams, and low numbers of supervised practicum hours. Although the consistency between sites is an interesting and important finding, questions arise about the implications of these criteria. For example, even though certain criteria were endorsed by most training directors as important, numerous other factors were also identified as vital. Using a 7-point Likert scale, training directors rated 18 inclusion criteria with a mean score above 5.
Building on these results, a similar study was conducted about 10 years later, again asking training directors for their perspective on what is and what is not important in identifying qualified applicants (Ginkel, Davis, & Michael, 2010). These authors found the same prioritization of fit between student and site, as well as of the number of supervised hours and experience. However, they identified a greater emphasis placed on personality characteristics. In comparison, Rodolfa et al. (1999) found the top three criteria reported by training directors to be applicant fit, clinical experience, and completion of coursework. In the more recent study by Ginkel et al. (2010), the top three criteria were reported to be fit, the interview, and the professional demeanor of the applicant. The contrast between these studies suggests a shift in the thinking of training directors in how they evaluate candidates, placing greater focus on personal attributes that may say more about how an individual will function as a student and professional. This finding is interesting, given the dearth of evidence supporting the reliability and validity of personal attribute assessments. Clearly, more research is needed to provide training directors and selection committees an empirically supported way forward in the selection process. Beyond agreeing on the relevant criteria, it is crucial to have a clear understanding of what these criteria actually reflect, in a reliable and valid way.
What does it mean to internship-selection committees that an applicant has 1,500 clinical hours versus 1,000? Does it matter during the selection process, or, more importantly, does it reflect the student’s competence and desire to learn? Hours give little insight into the types of experiences they represent or the degree to which a student mastered skills or internalized feedback. In addition, the interview has been repeatedly cited by studies as extremely important in determining fit between applicants and programs (e.g., Stedman, 2007). Is a well-executed interview merely a reflection of a student’s ability to engage in ways they have been told are appropriate, or is it an accurate reflection of the applicant’s interpersonal style? Furthermore, as the competition for internship placement continues to increase and the process becomes more standardized, it becomes even more difficult to determine whether the “fit” between student and site is a product of actual fit or of a student’s attempt to increase the chances of attaining a placement by putting and keeping a best foot forward.
But how are these attributes assessed? Are the same criteria used for each applicant? Do different raters and different programs/sites assess these things differently? When it comes down to deciding between three or four great applicants, how does “fit” play a role? At the end of the day, internship directors seek to admit a developmentally competent student who will contribute to the goals of their respective programs and institutions. Through the use of interviews, grades, letters of recommendation, and other criteria, directors attempt to answer this question: Will this student be a good fit for our site? The gap between these criteria and the answer to this question is difficult to bridge.
Student Selection: A New View
Although there is never going to be a perfect system for selecting doctoral students or doctoral interns, we propose two factors that might serve as a useful heuristic to guide doctoral student selection that are rooted in psychotherapy research and student development: Facilitative Interpersonal Skills and Cognitive Complexity.
Facilitative Interpersonal Skills
Facilitative interpersonal skills (FIS; Anderson, Ogles, Patterson, Lambert, & Vermeersch, 2009) refers to an individual’s ability to effectively and accurately communicate and interpret messages as well as the ability to persuade others in helpful ways. This domain proves to be particularly important within the field of psychology due to students’ work with clients, supervisors, and colleagues. Training directors are in search of students who exhibit appropriate and effective interpersonal skills, yet assessment of this domain is difficult. The current model proposes two distinct factors of FIS that can be useful to assess during the selection process.
The first aspect of FIS is the quality of the relationships one is able to build, including relationships with supervisors and peers and the therapeutic relationship a student must build with clients. Within the therapeutic relationship, it is important that students are able to be genuine, form a strong alliance, and remain self-aware. These ideas are supported as critical relational facets in the psychotherapy literature, with studies highlighting the importance of empirically supported relationships as they relate to therapy outcomes (see Norcross & Wampold, 2011).
The empirical literature has identified several aspects of the therapeutic relationship that are thought to be important and influential for therapy outcomes. In many ways, the relationship between client and therapist is the vehicle through which therapists express empathy, unconditional support, and acceptance. Grounded in this relationship, therapist and client (ideally) work together to collaborate on the aims and course of therapy. The alliance is thought to be one of the most crucial components of the therapeutic relationship, and is conceptualized as the agreement between client and therapist on the tasks and goals of therapy, couched within a strong relational bond. Empirical studies continue to identify strong associations between alliance and outcome, with recent studies finding the alliance to account for approximately 8% of the variance in outcome (Horvath, Del Re, Flückiger, & Symonds, 2011). Several other relational factors have been identified as important contributors to positive therapeutic outcomes. Accurate expressions of empathy can be a crucial factor in therapy and have been found to account for approximately 9% of the variance in outcome (Elliott, Bohart, Watson, & Greenberg, 2011). However, not all empathy is created equal, nor are its effects. Empathy must be accurate: therapist empathy must be congruent with the client’s perception of the issue (Ickes, 2003). Clients recognize this in the accuracy of the therapist’s grasp of both the big picture and the nuance of their struggle, as well as in their judgment of the congruence and authenticity of the therapist’s expression. All these components (p. 244) contribute to the “real relationship,” which is thought to be a genuine connection between client and therapist, free (mostly) from transference and countertransference and from any feigned or forced interactions (Gelso, 2009).
At the heart of this concept is the aspiration that both client and therapist will be able to make contact with each other, without the influence of roles or power or defenses, allowing each to form realistic perceptions of the other.
In the intern-selection process, trainers may want to assess these therapeutic relational abilities via video submissions, role plays, or direct therapeutic reports (e.g., the average of client ratings on measures of alliance, or supervisor ratings of trainees’ alliance-building ability). This approach is a logical extension of current practices wherein students are commonly asked to reflect on their own way of being in “personal essays” or during the interview process through questions about their “interpersonal strengths and weaknesses.” Yet, instead of assessing students’ reflective abilities, these techniques offer a more direct assessment of students’ relational acumen. In doing so, there will be a clearer connection between the student-selection process and the activities students will be asked to perform during their graduate training years.
The second element of FIS is the effectiveness of professional relationships. It is one thing to be able to exhibit the aforementioned aspects of a quality relationship, such as empathy and self-awareness, but these skills are merely a foundation for effective relationships. One must be flexible and responsive within professional relationships to make them effective. For example, in working with clients, therapists/students must modify their approach to best suit the individual needs and dynamics of each client and their unique concerns. Operating solely from an empathic or real-relationship stance lacks the flexibility necessary to maintain an effective therapeutic relationship. One must also be self-aware within these relationships, accurately perceiving when a path must be altered, and then must be willing to change course. This dynamic helps generate and maintain more effective relationships across many professional domains.
Identification of an applicant’s level of FIS is relevant for training directors in making decisions about who will be best suited for clinical practice. Asking applicants to respond to complex clinical situations in the interview is one way to structure the interview and examine applicants’ abilities to engage in FIS. Here are some example dilemmas that might be useful during the interview process.
A client that you have been working with for several weeks arrives for her regularly scheduled 4:00 p.m. appointment at your office. The woman presented for therapy due to the recent loss of a significant relationship and was initially very distressed. However, she has been doing markedly better in the last few sessions, expressing renewed feelings of hope and stability. Today, she walks into your office, sits down, and announces that she went out to lunch and had a few drinks and is now feeling moderately drunk. You express to the client that this is not acceptable, that your agency has a policy against seeing clients who are intoxicated, and that she will need to reschedule. Angrily, the client tries to persuade you to change your mind, and when this fails, she picks up her keys to leave, dropping them on the floor on her way out.
What are your initial concerns for the client?
Would you see the client or follow the agency policy? Why or why not?
When the client attempts to leave, would you attempt to take her keys from her?
What concerns do you have regarding your relationship and the trajectory of therapy at that point?
You and a client have been working together for several sessions. During one session, you pose an interpretation/challenge that the client is not happy about. In fact, it offends and angers the client quite a bit. Although you attempt to calm the client down and explain your vantage point, the client stands to leave, intending to storm out of your office.
Do you stand, attempting to stop the client from leaving?
Do you stay seated, letting the client leave if they wish?
If they do leave, what are your immediate concerns?
What concerns do you have about your work at this point?
What aspects of the therapeutic relationship may now be in flux?
As a practicum student, you have recently been placed at a new training site and have been working with your clients and supervisor for a few weeks. Your supervisor has suggested that you regularly assess your clients for depression, anxiety, and (p. 245) self-harm. In addition to these routine evaluations, your supervisor also has suggested that you assess all clients for a new personality disorder that she has recently become fascinated by. In fact, she believes a large proportion of the population may in fact experience some degree of this disorder, and as such, she would like you to screen all your clients for it. The disorder and assessment tool are not well validated, and you are skeptical of the diagnosis. However, your supervisor is asking you to administer the measure to all of your clients.
What are your initial tendencies to respond?
What are the issues at stake for your clients?
How might you handle the situation with your supervisor?
Within these three dilemmas, there is arguably no “right” answer or outcome. Ultimately, these questions seek to understand the ways in which trainees process and make decisions about difficult situations. Ideally, responses would reflect consideration of multiple viewpoints, different potential pathways of action, and possible consequences and benefits of choices. It is hoped that answers to these dilemmas include perspective taking of the client, supervisor, or colleagues, and are sensitive to the intersection of multiple dynamics embedded in one scenario. Responses that exhibit an overreliance on dogmatic or rigid perspectives may suggest that the trainee has difficulty making autonomous choices or does not engage in a self-reflective process. For example, answers that rely on textbook “solutions” or overdependency on “what my supervisor tells me to do” may exhibit a lack of independent thinking or evaluation of risks and benefits of interpersonal situations. Those who are able to bring themselves into the process, evaluating their reactions and how their personality and culture may evoke certain responses, show a willingness to examine internal processes during difficult situations.
Cognitive Complexity
With regard to doctoral student or doctoral intern selection, it would behoove programs to select students who have high cognitive complexity (Holloway & Wampold, 1986). Cognitive complexity guides how students interact with their professors, supervisors, empirical articles, and their clients (Owen & Lindley, 2010; Spengler, Strohmer, Dixon, & Shivy, 1995). Cognitive complexity generally applies to many of the competencies desired for psychologists, such as how to make ethical decisions or the selection of treatments based on empirical evidence and clinical expertise.
There are various forms of cognitive complexity. Daily or session thoughts reflect basic knowledge, such as GRE scores or knowledge about specific psychological concepts (Owen & Lindley, 2010; Spengler et al., 1995). Additionally, meta-cognitions involve the ability to reflect on one’s thought processes (King & Kitchener, 1994). This ability is consistent with competencies that involve self-reflective processes and the ability to conceptualize the therapy process (Owen & Lindley, 2010). However, the heart of cognitive complexity rests with how individuals understand the nature of knowledge and the acquisition of knowledge, or epistemic cognitions.
Epistemic models generally describe the developmental nature of thought processes, moving from a dualistic, to a relativistic, to a constructivist belief system. More specifically, epistemic models describe the ways individuals view the certainty of knowledge, the acquisition of knowledge, and the process of making decisions (cf. King & Kitchener, 1994). Generally, higher levels of cognitive complexity are denoted by an appreciation of the relative instability of knowledge coupled with the ability to form a decision based on the available information (Owen & Lindley, 2010). Moreover, knowledge acquisition occurs through thoughtful analysis of various sources of information (e.g., experts, data, and personal experiences) rather than reliance on simple heuristics or authorities for answers.
For most issues in counseling and clinical psychology there are multiple, potentially equally valid answers. For instance, therapists are constantly challenged to answer this fundamental question: What therapy approach should be utilized for client X, who also experiences diagnosis Z, at this time? Such decisions have no clear-cut solutions and warrant critical examination. Thus, we want therapists who can evaluate evidence critically to make informed clinical decisions. These decisions are not easy and should not be relegated to conventional wisdom or reliance on authorities.
To assist in the assessment of cognitive complexity, we provide some example questions that illustrate ill-defined problems in counseling and clinical psychology. We recognize that the difficulty of these questions will need to vary based on the setting (e.g., doctoral-student interviews versus doctoral-intern interviews). However, we hope that these examples will provide a basis for the development of other questions.
Cognitive Complexity Question I
Studies comparing theory-based models of psychotherapy (empirically supported treatments) have (p. 246) shown that bona fide therapies are effective and that they are similarly effective (i.e., the dodo bird verdict; Wampold et al., 1997). This has led to several conclusions. On the one hand, bona fide therapies are effective, but they work in different ways to assist clients. That is, there are multiple equally valid ways (e.g., techniques/approaches) to assist clients’ change. On the other hand, bona fide therapies are effective, but the specific techniques in these therapies are not directly responsible for change. Rather, there may be common factors that are responsible for change (e.g., therapist effects, client factors, alliance, empathy, congruence).
What is your perspective on this issue?
Provide support for your position.
Describe how your perspective on this issue is reflected in your theory of psychotherapy.
Cognitive Complexity Question II
Some psychologists believe that most integrative forms of therapy are not empirically supported. Among other critiques, psychologists on this side of the debate typically claim that integrative therapists seldom have randomized clinical trials to support their efficacy, that the therapies are less theoretically cohesive, and that what support they do have rests with common therapeutic factors (e.g., alliance, empathy) rather than specific, theoretically consistent factors. Other psychologists claim that some integrative forms of therapy are empirically supported. They claim that there is empirical support for integrative therapies via a range of methodological approaches, that the use of nontheoretically specific interventions in randomized clinical trials (e.g., the use of CBT techniques within psychodynamic therapy) has been shown to predict therapy outcomes, and that the theoretical model is more important than the specific techniques.
What is your position on this debate?
Provide a cogent rationale for your position and a counterargument for oppositional position(s).
Are there cases in which your position may be more (or less) valid?
Cognitive Complexity Question III
You are a research assistant in a lab that works with psychotherapy clients in a clinical trial comparing two interventions. Specifically, some clients are assigned to a treatment group that contains what is thought to be the most helpful and impactful aspect of therapy, whereas other clients are assigned to a group that does not contain this aspect (primarily just support and venting). It seems that the intervention is helping the participants in the treatment group, as reported by the clients and as observed in their overall symptom reduction. However, those in the other group are not receiving the valuable intervention, and although no members seem to be deteriorating, they are not improving at all either. You feel concerned about the well-being of those in the nonintervention group and feel frustrated about the fairness of one group receiving a treatment that is reducing distressing symptoms while the other participants are continuing to experience significant distress.
What are your initial concerns here?
What actions, if any, might you take? Why?
What are the critical issues to consider?
These tough questions pose an opportunity for trainees to exhibit engagement in complex thinking, to evaluate multiple and conflicting pieces of information, and to take a stand on an issue. Trainees who respond in an overly black-and-white manner may have difficulty seeing the diversity of positions that are still grounded in sound support and logic. Such answers may take the form of an overreliance on personal experience or a single-minded orientation that misses the richness of the “grey” in tough situations. On the other hand, trainees whose responses avoid clear and definitive positions may be reluctant to assert their voice or to be “wrong,” and they may lack the critical risk taking that ultimately fosters professional growth. Ideally, responses should reflect multiple sources of knowledge, flexibility in thinking, evaluation of competing ideas, and commitment to a well-reasoned answer. In this way, trainees exhibit an ability to be reflexive and independent, and to take a stand in critical areas.
This chapter has reviewed relevant literature describing the selection of psychology doctoral students and interns. Based on this review, traditional methods of selecting students in doctoral programs should be reexamined, and that reexamination should inform the development of new procedures to select doctoral students and interns.
One possible way to examine the selection process is to create training-research networks, similar to practice-research networks, wherein multiple training programs collect the same information and pool the data for greater impact and generalizability. At this point, most research about (p. 247) student selection for counseling/clinical programs and for doctoral internship is incomplete, and there is very little scholarship in this area as it relates specifically to clinical/counseling programs. The applied or industrial/organizational psychology literature is a useful source of broad-based understanding of selection processes (Berry, Sackett, & Landers, 2007); however, it may lack the needed domain specificity for counseling/clinical programs (e.g., doctoral students in chemistry likely differ from doctoral students in a counseling psychology program).
Based on current research, the GRE is beneficial, although it may not explain much of the variance in the clinical/counseling skills in which students are most directly trained and that will be essential for their careers as psychologists. The GRE, however, appears to help explain other facets of students’ performance (e.g., ability in nonclinical/counseling courses) that are also essential for graduation and foundational to the practice of psychology. Simply put, the GRE is a standard, well-accepted assessment of prospective students, with acceptable levels of data to support its use and sufficient data to question its use as well. Thus, the use of the GRE is a complex decision that is based on the preferences of the faculty, idiosyncratic experiences with students (e.g., one student with a high or low GRE score who did well, did poorly, or created significant problems), and pressures from outside sources to admit students with high GRE scores. At the end of the day, faculty will need to consider their ethical responsibilities for ensuring that their decisions are guided and supported by well-standardized tests.
Although the GRE has been used to assess potential readiness for graduate education, there currently is no examination that provides information about the readiness of students to proceed to internship training. As the profession of psychology continues to emphasize the “culture of competence” (Roberts, Borden, Christiansen, & Lopez, 2005), there have been increasing calls for a tool to assess readiness for internship. Specifically, there have been discussions about the utility of requiring, for entry to internship training, a passing score on the current national licensing examination, the Examination for Professional Practice in Psychology (EPPP). However, neither the EPPP nor the GRE fully assesses students’ readiness for internship because they do not capture students’ skills or attitudes, only their knowledge base. It will be helpful for the profession to continue to explore mechanisms to effectively assess readiness for internship for both academic directors of training and internship training directors.
There is limited support for letters of recommendation. It is likely that these letters make decision makers feel better about the decisions they make, as they find confirmatory evidence within vaguely written positive letters to support or deny acceptance into a program. There have been calls for more balanced letters (e.g., continue to include positive comments, but also include some areas of growth) that will provide more useful information to internship training directors. Responding to these calls for more balanced letters will be difficult. Academic directors of training and faculty are focused on helping their students find internships, and, as Rodolfa et al. (1999) found, one letter of recommendation that indicates problems or concerns has the power to eliminate an applicant from the internship pool. If there were to be a change in how letters are written, it is clear that there would have to be broad-based support and agreement by all members of the academic training councils. Given the low likelihood of a significant change in how letters are written, it may be useful to have the training councils review the current literature and come to a decision about future use of letters of reference in the internship-selection process.
As Ginkel et al. (2010) found, interviews were highly rated by internship-training directors. However, the level of structure will enhance their utility, as highly structured interviews appear to be more useful than unstructured interviews. It is also necessary to have multiple sources of input (multiple raters who are blinded to one another’s ratings, as well as other sources of data, such as the personal statement), as these additional sources will provide a helpful context for the interview. When conducting the interview, faculty/staff will find it useful to remain aware of their biases, which may influence their views and evaluations of applicants.
The fit or match between student and program (be it doctoral program or internship site) is a prevailing theme in student-selection decisions. Although some work has been done attempting to define fit, additional research on defining “fit” and how it is explicated during the student-selection process would be helpful to applicants and faculty alike. Specifically, it is very likely that match may mean different things to different students and faculty. Many questions could be examined that would benefit the profession and the process of selection. For example: (a) Does the emphasis (p. 248) on match encourage students to search for programs and internship sites in a manner that is positive and career advancing? (b) Does the focus on match prompt students to better navigate the interview process? (c) Does match influence the agreement on expectations between student and program? (d) Is there a hierarchy of selection criteria (e.g., funding trumps research match) that influences match decisions?
In addition to enhancing the profession’s understanding of the critical concept of match, it will be beneficial to increase the attention paid to aspects that shape therapeutic and academic environments but are difficult to assess accurately. For instance, empathy, genuineness, and being real in therapy are associated with better therapeutic outcomes. The interview process, however, is a stressful event, in which being “real” or genuine is desired but may not occur. Thus, assessing applicants’ facilitative interpersonal skills will be a challenge, but one that may better predict therapeutic skills than, for instance, the GRE. Similarly, faculty seek to accept doctoral students/interns who are more cognitively complex and not just vehicles of knowledge who reiterate what is taught to them. The ability to reason and make thoughtful, complex decisions is at the heart of clinical/counseling psychology, and it will be beneficial to take cognitive complexity into consideration in the selection process.
Based on our review we recommend the following for consideration by both programs and the profession:
1. The GRE does not provide a very clear picture of students’ clinical abilities. Thus, the use of the GRE in student selection should be accompanied by other sources of information. Using a GRE score as a cutoff in selection, without taking other sources of information into account, may eliminate students with abilities not evaluated by the GRE.
2. Programs should develop structured interview procedures. The current literature strongly supports the use of structured interviews over less-structured interviews. Alternative forms of information, used within or in combination with interviews, may be helpful, such as role plays, client/supervisor reports of therapeutic processes based on clinical measures, video-recorded sessions, or mock sessions. Interviews should also be structured to provide applicants a chance to display their relationship-building skills as well as their ability to think in depth about issues they may encounter during their training.
3. Letters of recommendation, although a standard part of the selection process, have many flaws, and the profession may wish to critically review their current use. Hours of faculty time go into writing letters, but if the letters are not taken seriously, then their use and utility in the selection process should be examined. Perhaps the training councils can agree to better structure the process of writing letters, or it may be useful for the training councils to consider abandoning the use of letters of reference in internship selection.
4. Selecting students into doctoral programs is a critical process that will influence not only the future of the program, but the future of the profession. As the profession continues to take steps toward a culture of competence, the selection processes should reflect these changes and should incorporate an assessment of the students’ competencies as established by the profession.
As the selection process is improved, there will be increased confidence in the decisions made, and training programs, the profession, and, in turn, the public will benefit.
Allred, G., & Briggs, K. (1988). Selecting marriage and family therapy program applicants: One approach. American Journal of Family Therapy, 16, 328–336.Find this resource:
Ambady, N., Bernieri, F. J., & Richeson, J. A. (2000). Toward a histology of social behavior: Judgmental accuracy from thin slices of the behavioral stream. In M. P. Zanna (Ed.), Advances in experimental social psychology (pp. 201–271). San Diego, CA: Academic Press.Find this resource:
Ambady, N., & Rosenthal, R. (1992). Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis. Psychological Bulletin, 111, 256–274.Find this resource:
Anderson, T., Ogles, B. M., Patterson, C. L., Lambert, M. J., & Vermeersch, D. A. (2009). Therapist effects: Facilitative interpersonal skills as a predictor of therapist success. Journal of Clinical Psychology, 65, 755–768.Find this resource:
Baker, J., McCutcheon, S., & Keilin, W. G. (2007). The internship supply–demand imbalance: The APPIC perspective. Training & Education in Professional Psychology, 1, 287–293.Find this resource:
Berry, C. M., Sackett, P. R., & Landers, R. N. (2007). Revisiting interview-cognitive ability relationships: Attending to specific range restriction mechanisms in meta-analysis. Personnel Psychology, 60, 837–874.Find this resource:
Chapman, G. B., & Johnson, E. J. (2002). Incorporating the irrelevant: Anchors in judgments of belief and value. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.). Heuristics and biases: The psychology of intuitive judgment (pp. 120–138). New York: Cambridge University Press.Find this resource:
Chernyshenko, O. S., & Ones, D. S. (1999). How selective are psychology graduate programs? The effects of the selection (p. 249) ratio on GRE score validity. Educational and Psychological Measurement, 59, 951–961.Find this resource:
Conway, J. M., Jako, R. A., & Goodman, D. F. (1995). A meta-analysis of inter-rater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565–579.Find this resource:
Dawes, R. M. (1994). House of cards: Psychology and psychotherapy built on myth. New York: Free Press.Find this resource:
Elliott, R., Bohart, A. C., Watson, J. C., & Greenberg, L. S. (2011). Empathy. Psychotherapy, 48, 43–49.Find this resource:
Faust, D. (1986). Research on human judgment and its application to clinical practice. Professional Psychology: Research and Practice, 17(5), 420–430.Find this resource:
Fouad, N. A., Grus, C. L., Hatcher, R. L., Kaslow, N. L., Hutchings, P. S., Madson, M.,...Crossman, R. E. (2009). Competency benchmarks: A model for understanding and measuring competence in professional psychology across training levels. Training and Education in Professional Psychology, 3(4 Supp), S5–S26.Find this resource:
Gambrill, E. (2005). Critical thinking in clinical practice: Improving the quality of judgments and decisions (2nd ed). Hoboken, NJ: Wiley.Find this resource:
Gelso C., J. (2009). The real relationship in a postmodern world: Theoretical and empirical explorations. Psychotherapy Research, 19, 253–264.Find this resource:
Ginkel, R. W., Davis, S. E., & Michael, P. (2010). An examination of inclusion and exclusion criteria in the predoctoral internship selection process. Training and Education in Professional Psychology, 4, 213–218.Find this resource:
Gladwell, M. (2005). Blink. New York: Little, Brown, and Company.Find this resource:
Goldberg, E. L., & Alliger, G. M. (1992). Assessing the validity of the GRE for students in psychology: A validity generalization approach. Educational and Psychological Measurement, 52, 1019–1027.Find this resource:
Grote, C. L., Robiner, W. N., Haut, A. (2001). Disclosure of negative information in letters of recommendation: Writers’ intentions and readers’ experiences. Professional Psychology, 32, 655–661.Find this resource:
Hatcher, R. L. (2011). The internship supply as a common-pool resource: A pathway to managing the imbalance problem. Training and Education in Professional Psychology, 5, 126–140.Find this resource:
Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 333–342.Find this resource:
Hines, D. (1986). Admissions criteria for ranking master’s level applicants to clinical doctoral programs. Teaching of Psychology, 13, 64–67.Find this resource:
Holloway, E. L., & Wampold, B. E. (1986). Relation between conceptual level and counselingrelated tasks: A meta-analysis. Journal of Counseling Psychology, 33, 310–319.Find this resource:
Horvath, A. O., Del Re, A., Flückiger, C., & Symonds, D. (2011). Alliance in individual psychotherapy. Psychotherapy, 48, 9–16.Find this resource:
Huffcutt, A. I., Van Iddekinge, C. H. & Roth, P. L. (2011). Understanding applicant behavior in employment interviews: A theoretical model of interviewee performance. Human Resource Management Review, 21, 353–367.Find this resource:
Ickes W. Everyday mind reading. Understanding what other people think and feel. New York: Prometheus Books; 2003.Find this resource:
Kaslow, N. J., Border, K. A., Collins, Jr., F. L., Forrest, L., Illfelder-Kaye, J., Nelson, P. D., Rallo, J. S. (2004). Competencies conference: Future directions in education and credentialing in psychology. Journal of Clinical Psychology, 60, 699–712.Find this resource:
King, D. W., Beehr, T. A., & King, L. A. (1986). Doctoral student selection in one professional psychology program. Journal of Clinical Psychology, 42, 399–407.Find this resource:
King, P. M., & Kitchener, K. S. (1994). Developing reflective judgment: Understanding and promoting intellectual growth and critical thinking in adolescents and adult. San Francisco: Jossey-Bass.Find this resource:
Kuncel, N. R., Wee, S., Serafin, L., & Hezlett, S. A. (2010). The validity of the Graduate Record Examination for master’s and doctoral programs: A meta-analytic investigation. Educational and Psychological Measurement, 70, 340–352.Find this resource:
Lopez, S. J., Oehlert, M. E., & Moberly, R. L. (1996). Selection criteria for American Psychological Association-accredited internship programs: A survey of training directors. Professional Psychology: Research and Practice, 27, 518–520.Find this resource:
Market, L. F., & Monke, R. H. (1990). Changes in counselor education admissions criteria. Counselor Education & Supervision, 30, 48–58.Find this resource:
Miller, R. K., & Van Rybroek, G. J. (1988). Internship letters of recommendation: Where are the other 90%? Professional Psychology: Research and Practice, 19, 115–117.
Morrison, T., & Morrison, M. (1995). A meta-analytic assessment of the predictive validity of the Quantitative and Verbal components of the Graduate Record Examination with graduate grade point average representing the criterion of graduate success. Educational and Psychological Measurement, 55, 309–316.
Norcross, J. C., Ellis, J. L., & Sayette, M. A. (2010). Getting in and getting money: A comparative analysis of admission standards, acceptance rates, and financial assistance across the research/practice continuum in clinical psychology programs. Training and Education in Professional Psychology, 4, 99–104.
Norcross, J. C., Evans, K. L., & Ellis, J. L. (2010). The model does matter II: Admissions and training in APA-accredited counseling psychology programs. The Counseling Psychologist, 38, 257–268.
Norcross, J. C., Kohout, J. L., & Wicherski, M. (2005). Graduate study in psychology: 1971–2004. American Psychologist, 60, 959–975.
Norcross, J. C., & Wampold, B. E. (2011). Evidence-based therapy relationships: Research conclusions and clinical practice. Psychotherapy, 48, 98–102.
Owen, J. (2008). The nature of confirmatory strategies in the initial assessment process. Journal of Mental Health Counseling, 30, 362–374.
Owen, J., & Lindley, L. D. (2010). Therapists’ cognitive complexity: Review of theoretical models and development of an integrated approach for training. Training and Education in Professional Psychology, 4, 128–137.
Parmley, M. C. (2007). The effects of the confirmation bias on diagnostic decision making. Dissertation Abstracts International: Section B, 67(8-B), 4719.
Piercy, F. P., Dickey, M., Case, B., Sprenkle, D., Beer, J., Nelson, T., & McCollum, E. (1995). Admissions criteria as predictors of performance in a family therapy doctoral program. The American Journal of Family Therapy, 23(3), 251–259.
Puplampu, B. B., Lewis, C., & Hogan, D. (2003). Reference taking in employee selection: Prediction or verification? IFE Psychologist: An International Journal, 11, 1–11.
Rem, R. J., Oren, E. M., & Childrey, G. (1987). Selection of graduate students in psychology: Use of cutoff scores and interviews. Professional Psychology: Research and Practice, 18, 485–488.
Roberts, M. C., Borden, K. A., Christiansen, M. D., & Lopez, S. J. (2005). Fostering a culture shift: Assessment of competence in the education and careers of professional psychologists. Professional Psychology: Research and Practice, 36, 355–361.
Rodolfa, E., Bent, R., Eisman, E., Nelson, P., Rehm, L., & Ritchie, P. (2005). A cube model for competency development: Implications for psychology educators and regulators. Professional Psychology: Research and Practice, 36, 347–354.
Rodolfa, E., Owen, J., & Clark, S. (2007). Practicum training hours: Fact & fiction. Training and Education in Professional Psychology, 1(1), 64–73.
Rodolfa, E. R., Vieille, R., Russell, P., Nijjer, S., Nguyen, D. Q., Mendoza, M., & Perrin, M. (1999). Internship selection: Inclusion and exclusion criteria. Professional Psychology: Research and Practice, 30, 415–419.
Sandifer, M. G., Hordern, A., & Green, L. M. (1970). The psychiatric interview: The impact of the first three minutes. American Journal of Psychiatry, 126, 968–973.
Spengler, P. M., Strohmer, D. C., Dixon, D. N., & Shivy, V. A. (1995). A scientist-practitioner model of psychological assessment: Implications for training, practice and research. Counseling Psychologist, 23, 506–534.
Stedman, J. M. (2007). What we know about predoctoral internship training: A 10-year update. Training and Education in Professional Psychology, 1, 74–88.
Stedman, J. M., Hatch, J. P., & Schoenfeld, L. S. (2009). Letters of recommendation for the predoctoral internship in medical schools and other settings: Do they enhance decision making in the selection process? Journal of Clinical Psychology in Medical Settings, 16, 339–345.
Wampold, B. E., Mondin, G. W., Moody, M., Stich, F., Benson, K., & Ahn, H. (1997). A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, “all must have prizes.” Psychological Bulletin, 122, 203–225.
Ziegler, M., MacCann, C., & Roberts, R. D. (2011). Faking: Knowns, unknowns, and points of contention. In M. Ziegler, C. MacCann, & R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp. 3–16). New York: Oxford University Press.