Assessment in Instrumental Music

Abstract and Keywords

Assessment is a necessary and challenging task for many instrumental music educators. Limited instructional time, little to no assessment training, and large class sizes are but a few of the often-cited reasons for the current state of assessment in instrumental music. Some steps, however, can be taken to improve student achievement in music through better assessment practices. In this chapter, I focus on the assessment of student learning and achievement in the instrumental music classroom. I review the status of assessment, the differences between instrumental and choral assessment practices, rating scales, assessing musical knowledge, self-assessment, peer assessment, the psychological impact of assessment, technology in assessment, standardized tests, and the impact of case law on assessment. The chapter concludes with a series of general recommendations for improved assessment strategies.

Keywords: assessment, instrumental music, tests, rating scales, achievement, assessment strategies

The discussion around assessment in instrumental music education has gained a great deal of importance in educational rhetoric in recent years. Despite the current educational culture, in which assessment and data-driven instruction are at the forefront of many educational leaders’ minds, relatively little empirical research has been conducted on assessment in instrumental music education. A great deal of journal space has been dedicated to the discussion of assessment. However, much of this discussion has focused either on teacher evaluation procedures or on the corporatization of student assessment in subjects such as math and English. Assessing student achievement in instrumental music is a topic that relatively few academic writers have addressed or explored in any methodical manner, and for many years much of the writing on this topic has remained anecdotal (e.g., Spotlight on Assessment in Music Education, published by MENC, now the National Association for Music Education).

As with many topics of research, music education scholars’ attention to assessment in instrumental music classrooms has ebbed and flowed. In their discussion of future concerns in the measurement and evaluation of music experiences, Boyle and Radocy (1987) stated:

Education experiences many trends, counter-trends, and would-be trends that excite theoreticians and generate many articles. Sometimes the general public becomes enamored of a trend, and legislators, corporate executives, and various education-minded activists direct attention to schools in accordance with the trend. After a time, the trend dissipates and attention turns elsewhere, with or without any enduring changes. (p. 305)

As one example of this, the first Handbook of Research on Music Teaching and Learning (Colwell, 1992) contained five chapters on the topic of evaluation. However, only one of those chapters focused on the evaluation of students’ musical ability. In that chapter, Boyle (1992) focused on several aspects of evaluation, including music ability, music aptitude, music intelligence, music capacity, music talent, music sensitivity, and musicality. Boyle spent relatively little space discussing musical achievement but defined it as “music accomplishments as a result of experience with music, musical phenomena, or music-related materials. Music achievement reflects what has been learned as a result of such experiences” (p. 251). A decade later, in the New Handbook of Research on Music Teaching and Learning, only one chapter remained regarding the assessment of student outcomes, and as the author of that chapter acknowledged,

An entire section of the first Handbook of Research on Music Teaching and Learning was devoted to assessment. Those authors successfully summarized the history of assessment in music with chapters on assessment in five areas: teaching, creativity, program, general, and attitude. This chapter is an update of a few of the issues raised in the first Handbook (Colwell and Richardson, 2002, p. 1128).

In a decade, the focus on assessment in the handbooks diminished from five chapters to a single chapter that accurately and effectively updated the topics covered artfully in the original handbook. Since that time, external forces, including the economy and the proliferation of technology, among other factors, have prompted a resurgence of interest in assessment. Nonetheless, as musical achievement is the outcome most directly impacted by instrumental music instruction and the skill that some authors believe should be the primary focus of assessment (e.g., Russell and Austin, 2010), I focus in this chapter on the assessment of musical achievement in instrumental music.

Cooksey (1982) claimed that performance assessment has traditionally been impaired by the often-cited subjective nature of musical performance. In response, some music educators have employed equally subjective or ill-conceived assessments that “are determined haphazardly, ritualistically, and/or with disregard for available objective information” (Boyle and Radocy, 1987, p. 2). Despite such realities, music education scholars continue to defend the role that assessment can and should take in improving student learning and teacher practices (e.g., Asmus, 1999).

Despite the importance of assessment to student learning, many music teachers cite several barriers to effective assessment beyond a belief that music is a subjective art (e.g., Priest, 2006). Researchers have found that music educators believe that school size (Hanzlik, 2001; McCoy, 1991; Simanton, 2000), the large number of students being taught (Kancianic, 2006; Kotora, 2005; Lehman, 1998; McCreary, 2001; Nightingale-Abell, 1994; Tracy, 2002), inadequate instructional time (Kotora, 2005; Nightingale-Abell, 1994; Tracy, 2002), difficulty in recording results and maintaining control of student behavior while conducting assessments (Kotora, 2005), parent and student apathy toward assessment in music classes (Kotora, 2005), and lack of training in assessment techniques (Kotora, 2005; Nightingale-Abell, 1994) negatively impact their ability to adequately assess student learning in music classrooms. Nonetheless, researchers have yet to substantiate teachers’ claims that these factors do, in fact, impede better assessment strategies. Russell and Austin (2010), for example, found that

The majority of participants in our study appear to work under adequate, if not ideal, classroom conditions. Moreover, issues of instructional time and number of students taught had no substantive relationship with assessment decisions or grading priorities (i.e., weight assigned to achievement vs. non-achievement assessment criteria). Some music teachers who were responsible for very busy schedules and many students, for example, were among the most sophisticated in their choice of assessment strategies and the most credible in how they graded students. (p. 50)

Moreover, it seems that music teachers are content to employ assessment strategies that lead to grade inflation and a large number of high grades, as such assessment schemes are supported by students and parents (Hill, 1999). Russell and Austin (2010) also found evidence of grade inflation in secondary music classrooms. They found that the vast majority of students receive, on average, As (75%) or Bs (15%), while only 7% receive Cs and 3% receive Ds or Fs. Such attitudes and assessment strategies, as well as a dearth of administrative support for improving assessment in music classrooms, led Russell and Austin to identify the current assessment environment as one of “benign neglect” (p. 48).
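To make concrete what weighting achievement versus nonachievement criteria means for grades, consider a minimal sketch in Python. The weights and scores below are entirely hypothetical (they are not drawn from Russell and Austin, 2010); the point is simply that heavily weighting criteria most students max out, such as attendance and attitude, inflates the composite grade.

```python
# Minimal sketch of a weighted grading scheme. All weights and scores
# are hypothetical illustrations, not data from any cited study.

def composite_grade(scores: dict, weights: dict) -> float:
    """Weighted average of criterion scores (each 0-100); weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

# A student with middling musical achievement but perfect attendance and attitude.
scores = {"performance": 70, "knowledge": 75, "attendance": 100, "attitude": 100}

achievement_heavy = {"performance": 0.5, "knowledge": 0.3, "attendance": 0.1, "attitude": 0.1}
nonachievement_heavy = {"performance": 0.2, "knowledge": 0.1, "attendance": 0.4, "attitude": 0.3}

print(composite_grade(scores, achievement_heavy))     # 77.5 -- roughly a C+
print(composite_grade(scores, nonachievement_heavy))  # 91.5 -- roughly an A-
```

Under the nonachievement-heavy scheme, the same middling performer earns an A-range grade, which is one mechanism behind the grade distributions reported above.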

In this chapter, I organize research conducted in the assessment of instrumental music performance into several categories, including the status of assessment, the differences between instrumental and choral assessment practices, rating scales, assessing musical knowledge, self-assessment, peer assessment, the psychological impact of assessment, technology in assessment, standardized tests, and the impact of case law on assessment. I conclude with a series of general recommendations for improved assessment strategies.

The Status of Assessment in Instrumental Music

LaCognata (2010) examined the manner in which high school band directors assessed student achievement. Band directors (N = 45) from two states (North Carolina and Missouri) completed the questionnaire. The participants reported a relatively balanced distribution of school size, school population, director experience, grading practices, and school socioeconomic background. Band directors most commonly used participation and performance attendance as grading criteria (95.6%). They employed performance-based tests almost as often (91.1%), followed by daily rehearsal attendance (82.2%). Echoing an often-cited concern, the band directors in LaCognata’s study believed that the amount of available class time was one of the most prevalent issues influencing assessment strategies. However, these directors believed the most important issues influencing their assessment practices were their philosophy of music education and the overall objectives for the class. These directors believed that the most important purposes of assessment were to identify student needs, provide feedback, and gain a better understanding of overall program and instructional direction.

In a similar study of string educators, Duncan (2009) found that string teachers’ most common assessment methods were teacher-given verbal critique, attendance, teacher-rated rubrics, and student evaluations. Similar to a later study by LaCognata (2013), Duncan also found that string teachers least often assessed comprehensive music skills (composition, music history, portfolios, improvisation, and interdisciplinary assignments). Interestingly, Duncan found that in successful string programs, teachers often employed written assessments, student reflections, teacher-rated rubrics, sight-reading assessments, student evaluations, music theory and history assignments, as well as portfolios and student-rated rubrics.

Differences Between Vocal and Instrumental Assessment Practices

One might assume that differences between vocal and instrumental music teachers’ assessment practices would be either minimal or nonexistent. Both curricula are often based on large performance ensemble experiences and are impacted by many of the same potentially influential circumstances (i.e., large numbers of students, performance expectations, and so on). However, Russell and Austin (2010) found that the established mores and practices of each genre of music may be more influential. For example, middle school choral directors gave significantly more weight to written assessment of musical knowledge than middle school instrumental directors, while they found no significant difference in the weight given to musical knowledge by high school instrumental and choir directors. Conversely, middle school band directors weighted practice assessments more heavily than middle school choir directors. Russell and Austin did not find, however, a significant difference in the weight given to practice by high school instrumental and high school choir directors. Overall, choral directors (both middle school and high school) gave greater weight to attitude (M = 37.5%) than instrumental directors (M = 21.0%), while instrumental directors gave greater weight to performance assessments of musical skill (M = 31.5%) than choral directors (M = 21.2%). These findings led Russell and Austin to conclude:

These differences may indicate that performance skills are either more valued or considered easier to assess in instrumental contexts. Alternatively, choral music teachers may emphasize attitudinal assessments, despite the challenges inherent in documenting and reliably assessing attitude, because of a stronger desire to cultivate social goals and sense of community than instrumental teachers. (p. 50)

Rating Scales

Several researchers have attempted to create a valid and reliable rating scale for assessing instrumental music performance. Saunders and Holahan (1997) posed three guiding questions regarding criteria-specific rating scales for wind instruments. They wanted to know (a) whether such scales yielded accurate or adequate results, (b) whether they helped judges discriminate between students’ instrumental performances, and (c) which scores were best able to predict students’ overall scores. In general, Saunders and Holahan found relatively high internal consistency in their rating scale and that overall scores were best predicted by five individual dimensions (i.e., tone, technique/articulation, interpretation, and rhythmic accuracy, the last assessed during both solo evaluation and sight-reading).

Zdzinski and Barnes (2002) developed a valid and reliable rating scale to evaluate string performance. As with Saunders and Holahan, Zdzinski and Barnes found that five factors were best able to account for students’ string performance ratings (interpretation/musical effect, articulation/tone, intonation, rhythm/tempo, and vibrato). What is clear from both of these studies of rating scales is that tone, articulation, rhythmic accuracy, and interpretation or musical effect play major roles in listeners’ ability to assess accurately and consistently.
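To illustrate how such a multidimensional (analytic) rating scale might be represented, here is a minimal sketch in Python. The dimension names follow Zdzinski and Barnes (2002), but the 1–5 point range and the simple unweighted sum are illustrative assumptions, not the published scoring procedure of either scale.

```python
# Minimal sketch of an analytic performance rating scale.
# Dimension names follow Zdzinski and Barnes (2002); the 1-5 range and
# the unweighted sum are illustrative assumptions, not the published scoring.

DIMENSIONS = [
    "interpretation/musical effect",
    "articulation/tone",
    "intonation",
    "rhythm/tempo",
    "vibrato",
]

def total_rating(ratings: dict) -> int:
    """Sum per-dimension ratings (each 1-5) into an overall score (5-25)."""
    for dimension in DIMENSIONS:
        if not 1 <= ratings[dimension] <= 5:
            raise ValueError(f"rating for {dimension!r} must be between 1 and 5")
    return sum(ratings[dimension] for dimension in DIMENSIONS)

example = {
    "interpretation/musical effect": 4,
    "articulation/tone": 3,
    "intonation": 4,
    "rhythm/tempo": 5,
    "vibrato": 3,
}
print(total_rating(example))  # 19 of a possible 25
```

Recording a score per dimension, rather than a single holistic mark, is what allows an overall rating to be analyzed dimension by dimension, as in the studies above.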

Assessing Instrumental Students’ Musical Knowledge

One of the more neglected aspects of assessment in the instrumental classroom is the assessment of student musical knowledge. Although 82% of secondary teachers indicated that they assessed musical knowledge, the average weighting in the overall grading scheme was 12% (Russell and Austin, 2010). In a survey of band directors only, however, LaCognata (2013) found that just over half (58.1%) of directors used written tests or worksheets and weighted such assessments similarly, at 11.11% of a student’s grade.

Even if instrumental music teachers are assessing students’ musical knowledge, the formats most commonly used may not be the most sophisticated and often focus primarily on skills that will, understandably, improve ensemble outcomes. Of the secondary teachers who assessed students’ musical knowledge, the most common formats employed included quizzes (74%) and worksheets (68%), while projects and presentations were employed by only about one-fifth (21%) of teachers. The most common objectives of teachers’ assessment of students’ musical knowledge were knowledge of music terminology, symbols, or notation (97%); the ability to analyze and evaluate music performance (71%); and the ability to identify musical elements (62%). Objectives focusing on comprehensive musicianship were the least used (i.e., knowledge of compositional techniques, 12%; the ability to create compositions or arrangements, 14%; and knowledge of cultural context, 42%; Russell and Austin, 2010).

Self-Assessment

Music educators have identified self-assessment as a means to enrich musical understanding, aesthetic sensitivity, and critical-listening skills (Burrack, 2002). Through self-assessment, students can gain a more positive and meaningful idea of their own progress and abilities (Zimmerman, 2005). Moreover, some authors have posited that, in addition to offering another opportunity for data-driven instruction, self-assessment can help students remain engaged in their musical learning and encourage music-making throughout their lives as performers, creators, or responders (Shuler, 2011). Students who self-assess gain ownership of their own learning and provide themselves with a means for evaluating their growth and setting future goals (Wells, 1998). Through self-assessment, students can determine their performance weaknesses and devise practice plans to overcome those weaknesses. Instructors can teach students to monitor their playing by having them create rubrics for assessment, listen to recordings of themselves, and identify techniques or skills that are not up to standard (Burrack, 2002; Criss, 2011). Students who have more input into their self-assessments become more motivated to do their best and meet their goals, especially when recognizing their own achievement (Criss, 2011; Shuler, 2011). Despite these potential benefits, self-assessment is employed by only roughly one-fifth (22.5%) of instrumental music teachers and, on average, accounts for less than one-tenth of students’ grades (8.10%; LaCognata, 2013).

It is unclear, however, if students are able to assess their own musical achievements accurately. Researchers have found that student self-assessments of musical performance have not always mirrored the evaluations of experts (Aitchison, 1995; Darrow, Johnson, Miller, and Williamson, 2002; Hewitt, 2001, 2002, 2005, 2011; Kostka, 1997; Morrison, Montemayor, and Wiltshire, 2004; Priest, 2006). Middle school (Aitchison, 1995; Darrow et al., 2002; Hewitt, 2002, 2005, 2011) and high school (Hewitt, 2005) students tended to overrate their abilities compared to scores of experts. Moreover, Hewitt (2005) found that students were better able to assess melody but were least successful when assessing technique or articulation. Some researchers have found, however, that students’ ability to self-assess may be improved using additional models. Morrison and colleagues (2004), for instance, found that high school students could self-evaluate with more discrimination when using a professional recording as an aural model. Nonetheless, Hewitt (2002, 2005) determined that the use of an aural model did not appear to assist junior high instrumentalists with self-evaluation accuracy but did improve high school students’ ability to self-assess.

Researchers examining students’ self-evaluation processes have employed various evaluation forms. While some researchers used self-designed evaluation forms (Aitchison, 1995; Darrow et al., 2002; Morrison et al., 2004; Priest, 2006), Hewitt (2001, 2002, 2005, 2011) used the Saunders and Holahan (1997) Solo Evaluation section of the Woodwind Brass Solo Evaluation Form either in its original form or in a modified version. Hewitt (2011) incorporated student feedback to create rubrics to clarify the form. Hewitt concluded that this modification may have negatively affected the student self-evaluation scores. Saunders and Holahan (1997) designed the form to be used by expert adjudicators who may better understand the wording. Middle school and high school students may need additional help in interpreting the grading criteria. In addition to the use of rating scales, models, and more specific rubric criteria, assessment specialists have suggested that self-evaluation training could lead to greater student accuracy when self-assessing (Darrow et al., 2002). However, Hewitt (2011) found that self-assessment training had little impact on middle school instrumentalists’ ability to self-assess.

Peer Assessment

Brew (1999) claimed that peer assessment is a crucial key to developing life-long music-makers. Many researchers, however, have posited that some students find it difficult to assess their peers accurately, due either to little understanding of how to assess or to social pressures leading to distorted feedback (Divaharan and Atputhasamy, 2002). To combat these issues, teachers need to give students some training in assessment, as well as clearly defined directions for how to employ the assessment instrument or scale (Crooks, 1988; Boud, 1995), or include students in the process of designing the assessment criteria (Nightingale, Wiata, Toohey, Ryan, Hughes, and Magin, 1996). Clear directions and effective assessment instruments can also lead students to assess their peers, and themselves, in a manner similar to that of the teacher (Falchikov, 1995).

To create effective self- and peer-assessment instruments that will help develop adaptive responses to failure, music teachers need training and support. Shuler (1996), however, indicated that many music teachers do not receive the requisite training to adequately assess the musical achievement of their future students. This lack of specific assessment training may lead to a lack of assessment confidence. Russell and Austin (2010), for instance, found that music teachers who reported greater confidence in their ability to assess their students were more likely to base their overall grading criteria on musical performance assessments rather than on nonachievement criteria such as attitude or attendance.

Despite all of the potential benefits of peer assessment, very few instrumental music teachers employ it as a strategy (9.5%; LaCognata, 2013). This may be due to the potential complications or conflicts that may arise. However, if prepared well and given specific parameters within which to work, students can be among the most helpful assessors of their peers’ musical achievement. Teachers should be wary, however, of letting peer assessments stand alone as recorded grades. Current educational practices often preclude students from knowing the grades of their peers. This does not mean, however, that peers cannot give each other assistance and guidance.

The Impact of Assessment on Instrumental Student Psychology and Motivation

It is possible that assessment that focuses on the ability of students can have subtle long-term negative influences (Dweck, 2002; Kamins and Dweck, 1999; Mueller and Dweck, 1998). Young instrumental students may develop the view that their ability to play their instrument, or indeed to participate effectively in music at all, is a fixed entity rather than an incremental or malleable phenomenon. If a student develops an entity view of his or her ability, he or she could see failure as a more negative outcome than necessary (Willingham, 2005). Moreover, such students may view failure on a single test or outcome as an indicator of their ability for the rest of their lives (Stone and Dweck, 1998). Willingham (2005) claimed that students who believe their ability is malleable are less likely to view a failure as an indicator of future failure, as they believe they can do something (expend effort) in their future trials.

Music education researchers have found similar responses in music students. Vispoel and Austin (1993), for instance, found that students who attributed their success or failure to effort were more likely to improve on future tasks than students who attributed their lack of success to ability. Asmus (1985) suggested that music educators should stress the internal and unstable attributions of musical success to students. In short, students who develop a malleable view of ability are more likely to develop adaptive psychological responses (i.e., mastery motivation) to failure while students with fixed views may develop maladaptive responses such as helplessness when confronted with failure.

General education researchers have examined the impact of ability feedback on the learning practices of students. Although researchers have found no short-term problems with praising a student’s ability, they have found negative long-term effects. A student who has experienced ample praise for his or her ability is more likely to continue to seek the “able” label and will strive to maintain that label even at the cost of future learning. The student may utilize maladaptive responses such as seeking out easy tasks in which he or she is guaranteed success or shutting down entirely. When students who believe they possess great ability (as evidenced by positive ability and product feedback) first encounter failure, they may avoid future challenges (learning opportunities), lower their expectations, become angry or depressed, or give up entirely (O’Neill and McPherson, 2002). Dweck (2000) claimed that even the most talented individuals who respond to early failures in a maladaptive manner could experience diminished overall skill or knowledge attainment.

Music education researchers have documented the negative outcomes of students’ maladaptive responses to failure. O’Neill (1997) found that students with adaptive, mastery-oriented beliefs made more progress than maladaptive students after their first year of musical study. O’Neill discovered that maladaptive students practiced twice as much as mastery-oriented students in order to achieve the same outcomes. It is clear that music educators need to be aware of the impact that not only the tenor of assessment but also the focus of that assessment has on the psychological development of music students. Music educators should create valid assessments that give students useful feedback on their musical development and help shape their long-term adaptive responses to challenges and even failure. Austin and Vispoel (1998), for example, suggested that teachers give positive feedback for increased student effort. Although praise of increased effort may be a valid way of reinforcing that behavior, some educational theorists warn against its long-term impact. Willingham (2005) claimed that some students who experience failure and are offered positive praise about their expended effort believe that praise to be disingenuous and even an indicator of poor ability. Rather than offering positive feedback about effort, Willingham suggested offering feedback that focuses on the student’s process. Austin and Vispoel (1998) also suggested that offering students encouragement about the implementation of different strategies might be a way to motivate them despite any short-term failure.

Students’ ability to learn that musical skill and understanding is not a static entity is, in part, influenced by teachers’ ability to offer meaningful feedback that encourages long-term musical learning. As students develop the ability to recognize that their musical learning is a process rather than a predetermined phenomenon, educators may be able to better combat some of the issues facing music education through process assessment. Teachers may be able to mitigate problems, including low student matriculation into music programs and dwindling student retention, and strengthen advocacy by developing more valid assessment schemes for communicating with parents and administrators, even improving relations with parents and other stakeholders by offering documentable evidence of students’ short-term successes and failures in music. For these and myriad other reasons, assessment should be frequent and varied. If students are assessed often, their fear will decrease as they learn that the motivation behind assessment is learning, not punishment. If students are assessed in multiple ways, they will have more opportunities to find their particular skills while potentially demonstrating a wider spectrum of learning. More important, music educators may move toward the goal of engendering life-long participation in music for those whose self-efficacy in music increases as they develop adaptive responses to assessment.

Technology as an Assessment Tool

Many instrumental music educators use various forms of technology to help them assess their students, from simple audio-recording devices and spreadsheets to intricate software designed specifically for the instrumental classroom. These tools can be extremely beneficial in saving class time. Students often employ technology outside of the classroom, allowing teachers to focus class time on musical development while allowing for more individualized assessment. Russell and Austin (2010) found that 32% of secondary teachers used out-of-school recordings to assess their students. LaCognata (2013) found very similar results; 33% of teachers asked students to record themselves outside of class as a means of assessment. Individually assessing students using this method can save class time. It does not, however, save the instrumental music teacher’s time. Giving feedback to each student who turns in a recording of his or her playing can be time-consuming.

Some music educators employ computer-assisted programs to assess their students. LaCognata (2013) found that while some use generic computer-assisted programs (5.1%), more band directors use a specific program, Smart Music™ (13.1%). This program can “listen” to students play lines from their method books and evaluate the accuracy of their pitch and rhythm against a desired threshold. In a study evaluating the efficacy of Smart Music™, Buck (2008) found that using Smart Music™ as an assessment tool benefited students’ etude performances, especially for technically oriented passages and less so for lyrical passages.

The use of technology does present some possible barriers. Most technology costs students and their guardians money. The cost of a subscription to an assessment program, the Internet access needed to use the program, and the hardware required to run it may be cost-prohibitive for many families, who have most likely already spent money on an instrument rental or purchase. The use of technology can be very helpful but must be tempered by ensuring that all students have equal access to the technology.

Standardized Test(s) in Instrumental Music Performance

In 1987 when Boyle and Radocy published their seminal text on the measurement and evaluation of musical experiences, they claimed that the “only readily available published performance measures are the Watkins–Farnum Performance Scale (Watkins and Farnum, 1954), for wind instruments and snare drum, and the Farnum String Scale (Farnum, 1969), for orchestral strings” (Boyle and Radocy, 1987, p. 174). Twenty-four years later, Colwell and Hewitt (2011) noted a curious continuance of the dearth of standardized performance tests:

Surprisingly, performance skill, which receives much teaching emphasis, has had little attention from test makers. It is an on-demand task in teacher-constructed assessments. Only one performance test is in print, the Watkins-Farnum Performance Scale, which is available for wind, string, and percussion instruments. (p. 34)

The Watkins–Farnum Performance Scale is a test in which students sight-read increasingly difficult musical passages until they are unsuccessful. This test can be difficult to administer, however. In addition to the time it requires, many of the “errors” to be assessed can be difficult to discriminate. They include the following (a sketch of how such measure-level scoring might be tallied appears below):

  • Pitch

    o Errors: additional notes, omitted notes, incorrect notes

    o Not errors: poor attack or changing pitch during a sustained tone; initially playing an incorrect note but fixing it (if a lip adjustment is made without retonguing or rebowing)

  • Time

    o Errors: a note is not sustained accurately within one beat; an omitted or incorrectly sustained rest

  • Change of time

    o Errors: a tempo change of ±12 bpm; a measure played in a different tempo

  • Expression

    o Errors: missed dynamics; missed expression markings (e.g., ritardando)

  • Slur

    o Errors: missed articulations

  • Bowing

    o Errors: the same as slur errors, but for string players taking the test

Despite these somewhat difficult and time-consuming aspects of the test, many music educators use it as a means of evaluating their students’ performance development. Additionally, researchers continue to use the Watkins–Farnum Performance Scale as a standardized test in high-quality research or modify it to meet their research needs more closely.
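As referenced above, here is a minimal sketch in Python of how measure-level scoring in the spirit of the Watkins–Farnum Performance Scale might be tallied. It assumes the commonly described rule that any single error invalidates the entire measure; the error categories mirror the list above, but the data structure and function are hypothetical, not part of the published test materials.

```python
# Minimal sketch of measure-level scoring in the spirit of the
# Watkins-Farnum Performance Scale. Assumes (as commonly described)
# that any one error invalidates the whole measure; the data structure
# is hypothetical, not part of the published test.

ERROR_CATEGORIES = {"pitch", "time", "change_of_time", "expression", "slur", "bowing"}

def score_exercise(errors_by_measure: list) -> int:
    """Return the number of error-free measures in one sight-reading exercise."""
    for errors in errors_by_measure:
        unknown = set(errors) - ERROR_CATEGORIES
        if unknown:
            raise ValueError(f"unknown error categories: {unknown}")
    return sum(1 for errors in errors_by_measure if not errors)

# Eight measures: errors in measure 3 (pitch) and measure 6 (time and expression).
performance = [set(), set(), {"pitch"}, set(), set(), {"time", "expression"}, set(), set()]
print(score_exercise(performance))  # 6 error-free measures
```

Whatever the exact bookkeeping, practicing with the score sheets before administering the test (as recommended later in this chapter) is what makes such measure-by-measure judgments reliable.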

The Potential Impact of Case Law on Instrumental Grading Practices

As Russell and Austin (2010) posited, music educators often rely heavily on attendance and other nonachievement criteria for grading; instrumental music teachers should be aware of the potential legal issues that may arise from such practices. In the New Handbook of Research on Music Teaching and Learning, Richmond (2002) elegantly cautioned the profession:

Throughout music education’s public school history, the significance and power of the law—both in terms of legislation and litigation—have become increasingly important considerations as vehicles for music education policy formation. The range and scope of education issues touched by our nation’s laws are extensive, and a chronic naiveté about the power of the law to shape our professional lives can only mean an increasingly perilous state of affairs at best for American music education. (p. 33)

Moreover, in the seminal text regarding music education and the law, Hazard (1979) stated, “As case law and statutes shape new directions in tort liability, educators must stay informed of such changes and modify their professional practice accordingly” (p. 5).

Russell (2011), based on the work of Dayton and Dupre (2005), offered several strategies that instrumental music teachers may employ to help avoid such grade challenges and legal entanglements. Generally speaking, instrumental music educators should establish their grading policy in conjunction with administrators, apply grading policies fairly and consistently, offer students and parents an opportunity to discuss grades (even to the point of establishing a grievance procedure), and focus on the individual student’s musical achievements.

General Recommendations for Instrumental Music Educators Based on the Body of Research

  • Assess (give feedback) often, not for the sake of grading (evaluation) but rather to give students more help and to mitigate their fear of feedback.

  • Find multiple ways for students to demonstrate learning and skill development.

  • Never use assessment as a punishment (e.g., “If you don’t settle down, percussionists, there will be a playing exam on Friday!”).

  • Focus all of your grading policy on the musical achievements and knowledge of the students. Focusing assessments on nonmusical outcomes (e.g., attitude, attendance) is either unreliable or invalid, gives no meaningful feedback to students (or parents and administrators), and can often hurt the curricular standing of music in the schools.

  • Teach instrumental music students how to self-assess. This takes time, however, and requires the use of models as well as self-assessment training. Hewitt (2005) found that students were better able to self-assess melody. Start there and build on those initial skills, working toward more nuanced and intricate self-assessment activities.

  • Use existing, validated performance scales to assess ensembles (not individual students). Modify them to fit the particular musical genres or specific ensemble needs.

  • Give students performance rating scales and rubrics that will help them assess the ensemble in which they are playing as well as their own achievements.

  • Create rubrics and rating scales with students so that they have input into the process, understand what is expected, and have the opportunity to think creatively about what constructs are required to be successful in a given musical task.

  • Employ rating scales that have been validated and found to be reliable.

  • To save rehearsal time, focus rating scales and performance assessments on the skills best able to predict overall scores (tone, technique/articulation, rhythmic accuracy, and interpretation) during both solo evaluation and sight-reading.

  • Use the Watkins–Farnum Performance Scale as a means to assess students’ musical achievement relatively objectively, to gain a better idea of what instruction is needed as well as the strengths and weaknesses of the program.

  • If using the Watkins–Farnum Performance Scale, spend some time with the score sheets practicing on student recordings or informally with students or colleagues. Learning how to use the test well can take time and effort.

  • Before creating and distributing any handbooks or written documentation of the grading policy to students or parents (a common practice for secondary instrumental music teachers), check with the administration to make sure they support the stated policies and that the policies do not conflict with other school or state policies or mandates. Then instrumental music teachers may apply their grading policy consistently with the fair hope of administrative support should any issues arise.

  • Do not rely heavily on nonachievement criteria such as attitude or attendance as major components of a grading policy. Instrumental music teachers who do so may be opening themselves up to a greater number of grade challenges. Many of these criteria are not assessments of a student’s achievement or understanding of music and are, therefore, misleading as to the student’s actual musical achievement.

  • Find ways to give students an opportunity to discuss a grade or even a system to file a grievance. In such cases, if a grade has been well documented and based on student achievement, a fair and reasonable grading policy, and consistent application of the grading policy, the challenge to the grade assigned will most likely be denied. What might be very difficult for instrumental music teachers is not taking such challenges personally. Students deserve a fair hearing if they feel their assigned grade was reached in error, and errors can occur.

  • Ask students to record themselves playing outside of class. This will allow the teacher to give individual feedback to each student while not taking up too much class time.

  • Employ computer-assisted programs to help assess students, but do not use these as the only means of giving individual feedback. Using such programs (e.g., Smart Music™) can help students develop technical if not lyrical musical skill.

  • Teach students how to assess others by starting with very specific and observable phenomena.

  • Give musical examples, such as: Did Billy start with a down bow or an up bow? Which should he have started on?

  • Employ grading policies as fairly and consistently as possible. A teacher may want to be more lenient with a student who plays a pivotal role in the band or orchestra (e.g., a soloist, the only bassoon player, the concert master/mistress). However, each student should be held to the same expectations, or at least the initially agreed-on expectations (in the case of established tiered assessment policies). Moreover, lowering student grades for relatively small offenses such as limited absences, tardiness, talking in class, forgetting a pencil in rehearsal, or not succeeding on a chair challenge may not be a fair or proportional response and may lead to a successful grade challenge.

  • Find ways for students to make up work that they miss. Contrary to common dogma, there is no such thing as a singular event so meaningful that what was learned could not be taught to those who were unable to attend. One may reasonably argue that attending a concert is an integral part of being in an instrumental ensemble class. However, students do not always have the opportunity to influence the circumstances that lead to their absences outside of school, and such punitive grading practices are usually more about the ego of the director than about the more imperative student learning. Instrumental music teachers should create make-up work that is meaningful and gives absent students the opportunity to demonstrate the learning that those who attended the concert demonstrated.

  • Employ a wide range of others to help students. Marching band directors often have a staff to help students learn, and instrumental music teachers bring in specialists to give students private lessons and conduct sectionals. Seeking out those who can help students is a staple of high-quality programs. However, this practice has implications for assessment as well as grade challenges. When instrumental music teachers work with these professionals, or any other individuals who may interact with students, they must make sure that everyone knows the current grading policy and enforces it in the same manner. Teachers must also protect themselves by documenting all grading policies, grades, and any information that may help defend any grades assigned to students.

  • Create, utilize, and continually improve assessments that focus on the process of musical performance (e.g., fingerings, embouchure, breath support, bowings). This will give students the developmentally appropriate feedback they need to improve and is easily observable and objective. Music educators often claim that they do not have the time to adequately assess music performance skills and even that music is too subjective a subject to assess objectively (Russell and Austin, 2010). However, in addition to the short-term benefits of improved performance feedback for students and a focus on more easily assessed musical skills for educators, the long-term effects of process-focused feedback are vital. Students who are praised for their process (e.g., “I liked how you used more bow on the up bow so you could play the next note accented near the frog”) rather than given an overall positive entity assessment (e.g., “You are a great trumpet player”) will be better able to incorporate future criticisms or failures in an adaptive manner. Although this process-based assessment strategy does not allow for more complex musical assessment, it does give the music educator a valid and reliable means to assess students’ performance development.

  • Create checklists or rubrics that focus on small processes that students can change in a relatively short amount of time (a sketch of such a checklist follows this list). These process-based assessments are an easy way to give students feedback effectively and efficiently, feedback that will likely lead to a malleable view of musical success. By utilizing an analytic rather than holistic rubric, an instrumental music teacher will be able to give students specific feedback on several processes.

  • Offer feedback or assessments that allow students to improve (and understand how to improve) through new strategies or increased effort. Avoid labeling a student as talented or not, or musical or not. These labels are not helpful and can have a long-term negative impact on the student.

  • Point out the praiseworthy actions of students, not the students themselves.

    o Musical example: “You worked on your fingerings and bowings silently while I worked with the other orchestra section; that’s what I call a great use of time that will help lead you to success.”

  • Avoid giving students feedback that reminds them of past failures while praising new skills. It is more beneficial for students to focus on their new skills and successes rather than past weaknesses or failures.

    o Musical example: Instead of “After playing that scale with an out-of-tune 7th scale degree for weeks, you finally played it with good intonation!” say, “Nice job playing that scale with good intonation! Your hard work has paid off.”

  • When giving students feedback, try to avoid feeding the egos of students who are overly eager to be praised; be positive without explicitly praising. This can help students develop a more intrinsically motivated view of music rather than working for external praise.

    o Musical example: “Your tone production made me want to listen to more!”
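As referenced in the checklist recommendation above, here is a minimal sketch in Python of an analytic, process-focused checklist. The observable items and the yes/no marking are illustrative assumptions, not a validated instrument; a real checklist should be built with, and explained to, the students who will use it.

```python
# Minimal sketch of an analytic, process-focused checklist (strings example).
# The items are illustrative assumptions, not a validated instrument.

PROCESS_ITEMS = [
    "bow hold relaxed, thumb bent",
    "started with the marked down bow",
    "left-hand fingers hover over the string",
    "observed the dynamics in measures 1-4",
]

def checklist_feedback(observations: dict) -> list:
    """Turn yes/no observations into item-specific feedback lines."""
    feedback = []
    for item in PROCESS_ITEMS:
        status = "met" if observations.get(item, False) else "work on this"
        feedback.append(f"{item}: {status}")
    return feedback

observed = {
    "bow hold relaxed, thumb bent": True,
    "started with the marked down bow": False,
    "left-hand fingers hover over the string": True,
    "observed the dynamics in measures 1-4": True,
}
for line in checklist_feedback(observed):
    print(line)  # each process item marked "met" or "work on this"
```

Because each item is a small, observable process, the feedback points to something the student can change before the next rehearsal rather than to a fixed label.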

Despite many instrumental music educators’ reluctance to assess student achievement, whether due to a perceived lack of time, an intense performance schedule, a belief that music is too subjective to assess accurately, or any other ingrained bias against assessment, assessment is a required and imperative facet of effective instruction. Although not enough research has been conducted, teachers can use the information in this chapter to compare their practices with those throughout the profession and to improve the efficacy and efficiency of their assessment practices. As instrumental music teachers improve feedback, students will gain skill at an increased rate, resulting in increased motivation to learn more, remain in the program, and continue to find opportunities to perform and share music with others throughout their lives.

References

Aitchison, R. E. (1995). “The Effects of Self-Evaluation Techniques on the Musical Performance, Self-Evaluation Accuracy, Motivation, and Self-Esteem of Middle School Instrumental Music Students.” Doctoral dissertation, University of Iowa, Iowa City.

Asmus, E. (1985). “Sixth Graders’ Achievement Motivation: Their Views of Success and Failure in Music.” Bulletin of the Council for Research in Music Education 85: 1–13.

Asmus, E. P. (1999). “Music Assessment Concepts: A Discussion of Assessment Concepts and Models for Student Assessment Introduces This Special Focus Issue.” Music Educators Journal 86, no. 2: 19–24. doi: 10.2307/3395

Austin, J. R., and Vispoel, W. P. (1998). “How American Adolescents Interpret Success and Failure in Classroom Music: Relationships among Attributional Beliefs, Self-Concept and Achievement.” Psychology of Music 26, no. 1: 26–45.

Boud, D. (1995). Enhancing Learning Through Self Assessment. London: Kogan Page.

Boyle, J. D. (1992). “Evaluation of Music Ability.” In R. Colwell (Ed.), Handbook of Research on Music Teaching and Learning, pp. 247–65. New York: Schirmer Books.

Boyle, J. D., and Radocy, R. E. (1987). Measurement and Evaluation of Musical Experiences. New York: Schirmer Books.

Brew, A. (1999). “Towards Autonomous Assessment: Using Self-Assessment and Peer Assessment.” In Sally Brown and Angela Glasner (Eds.), Assessment Matters in Higher Education: Choosing and Using Diverse Approaches, pp. 159–71. Buckingham, UK: Open University Press.

Buck, M. W. (2008). “The Efficacy of Smart Music® Assessment as a Teaching and Learning Tool.” Doctoral dissertation, University of Southern Mississippi, Hattiesburg.

Burrack, F. (2002). “Enhanced Assessment in Instrumental Programs.” Music Educators Journal 88, no. 6: 27–32. doi: 10.2307/3399802

Colwell, R. (Ed.). (1992). Handbook of Research on Music Teaching and Learning. New York: Schirmer Books.

Colwell, R. J., and Hewitt, M. P. (2011). The Teaching of Instrumental Music, 4th ed. Boston: Prentice Hall.

Colwell, R., and Richardson, C. (Eds.). (2002). The New Handbook of Research on Music Teaching and Learning: A Project of the Music Educators National Conference. New York: Oxford University Press.

Cooksey, J. M. (1982). “Developing an Objective Approach to Evaluating Music Performance.” In Richard Colwell (Ed.), Symposium in Music Education, pp. 197–229. Urbana: University of Illinois.

Criss, E. (2011). “Dance All Night: Motivation in Education.” Music Educators Journal 97, no. 3: 61–66. doi: 10.1177/0027432110393022

Crooks, T. J. (1988). Assessing Student Performance. Sydney: University of New South Wales, Higher Education Research and Development Society.

Darrow, A., Johnson, C. M., Miller, A. M., and Williamson, P. (2002). “Can Students Accurately Assess Themselves? Predictive Validity of Student Self-Reports.” Update: Applications of Research in Music Education 20, no. 2: 8–11. doi: 10.1177/875512330202000203

Dayton, J., and Dupre, A. (2005). “Grades: Achievement, Attendance, or Attitude.” West’s Education Law Reporter 199: 569–92.

Divaharan, S., and Atputhasamy, L. (2002). “An Attempt to Enhance the Quality of Cooperative Learning Through Peer Assessment.” Journal of Educational Enquiry 3, no. 2: 72–83.

Duncan, S. A. (2009). “Assessment Practices of String Teachers.” Master’s thesis, University of Miami, Coral Gables, FL.

Dweck, C. S. (2000). Self-Theories: Their Role in Motivation, Personality and Development. Philadelphia: Psychology Press.

Dweck, C. S. (2002). “Messages That Motivate: How Praise Molds Students’ Beliefs, Motivation, and Performance (in Surprising Ways).” In J. Aronson (Ed.), Improving Academic Achievement: Impact of Psychological Factors on Education, pp. 61–87. New York: Academic Press.

Falchikov, N. (1995). “Peer Feedback Marking: Developing Peer Assessment.” Innovations in Education and Training International 32, no. 2: 175–87.

Farnum, S. E. (1969). “Farnum String Scale.” Winona, MN: Hal Leonard.

Hanzlik, T. J. (2001). “An Examination of Iowa High School Instrumental Band Directors’ Assessment Practices and Attitudes Toward Assessment.” Doctoral dissertation, University of Nebraska, Lincoln. Dissertation Abstracts International 62: 955A.

Hazard, W. R. (1979). Tort Liability and the Music Educator. Reston, VA: Music Educators National Conference.

Hewitt, M. (2001). “The Effects of Modeling, Self-Evaluation, and Self-Listening on Junior High Instrumentalists’ Music Performance and Practice Attitude.” Journal of Research in Music Education 49, no. 4: 307–22. doi: 10.2307/3345614

Hewitt, M. (2002). “Self-Evaluation Tendencies of Junior High Instrumentalists.” Journal of Research in Music Education 50, no. 3: 215–26. doi: 10.2307/3345799

Hewitt, M. (2005). “Self-Evaluation Accuracy Among High School and Middle School Instrumentalists.” Journal of Research in Music Education 53, no. 2: 148–61. doi: 10.1177/002242940505300205

Hewitt, M. (2011). “The Impact of Self-Evaluation Instruction on Student Self-Evaluation, Music Performance, and Self-Evaluation Accuracy.” Journal of Research in Music Education 59, no. 1: 6–20. doi: 10.1177/0022429410391541

Hill, K. W. (1999). “A Descriptive Study of Assessment Procedures, Assessment Attitudes, and Grading Policies in Selected Public High School Band Performance Classrooms in Mississippi.” Doctoral dissertation, University of Southern Mississippi. Dissertation Abstracts International 60: 1954A.

Kamins, M., and Dweck, C. S. (1999). “Person Versus Process Praise: Implications for Contingent Worth and Coping.” Developmental Psychology 35: 835–47.

Kancianic, P. M. (2006). “Classroom Assessment in United States High School Band Programs: Methods, Purposes, and Influences.” Doctoral dissertation, University of Maryland, College Park. Dissertation Abstracts International 67(06).

Kostka, M. (1997). “Effects of Self-Assessment and Successive Approximations on ‘Knowing’ and ‘Valuing’ Selected Keyboard Skills.” Journal of Research in Music Education 45, no. 2: 273–81. doi: 10.2307/3345586

Kotora, E. J. (2005). “Assessment Practices in the Choral Music Classroom: A Survey of Ohio High School Choral Music Teachers and College Choral Methods Professors.” Contributions to Music Education 32, no. 2: 65–80.

LaCognata, J. (2010). “Student Assessment in the High School Band Ensemble Class.” In Timothy S. Brophy (Ed.), The Practice of Assessment in Music Education: Frameworks, Models, and Designs, pp. 227–38. Chicago: GIA Publications.

LaCognata, J. (2013). “Current Student Assessment Practices of High School Band Directors in the United States.” In Timothy S. Brophy and Andreas Lehmann-Wermser (Eds.), Music Assessment Across Cultures and Continents: The Culture of Shared Practice. Chicago: GIA Publications.

Lehman, P. R. (1998). “Grading Practices in Music: A Report of the Music Educators National Conference.” Music Educators Journal 84, no. 5: 37–40.

McCoy, C. W. (1991). “Grading Students in Performing Groups: A Comparison of Principals’ Recommendations with Directors’ Practices.” Journal of Research in Music Education 39, no. 3: 181–90. doi: 10.2307/2244718

McCreary, T. J. (2001). “Methods and Perceptions of Assessment in Secondary Instrumental Music.” Doctoral dissertation, University of Hawaii, Honolulu.

Morrison, S. J., Montemayor, M., and Wiltshire, E. S. (2004). “The Effect of a Recorded Model on Band Students’ Performance, Self-Evaluations, Achievement, and Attitude.” Journal of Research in Music Education 52, no. 2: 116–29. doi: 10.2307/3345434

Mueller, C. M., and Dweck, C. S. (1998). “Praise for Intelligence Can Undermine Children’s Motivation and Performance.” Journal of Personality and Social Psychology 75: 33–52.

Nightingale-Abell, S. E. (1994). “Teacher Evaluation Practices in the Elementary General Music Classroom: A Study of Three Teachers.” Doctoral dissertation, University of Cincinnati. Dissertation Abstracts International 55: 900A.

Nightingale, P., Wiata, I., Toohey, S., Ryan, G., Hughes, C., and Magin, D. (1996). Assessing Learning in Universities. Sydney: University of New South Wales Press.

O’Neill, S. A. (1997). “The Role of Practice in Children’s Early Musical Performance Achievement.” In H. Jorgensen and A. C. Lehmann (Eds.), Does Practice Make Perfect? Current Theory and Research on Instrumental Practice, pp. 53–70. Oslo: Norges Musikkhøgskole.

O’Neill, S. A., and McPherson, G. E. (2002). “Motivation.” In R. Parncutt and G. E. McPherson (Eds.), The Science and Psychology of Music Performance: Creative Strategies for Teaching and Learning, pp. 31–46. New York: Oxford University Press.

Priest, T. (2006). “Self-Evaluation, Creativity, and Musical Achievement.” Psychology of Music 34, no. 1: 47–61. doi: 10.1177/0305735606059104

Richmond, J. W. (2002). “Law Research and Music Education.” In R. Colwell and C. Richardson (Eds.), The New Handbook of Research on Music Teaching and Learning, pp. 33–47. New York: Oxford University Press.

Russell, J. A. (2011). “Assessment and Case Law: Implications for the Grading Practices of Music Educators.” Music Educators Journal 97, no. 3: 35–39.

Russell, J. A., and Austin, J. R. (2010). “The Assessment Practices of Secondary Music Educators.” Journal of Research in Music Education 58, no. 1: 37–54.

Saunders, T. C., and Holahan, J. M. (1997). “Criteria-Specific Rating Scales in the Evaluation of High School Instrumental Performance.” Journal of Research in Music Education 45, no. 2: 259–72. doi: 10.2307/3345585

Shuler, S. C. (1996). The Effects of the National Standards on Assessment (and Vice Versa). Reston, VA: MENC.

Shuler, S. (2011). “Music Education for Life: Music Assessment, Part 2—Instructional Improvement and Teacher Evaluation.” Music Educators Journal 98, no. 3: 7–10. doi: 10.1177/0027432112439000

Simanton, E. G. (2000). “Assessment and Grading Practices Among High School Band Teachers in the United States: A Descriptive Study.” Doctoral dissertation, University of North Dakota, Grand Forks. Dissertation Abstracts International 61: 3500A.

Stone, J., and Dweck, C. S. (1998). Implicit Theories of Intelligence and the Meaning of Achievement Goals. Unpublished raw data, Columbia University, New York.

Tracy, L. H. (2002). “Assessing Individual Students in the High School Choral Ensemble: Issues and Practices.” Doctoral dissertation, Florida State University, Tallahassee. Dissertation Abstracts International 63(09): 3143.

Vispoel, W. P., and Austin, J. R. (1993). “Constructive Response to Failure in Music: The Role of Attribution Feedback and Classroom Goal Structure.” British Journal of Educational Psychology 63: 110–29.

Watkins, J. G., and Farnum, S. E. (1954). “The Watkins–Farnum Performance Scale: Form A.” Winona, MN: Hal Leonard.

Wells, R. (1998). “The Student’s Role in the Assessment Process.” Teaching Music 6, no. 2: 32–33.

Willingham, D. T. (2005). “How Praise Can Motivate or Stifle.” American Educator 29, no. 4: 23–27.

Zdzinski, S. F., and Barnes, G. V. (2002). “Development and Validation of a String Performance Rating Scale.” Journal of Research in Music Education 50, no. 3: 245–55.

Zimmerman, J. R. (2005). “The Effects of Periodic Self-Recording, Self-Listening and Self-Evaluation on the Motivation and Music Self-Concept of High School Instrumentalists.” Doctoral dissertation, University of Minnesota, Minneapolis.