PRINTED FROM OXFORD HANDBOOKS ONLINE (www.oxfordhandbooks.com). © Oxford University Press, 2018. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single chapter of a title in Oxford Handbooks Online for personal use (for details see Privacy Policy and Legal Notice).

date: 18 February 2020

Opportunity to Learn

Abstract and Keywords

Opportunity to learn (OTL) is an evolving construct for understanding and making use of the intricacy of the schooling process. Progress to date includes considerations of how it might serve as an index of key teacher and student contributions to learning and as a tool to guide fair and productive measurement of its operation. In this chapter, we provide an account of OTL measurement where classrooms have been the unit of analysis and where the focus has broadened to include calibration of the quality of instruction alongside the time and content elements of a learning opportunity. The account highlights the significance of this inclusion, presents current developments in creating feasible, reliable measurement, and identifies ongoing challenges where additional research is needed to further refine our conceptualization of OTL and the current tools for measuring it.

Keywords: Opportunity to learn, instructional time, enacted curriculum, quality instruction, grouping variables

Opportunity to learn (OTL) generally refers to inputs and processes within a school context necessary for producing student achievement of intended outcomes. Most theorists and researchers studying OTL have focused on the classroom as their unit of analysis and have privileged the actions of teachers (e.g., Kurz, 2011; Porter, 2002). In doing so, instructional time and content consistently have been characterized as core elements of OTL, along with a number of instructional quality indicators. A few investigators, in particular those using large extant data sets, have focused on schools or programs (e.g., mathematics) as their unit of analysis. With this approach to studying OTL, coarser indicators, such as courses taken, educational program types (e.g., remedial, gifted), and technology use, have been studied as predictors of student achievement (e.g., Muthen, Huang, Jo, Khoo, Goff, Novak, & Shih, 1995).

The present examination of OTL is based on the majority approach, in which the classroom is the unit of analysis. Consequently, we characterize OTL as a multi-dimensional construct central to quality teaching and prerequisite to student achievement. Our examination indicates that more than 50 years of OTL research have focused on instructional time, content, and quality. Collectively, this research has influenced many investigators to think of OTL as a teacher effect, has established the construct as a fundamental aspect of fairness and test validity, and has stimulated its better measurement.

The Concept of Opportunity to Learn: Its Context and Evolution

Educators and educational stakeholders concerned about achievement gaps between various groups of students and between actual and expected learning outcomes are focusing on opportunity to learn and its measurement as a critical instructional variable that contributes to student achievement (Kurz, 2011; Elliott, 2014). Teachers across the globe are challenged to improve the results of instruction and thus to improve their students’ opportunities to learn. For example, this is the reality for the 7.2 million teachers and 60+ million students in the United States and their nearly 290,000 teacher and 3.6 million student counterparts in Australia. Virtually all states in the United States allocate nearly 63,900 min across 180 days each year for teachers to deliver instruction. In Australia, teachers have 63,000 min across 200 days to accomplish all their annual learning objectives. For a typical daily class in mathematics or language arts, a US teacher has a maximum of 8,100 min per year, and an Australian teacher has about 9,000 min, to cover his/her learning goals and the intended curriculum. Whether in the United States, Australia, or other industrialized countries, this looks like a lot of time for learning, but it represents the maximum allocated time, not the actual time used for instruction, nor the amount of that instructional time during which a majority of students are actively engaged in learning. Evidence reported by teachers in the United States indicates that instructional time averages about 81% of allocated time across an entire school year (Kurz, Elliott, Kettler, & Yel, 2014; Elliott, Kurz, Tindal, & Yel, 2015). Given this typical use of allocated time, opportunities to learn at school are less frequent than expected for many students. Making time for learning, and ensuring that time is well focused, are issues faced by educators around the world who strive to be productive and improve the efficiency of student learning.
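The allocated-time figures above can be verified with simple arithmetic. The following is a minimal sketch; the per-day averages and the 45-min class length are back-of-the-envelope approximations derived from the chapter’s figures, not official statistics:

```python
# Back-of-the-envelope check of the allocated-time figures cited above.
# All inputs come from the chapter; they are approximations, not official data.

def per_day(total_minutes, days):
    """Average allocated minutes per school day."""
    return total_minutes / days

us_per_day = per_day(63_900, 180)   # ~355 min/day in the United States
au_per_day = per_day(63_000, 200)   # 315 min/day in Australia

# A daily 45-minute US mathematics class over a 180-day school year:
us_math_class = 45 * 180            # 8,100 min/year, the chapter's figure

# Teachers report using roughly 81% of allocated time for instruction
# (Kurz, Elliott, Kettler, & Yel, 2014), so actual instructional time is lower:
us_math_instruction = us_math_class * 0.81   # ~6,561 min/year

print(us_per_day, au_per_day, us_math_class, us_math_instruction)
```

The gap between the 8,100 allocated minutes and the roughly 6,561 instructional minutes illustrates why allocated time alone overstates students’ opportunities to learn.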

Researchers and educational theorists have written about OTL for nearly five decades. Carroll (1963) provided one of the first operational definitions by emphasizing the variable of time allocated to instruction in a school’s schedule. Specifically, he defined OTL as “the amount of time allowed for learning, for example by a school schedule or program” (1989, p. 26). Carroll went on to include OTL as one of five variables in a formula that he used to express a student’s degree of learning (i.e., ratio of the time spent on a task to the total amount of time needed for learning the task). Subsequently, researchers have developed more instructionally sensitive time indices to examine contributions to student achievement. Such indices have been based on the proportion of allocated time dedicated to instruction (i.e., instructional time), the proportion of instructional time during which students were engaged (i.e., engaged time), or the proportion of engaged time during which students experienced a high success rate (i.e., academic learning time) (Borg, 1980).
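Carroll’s ratio can be stated compactly. The following rendering of the relation described above is a sketch of its form only; the functional notation f is a presentational assumption reflecting that Carroll treated degree of learning as a function of this ratio:

```latex
% Carroll's (1963) degree-of-learning relation, as described above:
% learning is expressed as a function of the ratio of time spent to time needed.
\[
  \text{degree of learning} \;=\; f\!\left(
    \frac{\text{time actually spent on the task}}
         {\text{time needed to learn the task}}
  \right)
\]
```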

Researchers also have defined OTL in relation to the content covered during instruction. The focus of this approach was the extent to which the content of instruction overlapped with the content of assessments (i.e., content overlap). The work of Husén (1967) with the International Association for the Evaluation of Educational Achievement exemplified the initial investigation in this line of research, in which teachers rated their coverage of the constructs assessed by test items. Concern about content shifted with the standards-based educational reform in the United States and Australia. Specifically, policy makers shifted the desirable target of instruction from tested content to the broader intended curriculum (i.e., academic content standards), the content of which was sampled by large-scale achievement tests (Klenowski & Wyatt-Smith, 2012; Rowan, Camburn, & Correnti, 2004). Under the US No Child Left Behind Act (NCLB, 2001) and the Every Student Succeeds Act (ESSA, 2015), states have been required to define their subject- and grade-specific intended curricula through a set of rigorous academic standards. Subsequently, stakeholders became more interested in taxonomies that allowed experts to judge the alignment between the content of various curricula, such as a teacher’s enacted curriculum and a state’s intended curriculum. To accomplish this measurement of alignment, Porter (2002) developed the Surveys of Enacted Curriculum (SEC) to quantify alignment between academic content standards and assessments via ratings along a comprehensive list of content topics and cognitive demands (Roach, Niebling, & Kurz, 2008).
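Porter’s (2002) alignment index summarizes each document (e.g., a set of standards, a test) as proportions of content across topic-by-cognitive-demand cells and computes 1 minus half the sum of absolute cell differences. The sketch below assumes that formulation; the cell labels and proportions are illustrative, not the SEC’s actual taxonomy or data:

```python
# A minimal sketch of Porter's (2002) alignment index, as used with SEC-style
# content matrices. Cell labels and proportions below are hypothetical.

def alignment_index(doc_a, doc_b):
    """Alignment index: 1 - (sum of absolute cell differences) / 2.

    doc_a and doc_b map content cells (topic, cognitive demand) to proportions
    that each sum to 1.0; the index ranges from 0 (no overlap in content
    emphasis) to 1 (identical content distributions).
    """
    cells = set(doc_a) | set(doc_b)
    return 1 - sum(abs(doc_a.get(c, 0.0) - doc_b.get(c, 0.0)) for c in cells) / 2

# Hypothetical cell proportions for an intended and an enacted curriculum:
intended = {("fractions", "memorize"): 0.2, ("fractions", "apply"): 0.4,
            ("geometry", "apply"): 0.4}
enacted  = {("fractions", "memorize"): 0.3, ("fractions", "apply"): 0.4,
            ("geometry", "apply"): 0.3}

print(alignment_index(intended, enacted))  # ≈ 0.9 for these illustrative data
```

Because the index depends only on the two distributions, it can compare any pair of curricula (intended vs. enacted, enacted vs. assessed) on the same scale.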

Researchers further considered aspects of instructional quality to operationalize OTL. For example, teachers’ uses of evidence-based instructional practices and instructional resources have become common considerations when characterizing quality. More recently, meta-analytic findings have been used by researchers and practitioners to identify specific instructional practices that contribute to student achievement (Slavin, 2002), including the achievement of specific subgroups such as students with disabilities (e.g., Gersten, Chard, Jayanthi, Baker, Morphy, & Flojo, 2009).

Stevens (1996), according to Kurz (2011), provided the first comprehensive conceptual framework of OTL, bringing together four elements: content coverage, content exposure (i.e., time on task), content emphasis (i.e., emphasis of cognitive processes), and quality of instructional delivery (i.e., emphasis of instructional practices). Despite its lack of direct empirical evidence, Stevens’s framework has guided numerous researchers interested in OTL (e.g., Abedi, Courtney, Leon, Kao, & Azzam, 2006; Herman & Abedi, 2004; Wang, 1998). Most important, Stevens clarified OTL as a teacher effect related to the allocation of adequate instructional time covering a core curriculum via different cognitive demands and instructional practices that can produce student achievement.


Figure 1. Conceptual model of OTL.

Source: From Kurz (2011). Access to what should be taught and will be tested: Students’ opportunity to learn the intended curriculum. In S. Elliott, R. Kettler, P. Beddow, & A. Kurz (Eds.) Handbook of accessible achievement tests for all students: Bridging the gaps between research, practice, and policy. New York: Springer. Reprinted with permission from the author and publisher.

Clearly, for several decades researchers have examined instructional indicators of the enacted curriculum under the larger concept of OTL (Rowan & Correnti, 2009). Kurz (2011) reviewed the respective research literature and identified major lines of OTL research related to the time, content, and quality of classroom instruction. His conceptual synthesis of OTL acknowledged the co-occurrence of all three enacted curriculum dimensions during instruction. That is, teachers use a variety of pedagogical approaches in allocating instructional time and content coverage to the standards that define the intended curriculum. This conceptual model depicts OTL as a matter of degree along three orthogonal axes with distinct zero points (Figure 1).

According to the model advanced by Kurz (2011), OTL is a matter of degree related to the temporal, curricular, and qualitative aspects of a teacher’s instruction. In this model, OTL is defined as “the degree to which a teacher dedicates instructional time and content coverage to the intended curriculum objectives emphasizing high-order cognitive processes, evidence-based instructional practices, and alternative grouping formats” (Kurz & Elliott, 2011). Thus, to provide OTL, a teacher must dedicate instructional time to covering the content prescribed by the intended curriculum using pedagogical approaches that address a range of cognitive processes, instructional practices, and grouping formats.

Summary of Research on Opportunity to Learn

As noted in our account of the evolution of the concept of OTL, researchers have examined a number of OTL indices predictive of student achievement that can be grouped into the broad strands of time, content, and quality of classroom instruction (e.g., Borg, 1980; Brophy & Good, 1986; Porter, 2002). Based on a review of these three OTL research strands, Kurz (2011) provided a conceptual synthesis of OTL in the context of a curriculum and assessment framework. Accordingly, OTL can be operationalized along three key dimensions of the enacted curriculum—time, content, and quality—all of which co-occur during instruction. Teachers distribute OTL for what they want students to know and be able to do by allocating instructional time and content coverage to intended objectives using a variety of instructional approaches. Key research supporting the relevance of the variables of time, content, and quality is reviewed next.

Instructional time.

To provide students with opportunities to learn the intended curriculum (i.e., academic content standards), teachers must invest instructional time addressing the prescribed knowledge and skills. Following Carroll (1963), researchers began to examine this OTL conceptualization empirically, using general indicators such as allocated time (i.e., time scheduled for instruction), or more instructionally sensitive and student-oriented indicators such as instructional time (i.e., proportion of allocated time used for instruction), engaged time (i.e., proportion of instructional time during which students are engaged in learning), and academic learning time (i.e., proportion of engaged time during which students are experiencing a high success rate of learning). Researchers have found time-based OTL indices to be related moderately to student achievement after controlling for student ability and socioeconomic status. For example, Frederick and Walberg (1980) conducted one of the first reviews of studies that examined the relation between time and student learning outcomes. Overall, they found persistent, if moderate, correlations across various time and outcome measures ranging from .13 to .71. They noted that refining the measure of time to reflect actual time devoted to the outcome being measured increased the association.

Scheerens and Bosker (1997), who used multilevel modeling, also examined the effect of allocated time on student achievement in 21 studies with a total of 56 replications. They reported an average Cohen’s d effect size for time of .39, indicating again that allocated time has a moderate overall effect on learning.

Content covered during instruction.

Teachers must also cover the content implicated in the academic standards to provide students with opportunities to learn the content most assessments actually measure. Following Husén’s (1967) study on international testing in more than 10 major industrialized countries, researchers interested in content-based conceptualizations of OTL created indices based on the content overlap between enacted and assessed curricula. Thus, as noted by Anderson (1986), the “opportunity to learn from the Husén perspective is best understood as the match between what is taught and what is tested” (p. 3682). In Husén’s data, mean correlations between teachers’ content coverage and student achievement in mathematics across 10 countries ranged between .11 and .20. Comber and Keeves (1973) obtained similar results, with a mean correlation of .12, in their study of science education. Both of these international studies relied on teacher recall of test content-based OTL for individual students across multiple years. Such an approach to documenting content overlap unfortunately has some clear limitations, notably in relation to tested-content matching and the reliability of teacher recall.

A more refined line of research on content overlap focused on students’ opportunity to learn important content objectives rather than tested content (e.g., Jenkins & Pany, 1978; Porter, Schmidt, Floden, & Freeman, 1978). For example, Porter et al. developed a basic taxonomy for classifying content included in mathematics curricula and measured whether different standardized mathematics achievement tests covered the objectives specified in the taxonomy. Porter continued his research on measuring the content of the enacted curriculum during the standards-based reforms in the United States and developed the Surveys of Enacted Curriculum, a survey-based set of measures to examine the content of instruction along the dimensions of topics and categories of cognitive demand (Gamoran, Porter, Smithson, & White, 1997; Porter, 2002; Porter & Smithson, 2001). The findings of Gamoran et al. indicated that alignment between instruction and a test of student achievement in high school mathematics accounted for 25% of the variance among teachers.

Rowan and colleagues also examined content-based OTL, but they used multiple teacher logs across the school year (e.g., Rowan et al., 2004; Rowan & Correnti, 2009) rather than retrospective reports like Porter’s SEC. Rowan et al. created the Study of Instructional Improvement (SII) and examined students’ opportunity to learn and engage in important literacy skills and activities in grades 1 through 5. They found that (a) content and difficulty of skills varied widely from day to day in a given teacher’s classroom (even among teachers from the same grade level at the same school) and (b) students in the same classroom received little instructional differentiation in terms of amount or skill level of reading comprehension or writing instruction. In addition, reading/language arts instruction was of low cognitive demand across all grade levels, with little variation in instructional practices based on students’ prior achievement or learning histories (Rowan & Correnti, 2009).

Finally, Scheerens and Bosker (1997), having conducted a meta-analytic review of 19 studies on teachers’ content coverage of tested content, reported an average Cohen’s d effect size of .18, indicating only a modest effect of content coverage on content tested. However, the desirable target of classroom instruction in current accountability frameworks is the broader intended curriculum, which subsumes the content of the assessed curriculum. As such, the more appropriate content-based indicator of OTL is a teacher’s content coverage of the general curriculum standards rather than the assessed curriculum.

Quality of instruction.

Knowing how much time is spent on instruction and what content of the intended curriculum is covered is important. However, these two variables alone do not indicate how instructional time and content affect learning. Information also is needed on the quality of instruction. As we have seen, models of school learning have featured quality of instruction alongside quantity of instruction (e.g., Carroll, 1963; Walberg, 1980). The operationalization of instructional quality has focused mainly on evidence-based instructional practices such as direct instruction, guided feedback, student think-alouds, and instructional grouping formats. Depending on the instructional practice and student population, reported effect sizes for these indices range between .43 and 1.17, indicating a moderate effect to a very large effect (Kurz, 2011).

Walberg (1986) reviewed 91 studies that examined the effect of quality indicators on student achievement, such as frequency of praise statements, corrective feedback, classroom climate, and instructional groupings. Walberg reported the highest mean effect sizes (ES) for (positive) reinforcement and corrective feedback with 1.17 and .97, respectively. Brophy and Good’s (1986) seminal review of the process-product literature identified aspects of giving information (e.g., pacing), questioning students (e.g., cognitive level), and providing feedback as important instructional quality variables with consistent empirical support.

Research on OTL related to the quality of instruction also has addressed teachers’ use of instructional resources such as textbooks, calculators, and computers (Yarbro, McKnight, Elliott, & Kurz, in press) and cognitive expectations for student learning (e.g., Herman, Klein, & Abedi, 2000; Porter, 2002). However, the research on instructional resources has been largely descriptive regarding use of materials and has not yet yielded evidence to support its inclusion as a meaningful element of OTL. With respect to cognitive expectations, several classification approaches, such as Bloom’s taxonomy of educational objectives (Bloom, 1976), emphasize a range of cognitive processes from lower-order to higher-order. The research of Elliott, Kurz, Tindal, and Yel (2015) has consistently indicated that cognitive processes emphasized via instructional tasks do meaningfully contribute to the construct of OTL and account for 4% to 6% of variance in students’ achievement test scores.

Finally, instructional group size and grouping format have been variables of interest for many researchers and teachers. Elbaum, Vaughn, Hughes, Moody, and Schumm (2000) conducted a meta-analytic review of instructional grouping formats related to reading outcomes for students with disabilities. These investigators found that, in comparison to whole-class instruction, alternative grouping formats such as pairs, small groups, and multiple grouping formats (e.g., pairing and small groups) resulted in an average effect size of .43, indicating a moderately stronger effect than whole-class instruction alone.

In summary, each of the instructional quality indicators of instructional practices, cognitive expectations, and grouping formats has a reasonable evidence base to support confidence about its influence on student achievement. Thus, it seems a robust model of OTL that includes an instructional quality dimension should account for these three aspects of quality.

Measuring Opportunity to Learn

Measurement of any construct is intimately tied to its definition. Thus, at this time the dominant conceptualization of OTL is that it is a construct involving three primary dimensions embodied in the actions of classroom teachers—instructional time, instructional content covered, and instructional quality. With this understanding, an examination of the measurement of OTL is in order.

A number of reasons exist for measuring OTL even though there are no federal or state requirements to do so, either in the United States or Australia, or in any other country to our knowledge. A legal rationale for the importance of examining OTL for students with and without disabilities has been made in US federal legislation (i.e., the Individuals with Disabilities Education Act, 1990, 1997; the Individuals with Disabilities Education Improvement Act, 2004; the No Child Left Behind Act, 2001; and the Every Student Succeeds Act, 2015) and in Australia with the Australian Education Act, 2013 and the Disability Discrimination Act, 1992, revised to include the Disability Standards for Education Policy (2005). An empirical rationale for the measurement of OTL is derived from research in special education, where there is evidence concerning the limited use of allocated time for instruction (Vannest & Hagan-Burke, 2010), low exposure to standards-aligned content (Kurz, Elliott, Wehby, & Smithson, 2010), inconsistent use of evidence-based instructional practices (Burns & Ysseldyke, 2009), and poor instructional quality (Vaughn, Levy, Coleman, & Bos, 2002). The measurement of OTL is also critical to the validity of inferences made from test scores and to the premise underlying standards-based reform. This point was accentuated by the publication of the 2014 Standards for Educational and Psychological Testing, in which opportunity to learn is highlighted as a fundamental element of fairness and central to the validity of inferences made about achievement test scores of all students. Measuring OTL also is fundamental to providing teachers valuable feedback about their instruction and for monitoring changes in instruction. Finally, the pragmatic rationale also applies that what gets measured gets attention.

Opportunity to learn has been difficult to measure, and consequently it seems to have received little day-to-day attention in schools across the globe or from researchers interested in effective instruction and learning. Recent research has changed this situation, resulting in the development of three viable tools to advance the assessment of OTL, with the potential to improve access to an intended curriculum and ultimately to positively affect students’ achievement. Specifically, these tools are the Surveys of Enacted Curriculum (SEC; Porter, 2002), the Study of Instructional Improvement (SII; Rowan et al., 2004), and My Instructional Learning Opportunity Guidance System (MyiLOGS; Kurz & Elliott, 2012).

The SEC is a set of annual teacher surveys designed to provide information on the content alignment between intended, enacted, and assessed curricula. Its method relies on content translations by teachers or curriculum experts who code a particular curriculum into a content framework that features a list of over 200 topics (K–12). The SEC does not measure instructional quality. Psychometric evidence to support the use of the SEC as a measure of two dimensions of OTL—content and time—is provided by Polikoff (2010) and Porter (2002). The SEC has been used in more than 20 studies analyzing instructional curriculum and its alignment with large-scale assessments (e.g., Porter, McMaken, Hwang, & Yang, 2011) and is available for use at www.ccsso.org/Resources/Programs/Surveys_of_Enacted_Curriculum_(SEC).html.

The SII logs (Rowan et al., 2004) are paper-and-pencil self-report measures on the time, content, and quality of instruction for a particular student and day rather than for a whole class. The logs are in two content areas at the elementary level only and have been used by thousands of teachers. Rowan and Correnti (2009) reported substantial evidence for the measures’ interrater reliability and predictive validity. The measure has been used in over a dozen studies to document the nature of classroom instruction (e.g., Cohen, Raudenbush, & Ball, 2003; Glazer, 2008) and is available at www.sii.soe.umich.edu/about/pubs.html.

MyiLOGS is an online tool designed as a daily teacher self-report measure for the primary purpose of providing comprehensive feedback about instructional variables known to affect student learning. The MyiLOGS technology can be used to document all three instructional dimensions of OTL via indicators of instructional time, content coverage, and instructional quality, the latter operationalized by cognitive process expectations, instructional practices, and grouping formats. Kurz et al. (2014) have published reliability and validity evidence for MyiLOGS. An observation corollary to MyiLOGS called MyiOBS (Kurz & Elliott, 2015) was recently developed and offers researchers an additional tool and a means to establish the inter-rater reliability of teachers’ self-reports regarding each dimension of OTL. The measure is available at www.myilogs.com.

Future Directions

This examination of OTL research and the importance of the construct to educational research suggest at least four areas that would benefit from more research. Each of these areas is now briefly described.

First, although three theoretically strong measurement tools exist, much more needs to be done with each of them to establish both their technical soundness and their usability. Specifically, the SEC, SII, and MyiLOGS all have limitations in terms of efficient usability by teachers. Although both SEC and MyiLOGS are online measures, users must invest several hours in training in the use of these measures and interpretation of their results. In addition, these measures are all self-reports about teachers’ own instructional actions. More research regarding the reliability and validity of teachers’ self-reports under varying conditions and assessment purposes is warranted.

Second, once there is a high degree of confidence in one or more measures of OTL, a line of research will open up to examine relationships between content taught and content tested at a fine-grained level. Muthen et al. (1995), in their research on OTL effects on achievement, advanced the concept of OTL-sensitive and OTL-insensitive test items. If one’s assessment purpose is to document what students have learned from specific courses, then items used to test this learning must be OTL sensitive. This type of research, and its application in practice, has the potential to improve both instruction and testing practices.

Third, researchers concerned with understanding and reducing students’ achievement gaps are encouraged to examine the role that differences in students’ OTL play in gaps in their achievement. If end-of-year tests are designed to align highly with academic content that represents the intended curriculum, it is logical, yet untested, that differences in classroom instruction and learning opportunities play a significant role in test performance gaps for students from different racial groups or learning groups (e.g., English language learners, students with disabilities). Perhaps the achievement gaps actually start with, or are exacerbated by, gaps in OTL. More research in this area clearly is needed.

Finally, although effective teachers do many things, the variables central to OTL that we have examined would seem highly related to many definitions of effective teaching and to the various systems used to evaluate teachers. To our knowledge, none of the commonly used teacher evaluation systems features the construct of OTL. Perhaps this is because commercially available OTL measures are absent, or because the existing research measures are typically self-report tools, a source of evidence not seen as appropriate when the purpose of the assessment concerns a teacher’s own performance. However, at least for research purposes, the relationship between a teacher’s efforts to increase OTL for students and his/her teaching evaluation needs to be better understood.

Conclusions

In this report, we defined OTL as a classroom-focused teacher effect and asserted that it is a central instructional construct worthy of educational researchers’ and teachers’ attention, but that until recently it has been difficult to measure. With the advancement of technology tools like the SEC, SII, and MyiLOGS, teachers and researchers are now able to assess and examine key instructional variables of time, content, and quality. These variables are critical because they are malleable and predictive of student achievement, and they provide evidence-based answers to questions that educators often see as fundamental to instruction. We have asserted also that OTL is, by definition and evidence, a fundamental part of effective teaching and fair testing regardless of racial, ethnic, and socioeconomic diversity. These assertions, driven by a strong theory and years of study, still require more research to advance OTL measurement and application to instruction.

References

Abedi, J., Courtney, M., Leon, S., Kao, J., & Azzam, T. (2006). English language learners and math achievement: A study of opportunity to learn and language accommodation. National Center for Research on Evaluation, Standards, and Student Testing (CRESST). Los Angeles, California: University of California Los Angeles.Find this resource:

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014). Standards for educational and psychological testing. Washington, DC: Authors.Find this resource:

Anderson, L. W. (1986). Opportunity to learn. In T. Husén & T. Postlethwaite (Eds.), International encyclopedia of education: Research and studies. Oxford, UK: Pergamon.Find this resource:

Australian Government. (2005). Disability standards for education. Retrieved 11March, 2016 from https://www.education.gov.au/disability-standards-education

Borg, W. R. (1980). Time and school learning. In C. Denham & A. Lieberman (Eds.), Time to learn (pp. 33–72). Washington, DC: National Institute of Education.Find this resource:

Brophy, J., & Good, T. L. (1986). Teacher behavior and student achievement. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 328–375). New York, NY: Macmillian.Find this resource:

Burns, M. K., & Ysseldyke, J. E. (2009). Reported prevalence of evidence-based instructional practices in special education. Journal of Special Education, 43(1), 3–11.Find this resource:

Carroll, J. B. (1989). The Carroll model: A 25-year retrospective and prospective view. Educational Researcher, 18(1), 26–31.Find this resource:

Carroll, J. B. (1963). A model of school learning. Teachers College Record, 64(8), 723–733.Find this resource:

Cohen, D. K., Raudenbush, S. W., & Ball, D. L. (2003). Resources, instruction, and research. Educational Evaluation and Policy Analysis, 25(2), 1–24.

Comber, L. C., & Keeves, J. P. (1973). Science education in nineteen countries. New York, NY: Halsted Press.

Commonwealth of Australia. (2014). Australian Education Act 2013. Retrieved September 3, 2014, from http://www.education.gov.au/australian-education-act-2013

Commonwealth Government. (1992). Disability Discrimination Act 1992. Canberra, Australia: Author.

Elbaum, B., Vaughn, S., Hughes, M. T., Moody, S. W., & Schumm, J. S. (2000). How reading outcomes for students with learning disabilities are related to instructional grouping formats: A meta-analytic review. In R. Gersten, E. P. Schiller, & S. Vaughn (Eds.), Contemporary special education research: Syntheses of the knowledge base on critical instructional issues (pp. 105–135). Mahwah, NJ: Erlbaum.

Elliott, S. N. (2014). Measuring opportunity to learn and achievement growth: Key research issues with implications for the effective education of all students. Remedial and Special Education, 36(1), 58–64. doi: 10.1177/0741932514551282

Elliott, S. N., Kurz, A., Tindal, G., & Yel, N. (2015, April). Predicting end-of-year mathematics achievement of students with and without disabilities: The role of opportunity to learn and CBM measures. Paper presented at the National Council on Measurement in Education, Chicago.

Every Student Succeeds Act of 2015, S. 1177, 114th Cong. December 10, 2015.

Frederick, W. C., & Walberg, H. J. (1980). Learning as a function of time. Journal of Educational Research, 73, 183–194.

Gamoran, A., Porter, A. C., Smithson, J., & White, P. A. (1997). Upgrading high school mathematics instruction: Improving learning opportunities for low-achieving, low-income youth. Educational Evaluation and Policy Analysis, 19(4), 325–338.

Gersten, R., Chard, D. J., Jayanthi, M., Baker, S. K., Morphy, P., & Flojo, J. (2009). Mathematics instruction for students with learning disabilities: A meta-analysis of instructional components. Review of Educational Research, 79(3), 1202–1242.

Glazer, J. L. (2008). External efforts at district-level reform: The case of the National Alliance for Restructuring Education. Journal of Educational Change, 10(4), 295–314.

Herman, J. L., & Abedi, J. (2004). Issues in assessing English language learners’ opportunity to learn mathematics (CSE Report No. 633). Los Angeles, CA: Center for the Study of Evaluation, National Center for Research on Evaluation, Standards, and Student Testing.

Herman, J. L., Klein, D. C., & Abedi, J. (2000). Assessing students’ opportunity to learn: Teacher and student perspectives. Educational Measurement: Issues and Practice, 19(4), 16–24.

Husén, T. (1967). International study of achievement in mathematics: A comparison of twelve countries. New York, NY: Wiley.

Individuals with Disabilities Education Improvement Act of 2004 (amending 20 U.S.C. §§ 1400 et seq.).

Jenkins, J. R., & Pany, D. (1978). Curriculum biases in reading achievement tests. Journal of Reading Behavior, 10(4), 345–357.

Klenowski, V., & Wyatt-Smith, C. (2012). The impact of high stakes testing: The Australian story. Assessment in Education: Principles, Policy & Practice, 19(1), 65–79.

Kurz, A. (2011). Access to what should be taught and will be tested: Students’ opportunity to learn the intended curriculum. In S. N. Elliott, R. J. Kettler, P. A. Beddow, & A. Kurz (Eds.), The handbook of accessible achievement tests for all students: Bridging the gaps between research, practice, and policy. New York, NY: Springer.

Kurz, A., & Elliott, S. N. (2011). Overcoming barriers to access for students with disabilities: Testing accommodations and beyond. In M. Russell (Ed.), Assessing students in the margins: Challenges, strategies, and techniques (pp. 31–58). Charlotte, NC: Information Age Publishing.

Kurz, A., & Elliott, S. N. (2012). MyiLOGS: My instructional learning opportunities guidance system. Tempe, AZ: Arizona State University.

Kurz, A., & Elliott, S. N. (2015). MyiOBS: My instructional observation system. Tempe, AZ: Arizona State University.

Kurz, A., Elliott, S. N., Kettler, R. J., & Yel, N. (2014). Assessing students’ opportunity to learn the intended curriculum using an online teacher log: Initial validity evidence. Educational Assessment, 19(1), 159–184. doi: 10.1080/10627197.2014.934606

Kurz, A., Elliott, S. N., Wehby, J. H., & Smithson, J. L. (2010). Alignment of the intended, planned, and enacted curriculum in general and special education and its relation to student achievement. Journal of Special Education, 44(3), 1–20.

Muthen, B., Huang, L., Jo, B., Khoo, S., Goff, G. N., Novak, J. R., & Shih, J. C. (1995). Opportunity-to-learn effects on achievement: Analytical aspects. Educational Evaluation and Policy Analysis, 17(3), 371–403.

No Child Left Behind Act of 2001, 20 U.S.C. §§ 6301 et seq. (2001).

Polikoff, M. S. (2010). Instructional sensitivity as a psychometric property of assessments. Educational Measurement: Issues and Practice, 29(4), 3–14.

Porter, A. C. (2002). Measuring the content of instruction: Uses in research and practice. Educational Researcher, 31(7), 3–14.

Porter, A. C., McMaken, J., Hwang, J., & Yang, R. (2011). Common Core standards: The new U.S. intended curriculum. Educational Researcher, 40(3), 103–116.

Porter, A. C., Schmidt, W. H., Floden, R. E., & Freeman, D. J. (1978). Impact on what? The importance of content covered (Research Series No. 2). East Lansing, MI: Michigan State University, Institute for Research on Teaching.

Porter, A. C., & Smithson, J. L. (2001). Are content standards being implemented in the classroom? A methodology and some tentative answers. In S. Fuhrman (Ed.), From the Capitol to the classroom: Standards-based reform in the states. One Hundredth Yearbook of the National Society for the Study of Education (pp. 60–80). Chicago, IL: University of Chicago Press.

Roach, A. T., Niebling, B. C., & Kurz, A. (2008). Evaluating the alignment among curriculum, instruction, and assessments: Implications and applications for research and practice. Psychology in the Schools, 45(2), 158–176.

Rowan, B., Camburn, E., & Correnti, R. (2004). Using teacher logs to measure the enacted curriculum: A study of literacy teaching in third-grade classrooms. Elementary School Journal, 105(1), 75–101.

Rowan, B., & Correnti, R. (2009). Studying reading instruction with teacher logs: Lessons from the Study of Instructional Improvement. Educational Researcher, 38(2), 120–131.

Scheerens, J., & Bosker, R. (1997). The foundations of educational effectiveness. New York, NY: Pergamon.

Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15–21.

Stevens, F. I. (1996, April). The need to expand the opportunity to learn conceptual framework: Should students, parents, and school resources be included? Paper presented at the annual meeting of the American Educational Research Association, New York, NY.

Vannest, K. J., & Hagan-Burke, S. (2010). Teacher time use in special education. Remedial and Special Education, 31(2), 126–142.

Vaughn, S., Levy, S., Coleman, M., & Bos, C. S. (2002). Reading instruction for students with LD and EBD: A synthesis of observation studies. Journal of Special Education, 36(1), 2–13.

Walberg, H. J. (1980). A psychological theory of educational productivity. In F. H. Farley & N. Gordon (Eds.), Psychology and education (pp. 81–110). Berkeley, CA: McCutchan.

Walberg, H. J. (1986). Syntheses of research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 214–229). New York, NY: Macmillan.

Wang, J. (1998). Opportunity to learn: The impacts and policy implications. Educational Evaluation and Policy Analysis, 20(3), 137–156.

Yarbro, J., McKnight, K., Elliott, S. N., & Kurz, A. (in press). The use of digital instructional strategies and students’ opportunity to learn. Journal of Research on Technology in Education.