Commentary

Which Materials?

Evaluating Curricular Effectiveness
Authors
Rachana Bhatt
Georgia State University
Cory Koedel
University of Missouri–Columbia

Most students use textbooks in the classroom every day. It stands to reason that the choice of which curriculum materials to adopt is one of the more important decisions that educational administrators make. Yet only one state in the entire country, Florida, currently collects information about the curriculum materials being used in schools. As surprising as it sounds, despite the mountains of data we are collecting in the arena of public education, we generally do not know which textbooks are being used in which schools.

In “Large-Scale Evaluations of Curricular Effectiveness: The Case of Elementary Mathematics in Indiana” (Educational Evaluation and Policy Analysis, 2012), we use data from Indiana to show the potential value of increased data collection on curriculum adoptions (Indiana has stopped collecting these data since our study, leaving Florida as the only state that still tracks curriculum adoptions). Specifically, we perform a student-outcome-based evaluation of the three most popular curricula in Indiana during the late 1990s and early 2000s. Because data are so scarce, our study is one of only a handful that aim to link student achievement to different curriculum alternatives. It is hard to believe that most education officials charged with making curriculum adoption decisions are doing so without any direct evidence about efficacy, but this is precisely what is occurring in school districts across the nation.

One contribution of our study is simply to provide proof of concept. If state education departments were to begin collecting data on which curriculum materials are being used in which schools, like the data from Indiana that we use in our study, these data could be integrated with the longitudinal data systems already in place in most states, including California. Studies similar to ours could then be replicated elsewhere, providing valuable information to education officials about the effectiveness of their various curriculum alternatives (to this end, our paper and the accompanying technical appendix describe our analytic procedure in detail; this procedure could easily be replicated in other states if data became available).

Our findings from Indiana also highlight the potential value of this line of inquiry. For example, we identify statistically significant and meaningful differences in curriculum performance as measured by school-level test scores on the Indiana state test. The most notable difference arises between two curricula that are similarly priced and share the same general pedagogical approach. We also show that the publisher of the curriculum we found to be the least effective did not lose market share in the following adoption cycle in Indiana. There are several potential explanations for this finding. Perhaps the most compelling is that decision makers have virtually no information about which curricula are most effective.

Our full study is available here (gated): Rachana Bhatt and Cory Koedel. Large-Scale Evaluations of Curricular Effectiveness: The Case of Elementary Mathematics in Indiana. Educational Evaluation and Policy Analysis, Volume 34, Issue 4, pages 391–412 (December 2012).

We also direct interested readers to: Matthew M. Chingos and Grover J. “Russ” Whitehurst. Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core. Report from the Brown Center on Education Policy at Brookings.

Suggested citation: Koedel, C., & Bhatt, R. (2013, April). Which materials? Evaluating curricular effectiveness [Commentary]. Policy Analysis for California Education. https://edpolicyinca.org/newsroom/which-materials