Using Experiments in Scale-Up Research

Given the nested contexts within which student learning occurs, efforts to explain the success or failure of interventions solely with reference to individual-level characteristics may be misdirected. Advanced statistical techniques allow researchers to develop empirical models that simultaneously capture both individual- and school-level influences on student achievement. A key component of scale-up research is therefore developing designs that can be executed in field settings and generalized to inform educational policy. DRDC research in this area focused on: (1) the design of experiments (or quasi-experiments) to study scale-up; (2) procedures for obtaining the sample sizes required to reach a minimum level of statistical power; (3) how design choices affect sample choices; and (4) computing statistical power for multi-level educational designs (a sketch of such a computation follows the publication list below). Center investigators have made numerous presentations on these issues, including at a symposium organized for the 2006 Annual Meeting of AAAS titled “Implementation of Clinical Trials and Experimental Research in Science Education.” Related DRDC papers and publications include:

  • Nye, B., Konstantopoulos, S., & Hedges, L.V. (2004). How large are teacher effects? Educational Evaluation and Policy Analysis, 26: 237-257.
  • Petrin, R.A. (2005). Item nonresponse and multiple imputation for multilevel models: An overview of issues and solutions. Data Research and Development Center Working Paper Series. Chicago, Illinois: Data Research and Development Center.
  • Petrin, R.A. (2005). Item nonresponse and multiple imputation for multilevel models: Results from a simulation study. Data Research and Development Center Working Paper Series. Chicago, Illinois: Data Research and Development Center.
  • Schneider, B., Wyse, A., & Keesler, V. (2007). Is small really better: Testing some assumptions about school size. In T. Loveless & F. Hess (Eds.), Brookings Papers on Education Policy: 2006/2007. Washington, DC: Brookings Institution Press.
  • Hedges, L.V. (2007). Correcting a significance test for clustering. Journal of Educational and Behavioral Statistics, 32(2): 151-179.
  • Hedges, L.V. (2007). Effect sizes in cluster randomized designs. Journal of Educational and Behavioral Statistics, 32(4): 341-370.
  • Konstantopoulos, S., & Hedges, L.V. (2008). How large an effect can we expect from school reforms? Teachers College Record, 110: 1613-1640.
  • Hedges, L.V. (in press). Effect sizes in three-level designs. Journal of Educational and Behavioral Statistics.
  • Hedges, L.V. (in press). What are effect sizes and why do we need them? Developmental Psychology Perspectives.
  • Hedges, L.V. (in press). Effect sizes in studies with nested designs. In H. Cooper, L.V. Hedges, & J. Valentine (Eds.), The Handbook of Research Synthesis (2nd ed.).
  • Konstantopoulos, S. (in press). Computing power of tests for the variability of treatment effects in designs with two levels of nesting. Multivariate Behavioral Research.
  • Konstantopoulos, S. (in press). The power of the test for treatment effects in three-level cluster randomized designs. Journal of Research on Educational Effectiveness.
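To make points (2) and (4) above concrete, the following is a minimal sketch, in Python with SciPy, of the standard power calculation for a balanced two-arm, two-level cluster-randomized design in which treatment is assigned at the school level. The design-effect formula and noncentral-t approach follow the general literature on multi-level power analysis rather than any specific DRDC procedure; the function names and example parameter values are illustrative assumptions, not the Center's own software.

import math
from scipy import stats

def crt_power(delta, J, n, rho, alpha=0.05):
    """Two-sided power of a balanced two-level cluster-randomized trial.

    delta: standardized mean difference (effect size)
    J:     clusters (e.g., schools) per treatment arm
    n:     students per cluster
    rho:   intraclass correlation
    """
    deff = 1 + (n - 1) * rho                      # design effect from clustering
    ncp = delta * math.sqrt(J * n / (2 * deff))   # noncentrality parameter
    df = 2 * (J - 1)                              # cluster-level degrees of freedom
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # probability that the noncentral t statistic exceeds the critical value
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

def clusters_needed(delta, n, rho, power=0.80, alpha=0.05):
    """Smallest number of clusters per arm achieving the target power."""
    J = 2
    while crt_power(delta, J, n, rho, alpha) < power:
        J += 1
    return J

# Illustrative example: how many schools per arm are needed to detect an
# effect of delta = 0.25 with 25 students per school and rho = 0.20?
print(clusters_needed(0.25, n=25, rho=0.20))

Note the design implication: once (n - 1) * rho dominates the design effect, adding clusters raises power far faster than adding students within clusters, which is one reason design choices constrain sample choices in scale-up studies.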