Presentation at the Society for Research in Educational Effectiveness (SREE) Explores Methods for Studying Achievement Gaps
In many of Empirical Education’s experimental evaluations for school districts, the question of local concern is an achievement gap identified between two student groups. The analysis of these experiments also often finds significant differences between the subgroups in how effective the intervention was (that is, whether it increased or decreased the gap) while finding no significant overall difference. In his 2005 book chapter, Howard Bloom suggested why there may be more statistical power to detect subgroup differences than to detect the average effect. The exploration presented at SREE, held in Washington March 1-3, examined the statistical characteristics of eight experiments conducted over the last three years to find out whether a critical assumption of Bloom’s approach held: that the average performance gap does not vary across the units that are randomized. The work, led by Andrew P. Jaciw, Empirical Education’s Director of Experimental Design and Analysis, found that the assumption held. This finding is important because it suggests that local experiments focusing on achievement gaps may be less expensive than experiments addressing only the overall average effect of an intervention. (Click here for a copy of the poster and handout.)
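The intuition behind Bloom’s argument can be sketched with a simple two-level model for a cluster-randomized trial (the notation and proportionality expressions below are an illustrative simplification, not the exact formulas from the poster):

\[
Y_{ij} = \beta_0 + \beta_1 T_j + \beta_2 S_{ij} + \beta_3 (T_j \times S_{ij}) + u_j + e_{ij},
\]

where $T_j$ indicates whether cluster $j$ (for example, a school) was randomized to treatment, $S_{ij}$ indicates membership in the focal subgroup, $u_j \sim N(0,\tau^2)$ is a random cluster effect, and $e_{ij} \sim N(0,\sigma^2)$ is the student-level residual. The overall effect $\hat{\beta}_1$ is a between-cluster contrast, so its precision is limited by the between-cluster variance:

\[
\operatorname{Var}(\hat{\beta}_1) \propto \tau^2 + \frac{\sigma^2}{n},
\]

with $n$ students per cluster. The subgroup difference in effects, $\hat{\beta}_3$, is a within-cluster contrast; if the gap does not vary across clusters, its variance contains no $\tau^2$ term:

\[
\operatorname{Var}(\hat{\beta}_3) \propto \frac{\sigma^2}{n\,p(1-p)},
\]

where $p$ is the subgroup’s share of students within each cluster. When $\tau^2$ is large relative to $\sigma^2/n$, the subgroup contrast can be estimated more precisely than the average effect, which is why checking the constant-gap assumption matters for planning less expensive experiments.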
Bloom, H. S. (2005). Randomizing groups to evaluate place-based programs. In H. S. Bloom (Ed.), Learning More From Social Experiments. New York, NY: Russell Sage Foundation.