ERIC Number: ED567217
Record Type: Non-Journal
Publication Date: 2016
Reference Count: 18
Developing a Theory of Treatment Effect Heterogeneity through Better Design: Where Do Behavioral Science Interventions Work Best?
Tipton, Elizabeth; Yeager, David; Iachan, Ronaldo
Society for Research on Educational Effectiveness
Questions regarding the generalizability of results from educational experiments have been at the forefront of methods development over the past five years. This work has focused on methods for estimating the effect of an intervention in a well-defined inference population (e.g., Tipton, 2013; O'Muircheartaigh and Hedges, 2014); methods for assessing similarity between the students and schools in a study and those in an inference population (e.g., Stuart, Cole, Bradshaw, and Leaf, 2011; Tipton, 2014a); and methods for improved site selection with generalization in mind (e.g., Tipton et al., 2014; Tipton, 2014b). To date, this work has been developed in the context of the large-scale education experiments typically funded by IES, in which schools are randomly assigned to receive a yearlong curricular intervention. Given this context, work on generalization has focused largely on estimation of the average "treatment effect" in a population, particularly under the assumption that random selection of schools into the study is infeasible (see Olsen, Orr, Bell, and Stuart, 2013). This paper argues that behavioral science interventions differ in two important ways from these standard large-scale education experiments, and that these differences provide an opportunity to develop new methods for generalization. First, behavioral science interventions are more often randomly assigned to students, not to intact schools; this results in the more powerful multi-site (i.e., random-block) design that, in addition to the average treatment impact, also allows estimation of the distribution of treatment impacts across schools. Second, and perhaps of even greater importance, because these behavioral science interventions do not focus on curricular changes and do not require large and lasting changes to school routines, it is easier to recruit schools and students into the studies.
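The multi-site design point above can be illustrated with a minimal simulation sketch. This is not the authors' analysis; all numbers (40 schools, 100 students per school, an assumed mean effect of 0.30 with a between-school standard deviation of 0.20) are hypothetical. Randomizing students within each school yields a per-school effect estimate, so one can recover both the average treatment impact and, via a simple method-of-moments correction, the between-school variance of impacts:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical multi-site trial: students randomized within each school.
n_schools, n_students = 40, 100
true_mean_effect, true_effect_sd = 0.30, 0.20  # assumed, for illustration only

# Each school has its own treatment effect, drawn from a population distribution.
school_effects = rng.normal(true_mean_effect, true_effect_sd, n_schools)

per_school_est = np.empty(n_schools)   # estimated effect in each school
per_school_var = np.empty(n_schools)   # sampling variance of each estimate
for j in range(n_schools):
    # Within-school randomization: half treated, half control.
    treat = rng.permutation(np.repeat([0, 1], n_students // 2))
    y = rng.normal(0.0, 1.0, n_students) + treat * school_effects[j]
    y_t, y_c = y[treat == 1], y[treat == 0]
    per_school_est[j] = y_t.mean() - y_c.mean()
    per_school_var[j] = y_t.var(ddof=1) / len(y_t) + y_c.var(ddof=1) / len(y_c)

# Average impact across schools.
avg_effect = per_school_est.mean()
# Between-school variance: observed variance of the estimates minus the
# average within-school sampling variance (truncated at zero).
between_var = max(per_school_est.var(ddof=1) - per_school_var.mean(), 0.0)

print(f"average treatment effect: {avg_effect:.3f}")
print(f"between-school effect variance: {between_var:.3f}")
```

A school-level randomization (the standard IES design) would yield only one effect estimate per site pair at best; the within-school design above is what makes `between_var`, the heterogeneity parameter, estimable at all.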
The paper argues further that these differences allow researchers to shift from answering only questions about the average effect to answering questions about variability in treatment impacts as well. Behavioral science interventions are becoming more common in education and in policy more generally. With this comes the opportunity to develop generalizable theories of treatment effect heterogeneity--something that is difficult to do with whole-school reform. By doing so, researchers will be better situated to help policy makers and school officials understand where these brief interventions hold the most promise, and where further research is needed. Tables and figures are appended.
Descriptors: Behavioral Sciences, Behavioral Science Research, Intervention, Educational Experiments, Generalization, Evaluation Methods, Randomized Controlled Trials, Probability, Sampling, Public Schools, High School Students, Grade 9, Research Design, Hierarchical Linear Modeling, Generalizability Theory, Surveys
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; Fax: 202-640-4401; e-mail: email@example.com; Web site: http://www.sree.org
Publication Type: Reports - Research
Education Level: High Schools; Secondary Education; Grade 9; Junior High Schools; Middle Schools
Authoring Institution: Society for Research on Educational Effectiveness (SREE)