Average Effect Sizes in Developer-Commissioned and Independent Evaluations
Authors: Rebecca Wolf, Jennifer Morrison, Amanda Inns, Robert Slavin, Kelsey Risman
Institution: Center for Research and Reform in Education, Johns Hopkins University, Baltimore, Maryland, USA (contact: betsywolf@jhu.edu)
Abstract:

Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties. Using study data from the What Works Clearinghouse, we find evidence of a “developer effect,” where program evaluations carried out or commissioned by developers produced average effect sizes that were substantially larger than those identified in evaluations conducted by independent parties. We explore potential reasons for the existence of a “developer effect” and provide evidence that interventions evaluated by developers were not simply more effective than those evaluated by independent parties. We conclude by discussing plausible explanations for this phenomenon as well as providing suggestions for researchers to mitigate potential bias in evaluations moving forward.
Keywords: Program evaluation; What Works Clearinghouse; meta-analysis; preregistration; publication bias
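The core comparison in the abstract is between average effect sizes for developer-commissioned versus independent evaluations. As a purely illustrative sketch of that kind of group comparison, the Python snippet below computes inverse-variance-weighted (fixed-effect) mean effect sizes by evaluator type from a study-level table. The data values, column names, and weighting scheme are all assumptions for illustration; they are not taken from the paper, and the authors' actual analytic model is not described in this abstract.

```python
import pandas as pd

# Hypothetical study-level data: one row per evaluation, with an effect
# size (e.g., Hedges' g), its standard error, and a flag indicating
# whether the evaluation was developer-commissioned.
studies = pd.DataFrame({
    "study":          ["A", "B", "C", "D", "E", "F"],
    "effect_size":    [0.45, 0.38, 0.52, 0.12, 0.08, 0.15],
    "standard_error": [0.10, 0.12, 0.09, 0.08, 0.11, 0.10],
    "developer":      [True, True, True, False, False, False],
})

# Fixed-effect meta-analytic weights: inverse of the sampling variance.
studies["weight"] = 1.0 / studies["standard_error"] ** 2

# Weighted mean effect size within each evaluator group.
for is_dev, group in studies.groupby("developer"):
    mean_es = (group["effect_size"] * group["weight"]).sum() / group["weight"].sum()
    label = "developer-commissioned" if is_dev else "independent"
    print(f"{label}: weighted mean ES = {mean_es:.2f} (k = {len(group)})")
```

A gap between the two weighted means, as in this toy example, is the kind of "developer effect" the paper investigates; the paper itself works from What Works Clearinghouse study data rather than constructed values.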