The effectiveness of machine score-ability ratings in predicting automated scoring performance
Authors:Susan Lottridge  Scott Wood  Dan Shaw
Institution:1. Psychometrics, American Institutes for Research, Washington, District of Columbia (slottridge@air.org); 2. ACT, Research Technology, Data Science, & Analytics, Lakewood, Colorado, USA; 3. Writing and Communications, ACT, Iowa City, Iowa, USA
Abstract:
This study sought to provide a framework for evaluating the machine score-ability of items using a new score-ability rating scale, and to determine the extent to which ratings were predictive of observed automated scoring performance. The study listed and described a set of factors thought to influence machine score-ability; these factors informed the score-ability ratings applied by expert raters. Five Reading items, six Science items, and 10 Math items were examined. Experts in automated scoring served as reviewers, providing independent ratings of score-ability before engine calibration. Following the rating, engines were calibrated and their performance was evaluated using common industry criteria. Three criteria were derived from the engine evaluations: the score-ability value on the rating scale implied by the empirical results, the number of industry evaluation criteria met by the engine, and the approval status of the engine based on the number of criteria met. The results indicated that the score-ability ratings were moderately correlated with Science score-ability, weakly correlated with Math score-ability, and not correlated with Reading score-ability.
Keywords: