Writing evaluation: rater and task effects on the reliability of writing scores for children in Grades 3 and 4
Authors: Young-Suk Grace Kim, Christopher Schatschneider, Jeanne Wanzek, Brandy Gatlin, Stephanie Al Otaiba
Institutions: 1. University of California, Irvine, Irvine, USA; 2. Florida Center for Reading Research, Florida State University, Tallahassee, USA; 3. Vanderbilt University, Nashville, USA; 4. Georgia State University, Atlanta, USA; 5. Southern Methodist University, Dallas, USA
Abstract: We examined how raters and tasks contribute to measurement error in writing evaluation, and how many raters and tasks are needed to reach reliabilities of .90 and .80 for children in Grades 3 and 4. A total of 211 children (102 boys) were administered three tasks each in the narrative and expository genres, and their written compositions were evaluated with methods widely used for developing writers: holistic scoring, productivity, and curriculum-based writing scores. Results showed that 54% and 52% of the variance in narrative and expository compositions, respectively, was attributable to true individual differences in writing. Students' scores varied substantially across tasks (30.44% and 28.61% of variance) but not across raters. Reaching a reliability of .90 required multiple tasks and multiple raters, whereas a reliability of .80 required a single rater and multiple tasks. These findings have important implications for reliably evaluating children's writing skills, given that writing is typically evaluated with a single task and a single rater in classrooms and even in some state accountability systems.
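The rater-and-task projections described in the abstract follow the logic of a generalizability-theory decision study: the generalizability coefficient divides true person variance by person variance plus relative error, where each error component shrinks as more tasks or raters are averaged. The sketch below illustrates this calculation, assuming a fully crossed person × task × rater design. The person and task proportions echo the figures reported above, but the rater and residual components, and the split between task main effects and person-by-task interaction, are illustrative placeholders, not the study's estimates.

```python
# Minimal decision-study (D-study) sketch for a crossed p x t x r design.
# Variance components are expressed as proportions of total score variance
# and are HYPOTHETICAL, loosely based on the abstract's reported figures.

def g_coefficient(var_p, var_pt, var_pr, var_ptr_e, n_tasks, n_raters):
    """Generalizability coefficient E(rho^2) for relative decisions:
    person variance over person variance plus averaged error variance."""
    error = (var_pt / n_tasks
             + var_pr / n_raters
             + var_ptr_e / (n_tasks * n_raters))
    return var_p / (var_p + error)

# Illustrative components: ~54% person, ~30% task-related, negligible rater.
var_p, var_pt, var_pr, var_ptr_e = 0.54, 0.30, 0.01, 0.15

for n_tasks in (1, 2, 3):
    for n_raters in (1, 2):
        rho2 = g_coefficient(var_p, var_pt, var_pr, var_ptr_e,
                             n_tasks, n_raters)
        print(f"tasks={n_tasks}, raters={n_raters}: E(rho^2) = {rho2:.2f}")
```

Under these placeholder values, one task and one rater yield a coefficient near .54, while three tasks with a single rater approach .80, consistent with the abstract's conclusion that adding tasks matters far more than adding raters.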
This article has been indexed by SpringerLink and other databases.