Similar Documents
20 similar documents retrieved
1.
To address the various sources of error that affect the scores awarded in speech contests, this paper introduces the many-facet Rasch model for analyzing ratings. Applying the model to rating data not only supports valid measurement of contestants' ability levels, but also offers a new approach to identifying problematic judges, refining scoring rules, and training judges. The paper also outlines the theory of the many-facet Rasch model and a framework for applying it to speech contest scoring.

2.
The speaking and writing sections of English examinations in the Radio and Television University (电大) system are now mostly administered as language performance tests. Because performance tests rely on human raters, scoring becomes more subjective, and controlling the effect of rater differences on examinees' scores is essential to ensuring scoring quality. After comparing three theories commonly used for rating quality control in performance testing, this paper focuses on the contribution of the many-facet Rasch model to improving rating quality, and discusses how the model can be used within the 电大 system to train raters for English performance tests, so as to control scoring quality and improve test reliability.

3.
This paper introduces the many-facet Rasch model to analyze the various judge-related errors that affect speech contest scores. Applying the model to rating data offers a new approach to identifying problematic judges, diagnosing intra-rater and inter-rater consistency, and informing judge training.

4.
The purpose of this study is to describe a many-facet Rasch (FACETS) model for measuring writing ability. The FACETS model is a multivariate extension of the Rasch measurement model that provides a framework for calibrating raters and writing prompts in writing assessment. The paper shows how the FACETS model can be applied to measurement problems encountered in large-scale writing assessment. A random sample of 1,000 students who took a statewide writing examination is used to illustrate the model. The data show that rater severity differed significantly even after intensive training. The study also found small but statistically significant differences in prompt difficulty. The FACETS model offers a promising approach to the measurement problems of large-scale examinations that assess writing ability through essays.
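For reference, the many-facet Rasch model in its common rating-scale form (the form implemented in FACETS) is usually written as follows; the notation here is the standard one, not symbols taken from this abstract:

\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \alpha_j - \tau_k

where \theta_n is the ability of student n, \delta_i the difficulty of writing prompt i, \alpha_j the severity of rater j, and \tau_k the difficulty of reaching rating category k relative to category k-1. All facets are expressed in logits on the same scale, which is what allows rater severity and prompt difficulty to be compared directly.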

5.
Using generalizability theory and the many-facet Rasch model, this study analyzed a set of operational results from a municipal teaching-ability test, aiming to identify the factors that influence scoring on such tests and how they operate, and to inform improvements in test design and rater training. The results show that the main influences on the test are task difficulty, rater severity, raters' consistency across tasks, and variation in task difficulty across examinees. The current teaching-ability test is suitable only for relative decisions, not for absolute decisions. It is recommended that future administrations increase the number of test tasks and provide targeted rater training to improve score reliability.
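As background for the relative-versus-absolute distinction, generalizability theory evaluates the two kinds of decisions with different coefficients (standard G-theory definitions, not values from this study):

E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta} \qquad\qquad \Phi = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\Delta}

Here \sigma^2_p is the person (examinee) variance, the relative error \sigma^2_\delta contains only sources that change examinees' rank order, and the absolute error \sigma^2_\Delta additionally contains main effects such as task difficulty and rater severity. A test can therefore support relative decisions (acceptable E\rho^2) while remaining unsuitable for absolute decisions (low \Phi), which is the pattern reported here.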

6.
This article reports an empirical study of the effectiveness of large-scale computer-assisted English speaking tests. A comparison first shows that scores from the system's automated scoring correlate at 0.911 with teacher-assigned scores, indicating that machine scoring can largely replace teacher scoring for direct speaking-test tasks. Quantitative and qualitative analyses of the validity and reliability of large-scale computer-based speaking tests, from the perspectives of test takers and teachers, then support the feasibility and overall effectiveness of computer-based speaking examinations in higher education.

7.
The Rasch model has been widely applied in educational measurement and has had a major impact on all aspects of examinations. Fit-statistic analysis is an important step in applying the Rasch model and plays a key role in Rasch analysis. Taking PETS as an example, this paper introduces the characteristics and types of Rasch fit statistics, how they are used in practice, and their limitations.
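For context, the Rasch fit statistics most often reported are the outfit and infit mean squares (standard definitions, not figures specific to PETS):

z_{ni} = \frac{x_{ni} - E_{ni}}{\sqrt{W_{ni}}} \qquad \text{Outfit}_i = \frac{1}{N}\sum_{n=1}^{N} z_{ni}^{2} \qquad \text{Infit}_i = \frac{\sum_{n} W_{ni}\, z_{ni}^{2}}{\sum_{n} W_{ni}}

where x_{ni} is person n's observed response to item i, E_{ni} its model-expected value, and W_{ni} its model variance. Outfit is dominated by unexpected responses far from the item's difficulty, while infit weights responses near the item's difficulty, which is why the two statistics flag different kinds of misfit.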

8.
Based on a survey of rural junior secondary school physics teachers in Hainan Province, this study finds problems in their in-service education, including overly dense training schedules, content unsuited to frontline teachers, and poorly targeted training content, and proposes improvements through unified planning, modular management, learning communities, and stronger evaluation.

9.
Using operational data from MHK speaking items, this study applies the theory and methods of the many-facet Rasch model to examine the severity of human scoring versus automated computer scoring, as well as intra-rater and inter-rater consistency. It analyzes the specific differences between the two scoring modes in severity and consistency, and whether difficulty differs across items, in order to provide evidence and recommendations for improving the scientific quality of MHK scoring and item writing.

10.
This study uses the many-facet Rasch model (MFRM) to assess undergraduates' ability in the course "多元统计方法分析" (multivariate statistical analysis methods) and to analyze item difficulty and rater severity. The results show that many-facet Rasch analysis handles the assessment of subject ability in open-ended examinations well, and its results are consistent with student feedback.

11.
This study investigates how experienced and inexperienced raters score essays written by ESL students on two different prompts. The quantitative analysis using multi-faceted Rasch measurement, which provides measurements of rater severity and consistency, showed that the inexperienced raters were more severe than the experienced raters on one prompt but not on the other prompt, and that differences between the two groups of raters were eliminated following rater training. The qualitative analysis, which consisted of analysis of raters' think-aloud protocols while scoring essays, provided insights into reasons for these differences. Differences were related to the ease with which the scoring rubric could be applied to the two prompts and to differences in how the two groups of raters perceived the appropriateness of the prompts.

12.
13.
Classical test theory (CTT), generalizability theory (GT), and multi-faceted Rasch model (MFRM) approaches to detecting and correcting for rater variability were compared. Each of 4,930 students' responses on an English examination was graded on 9 scales by 3 raters drawn from a pool of 70. CTT and MFRM indicated substantial variation among raters; the MFRM analysis identified far more raters as different than the CTT analysis did. In contrast, the GT rater variance component and the Rasch histograms suggested little rater variation. CTT and MFRM correction procedures both produced different scores for more than 50% of the examinees, but 75% of the examinees received identical results after each correction. The demonstrated value of a correction for systems of well-trained multiple graders has implications for all systems in which subjective scoring is used.

14.
The decision-making behaviors of 8 raters when scoring 39 persuasive and 39 narrative essays written by second language learners were examined, first using Rasch analysis and then through think-aloud protocols. Results based on Rasch analysis and think-aloud protocols recorded by raters as they were scoring holistically and analytically suggested that rater background may have contributed to rater expectations that might explain individual differences in the application of the performance criteria of the rubrics when rating essays. The results further suggested that rater ego engagement with the text and/or author may have helped mitigate rater severity and that self-monitoring behaviors by raters may have had a similar mitigating effect.

15.
When good model-data fit is observed, the Many-Facet Rasch (MFR) model acts as a linking and equating model that can be used to estimate student achievement, item difficulties, and rater severity on the same linear continuum. Given sufficient connectivity among the facets, the MFR model provides estimates of student achievement that are equated to control for differences in rater severity. Although several different linking designs are used in practice to establish connectivity, the implications of design differences have not been fully explored. Research is also limited related to the impact of model-data fit on the quality of MFR model-based adjustments for rater severity. This study explores the effects of linking designs and model-data fit for raters on the interpretation of student achievement estimates within the context of performance assessments in music. Results indicate that performances cannot be effectively adjusted for rater effects when inadequate linking or model-data fit is present.
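As a minimal sketch of what "adjusted for rater severity" means in practice, the Python snippet below computes a student's expected raw rating under a lenient versus a severe rater, using a rating-scale many-facet Rasch parameterization; every parameter value is invented for illustration and nothing here is taken from the study.

import numpy as np

def category_probs(theta, delta, alpha, thresholds):
    """Category probabilities under a rating-scale MFR model:
    log(P_k / P_{k-1}) = theta - delta - alpha - tau_k."""
    psi = theta - delta - alpha
    # Category 0 has log-numerator 0 by convention; higher categories accumulate.
    log_num = np.concatenate(([0.0], np.cumsum(psi - np.asarray(thresholds, float))))
    p = np.exp(log_num - log_num.max())
    return p / p.sum()

def expected_rating(theta, delta, alpha, thresholds):
    p = category_probs(theta, delta, alpha, thresholds)
    return float(np.dot(np.arange(p.size), p))

thresholds = [-1.5, -0.5, 0.5, 1.5]   # five categories scored 0-4 (illustrative)
theta, delta = 0.8, 0.0               # one student, one performance task
print(expected_rating(theta, delta, -0.6, thresholds))  # lenient rater: higher raw score
print(expected_rating(theta, delta, +0.6, thresholds))  # severe rater: lower raw score

The student's measure theta is the same in both lines; reporting theta rather than the raw rating is the sense in which MFR-based estimates control for rater severity, provided the linking design connects raters well enough for their severities to be estimated on a common scale.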

16.
Rater‐mediated assessments exhibit scoring challenges due to the involvement of human raters. The quality of human ratings largely determines the reliability, validity, and fairness of the assessment process. Our research recommends that the evaluation of ratings should be based on two aspects: a theoretical model of human judgment and an appropriate measurement model for evaluating these judgments. In rater‐mediated assessments, the underlying constructs and response processes may require the use of different rater judgment models and the application of different measurement models. We describe the use of Brunswik's lens model as an organizing theme for conceptualizing human judgments in rater‐mediated assessments. The constructs vary depending on which distal variables are identified in the lens models for the underlying rater‐mediated assessment. For example, one lens model can be developed to emphasize the measurement of student proficiency, while another lens model can stress the evaluation of rater accuracy. Next, we describe two measurement models that reflect different response processes (cumulative and unfolding) from raters: Rasch and hyperbolic cosine models. Future directions for the development and evaluation of rater‐mediated assessments are suggested.

17.
Taking the women's diving final of an Olympic Games as an example, this paper applies the three major measurement theories, CTT, GT, and IRT, to analyze rater reliability, revealing inter-rater and intra-rater differences from different perspectives. The results show that the CTT rater reliability coefficients were 0.981 and 0.78, respectively; the GT generalizability coefficient and dependability index were 0.8279 and 0.8271, indicating that having the seven judges rate each diver's performance across the five rounds was a reasonably appropriate design; under IRT, judge 5 was the most severe of the seven judges and judge 2 the most lenient, but the differences in severity among judges were not significant, judges 1 and 4 showed problems with internal consistency, and judges showed bias across divers, dives of different difficulty coefficients, and rounds, though not at a significant level. The analysis illustrates the characteristics and respective strengths of the three approaches to rater reliability and provides useful information for rater training and for improving scoring reliability.

18.
The purpose of this study was to investigate the stability of rater severity over an extended rating period. Multifaceted Rasch analysis was applied to ratings of 16 raters on writing performances of 8,285 elementary school students. Each performance was rated by two trained raters over a period of seven rating days. Performances rated on the first day were re-rated at the end of the rating period. Statistically significant differences between raters were found within each day and in all days combined. Daily estimates of the relative severity of individual raters were found to differ significantly from single, on-average estimates for the whole rating period. For 10 raters, severity estimates on the last day were significantly different from estimates on the first day. These findings cast doubt on the practice of using a single calibration of rater severity as the basis for adjustment of person measures.

19.
Machine learning has been frequently employed to automatically score constructed response assessments. However, there is a lack of evidence of how this predictive scoring approach might be compromised by construct-irrelevant variance (CIV), which is a threat to test validity. In this study, we evaluated machine scores and human scores with regard to potential CIV. We developed two assessment tasks targeting science teacher pedagogical content knowledge (PCK); each task contains three video-based constructed response questions. 187 in-service science teachers watched the videos with each had a given classroom teaching scenario and then responded to the constructed-response items. Three human experts rated the responses and the human-consent scores were used to develop machine learning algorithms to predict ratings of the responses. Including the machine as another independent rater, along with the three human raters, we employed the many-facet Rasch measurement model to examine CIV due to three sources: variability of scenarios, rater severity, and rater sensitivity of the scenarios. Results indicate that variability of scenarios impacts teachers’ performance, but the impact significantly depends on the construct of interest; for each assessment task, the machine is always the most severe rater, compared to the three human raters. However, the machine is less sensitive than the human raters to the task scenarios. This means the machine scoring is more consistent and stable across scenarios within each of the two tasks.

20.
The Application of the Many-Facet Rasch Model in Rater Training for Subjective-Item Scoring
Scoring of subjective (constructed-response) items is affected by many factors, such as raters' knowledge, overall competence, and personal preferences. These rater biases not only produce differences between raters but also make an individual rater unstable over time, ultimately lowering the reliability of subjective scoring. This study applies the many-facet Rasch model to rater training for the essay question of a national examination. By analyzing trial ratings of 58 papers by six experienced raters, four types of rater bias were identified, and individualized feedback based on them was then given to each rater, improving the objectivity and precision of scoring.
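The abstract does not spell out the four types of rater bias, so the sketch below is only a rough raw-score screen (not the many-facet Rasch analysis the study used) for two of the most common effects, overall severity or leniency and rater inconsistency; the data layout, numbers, and thresholds are all hypothetical.

import numpy as np

def screen_raters(ratings):
    """ratings[r, p] = score given by rater r to paper p (NaN if unscored).
    Returns each rater's mean deviation from the pooled paper means
    (positive = lenient, negative = severe) and the spread of those
    deviations (large = inconsistent)."""
    paper_means = np.nanmean(ratings, axis=0)
    dev = ratings - paper_means
    return np.nanmean(dev, axis=1), np.nanstd(dev, axis=1)

# Hypothetical data shaped like the study (6 raters x 58 papers), generated at random.
rng = np.random.default_rng(0)
true_scores = rng.normal(10.0, 2.0, size=58)
ratings = true_scores + rng.normal(0.0, 1.0, size=(6, 58))
ratings[2] += 1.5                       # plant one clearly lenient rater
leniency, spread = screen_raters(ratings)
for r in range(ratings.shape[0]):
    print(f"rater {r}: leniency {leniency[r]:+.2f}, spread {spread[r]:.2f}")

A screen like this can suggest which raters need individualized feedback, but it cannot separate rater effects from genuine differences in paper quality or task difficulty, which is precisely what the many-facet Rasch approach used in the study is designed to do.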
