Similar Documents
20 similar documents found (search time: 46 ms)
1.
In typical differential item functioning (DIF) assessments, an item's DIF status is not influenced by its status in previous test administrations. An item that has shown DIF at multiple administrations may be treated the same way as an item that has shown DIF in only the most recent administration. Therefore, much useful information about the item's functioning is ignored. In earlier work, we developed the Bayesian updating (BU) DIF procedure for dichotomous items and showed how it could be used to formally aggregate DIF results over administrations. More recently, we extended the BU method to the case of polytomously scored items. We conducted an extensive simulation study that included four “administrations” of a test. For the single‐administration case, we compared the Bayesian approach to an existing polytomous‐DIF procedure. For the multiple‐administration case, we compared BU to two non‐Bayesian methods of aggregating the polytomous‐DIF results over administrations. We concluded that both the BU approach and a simple non‐Bayesian method show promise as methods of aggregating polytomous DIF results over administrations.

2.
Detecting differential item functioning (DIF) within cognitive diagnosis models (CDMs) serves to safeguard test quality and effectiveness. Building on previous research, this study examines how six DIF detection methods (MH, LR, CSIBTEST, WObs, WSw, and WXPD) perform under the CDM framework when the Q-matrix is correctly or incorrectly specified and under various conditions affecting DIF. Results show that when the Q-matrix is correctly specified, the WObs, WSw, and WXPD statistics outperform the MH, LR, and CSIBTEST methods; when the Q-matrix is misspecified, all six methods exhibit inflated Type I error rates and low statistical power. Comparatively, the MH, LR, and CSIBTEST methods perform more stably, whereas the performance of the WObs, WSw, and WXPD statistics varies more; even so, the Type I error rates and power of WObs, WSw, and WXPD remain better than those of MH, LR, and CSIBTEST.

3.
This paper examines whether HSK test takers' academic backgrounds affect their reading scores. The MH and SIBTEST methods were applied to screen the reading items of the 2009 HSK (Elementary-Intermediate) test for DIF, with examinees from natural-science backgrounds as the focal group and examinees from humanities and social-science backgrounds as the reference group. The MH method found no DIF items. SIBTEST detected one item in the first round of DIF screening and a bundle of items in a second round of DBF (differential bundle functioning) screening; this bundle favored examinees with humanities and social-science backgrounds. Regarding detection methods, the study concludes that SIBTEST is more sensitive and that DBF testing is better suited to sets of interrelated items, such as those found in a reading comprehension test.

4.
Trend estimation in international comparative large‐scale assessments relies on measurement invariance between countries. However, cross‐national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare trend-estimation performance with that of two linking methods employing international item parameters across several conditions. The trend estimates based on the national item parameters were more accurate than the trend estimates based on the international item parameters when cross‐national DIF was present. Moreover, the use of fixed common item parameter calibrations led to biased trend estimates. The detection and elimination of DIF can reduce this bias but is also likely to increase the total error.

5.
In this paper a new approach to graphical differential item functioning (DIF) is offered. The methodology is based on a sampling-theory approach to expected response functions (Lewis, 1985; Mislevy, Wingersky, & Sheehan, 1994). Essentially, error in item calibrations is modeled explicitly, and repeated samples are taken from the posterior distributions of the item parameters. Sampled parameter values are used to estimate the posterior distribution of the difference in item characteristic curves (ICCs) for two groups. A point-wise expectation is taken as an estimate of the true difference between the ICCs, and the sampled-difference functions indicate uncertainty in the estimate. The approach is applied to a set of pretest items, and the results are compared to traditional Mantel-Haenszel DIF statistics. The expected-response-function approach is contrasted with Pashley's (1992) graphical DIF approach.
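The expected-response-function idea described above can be illustrated with a minimal numpy sketch. It assumes 2PL items and independent normal posteriors for the item parameters; all values (posterior means and SDs, number of draws) are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(-3, 3, 61)

def icc(a, b, theta):
    # 2PL item characteristic curve
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative posterior means/SDs for one item's (a, b) in two groups
post = {"ref":   {"a": (1.2, 0.10), "b": (0.0, 0.15)},
        "focal": {"a": (1.2, 0.10), "b": (0.3, 0.15)}}

n_draws = 2000
diffs = np.empty((n_draws, theta.size))
for s in range(n_draws):
    curves = {}
    for g, p in post.items():
        a = rng.normal(*p["a"])          # draw calibration error
        b = rng.normal(*p["b"])
        curves[g] = icc(a, b, theta)
    diffs[s] = curves["ref"] - curves["focal"]

expected_diff = diffs.mean(axis=0)                   # point-wise expected ICC difference
lo, hi = np.percentile(diffs, [2.5, 97.5], axis=0)   # uncertainty band around it
```

The band (lo, hi) is what distinguishes this approach from a single plug-in ICC difference: it shows how much of the apparent DIF could be calibration error.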

6.
Two simulation studies investigated Type I error performance of two statistical procedures for detecting differential item functioning (DIF): SIBTEST and Mantel-Haenszel (MH). Because MH and SIBTEST are based on asymptotic distributions requiring "large" numbers of examinees, the first study examined Type I error for small sample sizes. No significant Type I error inflation occurred for either procedure. Because MH has the potential for Type I error inflation for non-Rasch models, the second study used a markedly non-Rasch test and systematically varied the shape and location of the studied item. When differences in distribution across examinee group of the measured ability were present, both procedures displayed inflated Type I error for certain items; MH displayed the greater inflation. Also, both procedures displayed statistically biased estimation of the zero DIF for certain items, though SIBTEST displayed much less than MH. When no latent distributional differences were present, both procedures performed satisfactorily under all conditions.
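As background for the Mantel-Haenszel procedure used in this and several other studies in the list, the following is a minimal sketch of the MH common odds-ratio estimate and the ETS delta transform. The stratum tables are invented for illustration only.

```python
import numpy as np

def mantel_haenszel_or(tables):
    """MH common odds ratio over 2x2 tables, one per matched score stratum.
    Each table is [[ref_correct, ref_wrong], [focal_correct, focal_wrong]]."""
    num = den = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den

# Illustrative strata (matched on total score); item slightly favors reference
tables = [
    [[30, 20], [22, 28]],
    [[40, 10], [33, 17]],
    [[45,  5], [40, 10]],
]
alpha_mh = mantel_haenszel_or(tables)     # > 1: reference group advantaged
delta_mh = -2.35 * np.log(alpha_mh)       # ETS delta scale; negative favors reference
```

In operational use the ETS classification rules (A/B/C categories) are applied to delta_mh together with its significance test.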

7.
Analyzing examinees’ responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This study explored the effectiveness of the Wald test in detecting both uniform and nonuniform DIF in the DINA model through a simulation study. Results of this study suggest that for relatively discriminating items, the Wald test had Type I error rates close to the nominal level. Moreover, its viability was underscored by the medium to high power rates for most investigated DIF types when DIF size was large. Furthermore, the performance of the Wald test in detecting uniform DIF was compared to that of the traditional Mantel‐Haenszel (MH) and SIBTEST procedures. The results of the comparison study showed that the Wald test was comparable to or outperformed the MH and SIBTEST procedures. Finally, the strengths and limitations of the proposed method and suggestions for future studies are discussed.

8.
In this study, we investigate the logistic regression (LR), Mantel-Haenszel (MH), and Breslow-Day (BD) procedures for the simultaneous detection of both uniform and nonuniform differential item functioning (DIF). A simulation study was used to assess the Type I error rate and power of a combined decision rule (CDR), which assesses DIF by combining the decisions made with BD and MH, and to compare them with those of LR. The results revealed that while the Type I error rate of CDR was consistently below the nominal alpha level, the Type I error rate of LR was high for the conditions having unequal ability distributions. In addition, the power of CDR was consistently higher than that of LR across all forms of DIF.

9.
We analyzed a pool of items from an admissions test for differential item functioning (DIF) for groups based on age, socioeconomic status, citizenship, or English language status using Mantel-Haenszel and item response theory. DIF items were systematically examined to identify their possible sources by item type, content, and wording. DIF was primarily found in the citizenship group. As suggested by expert reviewers, possible sources of DIF in the direction of U.S. citizens were often found in Quantitative Reasoning items containing figures, charts, and tables depicting real-world (as opposed to abstract) contexts. DIF items in the direction of non-U.S. citizens included “mathematical” items containing few words. DIF for the Verbal Reasoning items included geocultural references and proper names that may be differentially familiar for non-U.S. citizens. This study is responsive to foundational changes in the fairness section of the Standards for Educational and Psychological Testing, which now consider additional groups in sensitivity analyses, given the increasing demographic diversity in test-taker populations.

10.
11.
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are multidimensional and binary. This study proposes a very general DIF assessment method in the CDM framework which is applicable for various CDMs, more than two groups of examinees, and multiple grouping variables that are categorical, continuous, observed, or latent. The parameters can be estimated with Markov chain Monte Carlo algorithms implemented in the freeware WinBUGS. Simulation results demonstrated good parameter recovery and advantages in DIF assessment for the new method over the Wald method.

12.
We make a distinction between the operational practice of using an observed score to assess differential item functioning (DIF) and the concept of departure from measurement invariance (DMI), which conditions on a latent variable. DMI and DIF indices of effect size, based on the Mantel-Haenszel test of the common odds ratio, converge under restricted conditions if a simple sum score is used as the matching or conditioning variable in a DIF analysis. Based on theoretical results, we demonstrate analytically that matching on a weighted sum score can significantly reduce the difference between DIF and DMI measures over what can be achieved with a simple sum score. We also examine the utility of binning methods that could facilitate potential operational use of DIF with weighted sum scores. A real-data application is included to demonstrate this feasibility.

13.
Once a differential item functioning (DIF) item has been identified, little is known about the examinees for whom the item functions differentially. This is because DIF focuses on manifest group characteristics that are associated with DIF but do not explain why examinees respond differentially to items. We first analyze item response patterns for gender DIF and then illustrate, through the use of a mixture item response theory (IRT) model, how the manifest characteristic associated with DIF often has a very weak relationship with the latent groups actually being advantaged or disadvantaged by the item(s). Next, we propose an alternative approach to DIF assessment that first uses an exploratory mixture model analysis to define the primary dimension(s) that contribute to DIF, and second studies examinee characteristics associated with those dimensions in order to understand the cause(s) of DIF. Comparison of academic characteristics of these examinees across classes reveals some clear differences in manifest characteristics between groups.

14.
Differential item functioning (DIF) may be caused by an interaction of multiple manifest grouping variables or unexplored manifest variables, which cannot be detected by conventional DIF detection methods that are based on a single manifest grouping variable. Such DIF may be detected by a latent approach using the mixture item response theory model and subsequently explained by multiple manifest variables. This study facilitates the interpretation of latent DIF with the use of background and cognitive variables. The PISA 2009 reading assessment and student survey are analyzed. Results show that members of manifest groups were not homogeneously advantaged or disadvantaged and that a single manifest grouping variable did not suffice as a proxy of latent DIF. This study also demonstrates that DIF items arising from the interaction of multiple variables can be effectively screened by the latent DIF analysis approach. Background and cognitive variables jointly predicted latent class membership well.

15.
Investigations of differential item functioning (DIF) have been conducted mostly on ability tests and have found little evidence of easily interpretable differences across various demographic subgroups. In this study, we examined the degree to which DIF in biographical data items referencing academically relevant background, experiences, and interests was related to differences in judgments about access to these experiences by members of different gender and race subgroups. DIF in the location parameter was significantly related (r = –.51, p < .01) to gender differences in perceived accessibility to experience. No significant relationships with accessibility were observed for DIF in the slope parameter across gender groups or for the slope and location parameters associated with DIF across Black and White groups. Practical implications for use of biodata and theoretical implications for DIF research are discussed.

16.
Large-scale assessments of student competencies address rather broad constructs and use parsimonious, unidimensional measurement models. Differential item functioning (DIF) in certain subpopulations usually has been interpreted as error or bias. Recent work in educational measurement, however, assumes that DIF reflects the multidimensionality that is inherent in broad competency constructs and leads to differential achievement profiles. Thus, DIF parameters can be used to identify the relative strengths and weaknesses of certain student subpopulations. The present paper explores profiles of mathematical competencies in upper secondary students from six countries (Austria, France, Germany, Sweden, Switzerland, the US). DIF analyses are combined with analyses of the cognitive demands of test items based on psychological conceptualisations of mathematical problem solving. Experts judged the cognitive demands of TIMSS test items, and these demand ratings were correlated with DIF parameters. We expected that cultural framings and instructional traditions would lead to specific aspects of mathematical problem solving being fostered in classroom instruction, which should be reflected in differential item functioning in international comparative assessments. Results for the TIMSS mathematics test were in line with expectations about cultural and instructional traditions in mathematics education of the six countries.

17.
A logistic regression model for characterizing differential item functioning (DIF) between two groups is presented. A distinction is drawn between uniform and nonuniform DIF in terms of the parameters of the model. A statistic for testing the hypothesis of no DIF is developed. Through simulation studies, it is shown that the logistic regression procedure is more powerful than the Mantel-Haenszel procedure for detecting nonuniform DIF and as powerful in detecting uniform DIF.
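The uniform/nonuniform distinction in this logistic-regression approach can be sketched with nested models: a group main effect captures uniform DIF, and a score-by-group interaction captures nonuniform DIF. The numpy-only sketch below fits each model by Newton-Raphson and forms likelihood-ratio statistics; the simulated data, matching variable, and effect sizes are all illustrative assumptions, not the original study's design.

```python
import numpy as np

def fit_logit(X, y, n_iter=50):
    """Newton-Raphson ML fit of a logistic regression; returns (beta, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ll = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return beta, ll

# Simulate one item with uniform DIF against the focal group (illustrative)
rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal
score = rng.normal(0.0, 1.0, n)          # matching variable (e.g. rest score)
logit = 1.0 * score - 0.8 * group        # -0.8 is the uniform DIF effect
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

ones = np.ones(n)
X0 = np.column_stack([ones, score])                        # no DIF
X1 = np.column_stack([ones, score, group])                 # + uniform DIF term
X2 = np.column_stack([ones, score, group, score * group])  # + nonuniform DIF term

_, ll0 = fit_logit(X0, y)
_, ll1 = fit_logit(X1, y)
_, ll2 = fit_logit(X2, y)
uniform_lr = 2 * (ll1 - ll0)     # ~ chi-square(1) under no uniform DIF
nonuniform_lr = 2 * (ll2 - ll1)  # ~ chi-square(1) under no nonuniform DIF
```

Comparing each statistic to a chi-square(1) critical value gives the two tests; the simulated uniform effect should register strongly while the absent interaction should not.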

18.
Although the root-mean squared deviation (RMSD) is a popular statistical measure for evaluating country-specific item-level misfit (i.e., differential item functioning [DIF]) in international large-scale assessment, this paper shows that its sensitivity to detect misfit may depend strongly on the proficiency distribution of the considered countries. Specifically, items for which most respondents in a country have a very low (or high) probability of providing a correct answer will rarely be flagged by the RMSD as showing misfit, even if very strong DIF is present. With many international large-scale assessment initiatives moving toward covering a more heterogeneous group of countries, this raises issues for the ability of the RMSD to detect item-level misfit, especially in low-performing countries that are not well-aligned with the overall difficulty level of the test. This may put one at risk of incorrectly assuming measurement invariance to hold, and may also inflate estimated between-country differences in proficiency. The degree to which the RMSD is able to detect DIF in low-performing countries is studied using both an empirical example from PISA 2015 and a simulation study.
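The insensitivity described above is easy to reproduce in a sketch: when a country's proficiency distribution sits far below an item's difficulty, even a large difficulty shift changes the proficiency-weighted deviations very little. The 2PL curves, parameter values, and distributions below are illustrative assumptions, not PISA values.

```python
import numpy as np

theta = np.linspace(-6, 6, 241)

def icc(theta, a, b):
    # 2PL probability of a correct response
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def rmsd(p_obs, p_model, weights):
    # proficiency-weighted root-mean squared deviation between response curves
    w = weights / weights.sum()
    return np.sqrt(np.sum(w * (p_obs - p_model) ** 2))

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

# International item: a = 1.5, b = 1.0; country-specific DIF shifts b by +1.0
p_int = icc(theta, 1.5, 1.0)
p_dif = icc(theta, 1.5, 2.0)

# Identical DIF, two countries with different proficiency distributions
rmsd_aligned = rmsd(p_dif, p_int, normal_pdf(theta, 1.0, 1.0))   # mean near item difficulty
rmsd_low     = rmsd(p_dif, p_int, normal_pdf(theta, -3.0, 1.0))  # low-performing country
```

The same one-logit difficulty shift yields an RMSD an order of magnitude smaller for the low-performing country, because its weight mass sits where both curves are near zero.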

19.

Differential item functioning (DIF) analyses have been used as the primary method in large-scale assessments to examine fairness for subgroups. Currently, DIF analyses are conducted using manifest methods that rely on observed characteristics (gender and race/ethnicity) for grouping examinees. Homogeneity of item responses is assumed, denoting that all examinees respond to test items using a similar approach. This assumption may not hold with all groups. In this study, we demonstrate the first application of the latent class (LC) approach to investigate DIF and its sources in heterogeneous populations (linguistic minority groups). We found at least three LCs within each linguistic group, suggesting the need to empirically evaluate this assumption in DIF analysis. We obtained larger proportions of DIF items with larger effect sizes when LCs within language groups versus the overall (majority/minority) language groups were examined. The illustrated approach could be used to improve the ways in which DIF analyses are typically conducted to enhance DIF detection accuracy and score-based inferences when analyzing DIF with heterogeneous populations.

20.
Traditional methods for examining differential item functioning (DIF) in polytomously scored test items yield a single item‐level index of DIF and thus provide no information concerning which score levels are implicated in the DIF effect. To address this limitation of DIF methodology, the framework of differential step functioning (DSF) has recently been proposed, whereby measurement invariance is examined within each step underlying the polytomous response variable. The examination of DSF can provide valuable information concerning the nature of the DIF effect (i.e., is the DIF an item‐level effect or an effect isolated to specific score levels), the location of the DIF effect (i.e., precisely which score levels are manifesting the DIF effect), and the potential causes of a DIF effect (i.e., what properties of the item stem or task are potentially biasing). This article presents a didactic overview of the DSF framework and provides specific guidance and recommendations on how DSF can be used to enhance the examination of DIF in polytomous items. An example with real testing data is presented to illustrate the comprehensive information provided by a DSF analysis.
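The step-level view can be sketched by dichotomizing a polytomous response at each cumulative step and comparing groups step by step. This toy example uses simulated data under assumptions of its own: a cumulative-step definition of the steps, and no matching variable, since both simulated groups share the same ability distribution (operational DSF analyses match on total score).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
group = rng.integers(0, 2, n)           # 0 = reference, 1 = focal
latent = rng.normal(0.0, 1.0, n)        # identical ability distributions by design

# Illustrative 3-category item (scores 0-2): DIF is confined to the second step
cut1 = -0.5                              # step 1 threshold, same for both groups
cut2 = np.where(group == 0, 0.8, 1.4)    # step 2 is harder for the focal group
resp = (latent > cut1).astype(int) + (latent > cut2).astype(int)

def step_log_or(passed, group):
    # log odds ratio (reference vs. focal) of passing a given step
    p_ref = passed[group == 0].mean()
    p_foc = passed[group == 1].mean()
    return np.log(p_ref / (1 - p_ref)) - np.log(p_foc / (1 - p_foc))

lor_step1 = step_log_or(resp >= 1, group)   # near 0: step 1 is invariant
lor_step2 = step_log_or(resp >= 2, group)   # clearly positive: step 2 shows DSF
```

An item-level index would average these two effects together; the step-level statistics localize the DIF to the second step, which is the point of the DSF framework.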
