Similar Articles
20 similar articles found (search time: 15 ms)
1.
In this ITEMS module, we provide a didactic overview of the specification, estimation, evaluation, and interpretation steps for diagnostic measurement/classification models (DCMs), which are a promising psychometric modeling approach. These models can provide detailed skill- or attribute-specific feedback to respondents along multiple latent dimensions and hold theoretical and practical appeal for a variety of fields. We use a current unified modeling framework—the log-linear cognitive diagnosis model (LCDM)—as well as a series of quality-control checklists for data analysts and scientific users to review the foundational concepts, practical steps, and interpretational principles for these models. We demonstrate how the models and checklists can be applied in real-life data-analysis contexts. A library of macros and supporting files for Excel, SAS, and Mplus is provided along with video tutorials for key practices.
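As a toy illustration of the LCDM item response function summarized above, the following sketch computes correct-response probabilities for a single two-attribute item. All parameter values (intercept, main effects, interaction) are made up for demonstration and do not come from the module.

```python
import math

def lcdm_prob(alpha, lam0, main, inter=0.0):
    """LCDM probability of a correct response for a two-attribute item.

    alpha : tuple of 0/1 attribute-mastery indicators (a1, a2)
    lam0  : intercept (logit of success for respondents mastering neither attribute)
    main  : (lam1, lam2) main-effect parameters
    inter : two-way interaction parameter
    """
    a1, a2 = alpha
    logit = lam0 + main[0] * a1 + main[1] * a2 + inter * a1 * a2
    return 1.0 / (1.0 + math.exp(-logit))

# Illustrative (made-up) parameters: non-masters succeed near .18,
# masters of both attributes near .88.
probs = {alpha: round(lcdm_prob(alpha, -1.5, (1.0, 1.2), 1.3), 3)
         for alpha in [(0, 0), (1, 0), (0, 1), (1, 1)]}
```

Note how the success probability increases monotonically with mastery and how the interaction term rewards mastering both attributes jointly, which is the kind of pattern the module's quality-control checklists ask analysts to verify.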

2.
Drawing valid inferences from modern measurement models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. As Bayesian estimation is becoming more common, understanding Bayesian approaches for evaluating model-data fit is critical. In this instructional module, Allison Ames and Aaron Myers provide an overview of Posterior Predictive Model Checking (PPMC), the most common Bayesian model-data fit approach. Specifically, they review the conceptual foundations of Bayesian inference as well as PPMC and walk through the computational steps of PPMC using real-life data examples from simple linear regression and item response theory analysis. They provide guidance on how to interpret PPMC results and discuss how to implement PPMC for other models and data. The digital module contains sample data, SAS code, diagnostic quiz questions, data-based activities, curated resources, and a glossary.
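The PPMC logic the module walks through can be sketched in a few lines: draw parameters from (an approximation to) the posterior, generate replicated data sets, and compare a discrepancy statistic between replicated and observed data. The sketch below uses simple linear regression with the error SD assumed known and a flat prior; it is a minimal stand-in, not the module's SAS implementation, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "observed" data from a simple linear regression.
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, n)

# Approximate posterior for (intercept, slope) under a flat prior and
# known error SD = 1: N(beta_hat, (X'X)^{-1} * sigma^2).
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
cov = np.linalg.inv(X.T @ X) * 1.0

# PPMC loop: draw parameters, generate replicated data, and record a
# discrepancy statistic (here, the sample maximum of y).
n_draws = 2000
disc_rep = np.empty(n_draws)
for d in range(n_draws):
    beta = rng.multivariate_normal(beta_hat, cov)
    y_rep = X @ beta + rng.normal(0, 1.0, n)
    disc_rep[d] = y_rep.max()

# Posterior predictive p-value: values near 0 or 1 flag misfit.
ppp = np.mean(disc_rep >= y.max())
```

Because the generating model and the fitted model agree here, the posterior predictive p-value should be unremarkable; repeating the check with a misspecified model (e.g., omitting the slope) would push it toward an extreme.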

3.
In this digital ITEMS module, Dr. Jeffrey Harring and Ms. Tessa Johnson introduce the linear mixed effects (LME) model as a flexible general framework for simultaneously modeling continuous repeated measures data with a scientifically defensible function that adequately summarizes both individual change and the average response. The module begins with a nontechnical overview of longitudinal data analyses, drawing distinctions with cross-sectional analyses in terms of the research questions to be addressed. Nuances of longitudinal designs, the timing of measurements, and the real possibility of missing data are then discussed. The three interconnected components of the LME model—(1) a model for individual and mean response profiles, (2) a model to characterize the covariation among the time-specific residuals, and (3) a set of models that summarize the extent to which individual coefficients vary—are discussed in the context of the set of activities comprising an analysis. Finally, they demonstrate how to estimate the linear mixed effects model within an open-source environment (R). The digital module contains sample R code, diagnostic quiz questions, hands-on activities in R, curated resources, and a glossary.
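A simple way to see the distinction between individual and mean response profiles is a two-stage sketch: fit each subject's own straight line, then summarize the subject-specific coefficients. This is a didactic precursor to full LME estimation, not the mixed-model fit itself, and all simulated values below are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate longitudinal data: 50 subjects, 5 equally spaced occasions,
# with random intercepts and slopes around a mean trajectory 10 + 2*t.
n_subj, times = 50, np.arange(5.0)
b0 = 10 + rng.normal(0, 2.0, n_subj)   # subject-specific intercepts
b1 = 2 + rng.normal(0, 0.5, n_subj)    # subject-specific slopes
Y = b0[:, None] + b1[:, None] * times + rng.normal(0, 1.0, (n_subj, 5))

# Stage 1: fit each subject's straight line (individual change).
T = np.column_stack([np.ones_like(times), times])
coefs = np.array([np.linalg.lstsq(T, Y[i], rcond=None)[0]
                  for i in range(n_subj)])

# Stage 2: average the subject coefficients (mean response profile)
# and summarize how much the individual slopes vary.
mean_intercept, mean_slope = coefs.mean(axis=0)
slope_sd = coefs[:, 1].std(ddof=1)
```

The spread in `slope_sd` overstates the true slope variation because it also contains per-subject estimation error; partitioning those two sources correctly is exactly what component (3) of the LME model does.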

4.
In this digital ITEMS module, Dr. Jue Wang and Dr. George Engelhard Jr. describe the Rasch measurement framework for the construction and evaluation of new measures and scales. From a theoretical standpoint, they discuss historical and philosophical perspectives on measurement with a focus on Rasch's concept of specific objectivity and invariant measurement. Specifically, they introduce the origins of Rasch measurement theory, the development of model-data fit indices, and commonly used Rasch measurement models. From an applied perspective, they discuss best practices in constructing, estimating, evaluating, and interpreting a Rasch scale using empirical examples. They provide an overview of a specialized Rasch software program (Winsteps) and an R program embedded within Shiny (Shiny_ERMA) for conducting Rasch model analyses. The module is designed to be relevant for students, researchers, and data scientists in various disciplines such as psychology, sociology, education, business, health, and other social sciences. It contains audio-narrated slides, sample data, syntax files, access to the Shiny_ERMA program, diagnostic quiz questions, data-based activities, curated resources, and a glossary.

5.
In this digital ITEMS module, Dr. Roy Levy describes Bayesian approaches to psychometric modeling. He discusses how Bayesian inference is a mechanism for reasoning in a probability-modeling framework and is well-suited to core problems in educational measurement: reasoning from student performances on an assessment to make inferences about their capabilities more broadly conceived, as well as fitting models to characterize the psychometric properties of tasks. The approach is first developed in the context of estimating a mean and variance of a normal distribution before turning to the context of unidimensional item response theory (IRT) models for dichotomously scored data. Dr. Levy illustrates the process of fitting Bayesian models using the JAGS software facilitated through the R statistical environment. The module is designed to be relevant for students, researchers, and data scientists in various disciplines such as education, psychology, sociology, political science, business, health, and other social sciences. It contains audio-narrated slides, diagnostic quiz questions, and data-based activities with video solutions as well as curated resources and a glossary.
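The module's opening example, estimating the mean of a normal distribution, has a closed-form conjugate solution when the variance is known, which can be sketched directly without JAGS. The prior and data values here are made up for illustration.

```python
def posterior_normal_mean(prior_mean, prior_var, data, sigma2):
    """Conjugate update for the mean of a normal distribution with known variance.

    Prior: mu ~ N(prior_mean, prior_var); likelihood: y_i ~ N(mu, sigma2).
    Returns the posterior mean and variance of mu; the posterior precision
    is the sum of the prior precision and the data precision.
    """
    n = len(data)
    ybar = sum(data) / n
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + n * ybar / sigma2)
    return post_mean, post_var

# A vague prior (variance 100) lets the data dominate: the posterior mean
# lands essentially at the sample mean of 5.1.
post_mean, post_var = posterior_normal_mean(
    0.0, 100.0, [4.8, 5.1, 5.3, 4.9, 5.4], 1.0)
```

Sampling-based tools such as JAGS become necessary once this conjugate structure is lost, as in the IRT models treated later in the module.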

6.
7.
In this ITEMS module, we provide a two-part introduction to the topic of reliability from the perspective of classical test theory (CTT). In the first part, which is directed primarily at beginning learners, we review and build on the content presented in the original didactic ITEMS article by Traub and Rowley (1991). Specifically, we discuss the notion of reliability as an intuitive everyday concept to lay the foundation for its formalization as a reliability coefficient via the basic CTT model. We then walk through the step-by-step computation of key reliability indices and discuss the data collection conditions under which each is most suitable. In the second part, which is directed primarily at intermediate learners, we present a distribution-centered perspective on the same content. We discuss the associated assumptions of various CTT models ranging from parallel to congeneric, and review how these affect the choice of reliability statistics. Throughout the module, we use a customized Excel workbook with sample data and basic data manipulation functionalities to illustrate the computation of individual statistics and to allow for structured independent exploration. In addition, we provide quiz questions with diagnostic feedback as well as short videos that walk through sample exercises within the workbook.
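As a small worked example of the kind of computation the module's Excel workbook illustrates, the following sketch computes coefficient alpha, one common internal-consistency reliability index, from a tiny made-up persons-by-items matrix.

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for a persons-by-items score matrix X."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # per-item score variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 6 persons x 4 dichotomously scored items.
data = [[1, 1, 1, 1],
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
alpha = cronbach_alpha(data)   # 2/3 for this matrix
```

As the module's second part emphasizes, alpha equals reliability only under the (essentially) tau-equivalent CTT models; under the congeneric model it is a lower bound.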

8.
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these frameworks. They review the different stages of test development and the associated item analyses used to identify poorly performing items and support effective item selection. Moreover, they walk through the computational and interpretational steps for CTT- and IRT-based evaluation statistics using simulated data examples and review various graphical displays such as distractor response curves, item characteristic curves, and item information curves. The digital module contains sample data, Excel sheets with various templates and examples, diagnostic quiz questions, data-based activities, curated resources, and a glossary.
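The CTT side of such an item analysis can be sketched in a few lines: item difficulty as the proportion correct, and discrimination as the corrected item-rest point-biserial correlation. The data matrix below is made up for illustration; the fourth item is deliberately constructed to discriminate negatively, the kind of flag that would mark it for review.

```python
import numpy as np

def item_analysis(X):
    """Classical item statistics for a persons-by-items 0/1 score matrix.

    Returns per-item difficulty (proportion correct) and corrected
    point-biserial discrimination (item score vs. rest-of-test score).
    """
    X = np.asarray(X, dtype=float)
    p = X.mean(axis=0)
    disc = []
    for j in range(X.shape[1]):
        rest = X.sum(axis=1) - X[:, j]   # total score excluding item j
        disc.append(np.corrcoef(X[:, j], rest)[0, 1])
    return p, np.array(disc)

data = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 1, 1],
        [1, 1, 1, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0]]
p_values, discriminations = item_analysis(data)
```

Operational programs compute the same quantities at scale and pair them with the IRT-based curves the module describes.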

9.
10.
We describe an approach to characterizing and diagnosing complex professional competencies (CPCs) for the field of Intrapreneurship, i.e., entrepreneurial activities engaged in by employees within their existing organizations. Our approach draws upon prior conceptual, empirical, and analytical efforts by researchers in Germany. Results are presented from an application of a cognitive diagnostic modeling approach to the performance of late-stage apprentices on tasks derived from a previously developed competence model of Intrapreneurship. The results are discussed in terms of the type of cognitive diagnosis model (CDM) most appropriate for the domain and task battery, and patterns of performance are presented for seven diagnosable Intrapreneurship skills. Interpreting the assessment task response data in terms of a CDM yields diagnostic, skill-based information that verifies the strengths and weaknesses of the apprentices at a late stage in their training and has the potential to provide feedback to training programs, triggering improvement of individual apprentice learning and subsequent work-related performance.

11.
12.
This article used the Wald test to evaluate the item-level fit of a saturated cognitive diagnosis model (CDM) relative to the fits of the reduced models it subsumes. A simulation study was carried out to examine the Type I error and power of the Wald test in the context of the G-DINA model. Results show that when the sample size is small and a large number of attributes is required, the Type I error rate of the Wald test for the DINA and DINO models can be higher than the nominal significance levels, while the Type I error rate for the A-CDM is closer to the nominal significance levels. With larger sample sizes, however, the Type I error rates for all three models approach the nominal significance levels. In addition, the Wald test has excellent statistical power to detect when the true underlying model is none of the reduced models examined, even for relatively small sample sizes. The performance of the Wald test was also examined with real data. With an increasing number of CDMs from which to choose, this article provides an important contribution toward advancing the use of CDMs in practical educational settings.
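The Wald test itself is straightforward to sketch for a generic linear hypothesis R·theta = r. The parameter estimates and covariance matrix below are made up and do not come from the G-DINA study; they stand in for the contrast between a saturated item model and a reduced one.

```python
import numpy as np

def wald_test(theta_hat, cov, R, r):
    """Wald statistic for the linear hypothesis R @ theta = r.

    theta_hat : estimated parameter vector
    cov       : estimated covariance matrix of theta_hat
    Under H0, W is asymptotically chi-square with rows(R) degrees of freedom.
    """
    diff = R @ theta_hat - r
    return float(diff @ np.linalg.inv(R @ cov @ R.T) @ diff)

# Illustrative (made-up) values: test whether an interaction parameter of
# a saturated item model is zero, i.e., whether a reduced model suffices.
theta_hat = np.array([1.2, 0.8, 0.05])   # last entry: interaction term
cov = np.diag([0.04, 0.05, 0.01])
R = np.array([[0.0, 0.0, 1.0]])          # H0: theta_3 = 0
r = np.array([0.0])
W = wald_test(theta_hat, cov, R, r)

# df = 1 here; compare W with the 5% chi-square critical value, 3.84.
reduced_ok = W < 3.84
```

The simulation findings above concern exactly how trustworthy this chi-square reference is when the covariance matrix is estimated from small samples.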

13.
Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used in conjunction with the DINA model. The proposed method can be implemented with other considerations such as substantive information about the items, or expert knowledge about the domain, to produce a more integrative framework of Q-matrix validation. The paper presents the theoretical foundation for the proposed method, develops an algorithm for its practical implementation, and provides real and simulated data applications to examine its viability. Relevant issues regarding the implementation of the method are discussed.

14.
Recent research on the DINA model has shown that sample size, prior distributions, empirical versus fully Bayesian estimation methods, sample representativeness, differential item functioning, and Q-matrix misspecification can all bias DINA item parameter estimates. Using Monte Carlo simulation, this study examined combinations of changes in the DINA item parameters (the guessing and slipping parameters) and the magnitude of their bias, with knowledge states estimated via conditional maximum likelihood estimation. The results show that when the item parameter estimates deviate only slightly from their true values, the precision of knowledge-state estimation is largely unaffected; when the deviations are large, however, and especially for three of the combination types, attribute mastery is clearly overestimated or underestimated. The findings have implications for the equating of diagnostic tests: if the item parameters of anchor items on two tests show a substantial discrepancy (0.1), the necessity of equating should be considered.

15.
Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing this module, the reader will have an understanding of traditional and Bayesian approaches for evaluating model-data fit of IRT models, the relative advantages of each approach, and the software available to implement each method.

16.
The DINA (deterministic input, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. Slipping and guessing parameters are included in the DINA model's item response function. This study aimed to extend the DINA model by using a random-effect approach to allow examinees to have different probabilities of slipping and guessing. Two extensions of the DINA model were developed and tested to represent the random components of slipping and guessing. The first model assumed that a random variable can be incorporated in the slipping parameters to allow examinees to have different levels of caution. The second model assumed that examinees' ability may increase the probability of a correct response when they have not mastered all of the required attributes of an item. The results of a series of simulations based on Markov chain Monte Carlo methods showed that the model parameters and attribute-mastery profiles can be recovered relatively accurately from the generating models and that neglecting the random effects produces biases in parameter estimation. Finally, a fraction subtraction test was used as an empirical example to demonstrate the application of the new models.
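The standard (fixed-effect) DINA item response function that the two extensions build on can be sketched directly; the Q-vector, slip, and guess values below are illustrative only.

```python
def dina_prob(alpha, q, slip, guess):
    """DINA probability of a correct response.

    eta = 1 iff the examinee masters every attribute the item requires
    (per the item's Q-matrix row q).  Masters answer correctly unless
    they slip; non-masters answer correctly only by guessing.
    """
    eta = all(a >= qk for a, qk in zip(alpha, q))
    return (1 - slip) if eta else guess

# Item requiring attributes 1 and 3, with slip = .1 and guess = .2.
q = (1, 0, 1)
p_master = dina_prob((1, 1, 1), q, 0.1, 0.2)    # masters all required attributes
p_partial = dina_prob((1, 1, 0), q, 0.1, 0.2)   # missing required attribute 3
```

In this fixed-effect form, slip and guess are the same constants for every examinee; the random-effect extensions above replace them with person-varying quantities.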

17.
Course Management Systems (CMSs) in higher education have emerged as one of the most widely adopted e-learning platforms. This study examines the success of e-learning CMSs based on user satisfaction and benefits. Using DeLone and McLean's information system success model as a theoretical framework, we analyze the success of e-learning CMSs in five dimensions: system quality, information quality, instructional quality, user satisfaction, and CMS benefits. An analysis of survey data collected from students participating in a university-wide CMS shows that system quality, information quality, and instructional quality positively influence user satisfaction, which, in turn, increases the benefits of CMSs. By providing a comprehensive framework for the critical success factors in e-learning CMSs and their causal relationships, this study provides practical implications for managing e-learning courses and resources for a more flexible and effective CMS-centered, e-learning environment.

18.
Most existing classification accuracy indices of attribute patterns lose effectiveness when response data are absent in diagnostic testing. To handle this issue, this article proposes new indices to predict the correct classification rate of a diagnostic test before the test is administered, under the deterministic input, noisy "and" gate (DINA) model. The new indices include an item-level expected classification accuracy (ECA) for attributes and a test-level ECA for attributes and attribute patterns, and both are calculated solely from the known item parameters and Q-matrix. Theoretical analysis showed that the item-level ECA can be regarded as a measure of the correct classification rates of attributes contributed by an item. The article also illustrates how to apply the item-level ECA for attributes to estimate the correct classification rate of attribute patterns at the test level. Simulation results showed that two test-level ECA indices, ECA_I_W (an index based on the independence assumption and the weighted sum of the item-level ECAs) and ECA_C_M (an index based on a Gaussian copula function that incorporates the dependence structure of the attribute-classification events and the simple average of the item-level ECAs), could accurately predict the correct classification rates of attribute patterns.

19.
Cognitive diagnosis models (CDMs) have been developed to evaluate the mastery status of individuals with respect to a set of defined attributes or skills that are measured through testing. When individuals are repeatedly administered a cognitive diagnosis test, a new class of multilevel CDMs is required to assess the changes in their attributes and simultaneously estimate the model parameters from the different measurements. In this study, the most general CDM of the generalized deterministic input, noisy "and" gate (G-DINA) model was extended to a multilevel higher order CDM by embedding a multilevel structure into higher order latent traits. A series of simulations based on diverse factors was conducted to assess the quality of the parameter estimation. The results demonstrate that the model parameters can be recovered fairly well and attribute mastery can be precisely estimated if the sample size is large and the test is sufficiently long. The range of the location parameters had opposing effects on the recovery of the item and person parameters. Ignoring the multilevel structure in the data by fitting a single-level G-DINA model decreased the attribute classification accuracy and the precision of latent trait estimation. The number of measurement occasions had a substantial impact on latent trait estimation. Satisfactory model and person parameter recoveries could be achieved even when assumptions of the measurement invariance of the model parameters over time were violated. A longitudinal basic ability assessment is outlined to demonstrate the application of the new models.

20.
This study compares five cognitive diagnostic models in search of the optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models through appropriate constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model emerged as the dominant one. It then fit the dominant model and the other models to confirm that the dominant model provides the best fit to the data. The model found to represent the largest number of items in the test was the Compensatory Reparameterized Unified Model (C-RUM); the other models compared were the Deterministic-Input, Noisy-And (DINA), Deterministic-Input, Noisy-Or-gate (DINO), and Noisy-Input, Deterministic-Or-gate (NIDO) models. The absolute (item-association root mean square error) and relative (information criteria) model fit indices also indicated that the LCDM and the C-RUM fit the data best. More detailed analyses of the functioning of the C-RUM were conducted, and the interpretation of the results is included in the discussion section. The article ends with suggestions for future research based on the limitations of the study.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号