Similar Articles
20 similar articles found (search time: 31 ms)
1.
A Monte Carlo approach was used to examine bias in the estimation of indirect effects and their associated standard errors. In the simulation design, (a) sample size, (b) the level of nonnormality characterizing the data, (c) the population values of the model parameters, and (d) the type of estimator were systematically varied. Estimates of model parameters were generally unaffected by either nonnormality or small sample size. Under severely nonnormal conditions, normal theory maximum likelihood estimates of the standard error of the mediated effect exhibited less bias (approximately 10% to 20% too small) compared to the standard errors of the structural regression coefficients (20% to 45% too small). Asymptotically distribution free standard errors of both the mediated effect and the structural parameters were substantially affected by sample size, but not nonnormality. Robust standard errors consistently yielded the most accurate estimates of sampling variability.
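The normal-theory standard error of a mediated effect discussed above is typically obtained with the delta method (Sobel's formula). A minimal sketch in Python; the path values below are illustrative assumptions, not results from the study:

```python
import math

def sobel_se(a, b, se_a, se_b):
    """Delta-method (Sobel) standard error of the indirect effect a*b,
    where a is the X->M path and b is the M->Y path."""
    return math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

# Illustrative values (assumed, not taken from the simulation study)
a, se_a = 0.40, 0.10
b, se_b = 0.35, 0.12
indirect = a * b                       # mediated effect a*b
z = indirect / sobel_se(a, b, se_a, se_b)  # normal-theory z test
```

The z ratio is what normal-theory inference about the mediated effect rests on, which is why bias in this standard error matters.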

2.
The purpose of this study is to investigate the effects of missing data techniques in longitudinal studies under diverse conditions. A Monte Carlo simulation examined the performance of 3 missing data methods in latent growth modeling: listwise deletion (LD), maximum likelihood estimation using the expectation and maximization algorithm with a nonnormality correction (robust ML), and the pairwise asymptotically distribution-free method (pairwise ADF). The effects of 3 independent variables (sample size, missing data mechanism, and distribution shape) were investigated on convergence rate, parameter and standard error estimation, and model fit. The results favored robust ML over LD and pairwise ADF in almost all respects. The exceptions included convergence rates under the most severe nonnormality in the missing not at random (MNAR) condition and recovery of standard error estimates across sample sizes. The results also indicate that nonnormality, small sample size, MNAR, and multicollinearity might adversely affect convergence rate and the validity of statistical inferences concerning parameter estimates and model fit statistics.

3.
This study compared diagonal weighted least squares robust estimation techniques available in 2 popular statistical programs: diagonal weighted least squares (DWLS; LISREL version 8.80) and weighted least squares-mean (WLSM) and weighted least squares-mean and variance adjusted (WLSMV; Mplus version 6.11). A 20-item confirmatory factor analysis was estimated using item-level ordered categorical data. Three different nonnormality conditions were applied to 2- to 7-category data with sample sizes of 200, 400, and 800. Convergence problems were seen with nonnormal data when DWLS was used with few categories. Both DWLS and WLSMV produced accurate parameter estimates; however, bias in standard errors of parameter estimates was extreme for select conditions when nonnormal data were present. The robust estimators generally reported acceptable model-data fit, unless few categories were used with nonnormal data at smaller sample sizes; WLSMV yielded better fit than WLSM for most indices.

4.
This study examined the performance of the weighted root mean square residual (WRMR) through a simulation study using confirmatory factor analysis with ordinal data. Values and cut scores for the WRMR were examined, along with a comparison of its performance relative to commonly cited fit indexes. The findings showed that WRMR indicated worse fit as sample size or model misspecification increased. Lower (i.e., better) values of WRMR were observed when nonnormal data were present, when loadings were lower, and when few categories were analyzed. WRMR generally showed expected patterns of relations to other well-known fit indexes. In general, a cutoff value of 1.0 appeared to work adequately under the tested conditions, and the WRMR values of "good fit" were generally in agreement with other indexes. Users are cautioned that when the fitted model is misspecified, the index might provide misleading results in situations where extremely large sample sizes are used.

5.
This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error of approximation (RMSEA), comparative fit index (CFI), and Tucker-Lewis Index (TLI) to reject misspecified models with varying degrees of misspecification. With a sample size of 20, RMSEA, CFI, and TLI have high Type I and Type II error rates, whereas the LRT has a high Type II error rate. With a sample size of 100, these indexes generally have satisfactory performance, but CFI and TLI are affected by a confounding effect of their baseline model. Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC) have high success rates in identifying the true model when sample size is 100. A comparison with the mixed model approach indicates that separately modeling the means and covariance structures in structural equation modeling dramatically improves the success rate of AIC and BIC.
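The candidate covariance structures named above are straightforward to generate directly. A minimal sketch (parameter values are illustrative assumptions):

```python
def compound_symmetry(t, var, rho):
    """Compound symmetry: equal variances, equal covariances at all lags."""
    return [[var if i == j else var * rho for j in range(t)] for i in range(t)]

def ar1(t, var, rho):
    """First-order autoregressive: covariance decays as rho**|i - j|."""
    return [[var * rho ** abs(i - j) for j in range(t)] for i in range(t)]

# Illustrative 4-occasion structures with unit variance and rho = .5
cs = compound_symmetry(4, 1.0, 0.5)
ar = ar1(4, 1.0, 0.5)
```

Fitting one structure to data generated under another is what produces the misspecification the fit indexes are asked to detect.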

6.
The utility of Orlando and Thissen's (2000, 2003) S-X2 fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X2 in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having various conditions typically encountered in applied testing situations. The results show that the Type I error rates were controlled adequately around the nominal alpha by S-X2. The power of the S-X2 statistic was much lower when the source of misfit was multidimensionality than when it was due to discrepancy from the true GRM curves. Once the data size increased sufficiently, however, appropriate power was obtained regardless of the source of the item-misfit. In summary, the generalized S-X2 appears to be a promising index for investigating item fit for polytomous items in educational and psychological assessments.

7.
The comparative fit index (CFI) is one of the most widely used fit indices in structural equation modeling (SEM). Although it is widely recognized that model evaluation should target population fit, in practice one often reports only the sample CFI value and neglects the uncertainty in this point estimate. Confidence interval (CI) methods for CFI appeared only recently, but these methods assume multivariate normality, which often fails to hold in practice. In addition, the current methods are applications of the bootstrap and are thus computationally intensive. To better handle nonnormal data and simplify CI construction, in this paper we propose an analytic CI method for CFI without assuming normality. We then carry out simulation studies to compare the new and current methods at various levels of model misfit and nonnormality. Simulation results verify the effectiveness and advantages of the new method.
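For reference, the CFI compares the noncentrality of the fitted model with that of the baseline (independence) model. A minimal sketch; the chi-square values are made up for illustration:

```python
def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index from model and baseline chi-square statistics:
    CFI = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 0)."""
    d_m = max(chi2_m - df_m, 0.0)          # model noncentrality estimate
    d_b = max(chi2_b - df_b, d_m)          # baseline noncentrality estimate
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

# Illustrative values: a model fitting far better than the independence model
value = cfi(85.2, 40, 910.4, 45)
```

The sample value above is exactly the point estimate whose sampling uncertainty the proposed CI method quantifies.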

8.
The relation among fit indexes, power, and sample size in structural equation modeling is examined. The noncentrality parameter is required to compute power. The 2 existing methods of computing power have estimated the noncentrality parameter by specifying an alternative hypothesis or alternative fit. These methods cannot be implemented easily and reliably. In this study, 4 fit indexes (RMSEA, CFI, McDonald's Fit Index, and Steiger's gamma) were used to compute the noncentrality parameter and the sample size needed to achieve a certain level of power. The resulting power and sample size varied as a function of (a) choice of fit index, (b) number of variables/degrees of freedom, (c) relation among the variables, and (d) value of the fit index. However, if the level of misspecification were held constant, then the resulting power and sample size would be identical.
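The link between RMSEA and the noncentrality parameter is the simplest of the four. A sketch of the two directions of that mapping (the example values are assumptions, not from the study):

```python
import math

def noncentrality_from_rmsea(rmsea, df, n):
    """Noncentrality implied by a population RMSEA value:
    lambda = (n - 1) * df * rmsea**2."""
    return (n - 1) * df * rmsea ** 2

def rmsea_from_chi2(chi2, df, n):
    """Point estimate of RMSEA from a model chi-square statistic."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative: RMSEA = .05 with 40 df and n = 201 implies lambda = 20
lam = noncentrality_from_rmsea(0.05, 40, 201)
```

With lambda in hand, power follows from the noncentral chi-square distribution; that step is omitted here because it needs a noncentral CDF beyond the standard library.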

9.
In observed-score equipercentile equating, the goal is to make scores on two scales or tests measuring the same construct comparable by matching the percentiles of the respective score distributions. If the tests consist of different items with multiple categories for each item, a suitable model for the responses is a polytomous item response theory (IRT) model. The parameters from such a model can be utilized to derive the score probabilities for the tests and these score probabilities may then be used in observed-score equating. In this study, the asymptotic standard errors of observed-score equating using score probability vectors from polytomous IRT models are derived using the delta method. The results are applied to the equivalent groups design and the nonequivalent groups design with either chain equating or poststratification equating within the framework of kernel equating. The derivations are presented in a general form and specific formulas for the graded response model and the generalized partial credit model are provided. The asymptotic standard errors are accurate under several simulation conditions relating to sample size, distributional misspecification and, for the nonequivalent groups design, anchor test length.
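The core equipercentile idea (matching cumulative score probabilities across the two tests) can be sketched in a simplified discrete form, without the continuization used in kernel equating. The score probabilities below are assumptions standing in for probabilities derived from a fitted polytomous IRT model:

```python
import bisect
from itertools import accumulate

def equipercentile_map(x_score, x_probs, y_probs):
    """Map score x on test X to the lowest score on test Y whose cumulative
    probability reaches that of x. Simplified discrete version; the small
    epsilon guards against floating-point ties."""
    cum_x = list(accumulate(x_probs))
    cum_y = list(accumulate(y_probs))
    return bisect.bisect_left(cum_y, cum_x[x_score] - 1e-12)

# Illustrative score probabilities (assumed, e.g. from a fitted IRT model)
x_probs = [0.2, 0.3, 0.5]
y_probs = [0.1, 0.2, 0.3, 0.4]
```

The delta-method standard errors in the study quantify how sampling error in the estimated score probabilities propagates into this equating function.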

10.
The posterior predictive model checking method is a flexible Bayesian model-checking tool and has recently been used to assess fit of dichotomous IRT models. This paper extended previous research to polytomous IRT models. A simulation study was conducted to explore the performance of posterior predictive model checking in evaluating different aspects of fit for unidimensional graded response models. A variety of discrepancy measures (test-level, item-level, and pair-wise measures) that reflected different threats to applications of graded IRT models to performance assessments were considered. Results showed that posterior predictive model checking exhibited adequate power in detecting different aspects of misfit for graded IRT models when appropriate discrepancy measures were used. Pair-wise measures were found more powerful in detecting violations of the unidimensionality and local independence assumptions.

11.
Data collected from questionnaires are often in ordinal scale. Unweighted least squares (ULS), diagonally weighted least squares (DWLS) and normal-theory maximum likelihood (ML) are commonly used methods to fit structural equation models. Consistency of these estimators demands no structural misspecification. In this article, we conduct a simulation study to compare the equation-by-equation polychoric instrumental variable (PIV) estimation with ULS, DWLS, and ML. Accuracy of PIV for the correctly specified model and robustness of PIV for misspecified models are investigated through a confirmatory factor analysis (CFA) model and a structural equation model with ordinal indicators. The effects of sample size and nonnormality of the underlying continuous variables are also examined. The simulation results show that PIV produces robust factor loading estimates in the CFA model and in structural equation models. PIV also produces robust path coefficient estimates in the model where valid instruments are used. However, robustness highly depends on the validity of instruments.

12.
Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance
Measurement invariance is usually tested using Multigroup Confirmatory Factor Analysis, which examines the change in the goodness-of-fit index (GFI) when cross-group constraints are imposed on a measurement model. Although many studies have examined the properties of GFI as indicators of overall model fit for single-group data, there have been none to date that examine how GFIs change when between-group constraints are added to a measurement model. The lack of a consensus about what constitutes significant GFI differences places limits on measurement invariance testing. We examine 20 GFIs based on the minimum fit function. A simulation under the two-group situation was used to examine changes in the GFIs (ΔGFIs) when invariance constraints were added. Based on the results, we recommend using Δcomparative fit index, ΔGamma hat, and ΔMcDonald's Noncentrality Index to evaluate measurement invariance. These three ΔGFIs are independent of both model complexity and sample size, and are not correlated with the overall fit measures. We propose critical values of these ΔGFIs that indicate measurement invariance.

13.
Many statistics used in the assessment of differential item functioning (DIF) in polytomous items yield a single item-level index of measurement invariance that collapses information across all response options of the polytomous item. Utilizing a single item-level index of DIF can, however, be misleading if the magnitude or direction of the DIF changes across the steps underlying the polytomous response process. A more comprehensive approach to examining measurement invariance in polytomous item formats is to examine invariance at the level of each step of the polytomous item, a framework described in this article as differential step functioning (DSF). This article proposes a nonparametric DSF estimator that is based on the Mantel-Haenszel common odds ratio estimator (Mantel & Haenszel, 1959), which is frequently implemented in the detection of DIF in dichotomous items. A simulation study demonstrated that when the level of DSF varied in magnitude or sign across the steps underlying the polytomous response options, the DSF-based approach typically provided a more powerful and accurate test of measurement invariance than did corresponding item-level DIF estimators.
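The Mantel-Haenszel common odds ratio underlying the proposed DSF estimator pools 2x2 tables across strata (matched ability groups). A minimal sketch; the counts are made up for illustration, and zero-denominator edge cases are not handled:

```python
def mh_common_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across 2x2 strata.
    Each table is (a, b, c, d): reference group pass/fail counts (a, b)
    and focal group pass/fail counts (c, d) at one step of one item."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Illustrative single stratum: OR = (10*9)/(5*6) = 3.0
estimate = mh_common_odds_ratio([(10, 5, 6, 9)])
```

In the DSF framework this estimator is applied separately at each step of the polytomous response, rather than once per item.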

14.
McDonald goodness-of-fit indices based on maximum likelihood, asymptotic distribution free, and the Satorra-Bentler scale correction estimation methods are investigated. Sampling experiments are conducted to assess the magnitude of error for each index under variations in distributional misspecification, structural misspecification, and sample size. The Satorra-Bentler correction-based index is shown to have the least error under each distributional misspecification level when the model has correct structural specification. The scaled index also performs adequately when there is minor structural misspecification and distributional misspecification. However, when a model has major structural misspecification with distributional misspecification, none of the estimation methods perform adequately.

15.
Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be compared. When applied to a binary data set, our experience suggests that IRT and FA models yield similar fits. However, when the data are polytomous ordinal, IRT models yield a better fit because they involve a higher number of parameters. But when fit is assessed using the root mean square error of approximation (RMSEA), similar fits are obtained again. We explain why. These test statistics have little power to distinguish between FA and IRT models; they are unable to detect that linear FA is misspecified when applied to ordinal data generated under an IRT model.

16.
Though the common default maximum likelihood estimator used in structural equation modeling is predicated on the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to utilize distribution-free estimation methods. Fortunately, promising alternatives are being integrated into popular software packages. Bootstrap resampling, which is offered in AMOS (Arbuckle, 1997), is one potential solution for estimating model test statistic p values and parameter standard errors under nonnormal data conditions. This study is an evaluation of the bootstrap method under varied conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Accuracy of the test statistic p values is evaluated in terms of model rejection rates, whereas accuracy of bootstrap standard error estimates takes the form of bias and variability of the standard error estimates themselves.
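The bootstrap logic the study evaluates can be sketched generically: resample cases with replacement, re-estimate the statistic on each resample, and take the standard deviation of the replicates as the standard error. For a self-contained example the statistic here is a sample mean rather than an SEM parameter, and the data values are made up:

```python
import random
import statistics

def bootstrap_se(data, stat, n_boot=2000, seed=1):
    """Nonparametric bootstrap standard error of the statistic `stat`:
    the SD of `stat` across resamples drawn with replacement from `data`."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        replicates.append(stat(resample))
    return statistics.stdev(replicates)

# Illustrative data (assumed); analytic SE of the mean here is about 0.30
data = [2.1, 3.4, 2.8, 5.0, 4.2, 3.1, 2.7, 4.8, 3.9, 3.3]
se_mean = bootstrap_se(data, statistics.mean)
```

In the SEM setting, `stat` would refit the model to each resample and return a parameter estimate, which is what makes the number of bootstrap samples a costly design factor.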

17.
This research focuses on the problem of model selection between the latent change score (LCS) model and the autoregressive cross-lagged (ARCL) model when the goal is to infer the longitudinal relationship between variables. We conducted a large-scale simulation study to (a) investigate the conditions under which these models return statistically (and substantively) different results concerning the presence of bivariate longitudinal relationships, and (b) ascertain the relative performance of an array of model selection procedures when such different results arise. The simulation results show that the primary sources of differences in parameter estimates across models are model parameters related to the slope factor scores in the LCS model (specifically, the correlation between the intercept factor and the slope factor scores) as well as the size of the data (specifically, the number of time points and sample size). Among several model selection procedures, correct selection rates were higher when using model fit indexes (i.e., comparative fit index, root mean square error of approximation) than when using a likelihood ratio test or any of several information criteria (i.e., Akaike's information criterion, Bayesian information criterion, consistent AIC, and sample-size-adjusted BIC).

18.
Orlando and Thissen's S-X2 item fit index has performed better than traditional item fit statistics such as Yen's Q1 and McKinley and Mills' G2 for dichotomous item response theory (IRT) models. This study extends the utility of S-X2 to polytomous IRT models, including the generalized partial credit model, partial credit model, and rating scale model. The performance of the generalized S-X2 in assessing item model fit was studied in terms of empirical Type I error rates and power and compared to G2. The results suggest that the generalized S-X2 is promising for polytomous items in educational and psychological testing programs.

19.
Ratings given to the same item response may have a stronger correlation than those given to different item responses, especially when raters interact with one another before giving ratings. The rater bundle model was developed to account for such local dependence by forming multiple ratings given to an item response as a bundle and assigning fixed-effect parameters to describe response patterns in the bundle. Unfortunately, this model becomes difficult to manage when a polytomous item is graded by more than two raters. In this study, by adding random-effect parameters to the facets model, we propose a class of generalized rater models to account for the local dependence among multiple ratings and intrarater variation in severity. A series of simulations was conducted with the freeware WinBUGS to evaluate parameter recovery of the new models and consequences of ignoring the local dependence or intrarater variation in severity. The results revealed a good parameter recovery when the data-generating models were fit, and a poor estimation of parameters and test reliability when the local dependence or intrarater variation in severity was ignored. An empirical example is provided.

20.
In low-stakes assessments, some students may not reach the end of the test and leave some items unanswered due to various reasons (e.g., lack of test-taking motivation, poor time management, and test speededness). Not-reached items are often treated as incorrect or not-administered in the scoring process. However, when the proportion of not-reached items is high, these traditional approaches may yield biased scores and thereby threaten the validity of test results. In this study, we propose a polytomous scoring approach for handling not-reached items and compare its performance with those of the traditional scoring approaches. Real data from a low-stakes math assessment administered to second and third graders were used. The assessment consisted of 40 short-answer items focusing on addition and subtraction. The students were instructed to answer as many items as possible within 5 minutes. Using the traditional scoring approaches, students' responses for not-reached items were treated as either not-administered or incorrect in the scoring process. With the proposed scoring approach, students' nonmissing responses were scored polytomously based on how accurately and rapidly they responded to the items to reduce the impact of not-reached items on ability estimation. The traditional and polytomous scoring approaches were compared based on several evaluation criteria, such as model fit indices, test information function, and bias. The results indicated that the polytomous scoring approaches outperformed the traditional approaches. The complete case simulation corroborated our empirical findings that the scoring approach in which nonmissing items were scored polytomously and not-reached items were considered not-administered performed the best. Implications of the polytomous scoring approach for low-stakes assessments were discussed.
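One way to operationalize such accuracy-and-speed scoring is sketched below; the category scheme and time threshold are assumptions for illustration, not the authors' exact rule:

```python
def polytomous_score(answer, key, response_time, time_threshold, reached):
    """Illustrative polytomous scoring of a short-answer item:
    not reached -> None (treated as not administered),
    incorrect -> 0, correct but slow -> 1, correct and fast -> 2.
    The 2/1/0 scheme and the threshold are assumed, not the study's rule."""
    if not reached:
        return None
    if answer != key:
        return 0
    return 2 if response_time <= time_threshold else 1

# Illustrative item: correct answer "12", assumed 4-second speed threshold
fast_correct = polytomous_score("12", "12", 2.5, 4.0, True)
slow_correct = polytomous_score("12", "12", 6.0, 4.0, True)
```

Scoring nonmissing responses on such an ordered scale lets a polytomous IRT model absorb speed information while not-reached items drop out of the likelihood as not administered.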


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号