Similar Articles
20 similar articles found.
1.
Over the past decade and a half, methodologists working with structural equation modeling (SEM) have developed approaches for accommodating multilevel data. These approaches are particularly helpful when modeling data that come from complex sampling designs. However, most data sets that are associated with complex sampling designs also include observation weights, and methods to incorporate these sampling weights into multilevel SEM analyses have not been addressed. This article investigates the use of different weighting techniques and finds, through a simulation study, that the use of an effective sample size weight provides unbiased estimates of key parameters and their sampling variances. Also, a popular normalization technique of scaling weights to reflect the actual sample size is shown to produce negatively biased sampling variance estimates, as well as negatively biased within-group variance parameter estimates in the small group size case.
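The two weight scalings compared in this simulation can be stated in a few lines of R. A minimal sketch with made-up raw weights; scaling (a) is the effective-sample-size variant found to be unbiased, scaling (b) the sample-size normalization found to be negatively biased:

    # Hypothetical raw sampling weights; in practice these come from
    # the survey design.
    w <- c(1.2, 0.8, 2.5, 1.0, 0.5, 1.7)

    # (a) Effective-sample-size scaling: rescaled weights sum to
    #     n_eff = (sum w)^2 / sum(w^2), which is at most n.
    n_eff <- sum(w)^2 / sum(w^2)
    w_eff <- w * n_eff / sum(w)

    # (b) Normalization to the actual sample size: rescaled weights sum to n.
    n <- length(w)
    w_norm <- w * n / sum(w)

    c(sum(w_eff), n_eff)   # identical by construction
    c(sum(w_norm), n)      # identical by construction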

2.
Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects modeling (LMM) such as cross-sectional multilevel modeling and latent growth modeling. It is well known that LMM can be formulated as structural equation models. However, one main difference between the implementations in SEM and LMM is that maximum likelihood (ML) estimation is usually used in SEM, whereas restricted (or residual) maximum likelihood (REML) estimation is the default method in most LMM packages. This article shows how REML estimation can be implemented in SEM. Two empirical examples, a latent growth model and a meta-analysis, are used to illustrate the procedures as implemented in OpenMx. Issues related to implementing REML in SEM are discussed.
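As a point of reference for the ML-versus-REML contrast (not the OpenMx implementation the article develops), a standard LMM package such as lme4 exposes both estimators directly; a sketch on lme4's bundled sleepstudy data:

    library(lme4)

    # sleepstudy ships with lme4; Reaction is modeled on Days with a
    # random intercept and slope per Subject.
    fit_reml <- lmer(Reaction ~ Days + (Days | Subject),
                     data = sleepstudy, REML = TRUE)   # lme4 default
    fit_ml   <- lmer(Reaction ~ Days + (Days | Subject),
                     data = sleepstudy, REML = FALSE)  # ML, the usual SEM default

    # REML corrects the variance components for the estimation of the
    # fixed effects; the fixed-effect estimates themselves barely differ.
    VarCorr(fit_reml)
    VarCorr(fit_ml)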

3.
The current widespread availability of software packages with estimation features for testing structural equation models with binary indicators makes it possible to investigate many hypotheses about differences in proportions over time that are typically only tested with conventional categorical data analyses for matched pairs or repeated measures, such as McNemar’s chi-square. The connection between these conventional tests and simple longitudinal structural equation models is described. The equivalence of several conventional analyses and structural equation models reveals some foundational concepts underlying common longitudinal modeling strategies and brings to light a number of possible modeling extensions that will allow investigators to pursue more complex research questions involving multiple repeated proportion contrasts, mixed between-subjects × within-subjects interactions, and comparisons of estimated membership proportions using latent class factors with multiple indicators. Several models are illustrated, and the implications for using structural equation models for comparing binary repeated measures or matched pairs are discussed.
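A quick sketch of the conventional test that anchors the equivalence, using base R's mcnemar.test on a hypothetical 2 × 2 table of matched binary responses:

    # Hypothetical 2 x 2 table of matched binary responses at two times.
    tab <- matrix(c(40, 15,
                     5, 40),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(time1 = c("yes", "no"),
                                  time2 = c("yes", "no")))

    # McNemar's statistic uses only the discordant cells b and c:
    # X^2 = (b - c)^2 / (b + c); here (15 - 5)^2 / 20 = 5.
    mcnemar.test(tab, correct = FALSE)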

4.
The Bollen-Stine bootstrap can be used to correct for standard error and fit statistic bias that occurs in structural equation modeling (SEM) applications due to nonnormal data. The purpose of this article is to demonstrate the use of a custom SAS macro program that can be used to implement the Bollen-Stine bootstrap with existing SEM software. Although this article focuses on missing data, the macro can be used with complete data sets as well. A series of heuristic analyses are presented, along with detailed programming instructions for each of the commercial SEM software packages.
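The macro itself targets SAS, but for orientation the same correction is exposed in R's lavaan package (not one of the programs covered by the article); a sketch on lavaan's bundled Holzinger-Swineford data with an illustrative two-factor model:

    library(lavaan)

    # Illustrative two-factor model on lavaan's bundled data.
    model <- ' visual  =~ x1 + x2 + x3
               textual =~ x4 + x5 + x6 '

    # Bootstrap standard errors plus the Bollen-Stine bootstrapped
    # test statistic (500 draws for speed; more in practice).
    fit <- sem(model, data = HolzingerSwineford1939,
               se = "bootstrap", test = "bollen.stine", bootstrap = 500)
    summary(fit)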

5.
Structural equation modeling: Back to basics
Major technological advances incorporated into structural equation modeling (SEM) computer programs now make it possible for practitioners who are largely unfamiliar with the purposes and limitations of SEM to use this tool in their research. The move by program developers to market more user-friendly software packages is a welcome trend in the social and behavioral science research community. The quest to simplify the data analysis step, at least with regard to SEM, has created a situation in which practitioners can apply SEM while forgetting, knowingly ignoring, or, most dangerously, being ignorant of basic philosophical and statistical issues that must be addressed before sound SEM analyses can be conducted. This article revisits some of these almost forgotten topics from each step in the SEM process: model conceptualization, identification and parameter estimation, and data-model fit assessment and model modification. The main objective is to raise awareness among researchers new to SEM of a few basic but key philosophical and statistical issues that should be settled before launching any of the new generation of SEM software packages and being led astray by the seemingly irresistible temptation to start "playing" with the data prematurely.

6.
Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional and longitudinal models for mixed independent variable dyadic data, and to clarify questions regarding various dyadic data analysis specifications that have not been addressed elsewhere. Artificially generated data similar to the Newlywed Project and the Swedish Adoption Twin Study on Aging were used to illustrate analysis models for distinguishable and indistinguishable dyads, respectively. Due to their widespread use among applied researchers, the AMOS and Mplus statistical analysis software packages were used to analyze the dyadic data structural equation models illustrated here. These analysis models are presented in sufficient detail to allow researchers to perform these analyses using their preferred statistical analysis software package.
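As a rough companion sketch (in lavaan rather than the AMOS/Mplus setups the tutorial covers), a cross-sectional actor-partner style model for distinguishable dyads on simulated wide-format data, with illustrative variable names:

    library(lavaan)
    set.seed(1)

    # Simulated wide-format data: one row per distinguishable dyad.
    n   <- 200
    x_w <- rnorm(n)
    x_h <- 0.4 * x_w + rnorm(n)
    y_w <- 0.5 * x_w + 0.2 * x_h + rnorm(n)
    y_h <- 0.5 * x_h + 0.2 * x_w + rnorm(n)
    dyads <- data.frame(x_w, x_h, y_w, y_h)

    # Actor (a) and partner (p) paths, with within-dyad covariances.
    apim <- '
      y_w ~ a1 * x_w + p1 * x_h
      y_h ~ a2 * x_h + p2 * x_w
      x_w ~~ x_h    # predictors covary within the dyad
      y_w ~~ y_h    # residuals covary within the dyad
    '
    fit <- sem(apim, data = dyads)
    summary(fit)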

7.
A great obstacle for wider use of structural equation modeling (SEM) has been the difficulty in handling categorical variables. Two data sets with known structure between 2 related binary outcomes and 4 independent binary variables were generated. Four SEM strategies and resulting apparent validity were tested: robust maximum likelihood (ML), tetrachoric correlation matrix input followed by SEM ML analysis, SEM ML estimation for the sum of squares and cross-products (SSCP) matrix input obtained by the log-linear model that treated all variables as dependent, and asymptotic distribution-free (ADF) SEM estimation. SEM based on the SSCP matrix obtained by the log-linear model and SEM using robust ML estimation correctly identified the structural relation between the variables. SEM using ADF added an extra parameter. SEM based on tetrachoric correlation input did not specify the data generating process correctly. Apparent validity was similar for all models presented. Data transformation used in log-linear modeling can serve as an input for SEM.

8.
In terms of distribution shape, central tendency, and dispersion, reaction time (RT) data have characteristics that clearly set them apart from other data types, so their statistical analysis comes with its own technical requirements and entry barriers. Mixed-effects models implemented in R offer an effective solution to problems such as the positive skew of RT distributions, strong dependencies among data points, and outliers. After reviewing traditional methods for analyzing RT data, this article uses a concrete study as a worked example to introduce the rationale and core concepts of fitting RT data with mixed-effects models, and to show how to arrive at an optimal model.
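A minimal sketch of the modeling approach described, using lme4 with simulated reaction times and illustrative variable names:

    library(lme4)
    set.seed(1)

    # Simulated crossed design: 30 subjects x 20 items, 2 conditions.
    d <- expand.grid(subject = factor(1:30), item = factor(1:20))
    d$condition <- factor(sample(c("easy", "hard"), nrow(d), replace = TRUE))
    subj_eff <- rnorm(30, 0, 0.2)
    item_eff <- rnorm(20, 0, 0.1)
    d$rt <- exp(6 + 0.1 * (d$condition == "hard") +
                subj_eff[d$subject] + item_eff[d$item] +
                rnorm(nrow(d), 0, 0.3))

    # log(rt) tames the positive skew; crossed random intercepts absorb
    # the dependence among observations sharing a subject or an item.
    fit <- lmer(log(rt) ~ condition + (1 | subject) + (1 | item), data = d)
    summary(fit)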

9.
Cohen’s kappa coefficient was originally proposed for two raters only, and it was later extended to an arbitrarily large number of raters to become what is known as Fleiss’ generalized kappa. Fleiss’ generalized kappa and its large-sample variance are still widely used by researchers and have been implemented in several software packages, including, among others, SPSS and the R package “rel.” The purpose of this article is to show that the large-sample variance of Fleiss’ generalized kappa is systematically being misused, is invalid as a precision measure for kappa, and cannot be used for constructing confidence intervals. A general-purpose variance expression is proposed, which can be used in any statistical inference procedure. A Monte-Carlo experiment is presented, showing the validity of the new variance estimation procedure.
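To make the statistic concrete, a from-scratch R implementation of the Fleiss' generalized kappa point estimate (not the variance procedure the article proposes), with a made-up subjects-by-categories count matrix:

    # counts: one row per subject, one column per category; each cell is
    # the number of raters assigning that subject to that category.
    fleiss_kappa <- function(counts) {
      n    <- sum(counts[1, ])                          # raters per subject
      P_i  <- (rowSums(counts^2) - n) / (n * (n - 1))   # per-subject agreement
      p_j  <- colSums(counts) / sum(counts)             # category proportions
      (mean(P_i) - sum(p_j^2)) / (1 - sum(p_j^2))
    }

    counts <- rbind(c(3, 1, 0),   # 4 raters, 3 categories, made-up data
                    c(2, 2, 0),
                    c(0, 4, 0),
                    c(1, 1, 2))
    fleiss_kappa(counts)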

10.
Multivariate heterogeneous data with latent variables are common in many fields such as the biological, medical, behavioral, and social-psychological sciences. Mixture structural equation models are multivariate techniques used to examine heterogeneous interrelationships among latent variables. In the analysis of mixture models, determining the number of mixture components is always an important and challenging issue. This article aims to develop a full Bayesian approach, using the reversible jump Markov chain Monte Carlo method, to analyze mixture structural equation models with an unknown number of components. The proposed procedure can simultaneously and efficiently select the number of mixture components and conduct parameter estimation. Simulation studies show the satisfactory empirical performance of the method. The proposed method is applied to study risk factors of osteoporotic fractures in older people.

11.
Failure to meet the sphericity assumption in repeated measures analysis of variance can have serious consequences for both omnibus and specific comparison tests. It is shown that, in educational research journals, the relevance of this assumption has hardly been recognized. The risk of an inflated Type I error rate can be minimized by calculating separate error terms and by applying conservative tests; this paper illustrates how this is done. Some notes on the use of three mainframe computer packages are also provided. It is argued that, because these packages typically test specific comparisons as planned rather than post hoc comparisons, the outcomes should be interpreted in a conservative sense.
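The most conservative of these corrections can be written in a couple of lines: divide both degrees of freedom of the omnibus F test by k − 1, the lower bound of the sphericity correction factor epsilon. A sketch with made-up numbers:

    k    <- 4      # levels of the repeated factor
    n    <- 20     # subjects
    Fobs <- 3.2    # observed omnibus F (made up)

    df1 <- k - 1
    df2 <- (k - 1) * (n - 1)
    p_uncorrected <- pf(Fobs, df1, df2, lower.tail = FALSE)

    eps_lb <- 1 / (k - 1)   # lower-bound (maximally conservative) epsilon
    p_conservative <- pf(Fobs, eps_lb * df1, eps_lb * df2, lower.tail = FALSE)

    c(p_uncorrected, p_conservative)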

12.
This article applies Bollen’s (1996) 2-stage least squares/instrumental variables (2SLS/IV) approach for estimating the parameters in an unconditional and a conditional second-order latent growth model (LGM). First, the 2SLS/IV approach for the estimation of the means and the path coefficients in a second-order LGM is derived. An empirical example is then used to show that 2SLS/IV yields estimates that are similar to maximum likelihood (ML) in the estimation of a conditional second-order LGM. Three subsequent simulation studies are then presented to show that the new approach is as accurate as ML and that it is more robust against misspecifications of the growth trajectory than ML. Together, these results suggest that 2SLS/IV should be considered as an alternative to the commonly applied ML estimator.
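A generic illustration of the 2SLS idea the approach builds on (not the latent growth application itself), on simulated data with a single instrument:

    set.seed(1)
    n <- 500
    z <- rnorm(n)                  # instrument
    u <- rnorm(n)                  # unobserved confounder
    x <- 0.8 * z + u + rnorm(n)    # endogenous regressor
    y <- 0.5 * x + u + rnorm(n)    # true coefficient is 0.5

    x_hat <- fitted(lm(x ~ z))     # stage 1: project x onto the instrument
    coef(lm(y ~ x_hat))["x_hat"]   # stage 2: near 0.5 (SEs need correction)
    coef(lm(y ~ x))["x"]           # naive OLS: biased upward by u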

13.
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that might also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches could result in a data analysis problem for which the missingness is ignorable. This article considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates and statistical inferences to assumptions about missing data, a strategy that could be easily implemented using SEM software.
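A minimal sketch of raw-data (full-information) ML estimation that retains incomplete cases, here in lavaan with an illustrative one-factor model and artificially deleted values; the saturated-correlates strategy would additionally fold correlates of the missingness into such a model:

    library(lavaan)
    set.seed(1)

    # Illustrative one-factor model; delete some values to mimic missingness.
    d <- HolzingerSwineford1939
    d$x1[sample(nrow(d), 50)] <- NA
    d$x2[sample(nrow(d), 30)] <- NA

    model <- ' f =~ x1 + x2 + x3 '

    # Full-information ML keeps every case that has any data; inference
    # assumes MAR, which adding correlates of missingness makes more plausible.
    fit <- cfa(model, data = d, missing = "fiml")
    summary(fit)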

14.
This study examined and compared various statistical methods for detecting individual differences in change. Considering 3 issues including test forms (specific vs. generalized), estimation procedures (constrained vs. unconstrained), and nonnormality, we evaluated 4 variance tests including the specific Wald variance test, the generalized Wald variance test, the specific likelihood ratio (LR) variance test, and the generalized LR variance test under both constrained and unconstrained estimation for both normal and nonnormal data. For the constrained estimation procedure, both the mixture distribution approach and the alpha correction approach were evaluated for their performance in dealing with the boundary problem. To deal with the nonnormality issue, we used the sandwich standard error (SE) estimator for the Wald tests and the Satorra–Bentler scaling correction for the LR tests. Simulation results revealed that testing a variance parameter and the associated covariances (generalized) had higher power than testing the variance solely (specific), unless the true covariances were zero. In addition, the variance tests under constrained estimation outperformed those under unconstrained estimation in terms of higher empirical power and better control of Type I error rates. Among all the studied tests, for both normal and nonnormal data, the robust generalized LR and Wald variance tests with the constrained estimation procedure were generally more powerful and had better Type I error rates for testing variance components than the other tests. Results from the comparisons between specific and generalized variance tests and between constrained and unconstrained estimation were discussed.
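One of the boundary-problem devices mentioned, the mixture-distribution correction for an LR test of a single variance component on the boundary, amounts to halving the naive chi-square p-value; a sketch with a made-up LR value:

    LR <- 2.9   # made-up likelihood ratio statistic for a variance component

    # Naive reference distribution: chi-square with 1 df.
    p_naive <- pchisq(LR, df = 1, lower.tail = FALSE)

    # Boundary-corrected reference: 0.5*chi2(0) + 0.5*chi2(1),
    # i.e. simply halve the naive p-value.
    p_mix <- 0.5 * p_naive

    c(p_naive, p_mix)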

15.
16.
This article outlines an interval estimation procedure that can be used in a 3-level setting to evaluate the proportion of outcome variance attributable to the second level of clustering. The method is useful for examining the necessity of including a possibly omitted intermediate level of nesting in analyses of data from a multilevel study, and represents an informative addendum to current statistical tests of second-level variance. The approach is developed within the framework of latent variable modeling and can be used as an aid in the process of choosing between 2-level and 3-level models in a hierarchical design. The discussed procedure is illustrated with an empirical example.
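The target quantity can be written directly from the variance components; a sketch with made-up values for a 3-level design (the article's contribution is the interval estimate around it):

    # Made-up variance components for a 3-level design.
    var_l1 <- 2.00   # level 1: within-group (e.g., students)
    var_l2 <- 0.30   # level 2: intermediate clusters (e.g., classrooms)
    var_l3 <- 0.70   # level 3: top clusters (e.g., schools)

    prop_l2 <- var_l2 / (var_l1 + var_l2 + var_l3)
    prop_l2   # near 0 suggests a 2-level model may suffice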

17.
Change in learning strategies during higher education is an important topic of research when considering students’ approaches to learning. Regarding the statistical techniques used to analyse this change, repeated measures ANOVA is mostly relied upon. Recently, multilevel and multi-indicator latent growth (MILG) analyses have been used as well. The present study provides details concerning the differences between these three techniques. By applying them to the same dataset, we aim to answer two research questions. Firstly, how are findings on the average trend complementary, convergent or divergent? Secondly, how are results on the differential growth over time complementary, convergent or divergent? Data originate from a longitudinal study on the change in learning strategies during the transition from secondary to higher education in Flanders (Belgium). 425 students provided complete data at each of the three waves of data collection. Results on the significance of average trends are convergent, while the strength of the growth over time diverges across analysis techniques. Regarding the differential change, the MILG approach seems better able to detect variance in growth over time. Recommendations for future research on the changeability of learning strategies over time are provided.

18.
This study compared diagonal weighted least squares robust estimation techniques available in 2 popular statistical programs: diagonal weighted least squares (DWLS) in LISREL version 8.80, and weighted least squares-mean adjusted (WLSM) and weighted least squares-mean and variance adjusted (WLSMV) in Mplus version 6.11. A 20-item confirmatory factor analysis was estimated using item-level ordered categorical data. Three different nonnormality conditions were applied to 2- to 7-category data with sample sizes of 200, 400, and 800. Convergence problems were seen with nonnormal data when DWLS was used with few categories. Both DWLS and WLSMV produced accurate parameter estimates; however, bias in standard errors of parameter estimates was extreme for select conditions when nonnormal data were present. The robust estimators generally reported acceptable model-data fit, unless few categories were used with nonnormal data at smaller sample sizes; WLSMV yielded better fit than WLSM for most indices.
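For orientation, the same family of estimators is exposed in R's lavaan (not one of the two programs compared); a sketch with simulated ordered-categorical indicators:

    library(lavaan)
    set.seed(1)

    # Simulate 4 ordered-categorical indicators of one factor.
    n <- 400
    f <- rnorm(n)
    d <- as.data.frame(sapply(1:4, function(i)
           cut(0.7 * f + rnorm(n),
               breaks = c(-Inf, -1, 0, 1, Inf), labels = FALSE)))
    names(d) <- paste0("u", 1:4)

    model <- ' f =~ u1 + u2 + u3 + u4 '
    fit <- cfa(model, data = d, ordered = names(d), estimator = "WLSMV")
    summary(fit, fit.measures = TRUE)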

19.
Generalizability theory (G theory) employs random-effects ANOVA to estimate the variance components included in generalizability coefficients, standard errors, and other indices of precision. The ANOVA models depend on random sampling assumptions, and the variance-component estimates are likely to be sensitive to violations of these assumptions. Yet, generalizability studies do not typically sample randomly. This kind of inconsistency between assumptions in statistical models and actual data collection procedures is not uncommon in science, but it does raise fundamental questions about the substantive inferences based on the statistical analyses. This article reviews criticisms of sampling assumptions in G theory (and in reliability theory) and examines the feasibility of using representative sampling, stratification, homogeneity assumptions, and replications to address these criticisms.
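For a one-facet persons × items G study, the variance components follow directly from the random-effects ANOVA mean squares; a from-scratch sketch on simulated scores:

    set.seed(1)
    np <- 50; ni <- 8   # persons x items, fully crossed, simulated scores
    score <- outer(rnorm(np, 0, 1), rnorm(ni, 0, 0.5), "+") +
             rnorm(np * ni, 0, 1)

    pm <- rowMeans(score); im <- colMeans(score); gm <- mean(score)
    ms_p   <- ni * var(pm)   # mean square for persons
    ms_i   <- np * var(im)   # mean square for items
    ms_res <- sum((score - outer(pm, rep(1, ni)) -
                   outer(rep(1, np), im) + gm)^2) / ((np - 1) * (ni - 1))

    var_p <- (ms_p - ms_res) / ni   # person variance component
    var_i <- (ms_i - ms_res) / np   # item variance component
    g     <- var_p / (var_p + ms_res / ni)   # generalizability coefficient
    c(var_p, var_i, g)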

20.
The multiple-matrix item sampling designs that provide information about population characteristics most efficiently elicit too few responses from each student to estimate proficiencies individually. Marginal estimation procedures, which estimate population characteristics directly from item responses, must be employed to realize the benefits of such a sampling design. Numerical approximations of the appropriate marginal estimation procedures for a broad variety of analyses can be obtained by constructing, from the results of a comprehensive marginal solution, files of plausible values of student proficiencies. This article develops the concepts behind plausible values in a simplified setting, sketches their use in the National Assessment of Educational Progress (NAEP), and illustrates the approach with data from the Scholastic Aptitude Test (SAT).
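In a secondary analysis, the statistic of interest is computed within each set of plausible values and the results are combined in the usual multiple-imputation fashion; a sketch with simulated draws and a made-up population mean:

    set.seed(1)
    M <- 5; n <- 1000
    pv <- replicate(M, rnorm(n, mean = 250, sd = 40))  # M sets of plausible values

    est  <- colMeans(pv)            # statistic of interest in each PV set
    U    <- apply(pv, 2, var) / n   # within-set sampling variances
    qbar <- mean(est)               # combined point estimate
    B    <- var(est)                # between-set variance
    Tvar <- mean(U) + (1 + 1 / M) * B
    c(estimate = qbar, se = sqrt(Tvar))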
