Similar literature
20 similar documents found (search time: 31 ms)
1.
One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations of teaching (SETs); however, this global correlation did not hold true for individual teachers and courses. In fact, the correlations between GPAs and SETs varied widely across teachers (some were negative) and across courses.

2.
Course evaluations (often termed student evaluations of teaching or SETs) are pervasive in higher education. As SETs increasingly shift from pencil-and-paper to online, concerns grow over the lower response rates that typically accompany online SETs. This study of online SET response rates examined data from 678 faculty respondents and student response rates from an entire semester. The analysis focused on the tactics that faculty employ to raise response rates for their courses, and explored instructor and course characteristics as contributing factors. A comprehensive regression model was evaluated to determine the most effective tactics and characteristics. Using incentives had the greatest impact on response rates. Other effective tactics include reminding students to take the evaluation, explaining how the evaluations would be used to improve instruction, sending personal emails and posting reminders on Blackboard®. Incentives are not widely used; however, findings suggest that non-point incentives work as well as point-based ones, as do simple-to-administer class-wide minimum response-rate expectations (compared with rewarding individual completion).

3.
Abstract

The validity of traditional opinion-based student evaluations of teaching (SETs) may be compromised by inattentive responses and low response rates due to evaluation fatigue, and/or by personal response bias. To reduce the impact of evaluation fatigue and personal response bias on SETs, this study explores peer prediction-based SETs as an alternative to opinion-based SETs in a multicultural environment. The results suggest that significantly fewer respondents are needed to reach stable average outcomes when peer prediction-based SETs are used than when opinion-based SETs are used. This implies that peer prediction-based SETs could reduce evaluation fatigue, as not all students would need to complete each evaluation. The results also show that the peer prediction-based method significantly reduces the bias evident in the opinion-based method with respect to gender and prior academic performance. However, with respect to the cultural variables, race and home language, bias was identified in the peer prediction-based method where none was evident in the opinion-based method. These observations, interpreted through the psychology literature on the formation of perceptions of others, imply that although peer prediction-based SETs may in some instances reduce personal response bias, they may introduce the perceived biases of others.

4.
Student evaluation of teaching (SET) is now common practice across higher education, with the results used for both course improvement and quality assurance purposes. While much research has examined the validity of SETs for measuring teaching quality, few studies have investigated the factors that influence student participation in the SET process. This study aimed to address this deficit through the analysis of an SET respondent pool at a large Canadian research-intensive university. The findings were largely consistent with available research (showing influence of student gender, age, specialisation area and final grade on SET completion). However, the study also identified additional influential course-specific factors such as term of study, course year level and course type as statistically significant. Collectively, such findings point to substantively significant patterns of bias in the characteristics of the respondent pool. Further research is needed to specify and quantify the impact (if any) on SET scores. We conclude, however, by recommending that such bias does not invalidate SET implementation, but instead should be embraced and reported within standard institutional practice, allowing better understanding of feedback received, and driving future efforts at recruiting student respondents.

5.
Student evaluations of teaching (SETs) are an important point of assessment for faculty in curriculum development, tenure and promotion decisions, and merit raises. Faculty members utilise SETs to gain feedback on their classes and, hopefully, improve them. The question of the validity of student responses on SETs is a continuing debate in higher education. The current study uses data from two universities (n = 596) to determine whether and under what conditions students are honest on in-class and online SETs, while also assessing their knowledge and attitudes about SETs. Findings reveal that, while students report a high level of honesty on SETs, they are more likely to be honest when they believe that evaluations effectively measure the quality of the course, the results improve teaching and benefit students rather than the administration, and when they are given at the end of the term. Honesty on evaluations is not associated with socio-demographic characteristics.

6.
Students’ evaluations of teacher performance (SETs) are increasingly used by universities. However, SETs are controversial, mainly due to two issues: (1) teachers value various aspects of excellent teaching differently, and (2) SETs should not be driven by exogenous influences. Therefore, this paper constructs SETs using a tailored version of the non-parametric Data Envelopment Analysis approach. In particular, we account for the different values and interpretations that teachers attach to ‘good teaching’. Moreover, we reduce the impact of measurement errors and atypical observations, and account explicitly for heterogeneous background characteristics arising from teacher, student and course characteristics.

7.
Relating students’ evaluations of teaching (SETs) to student learning as an approach to validating SETs has produced inconsistent results. The present study tested the hypothesis that the strength of association between SETs and student learning varies with the criteria used to indicate student learning. A multisection validity approach was employed to investigate the association of SETs with two different criteria of student learning, a multiple-choice test and a practical examination. Participants were N = 883 medical students, enrolled in k = 32 sections of the same course. As expected, results showed a strong positive association between SETs and the practical examination but no significant correlation between SETs and multiple-choice test scores. Furthermore, students’ subjective perception of learning correlated significantly with the practical examination score, whereas no relation was found between subjective learning and the multiple-choice test. It is discussed whether these results might be due to different measures of student learning varying in the degree to which they reflect teaching effectiveness.

8.
Using data on 4 years of courses at American University, regression results show that actual grades have a significant, positive effect on student evaluations of teaching (SETs), controlling for expected grade and fixed effects for both faculty and courses, and for possible endogeneity. Implications are that the SET is a faulty measure of teaching quality and grades a faulty signal of future job performance. Students, faculty, and provost appear to be engaged in an individually rational but socially destructive game of grade inflation centered on the link between SETs and grades. When performance is hard to measure, pay-for-performance, embodied by the link between SETs and faculty pay, may have unintended adverse consequences.

9.
There is a plethora of research on student evaluations of teaching (SETs) regarding their validity, susceptibility to bias, practical use and effective implementation. Given that there is not one study summarising all these domains of research, a comprehensive overview of SETs was conducted by combining all prior meta-analyses related to SETs. Eleven meta-analyses were identified, and nine meta-analyses covering 193 studies were included in the analysis, which yielded a small-to-medium overall weighted mean effect size (r = .26) between SETs and the variables studied. Findings suggest that SETs appear to be valid, have practical use that is largely free from gender bias and are most effective when implemented with consultation strategies. Research, teaching and policy implications are discussed.
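An overall weighted mean effect size like the r = .26 reported above is typically obtained by converting each study's correlation to Fisher's z, weighting by inverse variance (n − 3), averaging, and back-transforming. A minimal sketch of that standard aggregation; the function name and the (r, n) study values are illustrative, not taken from the meta-analyses described:

```python
import math

def weighted_mean_r(studies):
    """Weighted mean correlation across studies.

    studies: list of (r, n) pairs, where r is a study's correlation
    and n its sample size. Each r is converted to Fisher's z and
    weighted by n - 3 (the inverse variance of z).
    """
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher r-to-z transform
        w = n - 3           # inverse-variance weight
        num += w * z
        den += w
    z_bar = num / den
    return math.tanh(z_bar)  # back-transform the mean z to r

# hypothetical study results: (correlation, sample size)
studies = [(0.30, 120), (0.22, 250), (0.28, 80)]
print(weighted_mean_r(studies))
```

Because the z transform is monotone and nearly linear for small r, the pooled value lands close to the simple weighted average of the correlations.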

10.

Although part-time (p/t) faculty constitute a growing proportion of college instructors, there is little work on their teaching effectiveness relative to full-time (f/t) faculty. Previous work on a key indicator of perceived teaching effectiveness, student evaluation of teaching (SET), and faculty status (p/t vs f/t) is marked by a series of shortcomings, including the lack of a systematic theoretical framework and of multivariate statistical techniques to check for possible spuriousness. The present study corrects for these shortcomings. Data consist of SETs from 175 sections of criminal justice classes taught at a Midwestern urban university. Controls are introduced for variables drawn from the literature, including ascribed characteristics of the professor, grade distribution, and structural features of the course (e.g., level, size). The results of a multivariate regression analysis indicate that even after controlling for the other predictors of SETs, p/t faculty receive significantly higher student evaluation scores than f/t faculty. Further, faculty status was the most important predictor of SETs. The results present the first systematic evidence on faculty status and SETs.

11.
In the context of increased emphasis on quality assurance of teaching, it is crucial that student evaluation of teaching (SET) methods be both reliable and workable in practice. Online SETs in particular tend to draw criticism from those most resistant to mechanisms of teaching accountability. However, most studies of SET processes have been conducted with small, convenience, cross-sectional samples. Longitudinal studies are rare, as comparison studies of SET methodological approaches are generally pilot studies followed shortly afterwards by implementation. The investigation presented here contributes significantly to the debate by examining the impact of the online administration method of SET on a very large longitudinal sample at the course level rather than at the student level, thus accounting for the inter-dependency of students’ responses attributable to the instructor. It explores the impact of the administration method of SET (paper-based in-class vs. out-of-class online collection) on scores, with a longitudinal sample of over 63,000 student responses collected over a period of 10 years. Having adjusted for the confounding effects of class size, faculty, year of evaluation, years of teaching experience and student performance, we observe that an effect of the administration method does exist, but it is insignificant.

12.
Abstract

Student evaluations of teaching and courses (SETs) are part of the fabric of tertiary education and quantitative ratings derived from SETs are highly valued by tertiary institutions. However, many staff do not engage meaningfully with SETs, especially if the process of analysing student feedback is cumbersome or time-consuming. To address this issue, we describe a proof-of-concept study to automate aspects of analysing student free text responses to questions. Using Quantext text analysis software, we summarise and categorise student free text responses to two questions posed as part of a larger research project which explored student perceptions of SETs. We compare human analysis of student responses with automated methods and identify some key reasons why students do not complete SETs. We conclude that the text analytic tools in Quantext have an important role in assisting teaching staff with the rigorous analysis and interpretation of SETs and that keeping teachers and students at the centre of the evaluation process is key.

13.
To advance the discussion on the validity of student evaluations of university teaching, student ratings of two teaching dimensions – student involvement and rapport – were compared with corresponding observer ratings. Seven potential bias variables were tested with regard to their impact on the students’ teaching assessment: three teacher characteristics (first impression, enthusiasm, humour) and four student characteristics (prior interest, expected grades, study experience, class attendance). Bias was defined as an impediment of the students’ assessment of teaching at the course level. By means of bivariate correlations with course averages and two-level latent moderated structural equations, data of 1,716 students in 80 courses were analysed. Results showed that all three teacher characteristics were genuinely connected to rapport, and even explained variance of the student-rated variable when controlling for observer-rated rapport. The assessment of student involvement was not modified by the teacher characteristics except for teacher enthusiasm, which affected the student evaluation when controlling for observed involvement and, moreover, moderated the relation between the observed and the student-rated variable. For the examined student characteristics, no biasing effects were found – neither on rapport nor on student involvement.

14.
This paper evaluates the impact of teaching innovations, introduced in public primary schools under the Children Resources International (CRI) Program, on student outcomes. We estimate students’ learning based on their scores on standardized tests. We match schools and children within the treatment and comparison group and find that the CRI Program has been effective in raising learning achievement. Moreover, the results are robust to unobserved selection bias. The average gain for a CRI student represents an improvement of 0.40 standard deviations. The results stay unchanged when we use alternative estimators for the treatment effect including the bias-corrected estimator proposed by Abadie and Imbens (2006).
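The matching strategy described above can be illustrated in miniature: pair each treated student with the nearest comparison student on a baseline covariate, then average the outcome differences. This is a bare 1-nearest-neighbour sketch, not the bias-corrected Abadie–Imbens estimator the paper applies, and the data and function name are hypothetical:

```python
def matched_att(treated, control):
    """Average treatment effect on the treated (ATT) via
    1-nearest-neighbour matching on a single covariate.

    treated, control: lists of (baseline_score, outcome_score) pairs.
    """
    diffs = []
    for x_t, y_t in treated:
        # find the control unit closest on the baseline covariate
        _, y_c = min(control, key=lambda c: abs(c[0] - x_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

# hypothetical (baseline, endline) test scores
treated = [(50, 62), (60, 70), (70, 78)]
control = [(49, 58), (61, 65), (72, 71)]
print(matched_att(treated, control))  # mean matched outcome difference
```

The bias correction in Abadie and Imbens (2006) additionally adjusts each matched difference for the remaining covariate gap between a unit and its match, which matters when exact matches are unavailable.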

15.
Counseling instructors based on evaluations made by their students has been shown to be a fruitful approach to enhancing teaching quality. However, prior experimental studies are questionable in terms of external validity. Therefore, we conducted a non-experimental intervention study in which all of the courses offered by a specific department at a German university were evaluated twice with a standardized student evaluation questionnaire (HILVE-II; overall 44 instructors, 140 courses, and 2,546 student evaluations). Additionally, twelve full-time instructors received counseling after the first measurement point. Long-term effects over a period of 2 years and transfer effects to other courses were analyzed using multi-level analyses with three levels. Possible influences of bias and unfairness variables were controlled for. Our results indicate a moderate to large effect of counseling on teaching quality. In conclusion, if students’ evaluations are accompanied by counseling based on the evaluation results, they present a useful method for assuring and increasing teaching quality in higher education.

16.

Previous research examining computer-assisted teaching is inconclusive. Some studies find enhanced student performance while others find no difference from traditional-approach pedagogy. This case study compares student performance and course evaluations for computer-assisted and traditional-approach sections in three criminal justice courses: crime theory, criminal courts, and inequality in the justice system. Overall results indicate a significant difference between student performance in computer-assisted and traditional classes. Yet differences are not the same for each course. The theory course shows the least difference while the courts course had the greatest difference. Student evaluation data indicate computer-assisted activities are enjoyed, yet differences from traditional-approach sections are not significant. Questions for future research on the use of technology in teaching are raised.

17.
This study examined the validity of students’ evaluations of teaching as an instrument for measuring teaching quality by examining the effects of likability and prior subject interest as potential biasing effects, measured at the beginning of the course and at the time of evaluation. University students (N = 260) evaluated psychology courses in one semester at a German university with a standardized questionnaire, yielding 517 data points. Cross-classified multilevel analyses revealed fixed effects of likability at both times of measurement and fixed effects of prior subject interest measured at the beginning of the course. Likability seems to exert a substantial bias on student evaluations of teaching, albeit one that is overestimated when measured at the time of evaluation. In contrast, prior subject interest seems to introduce a weak bias. Considering that likability bears no conceptual relationship to teaching quality, these findings point to a compromised validity of students’ evaluations of teaching.

18.
Student evaluation of courses: what predicts satisfaction?
The main goals of course evaluations are to obtain student feedback regarding courses and teaching for improvement purposes and to provide a defined and practical process to ensure that actions are taken to improve courses and teaching. Of the items on course evaluation forms, the one that receives the most attention and consequently the most weight is the question, ‘Overall, I was satisfied with the quality of this course.’ However, no attention has been placed on examining the predictors of students being ‘satisfied with the quality of this course’ overall. This study attempts to address this gap. The findings show that while student characteristics and reasons for enrolling in a course are predictors of overall satisfaction, it is the evaluation questions that predict the majority of the variation in course satisfaction. The findings also reveal that faculty-selected optional questions are stronger predictors of overall satisfaction than compulsory questions.

19.
This paper studies the effect of teacher gender and ethnicity on student evaluations of teaching at university. We analyze a unique data-set featuring mixed teaching teams and a diverse, multicultural, multi-ethnic group of students and teachers. Blended co-teaching allows us to study the link between student evaluations of teaching and teacher gender as well as ethnicity, exploiting within-course variation in a panel data model with course-year fixed effects. We document a negative effect of being a female teacher on student evaluations of teaching, which amounts to roughly one fourth of the sample standard deviation of teaching scores. Overall, women are 11 percentage points less likely than men to attain the teaching evaluation cut-off for promotion to associate professor. The effect is robust to a host of co-variates such as course leadership, teacher experience and research quality, as well as an alternative teacher fixed effect specification. There is no evidence of a corresponding ethnicity effect. Our results are suggestive of a gender bias against female teachers and indicate that the use of teaching evaluations in hiring and promotion decisions may put female lecturers at a disadvantage.

20.
This paper examines the effects of instructors’ attractiveness on student evaluations of their teaching. We build on previous studies by holding both observed and unobserved characteristics of the instructor and classes constant. Our identification strategy exploits the fact that many instructors, in addition to traditional teaching in the classroom, also teach in the online environment, where attractiveness is either unknown or less salient. We utilize multiple attractiveness measures, including facial symmetry software, subjective evaluations, and a novel, proxy methodology that resembles a “Keynesian Beauty Contest.” We identify a substantial beauty premium in face-to-face classes for women but not for men. While gender on its own does not impact teaching evaluation scores, female instructors rated as more attractive receive higher instructional ratings. This result holds across several beauty measures, given a multitude of controls and while controlling for unobserved instructor characteristics and skills. Notably, the positive relationship between beauty and teaching effectiveness is not found in the online environment, suggesting the observed premium may be due to discrimination.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号