Similar Articles (20 results)
1.
Abstract

The validity of traditional opinion-based student evaluations of teaching (SETs) may be compromised by inattentive responses and low response rates due to evaluation fatigue, and/or by personal response bias. To reduce the impact of evaluation fatigue and personal response bias on SETs, this study explores peer prediction-based SETs as an alternative to opinion-based SETs in a multicultural environment. The results suggest that significantly fewer respondents are needed to reach stable average outcomes when peer prediction-based SETs are used than when opinion-based SETs are used. This implies that peer prediction-based SETs could reduce evaluation fatigue, as not all students would need to complete each evaluation. The results also show that the peer prediction-based method significantly reduces the bias evident in the opinion-based method with respect to gender and prior academic performance. However, with respect to the cultural variables, race and home language, bias was identified in the peer prediction-based method where none was evident in the opinion-based method. These observations, interpreted through the psychology literature on the formulation of perceptions of others, imply that although peer prediction-based SETs may in some instances reduce personal response bias, they may introduce the perceived biases of others.

2.
As student evaluation of teaching (SET) instruments are increasingly administered online, research has found that response rates have dropped significantly. Validity concerns have necessitated research that explores student motivation for completing SETs. This study uses Vroom's [(1964). Work and motivation (3rd ed.). New York, NY: John Wiley & Sons] expectancy theory to frame student focus group responses regarding their motivations for completing and not completing paper and online SETs. Results show that students consider the following outcomes when deciding whether to complete SETs: (a) course improvement, (b) appropriate instructor tenure and promotion, (c) accurate instructor ratings being available to students, (d) spending a reasonable amount of time on SETs, (e) retaining anonymity, (f) avoiding social scrutiny, (g) earning points and releasing grades, and (h) being a good university citizen. Results show that the lower online response rate is largely due to students' differing feelings of obligation in the two formats. Students also noted that, in certain situations, they answer SETs insincerely.

3.
Student evaluations of teaching (SETs) are an important point of assessment for faculty in curriculum development, tenure and promotion decisions, and merit raises. Faculty members utilise SETs to gain feedback on their classes and, hopefully, improve them. The question of the validity of student responses on SETs is a continuing debate in higher education. The current study uses data from two universities (n = 596) to determine whether and under what conditions students are honest on in-class and online SETs, while also assessing their knowledge and attitudes about SETs. Findings reveal that, while students report a high level of honesty on SETs, they are more likely to be honest when they believe that evaluations effectively measure the quality of the course, the results improve teaching and benefit students rather than the administration, and when they are given at the end of the term. Honesty on evaluations is not associated with socio-demographic characteristics.

4.
Abstract

Student evaluations of teaching and courses (SETs) are part of the fabric of tertiary education and quantitative ratings derived from SETs are highly valued by tertiary institutions. However, many staff do not engage meaningfully with SETs, especially if the process of analysing student feedback is cumbersome or time-consuming. To address this issue, we describe a proof-of-concept study to automate aspects of analysing student free text responses to questions. Using Quantext text analysis software, we summarise and categorise student free text responses to two questions posed as part of a larger research project which explored student perceptions of SETs. We compare human analysis of student responses with automated methods and identify some key reasons why students do not complete SETs. We conclude that the text analytic tools in Quantext have an important role in assisting teaching staff with the rigorous analysis and interpretation of SETs and that keeping teachers and students at the centre of the evaluation process is key.

5.
The use of student evaluations of teaching (SETs) to assess teaching effectiveness remains controversial. Without clear guidelines regarding how to best document effective teaching, faculty members may wonder how to convincingly demonstrate teaching effectiveness in preparation for promotion and tenure review. Based on a study that examined the relations among student grades, learning, and SETs, we identify a relatively unencumbered approach to documenting teaching effectiveness more comprehensively than through the use of SETs alone. Students enrolled in eight sections of general psychology (N = 165) completed pre‐ and post‐ measures of learning, SETs, and a brief demographic questionnaire. Results of a regression analysis provided partial support for the notion that SETs and learning measures assess distinct aspects of teaching effectiveness. In preparing documentation for promotion and tenure review, faculty members should consider including measures of student learning along with SETs in order to document teaching effectiveness more convincingly and comprehensively.

6.
Abstract

In 2010, new amendments regarding special education were made to the Finnish Basic Education Act (642/2010), and they were officially adopted in 2011. The three-tiered support system that was introduced can be considered the Finnish approach to moving education toward a more inclusive system since it emphasises all teachers' responsibility to deliver support within the regular educational setting, representing a new feature in the policy documents. This has brought about new expectations for special education teachers' (SETs') roles. Our research aims to contribute to knowledge about the implementation of the three-tiered support system and SETs' roles in Swedish-speaking schools in Finland. The data were collected using a questionnaire (N = 158). The results indicate that the SETs have an important role in the three-tiered support system, both as those with the knowledge and those who share this knowledge. The SETs' role is more evident when it comes to pupils receiving support on the second and third tiers. Although inclusive values are emphasised in the policy documents, the SETs still use most of their time teaching pupils in educational settings that are often relatively segregated (individual or small-group teaching), and for example, co-teaching seems to be a less frequent approach to collaboration.

7.
The current study explores the feelings and thoughts that faculty have about their student evaluations of teaching (SET). To assess the perceptions of SETs, all teaching faculty in one college at a western Land Grant University were asked to complete an anonymous online survey. The survey included demographic questions (i.e. gender; rank such as assistant, associate, and full professor; and positions like non-tenure track, tenure track, and tenured) as well as questions related to faculty's feelings while reading their SETs. While minimal differences were found in responses based on rank or position, several differences were found based on faculty gender. Overall, female faculty appear to be more negatively impacted by student evaluations than male faculty. These gender differences support previous research that suggests males and females receive and react differently to personal evaluation. Resultant suggestions include modifying surveys from anonymous to confidential and offering professional development training for faculty.

8.
Research on the relationship between research productivity and student evaluations of teaching (SETs) has been marked by several shortcomings. First, research typically fails to check and adjust for nonlinear distributions in research productivity. Since approximately 15% of researchers account for most articles and citations (e.g., Zuckerman, H., Handbook of Sociology, Sage Publications, Newbury Park, CA, pp. 511–574, 1988), this failure might explain weak or nonsignificant findings in some of the past research. Second, the unit of analysis is typically the instructor, not the class. Since top researchers might disproportionately teach small classes at the graduate level, and since SETs are usually higher in such classes, the small relationships between research excellence and SETs found in previous research may be spurious. The present study corrects for each of these issues. It analyzes data from 167 classes in the social sciences taught by 65 faculty members. The quality of research productivity (raw citations/post-PhD year) is not related to SETs. However, when the distribution of citations is corrected for skewness, a significant positive relationship between research productivity and SETs emerges. This relationship survives controls for course and instructor characteristics, and holds for both the faculty member and the class as units of analysis. This is the first systematic investigation to demonstrate a significant relationship between the quality of research (citations) and SETs.
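The skew correction this abstract turns on can be sketched in a few lines: citation counts are heavily right-skewed, and transforming them before correlating with SETs can reveal a relationship that the raw counts mask. The data here are synthetic and the log transform is an assumption; the study does not specify which correction it applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data: citation counts are heavily right-skewed (a few
# researchers account for most citations); SETs are roughly normal.
# Sample size and effect sizes are illustrative only.
n = 167
latent = rng.normal(0, 1, n)                              # shared "quality" factor
citations = np.round(np.exp(1.5 * latent + rng.normal(0, 1, n)) * 5)
sets = 4.0 + 0.3 * latent + rng.normal(0, 0.5, n)

# Correlation with raw, skewed citation counts
r_raw, _ = stats.pearsonr(citations, sets)

# Correct skewness with a log transform before correlating
r_log, _ = stats.pearsonr(np.log1p(citations), sets)

print(f"skew of raw citations = {stats.skew(citations):.1f}")
print(f"r (raw citations) = {r_raw:.2f}")
print(f"r (log citations) = {r_log:.2f}")
```

With skewed counts, a handful of extreme values dominate the Pearson statistic; the transform pulls them in, which is one plausible reading of why the corrected analysis finds a relationship where raw counts do not.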

9.
One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations of teaching (SETs); however, this global correlation did not hold true for individual teachers and courses. In fact, there was a large variance in the correlations between GPAs and SETs, including some teachers with a negative correlation and a large variance between courses.

10.
Though there have been many studies conducted that emphasise faculty reflection as a crucial feature of professional practice, there appears to have been little empirical evidence to support the proposition that reflective practice improves the quality of teaching. Previous research demonstrated that reflective practice could be encouraged by weekly formative student evaluations of teaching (SETs). This study investigated the impact of reported reflective practice using formative SETs on changes to summative SETs, typically conducted at the end of a teaching period. Data were collected in a rural UK‐based university‐college across 11 modules (six faculty members, 413 students) in Business, Countryside and Environment, Foundation Degree and Veterinary Nursing programmes over two years. Findings show that, on average, SET scores increased for all reflective practitioners year on year and increased more for those faculty members who demonstrated higher levels of reflection.

11.
Student evaluation of teaching (SET) is now common practice across higher education, with the results used for both course improvement and quality assurance purposes. While much research has examined the validity of SETs for measuring teaching quality, few studies have investigated the factors that influence student participation in the SET process. This study aimed to address this deficit through the analysis of an SET respondent pool at a large Canadian research-intensive university. The findings were largely consistent with available research (showing influence of student gender, age, specialisation area and final grade on SET completion). However, the study also identified additional course-specific factors, such as term of study, course year level and course type, as statistically significant. Collectively, such findings point to substantively significant patterns of bias in the characteristics of the respondent pool. Further research is needed to specify and quantify the impact (if any) on SET scores. We conclude, however, that such bias does not invalidate SET implementation; instead, it should be acknowledged and reported within standard institutional practice, allowing better understanding of feedback received and driving future efforts at recruiting student respondents.

12.
Relating students’ evaluations of teaching (SETs) to student learning as an approach to validate SETs has produced inconsistent results. The present study tested the hypothesis that the strength of association of SETs and student learning varies with the criteria used to indicate student learning. A multisection validity approach was employed to investigate the association of SETs and two different criteria of student learning, a multiple-choice test and a practical examination. Participants were N = 883 medical students, enrolled in k = 32 sections of the same course. As expected, results showed a strong positive association between SETs and the practical examination but no significant correlation between SETs and multiple-choice test scores. Furthermore, students’ subjective perception of learning significantly correlated with the practical examination score whereas no relation was found for subjective learning and the multiple-choice test. It is discussed whether these results might be due to different measures of student learning varying in the degree to which they reflect teaching effectiveness.
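The multisection design described here reduces to a simple computation: correlate section-level mean SETs with section-level mean scores on each learning criterion. The sketch below uses synthetic data in which, by construction, the practical exam tracks teaching quality strongly and the multiple-choice test only weakly; all numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# k sections of the same course, each with its own teacher (as in a
# multisection validity study). Effect sizes below are assumptions.
k = 32
teaching_quality = rng.normal(0, 1, k)

# Section means: SETs and the practical exam load on teaching quality;
# the multiple-choice test is assumed to reflect it only weakly.
set_means = 4.0 + 0.5 * teaching_quality + rng.normal(0, 0.2, k)
practical_means = 70 + 5.0 * teaching_quality + rng.normal(0, 2.0, k)
mc_means = 70 + 0.5 * teaching_quality + rng.normal(0, 5.0, k)

r_practical, p_practical = stats.pearsonr(set_means, practical_means)
r_mc, p_mc = stats.pearsonr(set_means, mc_means)

print(f"SET vs practical exam:  r = {r_practical:.2f} (p = {p_practical:.3f})")
print(f"SET vs multiple choice: r = {r_mc:.2f} (p = {p_mc:.3f})")
```

Aggregating to section means is what distinguishes this design from student-level correlations: it isolates between-teacher variation, which is the part SETs are supposed to measure.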

13.
Course evaluations (often termed student evaluations of teaching or SETs) are pervasive in higher education. As SETs increasingly shift from pencil-and-paper to online, concerns grow over the lower response rates that typically accompany online SETs. This study of online SET response rates examined data from 678 faculty respondents and student response rates from an entire semester. The analysis focused on those tactics that faculty employ to raise response rates for their courses, and explored instructor and course characteristics as contributing factors. A comprehensive regression model was evaluated to determine the most effective tactics and characteristics. Using incentives had the most impact on response rates. Other effective tactics that increase response rates include reminding students to take the evaluation, explaining how the evaluations would be used to improve instruction, sending personal emails and posting reminders on Blackboard®. Incentives are not widely used; however, findings suggest that non-point incentives work as well as point-based ones, as do simple-to-administer minimum class-wide response rate expectations (compared to individual completion).

14.
This exploratory study considered Larrivee’s assessment of teachers’ reflective practice levels by using a formative, weekly, online student evaluation of teaching (SET) tool through a virtual learning environment (VLE) as a means to encourage reflective practice. In‐depth interviews were conducted with six faculty members in three departments at a university college in the UK. The study found that: (a) faculty who experienced surface‐level reflection were more likely to have a reactive reflection style; and (b) faculty who experienced higher levels of reflection were more likely to have a proactive reflection style. Overall, the tool was found to be an efficient means of encouraging reflection by all participants and demonstrated that reflective practice could come about as a result of these weekly formative SETs. The study concludes with suggestions for academic development and future research on reflection that could be conducted using SETs via a VLE.

15.
Student evaluation of teaching (SET) ratings are used to evaluate faculty's teaching effectiveness based on a widespread belief that students learn more from highly rated professors. The key evidence cited in support of this belief are meta-analyses of multisection studies showing small-to-moderate correlations between SET ratings and student achievement (e.g., Cohen, 1980, Cohen, 1981; Feldman, 1989). We re-analyzed previously published meta-analyses of the multisection studies and found that their findings were an artifact of small-sample studies and publication bias. Whereas the small-sample studies showed large to moderate correlations, the large-sample studies showed no or only minimal correlation between SET ratings and learning. Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between SET ratings and learning. These findings suggest that institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty's teaching effectiveness.
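The sample-size artifact this re-analysis describes can be illustrated with standard fixed-effect pooling of correlations (Fisher z-transform, inverse-variance weights of n − 3). The study correlations below are invented to mimic the pattern the abstract reports: large r in small studies, near-zero r in large ones; this is a sketch of the mechanism, not the authors' actual meta-analysis.

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Fixed-effect pooled correlation: Fisher z-transform each study's r,
    weight by n - 3 (the inverse variance of z), then back-transform."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)        # Fisher z
    w = ns - 3                # inverse-variance weights
    z_bar = np.sum(w * z) / np.sum(w)
    return float(np.tanh(z_bar))

# Invented studies: small samples with large r, large samples near zero.
rs = [0.60, 0.50, 0.45, 0.05, 0.02, -0.01]
ns = [10, 12, 15, 300, 450, 500]

r_unweighted = float(np.mean(rs))          # what a naive average suggests
r_pooled = pooled_correlation(rs, ns)      # what size-weighting suggests

print(f"unweighted mean r = {r_unweighted:.2f}")
print(f"sample-size-weighted pooled r = {r_pooled:.2f}")
```

The unweighted mean is dominated by the small, imprecise studies; once each study is weighted by its precision, the pooled correlation collapses toward zero, which is the pattern the re-analysis attributes to small-study effects and publication bias.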

16.
Student evaluations of teaching (SETs) are widely used to measure teaching quality in higher education and compare it across different courses, teachers, departments and institutions. Indeed, SETs are of increasing importance for teacher promotion decisions, student course selection, as well as for auditing practices demonstrating institutional performance. However, survey response is typically low, rendering these uses unwarranted if students who respond to the evaluation are not randomly selected along observed and unobserved dimensions. This paper is the first to fully quantify this problem by analyzing the direction and size of selection bias resulting from both observed and unobserved characteristics for over 3000 courses taught in a large European university. We find that course evaluations are upward biased, and that correcting for selection bias has non-negligible effects on the average evaluation score and on the evaluation-based ranking of courses. Moreover, this bias mostly derives from selection on unobserved characteristics, implying that correcting evaluation scores for observed factors such as student grades does not solve the problem. However, we find that adjusting for selection only has small impacts on the measured effects of observables on SETs, validating a large related literature which considers the observable determinants of evaluation scores without correcting for selection bias.
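A minimal sketch of why non-random response inflates evaluation scores, and of a weighting-based correction: the simulation below uses inverse-probability weighting with a known response model on synthetic data. Note this is an assumption-laden illustration, not the paper's method; in practice the response probability must be estimated, and IPW on observed characteristics cannot fix the selection on unobservables that the paper emphasizes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic course: satisfied students are more likely to respond,
# so the observed mean evaluation is biased upward.
n = 3000
true_score = rng.normal(3.5, 0.8, n).clip(1, 5)   # 1-5 rating scale

# Response probability rises with satisfaction (logistic response model).
p_respond = 1 / (1 + np.exp(-(true_score - 3.5)))
responded = rng.random(n) < p_respond

observed_mean = true_score[responded].mean()      # upward biased

# Inverse-probability weighting: weight each respondent by 1/p(respond).
# Here p_respond is known because the data are simulated; in practice it
# would be estimated from observed student characteristics.
w = 1 / p_respond[responded]
weighted_mean = np.average(true_score[responded], weights=w)

print(f"true mean     = {true_score.mean():.2f}")
print(f"observed mean = {observed_mean:.2f}")
print(f"IPW-corrected = {weighted_mean:.2f}")
```

Each low-probability respondent stands in for the similar students who did not respond, which pulls the weighted mean back toward the full-population mean.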

17.
The relation of student personality to student evaluations of teaching (SETs) was determined in a sample of 144 undergraduates. Student Big Five personality variables and core self-evaluation (CSE) were assessed. Students rated their most preferred instructor (MPI) and least preferred instructor (LPI) on 11 common evaluation items. Pearson and partial correlations simultaneously controlling for six demographic variables, Extraversion, Conscientiousness and Openness showed that SETs were positively related to Agreeableness and CSE and negatively related to Neuroticism, supporting the three hypotheses of the study. Each of these significant relations was maintained when MPI, LPI or a composite of MPI and LPI served as the SET criterion. For example, the MPI-LPI composite correlated .28 with Agreeableness, .35 with CSE and –.28 with Neuroticism. Similar correlations resulted for MPI and LPI. Hierarchical multiple regression demonstrated that CSE was an independent predictor of MPI ratings, Agreeableness was an independent predictor of LPI ratings, and both CSE and Agreeableness were independent predictors of MPI-LPI composite ratings. Neuroticism did not emerge as an independent predictor because of the substantial correlation between CSE and Neuroticism (r = .53) and because CSE had greater predictive capacity. This is the first study to incorporate the CSE construct into the SET literature.
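Partial correlations of the kind used here can be computed by the textbook residual method: regress each variable on the controls and correlate the residuals. The data below are synthetic, with parameters loosely echoing the reported CSE–Neuroticism overlap; the variable names and effect sizes are assumptions for illustration only.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Partial correlation of x and y controlling for covariates,
    via correlating the OLS residuals of each on the covariates."""
    X = np.column_stack([np.ones(len(x)), covariates])
    beta_x, *_ = np.linalg.lstsq(X, x, rcond=None)
    beta_y, *_ = np.linalg.lstsq(X, y, rcond=None)
    rx = x - X @ beta_x
    ry = y - X @ beta_y
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(2)
n = 144  # sample size matching the abstract; the data are synthetic

# Assumed structure: Neuroticism overlaps substantially with CSE, and
# SET ratings depend mainly on CSE.
cse = rng.normal(0, 1, n)
neuroticism = -0.5 * cse + rng.normal(0, 0.85, n)
set_rating = 0.35 * cse + rng.normal(0, 0.9, n)

r_simple = float(np.corrcoef(neuroticism, set_rating)[0, 1])
r_partial = partial_corr(neuroticism, set_rating, cse.reshape(-1, 1))

print(f"simple r (Neuroticism, SET)  = {r_simple:.2f}")
print(f"partial r, controlling CSE   = {r_partial:.2f}")
```

In this setup the simple Neuroticism–SET correlation largely reflects the shared CSE component, so partialling out CSE shrinks it toward zero, mirroring the abstract's finding that Neuroticism drops out once CSE is in the model.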

18.
Student evaluations of teaching (SET) are used globally by higher education institutions for performance assessment of academic staff and evaluation of course quality. Higher education institutions commonly develop their own SETs to measure variables deemed relevant to them. However, ‘home-grown’ SETs are rarely assessed psychometrically. One potential consequence of this limitation is that an invalidated instrument may not provide accurate information for the intended purposes. Moreover, in the absence of psychometric assessment, the students’ voices collected by the SETs often fail to provide insight relative to their intended purpose. The present study evaluates a ‘home-grown’ SET using a Rasch model and confirmatory factor analysis. Our results identified weaknesses in two areas: the rating categories and the number of items used to measure the intended constructs. Suggestions are provided to address these weaknesses. This work provides an additional tool set for critical analysis of SET that is generally applicable for a variety of institutions, including those in Asia.

19.
The development of the New Public Management movement has given rise to social accountability for quality in higher education. The emphasis universities place on student evaluations of teaching is an adaptive response to society's concern with the organisational performance of higher education, the central position of students, and the effectiveness of teaching. The design of student evaluation systems is constrained by the limits of students' evaluative competence, the diversity of standards for effective teaching, the complexity of confounding factors, and uncertainty about the impact on academic freedom. In building internal evaluation systems, Chinese universities should combine student evaluations with expert evaluations, develop diverse standards for effective teaching, and moderately reduce the weight given to student evaluation results.

20.
The literature on student evaluations of teaching (SETs) generally presents two opposing camps: those who believe in the validity and usefulness of SETs, and those who do not. Some researchers have suggested that ‘SET deniers’ resist SETs because of their own poor SET results. To test this hypothesis, I analysed essays by 230 SET researchers (170 lead authors) and classified the researchers as having negative, neutral or positive attitudes towards SETs. I retrieved their RateMyProfessors.com (RMP) scores and, using logistic regression, found that lead authors with negative attitudes towards SETs were 14 times more likely to score below an estimated RMP average than lead authors with positive attitudes towards SETs. Co-authors and researchers with neutral attitudes, on the other hand, did not significantly differ from the RMP average. These results suggest that personal attitudes towards SETs may drive research findings.
