Similar Literature
20 similar documents found.
1.
In the last 10–15 years, many institutions of higher education have switched from paper-and-pencil to online administration of student evaluations of teaching (SETs). One consequence has been a significant reduction in response rates to such instruments. The current study was conducted to identify whether offering in-class time to students to complete online SETs would increase response rates. A quasi-experiment (nonequivalent group design) was conducted in which one group of tenured faculty instructed students to bring internet-capable electronic devices on a specified day and offered in-class time for students to complete online SETs; a communication protocol for faculty members’ use was developed and implemented. A comparison group of tenured faculty who did not offer in-class time for SET completion was identified, and the difference-in-differences method was used to compare the previous year’s response rates for the same instructor teaching the same course across the two groups. Response rates were substantially higher when faculty provided in-class time for students to complete SETs. These results indicate that high response rates can be obtained for online SETs submitted by students in face-to-face classes if faculty communicate the importance of SETs in both their words and actions.
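A minimal sketch of the difference-in-differences comparison described above, assuming a simple two-group, two-period setup; the column names, data and model specification are illustrative only and are not taken from the study.

```python
# Illustrative difference-in-differences estimate of the in-class-time effect
# on online SET response rates. Data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per instructor-course offering: response rate (0-1), whether the
# instructor offered in-class time (treated), and whether the observation is
# from the intervention year (post=1) or the prior year (post=0).
df = pd.DataFrame({
    "response_rate": [0.42, 0.45, 0.40, 0.85, 0.44, 0.47, 0.41, 0.80],
    "treated":       [0,    0,    1,    1,    0,    0,    1,    1],
    "post":          [0,    1,    0,    1,    0,    1,    0,    1],
})

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("response_rate ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])
```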

2.
Course evaluations (often termed student evaluations of teaching, or SETs) are pervasive in higher education. As SETs increasingly shift from pencil-and-paper to online administration, concerns grow over the lower response rates that typically accompany online SETs. This study of online SET response rates examined survey data from 678 faculty respondents together with student response rates from an entire semester. The analysis focused on the tactics that faculty employ to raise response rates for their courses, and explored instructor and course characteristics as contributing factors. A comprehensive regression model was evaluated to determine the most effective tactics and characteristics. Offering incentives had the greatest impact on response rates. Other effective tactics that increase response rates include reminding students to take the evaluation, explaining how the evaluations would be used to improve instruction, sending personal emails and posting reminders on Blackboard®. Incentives are not widely used; however, findings suggest that non-point incentives work as well as point-based ones, as do simple-to-administer class-wide minimum response-rate expectations (compared with individual completion requirements).

3.
In the context of increased emphasis on quality assurance of teaching, it is crucial that student evaluation of teaching (SET) methods be both reliable and workable in practice. Online SETs in particular tend to draw criticism from those most reactive to mechanisms of teaching accountability. However, most studies of SET processes have been conducted with small, cross-sectional convenience samples. Longitudinal studies are rare, as comparison studies of SET methodological approaches are generally pilot studies followed shortly afterwards by implementation. The investigation presented here contributes significantly to the debate by examining the impact of the online administration method of SET on a very large longitudinal sample at the course level rather than at the student level, thus accounting for the interdependence of students’ responses within instructors. It explores the impact of the administration method of SET (paper-based in-class vs. out-of-class online collection) on scores, with a longitudinal sample of over 63,000 student responses collected over a period of 10 years. Having adjusted for the confounding effects of class size, faculty, year of evaluation, years of teaching experience and student performance, we observe that an effect of the administration method does exist, but it is insignificant.

4.
As student evaluation of teaching (SET) instruments are increasingly administered online, research has found that response rates have dropped significantly. Validity concerns have necessitated research that explores student motivation for completing SETs. This study uses Vroom's [(1964). Work and motivation (3rd ed.). New York, NY: John Wiley & Sons] expectancy theory to frame student focus group responses regarding their motivations for completing, and not completing, paper and online SETs. Results show that students consider the following outcomes when deciding whether to complete SETs: (a) course improvement, (b) appropriate instructor tenure and promotion, (c) accurate instructor ratings being available to students, (d) spending a reasonable amount of time on SETs, (e) retaining anonymity, (f) avoiding social scrutiny, (g) earning points and releasing grades, and (h) being a good university citizen. Results also show that the lower online response rate is largely due to students’ differing feelings of obligation in the two formats. Students further noted that, in certain situations, they answer SETs insincerely.

5.
The literature on student evaluations of teaching (SETs) generally presents two opposing camps: those who believe in the validity and usefulness of SETs, and those who do not. Some researchers have suggested that ‘SET deniers’ resist SETs because of their own poor SET results. To test this hypothesis, I analysed essays by 230 SET researchers (170 lead authors) and classified the researchers as having negative, neutral or positive attitudes towards SETs. I retrieved their RateMyProfessors.com (RMP) scores and, using logistic regression, found that lead authors with negative attitudes towards SETs were 14 times more likely to score below an estimated RMP average than lead authors with positive attitudes towards SETs. Co-authors and researchers with neutral attitudes, on the other hand, did not significantly differ from the RMP average. These results suggest that personal attitudes towards SETs may drive research findings.
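A sketch of the kind of logistic regression described above: predicting whether an author's RateMyProfessors score falls below an estimated site average from the author's attitude towards SETs. The data, coding and reference category are invented for illustration and do not reproduce the study's analysis.

```python
# Hypothetical example: odds of scoring below the RMP average by SET attitude.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    # 1 = RMP score below the estimated site average, 0 = at or above it
    "below_avg": [1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0],
    # Author's attitude towards SETs; 'positive' is used as the reference level
    "attitude": ["negative", "negative", "negative", "negative",
                 "positive", "positive", "positive", "positive", "positive",
                 "neutral", "neutral", "neutral"],
})

fit = smf.logit("below_avg ~ C(attitude, Treatment(reference='positive'))",
                data=df).fit()

# Odds ratios are exp(coefficient); the paper reports roughly 14x higher odds
# for lead authors with negative attitudes.
print(np.exp(fit.params))
```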

6.
We present an equation, derived from standard statistical theory, that can be used to estimate the sampling margin of error for student evaluations of teaching (SETs). We use the equation to examine the effect of sample size, response rate and sample variability on the estimated sampling margin of error, and present results in four tables that allow users to assess the interpretative validity (IV) of a SET score for a specific evaluation context. In this framework, a small margin of error, e.g. 3% of the range, suggests greater precision, or higher IV, in a score, whereas a large margin of error, e.g. 10% of the range, suggests lower IV. We review the SET literature and summarise five ways in which low response rates may be ameliorated. We also compare our results to previously published studies. Our findings matched, but greatly extended, prior results.
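The abstract does not reproduce the equation itself. A standard sampling margin-of-error formula with a finite population correction, consistent with the quantities the abstract mentions (sample size, response rate and sample variability), would take the following form; the paper's exact equation may differ.

```latex
\[
  \mathrm{MoE} \;=\; z_{\alpha/2}\,\frac{s}{\sqrt{n}}\,\sqrt{\frac{N-n}{N-1}}
\]
```

Here N is the class enrolment, n the number of responses (so n/N is the response rate), s the sample standard deviation of the SET scores, and z_{alpha/2} the critical value for the chosen confidence level; as n approaches N the correction factor drives the margin of error towards zero.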

7.
The current study explores the feelings and thoughts that faculty have about their student evaluations of teaching (SET). To assess the perceptions of SETs, all teaching faculty in one college at a western Land Grant University were asked to complete an anonymous online survey. The survey included demographic questions (i.e. gender; rank such as assistant, associate, and full professor; and positions like non-tenure track, tenure track, and tenured) as well as questions related to faculty's feelings while reading their SETs. While minimal differences were found in responses based on rank or position, several differences were found based on faculty gender. Overall, female faculty appear to be more negatively impacted by student evaluations than male faculty. These gender differences support previous research that suggests males and females receive and react differently to personal evaluation. Resultant suggestions include modifying surveys from anonymous to confidential and offering professional development training for faculty.

8.
When response rates on student evaluations of teaching (SETs) are low, the inability to properly interpret and use responses from the students who do participate is a serious problem. Where does the motivation to participate break down, and where and how does it make sense to invest effort in rectifying that? In this study, we examined 641 university students’ reported behaviours and motivation related to their SET participation. In terms of behaviour, students who seldom or never participate in online SET tools reported a willingness to invest, at most, five minutes in the process, though the majority never even open the online evaluation links when they receive them. In terms of motivation, they differed significantly from students who always participate, showing distinctly lower levels of: (1) willingness to participate at all, (2) perception of autonomy and competence, (3) meaningfulness, (4) personal value, (5) engagement in others’ participation, and (6) understanding of the value of their own participation for others’ benefit. Based on these findings, we propose a strategy for increasing future response rates, particularly among recalcitrant students, in order to gather sufficient and reliable results for the work of improving teaching.

9.
This paper examines the effects of two background variables on students' ratings of teaching effectiveness (SETs): class size and student motivation (proxied by students' likelihood of responding randomly). A resampling simulation methodology was employed to test the sensitivity of the SET scale for three hypothetical instructors (excellent, average and poor). In an ideal scenario without confounding factors, SET statistics unmistakably distinguish the instructors. However, at different class sizes and levels of random responding, SET class averages are significantly biased. The results suggest that evaluations based on SET statistics should look at more than class averages. Resampling methodology (bootstrap simulation) is useful in SET research for scale sensitivity studies, validation of research results, and analyses of actual SET scores. Examples are given of how bootstrap simulation can be applied to the comparison of real-life SET data.
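A minimal sketch of the bootstrap (resampling) idea the abstract describes: resample one class's ratings with replacement and observe how much the class average moves. The rating scale, scores and class size are hypothetical.

```python
# Bootstrap the class-average SET score for one hypothetical class.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ratings on a 1-5 scale from a single class
ratings = np.array([5, 4, 4, 5, 3, 4, 5, 2, 4, 4, 3, 5, 4, 4, 5])

# Resample the class with replacement many times and recompute the mean
boot_means = np.array([
    rng.choice(ratings, size=ratings.size, replace=True).mean()
    for _ in range(10_000)
])

# The spread of the bootstrap means shows how sensitive the class average
# is to sampling variability at this class size
print(boot_means.mean(), np.percentile(boot_means, [2.5, 97.5]))
```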

10.
A continuing decline in an institution’s response rates for student evaluations of teaching (SET) raised faculty concerns about non-response bias in summary statistics. In response, the institution’s SET stakeholders partnered with a Marketing Methods class section to create strategies for increasing response rates. The project was also an exercise in organisational citizenship behaviour (OCB) training, because students in that class section received intensive training on how SET feedback is valued by instructors and on its role in improving their academic organisation. Within the context of OCB theory, this article finds that student exposure to OCB training increases SET response rates: knowing how SET benefits their organisation increases unit-level response propensity for member surveys intended to improve the institution. In the year of the OCB training, SET response rates increased by 26%, though the increases did not persist into later academic years. The response rate increases were realised across all demographic groups, with disproportionately large increases among low-response-rate groups, including low-performing students, men and ethnic minorities.

11.
Using data on four years of courses at American University, regression results show that actual grades have a significant, positive effect on student evaluations of teaching (SETs), controlling for expected grade, fixed effects for both faculty and courses, and possible endogeneity. The implications are that the SET is a faulty measure of teaching quality and that grades are a faulty signal of future job performance. Students, faculty, and provost appear to be engaged in an individually rational but socially destructive game of grade inflation centered on the link between SETs and grades. When performance is hard to measure, pay-for-performance, as embodied by the link between SETs and faculty pay, may have unintended adverse consequences.
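A schematic version of the kind of fixed-effects specification the abstract describes; the notation is assumed for illustration and is not taken from the paper.

```latex
\[
  \mathrm{SET}_{ijt} \;=\; \beta_1\,\mathrm{ActualGrade}_{ijt}
    \;+\; \beta_2\,\mathrm{ExpectedGrade}_{ijt}
    \;+\; \alpha_i \;+\; \gamma_j \;+\; \varepsilon_{ijt}
\]
```

Here alpha_i and gamma_j are faculty and course fixed effects, and the coefficient of interest, beta_1, captures the effect of actual grades on SET scores after expected grades are held constant.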

12.
The purpose of this study was to conduct a validation analysis of an SET instrument and to provide a validation framework for SETs that can be included when designing complete evaluations of teaching within higher education institutions. A series of Rasch analyses was conducted on the results of the SET, examining the responses of students within one college and three departments. Results show that the majority of items were moderately difficult to endorse in the college and the departments, that there were issues with differential item functioning (DIF), and that two items did not consistently fit the model. The study provides an analysis framework that may aid policymakers and institutional administrators in developing higher-quality SETs, and demonstrates the need to validate the SETs being implemented in higher education settings.
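For context, the simplest (dichotomous) form of the Rasch model is shown below; the study's SET items were presumably Likert-type, so a rating-scale or partial-credit extension is more likely to have been used in practice.

```latex
\[
  P(X_{ni}=1 \mid \theta_n, b_i)
    \;=\; \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
\]
```

Here theta_n is respondent n's location on the latent trait and b_i is the difficulty of endorsing item i; item difficulty estimates, fit statistics and DIF analyses from such a model underpin the findings reported above.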

13.
Several studies have reported that student evaluation of teaching (SET) presents important problems. First, depending on the area, there are significant differences in the evaluations. Second, numerous non-instructional biases exist, such as teachers who award better grades obtaining better SETs. Correcting the rankings to account for these biases (e.g. adjusting SETs according to the class grade) has been proposed. In this paper, we analyse a third problem: it is impossible to correct the biases because they are specific to each area, level and even class. On a sample of 15,439 SETs, we compared the biases present in two very close areas (accounting and finance) and at two levels (undergraduate and postgraduate). We then used a procedure based on the analysis of residuals in OLS models to eliminate area- and level-specific biases. However, latent biases remain that are apparently linked to each specific group of students.
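A minimal sketch of the residual-based adjustment the abstract describes: within a single area and level, regress SET scores on the bias variable (here, the class grade) and keep the residuals as grade-adjusted scores. The column names and data are invented for illustration.

```python
# Grade-adjusted SET scores via OLS residuals for one area/level subsample.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "set_score": [4.2, 3.8, 4.6, 3.1, 4.9, 3.5, 4.0, 4.4],
    "avg_grade": [7.5, 6.8, 8.2, 5.9, 8.9, 6.1, 7.0, 7.9],  # class average grade
})

# Fit the model within this specific area and level, since the abstract notes
# that biases differ across areas, levels and even classes.
fit = smf.ols("set_score ~ avg_grade", data=df).fit()

# The residual is the part of the SET score not explained by the class grade.
df["adjusted_set"] = fit.resid
print(df)
```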

14.
The validity of traditional opinion-based student evaluations of teaching (SETs) may be compromised by inattentive responses and low response rates due to evaluation fatigue, and/or by personal response bias. To reduce the impact of evaluation fatigue and personal response bias on SETs, this study explores peer prediction-based SETs as an alternative to opinion-based SETs in a multicultural environment. The results suggest that significantly fewer respondents are needed to reach stable average outcomes when peer prediction-based SETs are used than when opinion-based SETs are used. This implies that peer prediction-based SETs could reduce evaluation fatigue, as not all students would need to complete each evaluation. The results also show that the peer prediction-based method significantly reduces the bias evident in the opinion-based method with respect to gender and prior academic performance. However, with respect to the cultural variables, race and home language, bias was identified in the peer prediction-based method where none was evident in the opinion-based method. These observations, interpreted through the psychology literature on the formation of perceptions of others, imply that although peer prediction-based SETs may in some instances reduce personal response bias, they may introduce the perceived biases of others.

15.
Student evaluations of teaching (SETs) are an important point of assessment for faculty in curriculum development, tenure and promotion decisions, and merit raises. Faculty members use SETs to gain feedback on their classes and, hopefully, improve them. The question of the validity of student responses on SETs is a continuing debate in higher education. The current study uses data from two universities (n = 596) to determine whether, and under what conditions, students are honest on in-class and online SETs, while also assessing their knowledge of and attitudes about SETs. Findings reveal that, while students report a high level of honesty on SETs, they are more likely to be honest when they believe that evaluations effectively measure the quality of the course, that the results improve teaching and benefit students rather than the administration, and when evaluations are given at the end of the term. Honesty on evaluations is not associated with socio-demographic characteristics.

16.
Student evaluations of teaching and courses (SETs) are part of the fabric of tertiary education and quantitative ratings derived from SETs are highly valued by tertiary institutions. However, many staff do not engage meaningfully with SETs, especially if the process of analysing student feedback is cumbersome or time-consuming. To address this issue, we describe a proof-of-concept study to automate aspects of analysing student free text responses to questions. Using Quantext text analysis software, we summarise and categorise student free text responses to two questions posed as part of a larger research project which explored student perceptions of SETs. We compare human analysis of student responses with automated methods and identify some key reasons why students do not complete SETs. We conclude that the text analytic tools in Quantext have an important role in assisting teaching staff with the rigorous analysis and interpretation of SETs and that keeping teachers and students at the centre of the evaluation process is key.

17.
Student evaluations of teaching (SETs) have been used to evaluate higher education teaching performance for decades. Reporting SET results often involves extracting an average for some set of course metrics, which facilitates the comparison of teaching teams across different organisational units. Here, we draw attention to ongoing problems with the naive application of this approach. First, a specific average value may arise from data that demonstrate very different patterns of student satisfaction. Furthermore, the use of distance measures (e.g. an average) on ordinal data can be contested. Finally, issues of multiplicity increasingly plague approaches that rely on hypothesis testing. It is time to advance the methodology of the field. We demonstrate how multinomial distributions and hierarchical Bayesian methods can be used to contextualise the SET scores of a course against different organisational units and student cohorts, and then show how this approach can be used to extract sensible information about how a distribution is changing.
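A sketch of the multinomial building block the abstract refers to, using a conjugate Dirichlet prior over the five ordinal response categories of a single course. The hierarchical pooling across organisational units and cohorts described in the abstract is omitted, and the prior and counts are illustrative only.

```python
# Dirichlet-multinomial posterior over a course's SET response distribution.
import numpy as np

rng = np.random.default_rng(0)

# Observed counts for categories 1 (lowest) to 5 (highest) in one course
counts = np.array([2, 3, 10, 25, 40])

# Weak symmetric Dirichlet prior over the five categories
prior = np.ones(5)

# With a Dirichlet prior and multinomial counts, the posterior over the
# category probabilities is Dirichlet(prior + counts)
posterior_draws = rng.dirichlet(prior + counts, size=10_000)

# Posterior mean share of responses in the top two categories, with a
# 95% credible interval - a distributional summary rather than a bare average
top_two = posterior_draws[:, 3:].sum(axis=1)
print(top_two.mean(), np.percentile(top_two, [2.5, 97.5]))
```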

18.
Student evaluation of teaching (SET) is now common practice across higher education, with the results used for both course improvement and quality assurance purposes. While much research has examined the validity of SETs for measuring teaching quality, few studies have investigated the factors that influence student participation in the SET process. This study aimed to address this deficit through analysis of an SET respondent pool at a large Canadian research-intensive university. The findings were largely consistent with the available research, showing the influence of student gender, age, specialisation area and final grade on SET completion. However, the study also identified additional course-specific factors, such as term of study, course year level and course type, as statistically significant. Collectively, these findings point to substantively significant patterns of bias in the characteristics of the respondent pool. Further research is needed to specify and quantify the impact (if any) on SET scores. We conclude, however, by recommending that such bias not be taken to invalidate SET implementation, but instead be acknowledged and reported within standard institutional practice, allowing better understanding of the feedback received and driving future efforts at recruiting student respondents.

19.
This paper provides new evidence on the disparity between student evaluation of teaching (SET) ratings when evaluations are conducted online versus in-class. Using multiple regression analysis, we show that, after controlling for many of the class and student characteristics not under the direct control of the instructor, average SET ratings from evaluations conducted online are significantly lower than average SET ratings from evaluations conducted in-class. Further, we demonstrate the importance of controlling for factors not under the instructor’s control when using SET ratings to evaluate faculty performance in the classroom. We do not suggest that moving to online evaluation is overly problematic, only that it is difficult to compare evaluations done online with evaluations done in-class. While we do not suppose that one method is ‘more accurate’ than the other, we do believe that institutions would benefit from either moving all evaluations online or continuing to do all evaluations in-class.
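A sketch of the kind of regression described above: SET ratings regressed on an online-administration indicator plus controls that are outside the instructor's control. The variable names and data are hypothetical and do not reproduce the authors' dataset.

```python
# SET ratings regressed on an online-administration dummy plus controls.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "set_rating":   [4.5, 4.1, 3.9, 4.3, 3.6, 4.0, 4.4, 3.7, 4.2, 3.8],
    "online":       [0,   1,   1,   0,   1,   0,   0,   1,   0,   1],
    "class_size":   [25,  60,  120, 30,  80,  40,  22,  150, 35,  70],
    "expected_gpa": [3.4, 3.1, 2.9, 3.3, 3.0, 3.2, 3.5, 2.8, 3.3, 3.0],
})

# A negative, significant coefficient on 'online' would mirror the paper's
# finding that online ratings are lower once such factors are controlled for.
fit = smf.ols("set_rating ~ online + class_size + expected_gpa", data=df).fit()
print(fit.params)
```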

20.
This paper addresses the determination of statistically desirable response rates in student surveys, with emphasis on assessing the effect of underlying variability in the student evaluation of teaching (SET). We discuss factors affecting the determination of adequate response rates and highlight challenges caused by non-response and lack of randomization. Estimates of underlying variability were obtained over a period of 4 years from online evaluations at the University of British Columbia (UBC). Simulations were used to examine the effect of underlying variability on desirable response rates, and the UBC response rates were compared to those reported in the literature. Results indicate that small differences in underlying variability may not affect the desired rates. We present acceptable response rates for a range of variability scenarios, class sizes, confidence levels and margins of error. The stability of the estimates observed at UBC over the 4-year period indicates that valid model-based inferences about SET could be made.
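A sketch of the kind of simulation the abstract describes: for a fixed class size and an assumed level of underlying variability, estimate how the spread of the observed class-average SET score narrows as the response rate rises. It assumes responses are missing at random, which the abstract flags as a challenge in practice; the scale and parameters are invented.

```python
# How the uncertainty in a class-average SET score depends on response rate.
import numpy as np

rng = np.random.default_rng(0)

class_size = 60
# Hypothetical "true" ratings for everyone enrolled, on a 1-5 scale
true_scores = rng.normal(4.0, 0.8, size=class_size).clip(1, 5)

for response_rate in (0.2, 0.4, 0.6, 0.8):
    n = int(round(response_rate * class_size))
    # Repeatedly sample n respondents at random (random non-response assumed)
    means = np.array([
        rng.choice(true_scores, size=n, replace=False).mean()
        for _ in range(5_000)
    ])
    half_width = (np.percentile(means, 97.5) - np.percentile(means, 2.5)) / 2
    print(f"response rate {response_rate:.0%}: +/- {half_width:.2f}")
```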

