Similar Literature
20 similar documents found
1.
In the last 10–15 years, many institutions of higher education have switched from paper-and-pencil methods to online methods of administering student evaluations of teaching (SETs). One consequence has been a significant reduction in the response rates to such instruments. The current study was conducted to identify whether offering in-class time to students to complete online SETs would increase response rates. A quasi-experiment (nonequivalent group design) was conducted in which one group of tenured faculty instructed students to bring electronic devices with internet capabilities on a specified day and offered in-class time to students to complete online SETs. A communication protocol for faculty members’ use was developed and implemented. A comparison group of tenured faculty who did not offer in-class time for SET completion was identified and the difference-in-differences method was used to compare the previous year’s response rates for the same instructor teaching the same course across the two groups. Response rates were substantially higher when faculty provided in-class time to students to complete SETs. These results indicate that high response rates can be obtained for online SETs submitted by students in face-to-face classes if faculty communicate the importance of SETs in both their words and actions.
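The difference-in-differences comparison described in this abstract reduces to arithmetic on four mean response rates. A minimal sketch in Python; all response-rate values below are hypothetical placeholders, not the study's data:

```python
# Difference-in-differences estimate of the effect of offering
# in-class time on online SET response rates.
# All numbers below are hypothetical, for illustration only.

treat_prev, treat_curr = 0.45, 0.85      # in-class-time group: prior vs. current year
control_prev, control_curr = 0.47, 0.44  # comparison group: prior vs. current year

# Change within each group across the two years
treat_change = treat_curr - treat_prev        # +0.40
control_change = control_curr - control_prev  # -0.03

# DiD estimate: the treatment-group change net of the common trend
did_estimate = treat_change - control_change
print(f"Estimated effect of in-class time: {did_estimate:+.2f}")  # +0.43
```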

2.
This article provides an overview of issues involved in traditional paper versus online course evaluations. Data were gathered from university faculty who transitioned from traditional paper to online course evaluations. Faculty preferred traditional course evaluations over online course evaluations by a small margin. However, faculty overwhelmingly believed that traditional course evaluations result in higher response rates from students. Faculty also believed that incentives increase student response rates. Suggestions from faculty on how to improve student response rates are also provided in this article.

3.
Institutions of higher education continue to migrate student evaluations of teaching (SET) from traditional, in-class paper forms to online SETs. Online SETs would compare favorably with paper-and-pencil evaluations were it not for widely reported decreases in response rates, which raise validity concerns stemming from possible nonresponse bias. To combat low response rates, one institution introduced a SET application for mobile devices and piloted formal synchronous classroom time for SET completion. This paper uses Leverage Salience Theory to estimate the impact of these SET process changes on overall response rates, open-ended question response rates, and open-ended response word counts. Synchronous class time best improves SET responses when faculty encourage completion on keyboarded devices and give students time to complete SETs in the first 15 minutes of a class meeting. Full administrative support requires sufficient wireless signal strength and IT infrastructure, and assurance of student access to devices, because responses cluster around meeting times.

4.
This study compares student evaluations of faculty teaching that were completed in-class with those collected online. The two methods of evaluation were compared on response rates and on evaluation scores. In addition, this study investigates whether treatments or incentives can affect the response to online evaluations. It was found that the response rate to the online survey was generally lower than that to the in-class survey. When a grade incentive was used to encourage response to the online survey, the response rate achieved was comparable with that of the in-class survey. Additionally, the study found that online evaluations do not produce significantly different mean evaluation scores than traditional in-class evaluations, even when different incentives are offered to students who are asked to complete online evaluations.

5.
Student evaluations of teaching (SETs) are an important point of assessment for faculty in curriculum development, tenure and promotion decisions, and merit raises. Faculty members utilise SETs to gain feedback on their classes and, hopefully, improve them. The question of the validity of student responses on SETs is a continuing debate in higher education. The current study uses data from two universities (n = 596) to determine whether and under what conditions students are honest on in-class and online SETs, while also assessing their knowledge of and attitudes about SETs. Findings reveal that, while students report a high level of honesty on SETs, they are more likely to be honest when they believe that evaluations effectively measure the quality of the course and that the results improve teaching and benefit students rather than the administration, and when evaluations are administered at the end of the term. Honesty on evaluations is not associated with socio-demographic characteristics.

6.
The current study explores the feelings and thoughts that faculty have about their student evaluations of teaching (SET). To assess the perceptions of SETs, all teaching faculty in one college at a western Land Grant University were asked to complete an anonymous online survey. The survey included demographic questions (i.e. gender; rank such as assistant, associate, and full professor; and positions like non-tenure track, tenure track, and tenured) as well as questions related to faculty's feelings while reading their SETs. While minimal differences were found in responses based on rank or position, several differences were found based on faculty gender. Overall, female faculty appear to be more negatively impacted by student evaluations than male faculty. These gender differences support previous research that suggests males and females receive and react differently to personal evaluation. Resultant suggestions include modifying surveys from anonymous to confidential and offering professional development training for faculty.

7.
As student evaluation of teaching (SET) instruments are increasingly administered online, research has found that response rates have dropped significantly. Validity concerns have necessitated research that explores student motivation for completing SETs. This study uses Vroom's [(1964). Work and motivation (3rd ed.). New York, NY: John Wiley & Sons] expectancy theory to frame student focus group responses regarding their motivations for completing and not completing paper and online SETs. Results show that students consider the following outcomes when deciding whether to complete SETs: (a) course improvement, (b) appropriate instructor tenure and promotion, (c) accurate instructor ratings being available to students, (d) spending a reasonable amount of time on SETs, (e) retaining anonymity, (f) avoiding social scrutiny, (g) earning points and releasing grades, and (h) being a good university citizen. Results show that the lower online response rate is largely due to students’ differing feelings of obligation in the two formats. Students also noted that, in certain situations, they often answer SETs insincerely.

8.
Increasingly, student assessments of courses are being conducted online rather than administered in class. A growing body of research compares response rates and course ratings of courses evaluated online versus on paper. The present study extends this research by comparing student course assessments before and after the University of South Florida made online evaluations mandatory for all courses. This change only directly affected courses taught on-campus, as online courses were already being assessed online. However, we examine the effect of this change on courses taught both on-campus and online, because we expect this change in policy to have differential effects. We hypothesise that making online assessments mandatory for all courses turned online assessment from a novel method of evaluation into the norm, and therefore increased response rates for online courses but had the opposite effect for on-campus courses. We find mixed support for our hypothesis.

9.
The introduction of online delivery platforms such as learning management systems (LMS) in tertiary education has changed the methods and modes of curriculum delivery and communication. While course evaluation methods have also changed from paper-based, in-class-administered methods to largely online-administered methods, the data collection instruments have remained unchanged. This paper reports on a small exploratory study of two tertiary-level courses. The study investigated why the design of the survey instruments and the methods used to administer them in the courses are ineffective given the intrinsic characteristics of online learning. It reviewed students' response rates for the conventional evaluations of the courses over an eight-year period. It then compared a newly developed online evaluation with the conventional methods over a two-year period. The results showed that response rates with the new evaluation method increased by more than 80% from the average of the conventional evaluations (below 30%), and that students' written feedback was more detailed and comprehensive than in the conventional evaluations. The study demonstrated the possibility that LMS-based learning evaluation can be effective and efficient in terms of the quality of students' participation and engagement in their learning, and for an integrated pedagogical approach in an online learning environment.

10.
Research on the relationship between research productivity and student evaluations of teaching (SETs) has been marked by several shortcomings. First, such research typically fails to check and adjust for nonlinear distributions in research productivity. Since approximately 15% of researchers account for most articles and citations (e.g., Zuckerman, H., Handbook of Sociology, Sage Publications, Newbury Park, CA, pp. 511–574, 1988), this failure might explain weak or nonsignificant findings in some of the past research. Second, the unit of analysis is typically the instructor, not the class. Since top researchers might disproportionately teach small classes at the graduate level, and SETs are usually higher in such classes, the small relationships between research excellence and SETs found in previous research may be spurious. The present study corrects for each of these issues. It analyzes data on 167 classes in the social sciences taught by 65 faculty members. The quality of research productivity (raw citations per post-PhD year) is not related to SETs. However, when the distribution of citations is corrected for skewness, a significant positive relationship between research productivity and SETs emerges. This relationship survives controls for course and instructor characteristics, and holds for both the faculty member and the class as units of analysis. This is the first systematic investigation to demonstrate a significant relationship between the quality of research (citations) and SETs.
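The skewness correction this abstract refers to is conventionally a log transformation of the citation measure. A minimal sketch of the idea on simulated data; the distribution parameters and the assumed SET relationship are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated citation counts: heavily right-skewed, matching the
# "~15% of researchers account for most citations" pattern described above.
n = 167
citations = rng.lognormal(mean=2.0, sigma=1.2, size=n)
years_post_phd = rng.integers(3, 30, size=n)
productivity = citations / years_post_phd  # raw citations per post-PhD year

# A latent SET score weakly tied to log-productivity plus noise
# (hypothetical relationship, for illustration only).
sets = 3.5 + 0.15 * np.log(productivity) + rng.normal(0, 0.5, size=n)

# Correlation with the raw, skewed measure vs. the log-corrected measure:
# the skewed measure attenuates the observed relationship.
r_raw = np.corrcoef(productivity, sets)[0, 1]
r_log = np.corrcoef(np.log(productivity), sets)[0, 1]
print(f"r with raw productivity:           {r_raw:.2f}")
print(f"r with log-corrected productivity: {r_log:.2f}")
```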

11.
The use of student evaluations of teaching (SETs) to assess teaching effectiveness remains controversial. Without clear guidelines regarding how to best document effective teaching, faculty members may wonder how to convincingly demonstrate teaching effectiveness in preparation for promotion and tenure review. Based on a study that examined the relations among student grades, learning, and SETs, we identify a relatively unencumbered approach to documenting teaching effectiveness more comprehensively than through the use of SETs alone. Students enrolled in eight sections of general psychology (N = 165) completed pre- and post-measures of learning, SETs, and a brief demographic questionnaire. Results of a regression analysis provided partial support for the notion that SETs and learning measures assess distinct aspects of teaching effectiveness. In preparing documentation for promotion and tenure review, faculty members should consider including measures of student learning along with SETs in order to document teaching effectiveness more convincingly and comprehensively.

12.
In the context of increased emphasis on quality assurance of teaching, it is crucial that student evaluation of teaching (SET) methods be both reliable and workable in practice. Online SETs in particular tend to draw criticism from those most reactive to mechanisms of teaching accountability. However, most studies of SET processes have been conducted with convenience, small and cross-sectional samples. Longitudinal studies are rare, as comparison studies of SET methodological approaches are generally pilot studies followed shortly afterwards by implementation. The investigation presented here contributes significantly to the debate by examining the impact of the online administration method of SET on a very large longitudinal sample at the course level rather than at the student level, thus compensating for the interdependency of students’ responses within instructors. It explores the impact of the administration method of SET (paper-based in-class vs. out-of-class online collection) on scores, with a longitudinal sample of over 63,000 student responses collected over a total period of 10 years. Having adjusted for the confounding effects of class size, faculty, year of evaluation, years of teaching experience and student performance, it is observed that an actual effect of the administration method exists, but it is negligibly small.

13.
The validity of traditional opinion-based student evaluations of teaching (SETs) may be compromised by inattentive responses and low response rates due to evaluation fatigue, and/or by personal response bias. To reduce the impact of evaluation fatigue and personal response bias on SETs, this study explores peer prediction-based SETs as an alternative to opinion-based SETs in a multicultural environment. The results suggest that significantly fewer respondents are needed to reach stable average outcomes when peer prediction-based SETs are used than when opinion-based SETs are used. This implies that peer prediction-based SETs could reduce evaluation fatigue, as not all students would need to complete each evaluation. The results also show that the peer prediction-based method significantly reduces the bias evident in the opinion-based method in respect of gender and prior academic performance. However, in respect of the cultural variables race and home language, bias was identified in the peer prediction-based method where none was evident in the opinion-based method. These observations, interpreted through the psychology literature on the formulation of perceptions of others, imply that although peer prediction-based SETs may in some instances reduce some personal response bias, they may introduce the perceived bias of others.
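The "fewer respondents for a stable average" finding can be illustrated with a small simulation: lower-variance ratings stabilise their running mean sooner. A sketch under our own assumption, not the paper's, that peer-prediction ratings are less dispersed than personal-opinion ratings:

```python
import numpy as np

rng = np.random.default_rng(1)

def respondents_needed(draws, tol=0.1):
    """Smallest respondent count at which the running mean stays within
    `tol` of the full-sample mean for all subsequent respondents."""
    running = np.cumsum(draws) / np.arange(1, len(draws) + 1)
    final = running[-1]
    off = np.nonzero(np.abs(running - final) > tol)[0]
    return 1 if off.size == 0 else off[-1] + 2

# Hypothetical 1-5 rating distributions with the same mean but
# different spreads (sd 1.2 for opinion vs. 0.5 for peer prediction).
opinion = rng.normal(3.8, 1.2, size=500)
peer_prediction = rng.normal(3.8, 0.5, size=500)

print("respondents needed (opinion):        ", respondents_needed(opinion))
print("respondents needed (peer prediction):", respondents_needed(peer_prediction))
```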

14.
Although part-time (p/t) faculty constitute a growing proportion of college instructors, there is little work on their level of teaching effectiveness relative to full-time (f/t) faculty. Previous work on a key indicator of perceived teaching effectiveness, student evaluation of teaching (SET), and faculty status (p/t vs f/t) is marked by a series of shortcomings, including the lack of a systematic theoretical framework and the lack of multivariate statistical analysis techniques to check for possible spuriousness. The present study corrects for these shortcomings. Data consist of SETs from 175 sections of criminal justice classes taught at a Midwestern urban university. Controls are introduced for variables drawn from the literature, including ascribed characteristics of the professor, grade distribution, and structural features of the course (e.g., level, size). The results of a multivariate regression analysis indicate that even after controlling for the other predictors of SETs, p/t faculty receive significantly higher student evaluation scores than f/t faculty. Further, faculty status was the most important predictor of SETs. The results present the first systematic evidence on faculty status and SETs.

15.
Using data on four years of courses at American University, regression results show that actual grades have a significant, positive effect on student evaluations of teaching (SETs), controlling for expected grade, for fixed effects for both faculty and courses, and for possible endogeneity. The implications are that the SET is a faulty measure of teaching quality and that grades are a faulty signal of future job performance. Students, faculty, and provost appear to be engaged in an individually rational but socially destructive game of grade inflation centered on the link between SETs and grades. When performance is hard to measure, pay-for-performance, embodied by the link between SETs and faculty pay, may have unintended adverse consequences.

16.
Student evaluations of teaching (SETs) are widely used to measure teaching quality in higher education and compare it across different courses, teachers, departments and institutions. Indeed, SETs are of increasing importance for teacher promotion decisions, student course selection, as well as for auditing practices demonstrating institutional performance. However, survey response is typically low, rendering these uses unwarranted if students who respond to the evaluation are not randomly selected along observed and unobserved dimensions. This paper is the first to fully quantify this problem by analyzing the direction and size of selection bias resulting from both observed and unobserved characteristics for over 3000 courses taught in a large European university. We find that course evaluations are upward biased, and that correcting for selection bias has non-negligible effects on the average evaluation score and on the evaluation-based ranking of courses. Moreover, this bias mostly derives from selection on unobserved characteristics, implying that correcting evaluation scores for observed factors such as student grades does not solve the problem. However, we find that adjusting for selection only has small impacts on the measured effects of observables on SETs, validating a large related literature which considers the observable determinants of evaluation scores without correcting for selection bias.
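The nonresponse mechanism this abstract describes, selection on unobserved dimensions, is easy to demonstrate by simulation: if more satisfied students are more likely to respond, the observed average overstates the true one. A minimal sketch; the satisfaction distribution and response model are assumptions for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical class of students: true satisfaction on a 1-5 scale.
n = 200
satisfaction = np.clip(rng.normal(3.5, 0.9, size=n), 1, 5)

# Response propensity rises with satisfaction (selection on an
# unobserved dimension) -- an assumed mechanism, for illustration only.
p_respond = 1 / (1 + np.exp(-(satisfaction - 3.5)))
responded = rng.random(n) < p_respond

true_mean = satisfaction.mean()
observed_mean = satisfaction[responded].mean()
print(f"true mean:     {true_mean:.2f}")
print(f"observed mean: {observed_mean:.2f}  (upward-biased)")
print(f"response rate: {responded.mean():.0%}")
```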

17.
Student evaluations of teaching and courses (SETs) are part of the fabric of tertiary education, and quantitative ratings derived from SETs are highly valued by tertiary institutions. However, many staff do not engage meaningfully with SETs, especially if the process of analysing student feedback is cumbersome or time-consuming. To address this issue, we describe a proof-of-concept study to automate aspects of analysing students' free-text responses to questions. Using Quantext text analysis software, we summarise and categorise student free-text responses to two questions posed as part of a larger research project which explored student perceptions of SETs. We compare human analysis of student responses with automated methods and identify some key reasons why students do not complete SETs. We conclude that the text analytic tools in Quantext have an important role in assisting teaching staff with the rigorous analysis and interpretation of SETs and that keeping teachers and students at the centre of the evaluation process is key.

18.
There is a plethora of research on student evaluations of teaching (SETs) regarding their validity, susceptibility to bias, practical use and effective implementation. Given that no single study summarises all these domains of research, a comprehensive overview of SETs was conducted by combining all prior meta-analyses related to SETs. Eleven meta-analyses were identified, and nine meta-analyses covering 193 studies were included in the analysis, which yielded a small-to-medium overall weighted mean effect size (r = .26) between SETs and the variables studied. Findings suggest that SETs appear to be valid, have practical use that is largely free from gender bias and are most effective when implemented with consultation strategies. Research, teaching and policy implications are discussed.
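A weighted mean effect size like the r = .26 reported above is typically computed by Fisher-z-transforming each study's correlation, weighting by inverse variance, pooling, and back-transforming; the overview's exact pooling method may differ. A minimal sketch with made-up study values, not the meta-analyses' data:

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes.
r = np.array([0.31, 0.22, 0.18, 0.35, 0.27])
n = np.array([120, 340, 95, 210, 150])

# Fisher z-transform, weight by inverse variance (n - 3),
# pool, and back-transform to the r metric.
z = np.arctanh(r)
w = n - 3
z_bar = np.sum(w * z) / np.sum(w)
r_bar = np.tanh(z_bar)
print(f"weighted mean effect size r = {r_bar:.2f}")
```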

19.
When response rates on student evaluations of teaching (SETs) are low, the inability to properly interpret and use responses from the students who do participate is a serious problem. Where does the motivation to participate break down, and where and how does it make sense to invest effort in rectifying that? In this study, we examined 641 university students’ reported behaviours and motivation related to their SET participation. In terms of behaviour, students who seldom or never participate in online SET tools reported a willingness to invest, at most, five minutes in the process, though the majority never even open the online evaluation links when they receive them. In terms of motivation, they differed significantly from students who always participate, with distinctly lower levels of: (1) willingness to participate at all, (2) perception of autonomy and competence, (3) meaningfulness, (4) personal value, (5) engagement in others’ participation, and (6) understanding of the value of their own participation for others’ benefit. Based on these findings, we propose a strategy for increasing future response rates, particularly among recalcitrant students, in order to gather sufficient and reliable results for the work of improving teaching.

20.
The purpose of this study was to analyse student evaluations of the course and instructor for all statistics courses offered during the fall 2009 semester at a large university in the southern United States. Data were collected and analysed for course evaluations administered both online and on paper to students in both undergraduate and graduate courses. Unlike most previous studies on this subject, the class section rather than the student was treated as the unit of analysis. It was of specific interest to verify prior research findings that evaluation surveys administered online do not result in lower course and instructor ratings or lower response rates. The results showed that there is not sufficient evidence within the collected data to conclude that either course and instructor ratings or response rates are lower for evaluations administered online (online evaluations) than for evaluations administered on paper (paper evaluations). Of secondary interest was whether class ratings were associated with student attendance, along with a comparison of variability among answers for undergraduate vs. graduate students. It was observed that class and teacher ratings were not related to students’ attendance, and individual students did not tend to give the same answer to every question on their survey.

