Similar Documents
20 similar documents found (search time: 434 ms)
1.
Student evaluations of teaching (SETs) are an important point of assessment for faculty in curriculum development, tenure and promotion decisions, and merit raises. Faculty members utilise SETs to gain feedback on their classes and, hopefully, improve them. The question of the validity of student responses on SETs is a continuing debate in higher education. The current study uses data from two universities (n = 596) to determine whether and under what conditions students are honest on in-class and online SETs, while also assessing their knowledge and attitudes about SETs. Findings reveal that, while students report a high level of honesty on SETs, they are more likely to be honest when they believe that evaluations effectively measure the quality of the course, the results improve teaching and benefit students rather than the administration, and when they are given at the end of the term. Honesty on evaluations is not associated with socio-demographic characteristics.

2.
Nearly 700 US journalism and mass communication faculty (all teaching personnel) reported their perceptions of student email use via a web‐based survey. This nationwide study focused on the content of email sent by faculty to students, email’s effectiveness, and email’s effect on student learning. Comparisons were made based on faculty gender, rank, age, and ethnicity. Findings suggest that despite statistical differences, when gender, rank, age, or ethnicity are considered, faculty are not in the habit of sending course materials like syllabi, project instructions, and lecture notes to students personally via email. Moreover, faculty tend to find favor with email communication and its effectiveness as a tool of teaching. The results of this survey coupled with previous research by the authors and other scholars suggest faculty ought to embrace the technology and develop positive ways to incorporate email, as well as other technology, into the educational process.

3.
The use of student evaluations of teaching (SETs) to assess teaching effectiveness remains controversial. Without clear guidelines regarding how to best document effective teaching, faculty members may wonder how to convincingly demonstrate teaching effectiveness in preparation for promotion and tenure review. Based on a study that examined the relations among student grades, learning, and SETs, we identify a relatively unencumbered approach to documenting teaching effectiveness more comprehensively than through the use of SETs alone. Students enrolled in eight sections of general psychology (N = 165) completed pre‐ and post‐ measures of learning, SETs, and a brief demographic questionnaire. Results of a regression analysis provided partial support for the notion that SETs and learning measures assess distinct aspects of teaching effectiveness. In preparing documentation for promotion and tenure review, faculty members should consider including measures of student learning along with SETs in order to document teaching effectiveness more convincingly and comprehensively.

4.
Course evaluations (often termed student evaluations of teaching or SETs) are pervasive in higher education. As SETs increasingly shift from pencil-and-paper to online, concerns grow over the lower response rates that typically accompany online SETs. This study of online SET response rates examined data from 678 faculty respondents and student response rates from an entire semester. The analysis focused on those tactics that faculty employ to raise response rates for their courses, and explored instructor and course characteristics as contributing factors. A comprehensive regression model was evaluated to determine the most effective tactics and characteristics. Using incentives had the most impact on response rates. Other effective tactics that increase response rates include reminding students to take the evaluation, explaining how the evaluations would be used to improve instruction, sending personal emails and posting reminders on Blackboard®. Incentives are not widely used; however, findings suggest that non-point incentives work as well as point-based ones, as do simple-to-administer minimum class-wide response rate expectations (compared to individual completion).

5.
Though many studies emphasise faculty reflection as a crucial feature of professional practice, there appears to be little empirical evidence to support the proposition that reflective practice improves the quality of teaching. Previous research demonstrated that reflective practice could be encouraged by weekly formative student evaluations of teaching (SETs). This study investigated the impact of reported reflective practice using formative SETs on changes to summative SETs, typically conducted at the end of a teaching period. Data were collected over two years in a rural UK‐based university college across 11 modules (six faculty members, 413 students) in Business, Countryside and Environment, Foundation Degree and Veterinary Nursing programmes. Findings show that, on average, SET scores increased for all reflective practitioners year on year, and increased more for those faculty members who demonstrated higher levels of reflection.

6.
Using data on 4 years of courses at American University, regression results show that actual grades have a significant, positive effect on student evaluations of teaching (SETs), controlling for expected grade and fixed effects for both faculty and courses, and for possible endogeneity. Implications are that the SET is a faulty measure of teaching quality and grades a faulty signal of future job performance. Students, faculty, and provost appear to be engaged in an individually rational but socially destructive game of grade inflation centered on the link between SETs and grades. When performance is hard to measure, pay-for-performance, embodied by the link between SETs and faculty pay, may have unintended adverse consequences.
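The fixed-effects design this abstract describes can be sketched numerically: demeaning grades and SET scores within each instructor absorbs the instructor fixed effect, after which a simple within regression recovers the grade coefficient. The observations and resulting coefficient below are invented for illustration, not taken from the American University study.

```python
from collections import defaultdict

# Invented (instructor, actual_grade, SET_score) observations.
data = [
    ("A", 3.0, 3.8), ("A", 3.4, 4.1), ("A", 2.8, 3.6),
    ("B", 2.5, 2.9), ("B", 3.1, 3.4), ("B", 2.9, 3.2),
]

# Group observations by instructor to compute group means.
groups = defaultdict(list)
for inst, grade, score in data:
    groups[inst].append((grade, score))

# Within transformation: subtract each instructor's mean grade and
# mean SET score, which removes the instructor fixed effect.
dg, ds = [], []
for inst, grade, score in data:
    rows = groups[inst]
    mg = sum(r[0] for r in rows) / len(rows)
    ms = sum(r[1] for r in rows) / len(rows)
    dg.append(grade - mg)
    ds.append(score - ms)

# OLS slope on the demeaned data: the within-instructor grade effect.
beta = sum(a * b for a, b in zip(dg, ds)) / sum(a * a for a in dg)
print(round(beta, 2))  # 0.82
```

The same demeaning applied to course dummies would additionally absorb course fixed effects, as in the study's specification.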

7.
In the last 10–15 years, many institutions of higher education have switched from paper-and-pencil methods to online methods of administering student evaluations of teaching (SETs). One consequence has been a significant reduction in the response rates to such instruments. The current study was conducted to identify whether offering in-class time to students to complete online SETs would increase response rates. A quasi-experiment (nonequivalent group design) was conducted in which one group of tenured faculty instructed students to bring electronic devices with internet capabilities on a specified day and offered in-class time to students to complete online SETs. A communication protocol for faculty members’ use was developed and implemented. A comparison group of tenured faculty who did not offer in-class time for SET completion was identified and the difference-in-differences method was used to compare the previous year’s response rates for the same instructor teaching the same course across the two groups. Response rates were substantially higher when faculty provided in-class time to students to complete SETs. These results indicate that high response rates can be obtained for online SETs submitted by students in face-to-face classes if faculty communicate the importance of SETs in both their words and actions.
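The difference-in-differences comparison used in this quasi-experiment can be illustrated in a few lines; the response rates below are invented, not the study's figures.

```python
# Hypothetical mean response rates (proportions) for the same
# instructor/course pairs in the prior year and the study year.
treat_prev, treat_now = 0.42, 0.81  # faculty who offered in-class time
comp_prev, comp_now = 0.40, 0.45    # comparison faculty, no in-class time

# DiD estimate: the treated group's change minus the comparison
# group's change, which nets out any common year-to-year trend.
did = (treat_now - treat_prev) - (comp_now - comp_prev)
print(round(did, 2))  # 0.34
```

Comparing the same instructor teaching the same course across years is what makes the "change minus change" subtraction credible here.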

8.
There is a plethora of research on student evaluations of teaching (SETs) regarding their validity, susceptibility to bias, practical use and effective implementation. Given that there is not one study summarising all these domains of research, a comprehensive overview of SETs was conducted by combining all prior meta-analyses related to SETs. Eleven meta-analyses were identified, and nine meta-analyses covering 193 studies were included in the analysis, which yielded a small-to-medium overall weighted mean effect size (r = .26) between SETs and the variables studied. Findings suggest that SETs appear to be valid, have practical use that is largely free from gender bias and are most effective when implemented with consultation strategies. Research, teaching and policy implications are discussed.
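A weighted mean effect size of the kind reported above is commonly computed by Fisher z-transforming each correlation, weighting by n − 3 (the inverse variance of z), averaging, and back-transforming. A sketch with invented study values, not the nine meta-analyses actually combined:

```python
import math

# Invented (r, sample size) pairs standing in for individual studies.
studies = [(0.30, 120), (0.22, 80), (0.28, 200)]

# Fisher z transform: z = atanh(r); inverse-variance weight is n - 3.
num = sum((n - 3) * math.atanh(r) for r, n in studies)
den = sum(n - 3 for _, n in studies)

# Back-transform the weighted mean z to the r metric.
mean_r = math.tanh(num / den)
print(round(mean_r, 3))
```

The transform matters because r is bounded at ±1 and its sampling distribution is skewed; averaging in the z metric avoids that distortion.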

9.
Abstract

Student evaluations of teaching and courses (SETs) are part of the fabric of tertiary education and quantitative ratings derived from SETs are highly valued by tertiary institutions. However, many staff do not engage meaningfully with SETs, especially if the process of analysing student feedback is cumbersome or time-consuming. To address this issue, we describe a proof-of-concept study to automate aspects of analysing student free text responses to questions. Using Quantext text analysis software, we summarise and categorise student free text responses to two questions posed as part of a larger research project which explored student perceptions of SETs. We compare human analysis of student responses with automated methods and identify some key reasons why students do not complete SETs. We conclude that the text analytic tools in Quantext have an important role in assisting teaching staff with the rigorous analysis and interpretation of SETs and that keeping teachers and students at the centre of the evaluation process is key.

10.

Although part-time (p/t) faculties constitute a growing proportion of college instructors, there is little work on their level of teaching effectiveness relative to full-time (f/t) faculty. Previous work on a key indicator of perceived teaching effectiveness, student evaluation of teaching (SET), and faculty status (p/t vs f/t) is marked by a series of shortcomings including lack of a systematic theoretical framework and lack of multivariate statistical analysis techniques to check for possible spuriousness. The present study corrects for these shortcomings. Data consist of SETs from 175 sections of criminal justice classes taught at a Midwestern urban university. Controls are introduced for variables drawn from the literature and include ascribed characteristics of the professor, grade distribution, and structural features of the course (e.g., level, size). The results of a multivariate regression analysis indicate that even after controlling for the other predictors of SETs, p/t faculty receive significantly higher student evaluation scores than f/t faculty. Further, faculty status was the most important predictor of SETs. The results present the first systematic evidence on faculty status and SETs.

11.
With the proliferation of computer networks and the increased use of Internet‐based applications, many forms of social interactions now take place in an on‐line context through Computer‐Mediated Communication (CMC). Many universities are now reaping the benefits of using CMC applications to collect data on student evaluations of faculty, rather than using paper‐based surveys in Face‐To‐Face (FTF) classroom settings. While the relative merits of CMC versus FTF student evaluations have been researched extensively, there is limited research published about the ways students respond to the questions from either mode of data collection. This paper reports on a research study to analyse the communication differences between student scores from FTF student evaluations and CMC evaluation questions from end of semester evaluations from a university in the Middle East region. In addition to the questions about communication mode differences between two evaluation questions, several demographic variables were measured to determine any interaction effects. The results of our study suggest that the type of communication channel moderates the responses that students make on CMC evaluations vis‐à‐vis FTF evaluations of faculty. In particular, even though there were significant differences found at the aggregate level between CMC and FTF evaluations, when the course and instructor are controlled for, there were no significant differences reported. In addition, several differences were noted depending on the type and level of the course being studied. Also, we found that students are more likely to express more extreme responses to scale questions in CMC than FTF evaluations. Administrators should consider these potential differences when implementing on‐line evaluation systems.

12.
Relating students' evaluations of teaching (SETs) to student learning as an approach to validate SETs has produced inconsistent results. The present study tested the hypothesis that the strength of association of SETs and student learning varies with the criteria used to indicate student learning. A multisection validity approach was employed to investigate the association of SETs and two different criteria of student learning, a multiple-choice test and a practical examination. Participants were N = 883 medical students, enrolled in k = 32 sections of the same course. As expected, results showed a strong positive association between SETs and the practical examination but no significant correlation between SETs and multiple-choice test scores. Furthermore, students' subjective perception of learning significantly correlated with the practical examination score, whereas no relation was found for subjective learning and the multiple-choice test. It is discussed whether these results might be due to different measures of student learning varying in the degree to which they reflect teaching effectiveness.

13.
The literature on student evaluations of teaching (SETs) generally presents two opposing camps: those who believe in the validity and usefulness of SETs, and those who do not. Some researchers have suggested that ‘SET deniers’ resist SETs because of their own poor SET results. To test this hypothesis, I analysed essays by 230 SET researchers (170 lead authors) and classified the researchers as having negative, neutral or positive attitudes towards SETs. I retrieved their RateMyProfessors.com (RMP) scores and, using logistic regression, found that lead authors with negative attitudes towards SETs were 14 times more likely to score below an estimated RMP average than lead authors with positive attitudes towards SETs. Co-authors and researchers with neutral attitudes, on the other hand, did not significantly differ from the RMP average. These results suggest that personal attitudes towards SETs may drive research findings.
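The "14 times more likely" figure is an odds ratio from the logistic regression; translating an odds ratio into probabilities requires a baseline. A sketch with an assumed, invented baseline probability (not a number from the paper):

```python
# Assumed baseline: a 30% chance that a neutral/positive-attitude lead
# author scores below the estimated RMP average (invented figure).
base_p = 0.30
base_odds = base_p / (1 - base_p)

# Reported odds ratio for lead authors with negative attitudes.
odds_ratio = 14.0
new_odds = base_odds * odds_ratio

# Convert the scaled odds back to a probability.
new_p = new_odds / (1 + new_odds)
print(round(new_p, 2))  # 0.86
```

As the sketch shows, an odds ratio of 14 does not mean the probability itself is 14 times higher; the implied probability depends on the baseline.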

14.
Institutions of higher education continue to migrate student evaluations of teaching (SET) from traditional, in-class paper forms to online SETs. Online SETs would favorably compare to paper-and-pencil evaluations were it not for widely reported response rate decreases that cause SET validity concerns stemming from possible nonresponse bias. To combat low response rates, one institution introduced a SET application for mobile devices and piloted formal synchronous classroom time for SET completion. This paper uses the Leverage Salience Theory to estimate the impact of these SET process changes on overall response rates, open-ended question response rates, and word counts of open-ended responses. Synchronous class time best improves SET responses when faculty encourage completion on keyboarded devices and give students SET completion time in the first 15 minutes of a class meeting. Full administrative support requires sufficient wireless signal strength, adequate IT infrastructure, and assured student access to devices, since responses cluster around class meeting times.

15.
Research on the relationship between research productivity and student evaluations of teaching (SETs) has been marked by several shortcomings. First, research typically fails to check and adjust for nonlinear distributions in research productivity. Since approximately 15% of researchers account for most articles and citations (e.g., Zuckerman, H., Handbook of Sociology, Sage Publications, Newbury Park, CA, pp. 511–574, 1988), this failure might explain weak or nonsignificant findings in some of the past research. Second, the unit of analysis is typically the instructor, not the class. Since top researchers might disproportionately teach small classes at the graduate level, and SETs are usually higher in such classes, the small relationships between research excellence and SETs found in previous research may be spurious. The present study corrects for each of these issues. It analyzes data on 167 classes in the social sciences taught by 65 faculty members. The quality of research productivity (raw citations per post-PhD year) is not related to SETs. However, when the distribution of citations is corrected for skewness, a significant positive relationship between research productivity and SETs emerges. This relationship survives controls for course and instructor characteristics, and holds for both the faculty member and the class as units of analysis. This is the first systematic investigation to demonstrate a significant relationship between the quality of research (citations) and SETs.
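A common way to correct a skewed productivity measure of the sort described above is a log transform of the citation counts; the sketch below uses invented counts (not the study's data) to show the transform pulling in the right tail.

```python
import math

# Invented citation counts: a few researchers dominate, so the raw
# distribution is heavily right-skewed.
citations = [0, 1, 2, 3, 5, 8, 12, 40, 150]
log_cites = [math.log1p(c) for c in citations]  # log(1 + c) handles zeros

def skew(xs):
    # Population skewness: third central moment over SD cubed.
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5

print(skew(citations) > skew(log_cites))  # True: the transform reduces skewness
```

With the tail pulled in, a correlation between productivity and SETs is no longer dominated by a handful of extreme observations.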

16.
This exploratory study considered Larrivee’s assessment of teachers’ reflective practice levels by using a formative, weekly, online student evaluation of teaching (SET) tool through a virtual learning environment (VLE) as a means to encourage reflective practice. In‐depth interviews were conducted with six faculty members in three departments at a university college in the UK. The study found that: (a) faculty who experienced surface‐level reflection were more likely to have a reactive reflection style; and (b) faculty who experienced higher levels of reflection were more likely to have a proactive reflection style. Overall, the tool was found to be an efficient means of encouraging reflection by all participants and demonstrated that reflective practice could come about as a result of these weekly formative SETs. The study concludes with suggestions for academic development and future research on reflection that could be conducted using SETs via a VLE.

17.
Determinants of teaching quality: What's important to students?
A method for using student evaluations to help faculty improve their teaching performance is presented. A survey of current methods of student evaluations of teaching identified a need to improve the statistical information obtained from these evaluations. An ordinary least squares framework is used to identify the factors that students feel are important in teacher and course ratings. This framework is used to estimate weights that students assign to various teacher and course attributes and to test whether students apply these weights consistently across teachers and courses. About 81 percent of the explained variation in teacher ratings was associated with attributes that contribute to student enjoyment of the learning process. Over 90 percent of the explained variation in course ratings was associated with attributes that measure how much a student learned in the course. Students were found to apply these attributes or weights consistently across teachers and courses. Implications for developing effective teaching strategies, faculty recruitment, and curriculum reform are discussed.  
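The shares of explained variation reported above come from an ordinary least squares framework; the mechanics of R² for a single attribute can be sketched as follows, with invented ratings rather than the study's survey data.

```python
# Invented data: regress an overall teacher rating on a single
# "enjoyment of the learning process" attribute and compute R^2,
# the share of rating variation the attribute explains.
x = [3.0, 4.0, 2.0, 5.0, 4.5, 3.5]   # enjoyment attribute
y = [3.5, 3.9, 2.8, 4.6, 4.0, 3.7]   # overall teacher rating

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# OLS slope and intercept from centered cross-products.
beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
        / sum((a - mx) ** 2 for a in x))
alpha = my - beta * mx

# R^2 = 1 - residual sum of squares / total sum of squares.
ss_res = sum((b - (alpha + beta * a)) ** 2 for a, b in zip(x, y))
ss_tot = sum((b - my) ** 2 for b in y)
r2 = 1 - ss_res / ss_tot
print(round(r2, 2))  # 0.96
```

With several attributes, the same framework partitions explained variation across them, which is how shares like "81 percent" arise.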

18.
As student evaluation of teaching (SET) instruments are increasingly administered online, research has found that the response rates have dropped significantly. Validity concerns have necessitated research that explores student motivation for completing SETs. This study uses Vroom's [(1964). Work and motivation (3rd ed.). New York, NY: John Wiley & Sons] expectancy theory to frame student focus group responses regarding their motivations for completing and not completing paper and online SETs. Results show that students consider the following outcomes when deciding whether to complete SETs: (a) course improvement, (b) appropriate instructor tenure and promotion, (c) accurate instructor ratings are available to students, (d) spending a reasonable amount of time on SETs, (e) retaining anonymity, (f) avoiding social scrutiny, (g) earning points and releasing grades, and (h) being a good university citizen. Results show that the lower online response rate is largely due to students’ differing feelings of obligation in the two formats. Students also noted that, in certain situations, they often answer SETs insincerely.

19.
Abstract

The validity of traditional opinion-based student evaluations of teaching (SETs) may be compromised by inattentive responses and low response rates due to evaluation fatigue, and/or by personal response bias. To reduce the impact of evaluation fatigue and personal response bias on SETs, this study explores peer prediction-based SETs as an alternative to opinion-based SETs in a multicultural environment. The results suggest that statistically significant fewer respondents are needed to reach stable average outcomes when peer prediction-based SETs are used than when opinion-based SETs are used. This implies that peer prediction-based SETs could reduce evaluation fatigue, as not all students would need to do each evaluation. The results also report that the peer prediction-based method significantly reduces the bias evident in the opinion-based method, in respect of gender and prior academic performance. However, in respect of the cultural variables, race and home language, bias was identified in the peer prediction-based method, where none was evident in the opinion-based method. These observations, interpreted through the psychology literature on the formulation of perceptions of others, imply that although peer prediction-based SETs may in some instances reduce some personal response bias, they may introduce the perceived bias of others.

20.
Student evaluations of teaching (SETs) are widely used to measure teaching quality in higher education and compare it across different courses, teachers, departments and institutions. Indeed, SETs are of increasing importance for teacher promotion decisions, student course selection, as well as for auditing practices demonstrating institutional performance. However, survey response is typically low, rendering these uses unwarranted if students who respond to the evaluation are not randomly selected along observed and unobserved dimensions. This paper is the first to fully quantify this problem by analyzing the direction and size of selection bias resulting from both observed and unobserved characteristics for over 3000 courses taught in a large European university. We find that course evaluations are upward biased, and that correcting for selection bias has non-negligible effects on the average evaluation score and on the evaluation-based ranking of courses. Moreover, this bias mostly derives from selection on unobserved characteristics, implying that correcting evaluation scores for observed factors such as student grades does not solve the problem. However, we find that adjusting for selection only has small impacts on the measured effects of observables on SETs, validating a large related literature which considers the observable determinants of evaluation scores without correcting for selection bias.
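The upward bias from non-random response can be demonstrated in a toy simulation: if the probability of responding rises with a student's latent course satisfaction, the mean of submitted evaluations exceeds the true class mean. All parameters below are invented and are not calibrated to the study.

```python
import random

random.seed(1)

# Latent satisfaction for 1000 students (invented distribution).
satisfaction = [random.gauss(3.5, 0.8) for _ in range(1000)]

# Selection on the outcome: more satisfied students respond more often.
responded = [s for s in satisfaction
             if random.random() < min(1.0, 0.2 + 0.15 * s)]

true_mean = sum(satisfaction) / len(satisfaction)
observed_mean = sum(responded) / len(responded)
print(observed_mean > true_mean)  # True: the observed mean is upward biased
```

Because the selection here operates directly on the (unobserved) outcome, conditioning on observed covariates alone would not remove the bias, mirroring the paper's point about unobserved characteristics.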


Copyright©北京勤云科技发展有限公司  京ICP备09084417号