Similar Articles
Found 20 similar articles.
1.
College students critique their professors' teaching at RateMyProfessors.com, a web page where students anonymously rate their professors on Quality, Easiness, and Sexiness. Using the self-selected data from this public forum, we examine the relations between quality, easiness, and sexiness for 3190 professors at 25 universities. For faculty with at least ten student posts, the correlation between quality and easiness is 0.61, and the correlation between quality and sexiness is 0.30. Using linear regression, we find that about half of the variation in quality is a function of easiness and sexiness. When grouped into sexy and non-sexy professors, the data reveal that students give sexy-rated professors higher quality and easiness scores. If these findings reflect the thinking of American college students when they complete in-class student opinion surveys, then universities need to rethink the validity of student opinion surveys as a measure of teaching effectiveness. High student opinion survey scores might well be viewed with suspicion rather than reverence, since they might indicate a lack of rigor, little student learning, and grade inflation.
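The kind of analysis this abstract describes (pairwise correlations, then a regression of quality on easiness and sexiness) can be sketched with simulated data. The ratings below are randomly generated stand-ins, not the study's actual RateMyProfessors data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ratings for 200 professors on the site's 1-5 scales.
easiness = rng.uniform(1, 5, 200)
sexiness = rng.uniform(1, 5, 200)
quality = 0.6 * easiness + 0.3 * sexiness + rng.normal(0, 0.8, 200)

# Pairwise Pearson correlations, as reported in the study.
r_quality_easiness = np.corrcoef(quality, easiness)[0, 1]
r_quality_sexiness = np.corrcoef(quality, sexiness)[0, 1]

# Regress quality on easiness and sexiness; R^2 is the share of
# variation in quality explained by the two predictors together.
X = np.column_stack([np.ones_like(easiness), easiness, sexiness])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
resid = quality - X @ coef
r_squared = 1 - resid.var() / quality.var()
```

With these (invented) effect sizes, the correlations and R² come out close to the figures the abstract reports, which is all the sketch is meant to show.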

2.
RateMyProfessors.com (RMP) is becoming an increasingly popular tool among students, faculty and school administrators. The validity of RMP is a point of debate; many would argue that self‐selection bias obscures the usefulness of RMP evaluations. In order to test this possibility, we collected three types of evaluations: RMP evaluations that existed at the beginning of our study, traditional in‐class evaluations and RMP evaluations that were prompted after we collected in‐class evaluations. We found differences in the types of evaluations students provide for their professors for both perceptions of professor clarity and ratings of professor easiness. Based on these results, conclusions drawn from RMP are suspect and indeed may offer a biased view of professors.

3.
The present article examined the validity of public web‐based teaching evaluations by comparing the ratings on RateMyProfessors.com for 126 professors at Lander University to the institutionally administered student evaluations of teaching and actual average assigned GPAs for these same professors. Easiness website ratings were significantly positively correlated with actual assigned grades. Further, clarity and helpfulness website ratings were significantly positively correlated with student ratings of overall instructor excellence and overall course excellence on the institutionally administered IDEA forms. The results of this study offer preliminary support for the validity of the evaluations on RateMyProfessors.com.

4.
Student ratings of teaching effectiveness are widely used to make judgments of faculty teaching performance. Research, however, has found that such ratings may not be accurate indicators of teaching performance because they are contaminated by course easiness. Using student ratings of 9855 professors employed at 79 different colleges and universities, the author hypothesized and found that the relationship between perceived course easiness and perceived course quality was moderated by school academic rankings. More specifically, easiness ratings were more strongly correlated with quality ratings among low‐ranked schools than among high‐ranked schools. Furthermore, the easiness–quality relationship was slightly stronger among public schools than among private schools. The article concludes by discussing the practical implications of these findings.
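The moderation effect this abstract tests is usually estimated with an interaction term in a regression. A minimal simulated sketch (all numbers hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

easiness = rng.uniform(1, 5, n)
top_school = rng.integers(0, 2, n)  # 1 = high-ranked school
# Simulate a weaker easiness-quality link at high-ranked schools.
quality = 2.0 + (0.8 - 0.5 * top_school) * easiness + rng.normal(0, 0.5, n)

# Regression with an interaction term: a negative coefficient on
# easiness * top_school indicates the easiness-quality relationship
# weakens at high-ranked schools, i.e. ranking moderates the link.
X = np.column_stack([np.ones(n), easiness, top_school, easiness * top_school])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)
interaction = coef[3]
```

Here the interaction coefficient recovers the simulated −0.5 weakening; in the study's terms, the same sign pattern would appear as easiness mattering less for quality at high‐ranked schools.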

5.
Student evaluations of teaching provide valued information about teaching effectiveness, and studies support the reliability and validity of such measures. However, research also illustrates potential moderation of student perceptions based on teacher gender, attractiveness, and even age, although the latter receives little research attention. In the present study, we examined the potential effects of professor age and gender on student perceptions of the teacher as well as their anticipated rapport in the classroom. We also asked students to rate each instructor's attractiveness based on societal beliefs about age and beauty. We expected students to rate a picture of a middle-aged female professor more negatively (and as less attractive) than the younger version of the same woman. For the young versus old man offered in a photograph, we expected no age effects. Although age served as a detriment for both genders, evaluations suffered more based on aging for female than male professors.

6.
Measuring the perceived quality of professors’ teaching effectiveness is a critical issue in higher education. This study involves a large-scale data exploration with a sample of 16,802 professors, in which each professor had received at least 20 ratings from the RateMyProfessors website. We find that perceived difficulty (from the students’ perspective) has a significantly negative effect on perceived quality. However, when professors teach more difficult courses at top colleges, the decline in perceived quality is relatively small when compared to other colleges. In other words, whether professors come from top colleges has a moderating effect on the relationship between perceived quality and perceived difficulty. Furthermore, through a consideration of the characteristic differences among disciplines in terms of the relationship between perceived quality and perceived difficulty, we obtain three specific groups of disciplines. These findings facilitate a better understanding of quality for professors from different disciplines. We suggest that the measurement of teaching effectiveness should avoid the use of a single criterion because differences in courses, disciplines or schools can influence the measurement results, and these factors are beyond the control of professors.

7.
The present research examined whether students’ likelihood to take a course with a male or female professor was affected by different expectations of professors based on gender stereotypes. In an experimental vignette study, 503 undergraduate students from a Canadian university were randomly assigned to read a fictitious online review, similar to those found on RateMyProfessors.com, that varied professor gender, overall quality score and level of caring for students. Students responded to items assessing their likelihood to take a course with the professor, perceived competence and warmth of the professor, and their own gender bias. An analysis of variance revealed an interaction between professor gender, student gender, quality score and caring. When quality score was low, male students indicated a lower likelihood of taking a course with female professors who were not described as caring. Regression analyses showed, however, that students' gender bias was negatively associated with likelihood to take a course with a female professor. These results imply that student gender plays a role in evaluations of female professors who do not display stereotypical warmth but that gender bias, which is typically higher for males at the group-level, may be an underlying factor.

8.
Twenty female and 23 male professors at a liberal arts college participated along with their 803 undergraduate students in a questionnaire study of the effects of professor gender, student gender, and divisional affiliation on student ratings of professors and professor self-ratings. Students rated their professors on 26 questions tapping five teaching factors as well as overall teaching effectiveness. Professors rated themselves on the same questions as well as on nine exploratory ones. On student ratings, there were main effects for both professor gender (female professors were rated higher than male professors on the two interpersonal factors) and division (natural science courses were rated lowest on most factors). These patterns were qualified by significant interactions between professor gender and division. Although professor self-ratings varied by division, there were few significant correlations between professor self-ratings and students' ratings. Implications for future research are discussed. Appreciation goes to Laura Capotosto and Julie Phelan for their research assistance. Suzanne Montgomery is now at Widener Law School, Philadelphia, PA, USA.

9.
This paper analyses the popular RateMyProfessors (RMP) website where students evaluate instructors in higher education. A study was designed to measure (1) the awareness and utilisation of the RMP website, (2) the internal and external validity of the RMP ratings in measuring teaching effectiveness, and (3) variation in the above across disciplines. It is concluded that the category of ratings, created by the website, establishes an anti‐intellectual tone that manifests itself in comments about instructors’ personality, easiness of workload and entertainment value rather than knowledge attained.

10.
This set of experiments assessed the influence of RateMyProfessors.com profiles, and the perceived credibility of those profiles, on students’ evaluations of professors and retention of material. In Study 1, 302 undergraduates were randomly assigned to read positive or negative RateMyProfessors.com profiles with comments that focused on superficial or legitimate professor features. Participants then watched a lecture clip, provided professor ratings and completed a quiz on the lecture. In Study 2, 81 students who were randomly assigned to read the same RateMyProfessors.com profiles before an actual class provided credibility ratings of the information, listened to a lecture and provided professor ratings at the end. Across both experiments, one in a laboratory setting, and one in an authentic undergraduate lecture, participants gave more favourable professor ratings after reading positive evaluations from RateMyProfessors.com information. These findings establish the causal link between professor information and subsequent evaluations.

11.
This paper examines the stability and validity of a student evaluations of teaching (SET) instrument used by the administration at a university in the PR China. The SET scores for two semesters of courses taught by 435 teachers were collected. A total of 388 teachers (170 males and 218 females) were also invited to fill out the 60‐item NEO Five‐Factor Inventory together with a demographic information questionnaire. The SET responses were found to have very high internal consistency and confirmatory factor analysis supported a one‐factor solution. The SET re‐test correlations were .62 for both the teachers who taught the same course (n = 234) and those who taught a different course in the second semester (n = 201). Linguistics teachers received higher SET scores than either social science or humanities or science and technology teachers. Student ratings were significantly related to Neuroticism and Extraversion. Regression results showed that the Big‐Five personality traits as a group explained only 2.6% of the total variance of student ratings and academic discipline explained 12.7% of the total variance of student ratings. Overall, the stability and validity of the SET were supported and future uses of SET scores in the PR China are discussed.

12.

The prevalent use of student ratings in teaching evaluations, particularly the reliability of such data, has been debated for many years. Reports in the literature indicate that there are many factors influencing student perceptions of teaching. Three of these factors were investigated at the University of Western Australia, namely the broad discipline group, course/unit year level and student gender. Data collected over 3 years were analysed. The outcomes of this study confirmed results reported by other workers in the field that there are differences in ratings of students in different discipline groups and at different year levels. It also provided a possible explanation for the mixed results reported in studies of student gender in relation to student ratings.

13.
The authors compared the average grades given in 165 behavioral and social science courses with the average ratings given by students to the instructors who taught the courses. Significant positive correlations were found between the average ratings for instructional quality and the average grades received by students. The courses in which the average grades were the highest were also those in which students gave teachers the highest ratings. Among possible reasons for the correlations are that better teachers attracted better students or that quality teachers provided more effective instruction, resulting in more student learning and, thus, higher average grades. Another explanation is that most college students tend to bias their ratings of instructional quality in favor of teachers who grade leniently (I. Neath, 1996). If correct, the latter reasoning begins to explain why the widespread use of student evaluations in the United States in recent decades has been accompanied by increases in the average grades that university students received. To prevent grade inflation, and particularly to avoid rewarding and promoting instructors who use increasingly lax grading standards, administrators should adjust student ratings of instructional quality for the average grades given for a course. In general, only courses near the extremely high and low ends in terms of students' average grades were significantly affected by the statistical adjustment.
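One common way to implement the grade adjustment the authors recommend is to residualise ratings on average grades: regress ratings on grades and keep the residuals as leniency‐adjusted ratings. A minimal sketch with simulated numbers (not the study's 165 courses):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 165  # number of courses, matching the study's sample size

avg_grade = rng.uniform(2.0, 4.0, n)                    # mean course GPA
rating = 2.5 + 0.5 * avg_grade + rng.normal(0, 0.4, n)  # leniency bias built in

# Regress ratings on average grades and keep the residuals: a rating
# adjusted for how leniently the course was graded.
X = np.column_stack([np.ones(n), avg_grade])
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
adjusted = rating - X @ coef

# By construction, OLS residuals are uncorrelated with the regressor,
# so the adjusted ratings no longer track grading leniency.
r_after = np.corrcoef(adjusted, avg_grade)[0, 1]
```

This is only one plausible form of the adjustment; the abstract does not specify the exact statistical procedure the authors used.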

14.
The term "student learning outcomes" refers to the knowledge, skills, and abilities that students achieve during a course, and is typically assessed based on student evaluations conducted at the end of the semester. Previous studies in this area have investigated the effects of instructional quality and academic demands separately and have been limited primarily to examining findings using student samples from the United States. With Japanese college students' perceptions of self-improvement in English language courses as the dependent variable, the present study directly tests the hypothesis that students who perceive instructional quality to be higher, and course demands to be greater, also estimate higher levels of self-improvement in English language skills. The analysis provides strong support for this hypothesis. Higher ratings of instruction and academic demands have already been shown to increase levels of student learning (Greimel-Fuhrmann and Geyer 2003; Nois and Hudson 2006; McFadden and Dart 1992). The present study is the first to provide direct evidence of the relative importance of student evaluations of instructional quality and academic demands as predictors of student learning and the first ever to do so with a sample of Japanese college students enrolled in a required English as a foreign language course. Our hypothesis is that Japanese students who perceive instructional quality to be higher, and course demands to be greater, estimate higher levels of self-improvement in English language skills. Thus we test Japanese students' attitudes toward instructional quality and course demands as independent variables predicting their perceptions of self-improvement in English language courses.
The research focuses on Japanese students' improvement in English language skills because English education in Japan is an arena in which the debate over limited English proficiency rages on, and because other research suggests reconsideration of English education in light of the demands of the rapidly expanding global era (Amaki 2008).

15.
This paper provides new evidence on the disparity between student evaluation of teaching (SET) ratings when evaluations are conducted online versus in‐class. Using a multiple regression analysis, we show that after controlling for many of the class and student characteristics not under the direct control of the instructor, average SET ratings from evaluations conducted online are significantly lower than average SET ratings conducted in‐class. Further, we demonstrate the importance of controlling for the factors not under the instructor’s control when using SET ratings to evaluate faculty performance in the classroom. We do not suggest that moving to online evaluation is overly problematic, only that it is difficult to compare evaluations done online with evaluations done in‐class. While we do not suppose that one method is ‘more accurate’ than another, we do believe that institutions would benefit from either moving all evaluations online or by continuing to do all evaluations in‐class.

16.
The present study addressed the impact of individual consultation on teaching improvement as measured by changes in student ratings. Subjects included 91 professors who presented naturally for individual consultation services over a seven‐year period at the teaching centre of a Canadian university. Interventions by the consultant fell into three categories: 1) Feedback‐Consultation, 2) Feedback‐Consultation‐Class Observation, and 3) Feedback‐Consultation‐Class Observation and Student Consultation. End of term student ratings for the course that was the subject of the consultation were compared with student ratings for the same course taught between one and three years prior to the consultation service, and for the same course taught between one to three years following consultation. The results showed that, overall, consultation was effective in improving the quality of the consultees’ teaching, as evidenced by an increase in mean student ratings of instruction. This effect persisted post consultation. Not all intervention groups, however, showed the same pattern of results. Change was evident immediately after the intervention except in the case of brief consultation, although follow‐up data showed improved teaching for the latter group. Control data provided evidence that the change in student ratings post consultation could reasonably be attributed to consultation effects.

17.
Although the research literature investigating the relationship between grade awarded to students and students’ evaluations of teaching performance is voluminous, very few studies have examined the grade‐rating relationship according to level of student. The present study examined correlations between mean instructor rating and mean class grade for all course evaluations (N = 625 classes) at Utah State University during an academic quarter. In lower‐division (courses 100–299) and upper‐division (courses 300–599) undergraduate classes, correlations between grades and ratings of faculty were of an expected direction and magnitude (0.29 and 0.28, respectively); however, the grade‐rating correlation for graduate classes (courses 600+) was −0.20. It is speculated that graduate students are both better students and more critical evaluators of instruction, but replication and extension with different samples are needed before this tentative explanation can be accepted with confidence.

18.
We examined web‐based ratings and open‐ended comments of teaching‐award winners (n = 120) and research‐award winners (n = 119) to determine if teaching‐award winners received more favourable ratings and comments on RateMyProfessors.com. As predicted, students rated teaching‐award winners higher than research‐award winners on measures of teaching quality (i.e. helpfulness and clarity). A higher percentage of teaching‐award recipients relative to research‐award recipients received positive open‐ended comments about competence, use of humour, clarity, appearance and personality as well as both positive and negative open‐ended comments about level of course difficulty. We discuss implications of these findings for lending credibility to the RateMyProfessors.com indices and for promoting published faculty evaluations at post‐secondary institutions more generally.

19.
This study reviewed the effect of class size, grades given, and academic field on student opinion of instruction. Data analysis showed that there were no significant correlations between variables of the three groups: (a) class size and student opinion of instruction; (b) grades given and student opinion of instruction, and (c) college and national academic field rankings and student opinion of instruction. These results leave teacher effectiveness as the most likely variable to explain ratings of student opinion of instruction.
