Student Feedback Surveys

This page provides information related to student evaluation of teaching. The Center for Teaching Excellence does not administer the Student Perception of Teaching or Student Evaluation Survey evaluations. Please see important contact information in the box to the right for questions about the timing of the SPOT/SES or problems completing or retrieving the evaluation. 

NOTE: As of Fall 2022, Duquesne University has fully switched from the Student Evaluation Survey (SES) and the Blackboard learning management system to the Student Perception of Teaching (SPOT) evaluation and the Canvas learning management system. As a result, many instructors have had both Canvas and Blackboard pages as well as SES and SPOT evaluations. This page contains the language of both for that reason.

Overview of the information on this page:

  1. Introduction to Student Evaluation of Teaching at Duquesne
  2. How Do I Read the Student Ratings?
  3. Processing the Written Comments
  4. Faculty Behaviors That Impact Online Student Response
  5. Impact of Early-Course Evaluation on End-of-Semester Evaluations
  6. Myths and Realities about Student Evaluations
  7. Potential Biases Within Student Evaluations of Teaching
  8. Consulting with CTE 

    1. Introduction to Student Evaluation of Teaching at Duquesne

    Teaching and learning are at the heart of Duquesne. In order to assure quality and provide regular feedback to instructors on their teaching, Duquesne uses two kinds of teaching evaluation: student and peer. Both student and faculty peer perspectives on teaching and course design are helpful, each in its own way. Evaluation-of-teaching findings are useful both for improving one's teaching (formative evaluation) and for hiring, promotion, and tenure decisions (summative evaluation).

    Students complete the Student Perception of Teaching (SPOT)/Student Evaluation Survey (SES) about their instructor. This survey is used in face-to-face, hybrid and online courses. Clinical courses use a different evaluation of teaching.

    Evaluation procedures, the student evaluation survey, and the clinical teaching effectiveness questionnaire are available through Duquesne's intranet, DORI, once your multipass information is entered (a link for SPOT information will be added as soon as possible). From DORI, click on the Faculty tab. In the Academic Affairs area, select Student Evaluation Survey. The Faculty Handbook outlines who is to be evaluated.

    The SES examines teaching according to four domains, each with multiple items. These domains reflect the complexity of teaching and provide a profile indicating areas of relative strength and opportunities for growth.

    • instructional design
    • instructional delivery
    • attitudes toward student learning
    • faculty availability

    Consultation: Faculty and TAs are welcome to make an appointment to discuss their teaching evaluations with CTE staff for the purpose of improving their teaching. Please note, CTE does not formally evaluate teaching or create policy on how faculty evaluation is conducted at Duquesne.


    2. How Do I Read the Student Ratings?

    The instructor receives the summary report of the scaled and open-ended items after the Registrar has posted course grades. On page one, the "Student Evaluation Survey-Online: Course Report" summarizes basic information about the students in the course such as their year in college, self-assessment of effort made, expected grade, hours spent outside of class, and perceived level of difficulty. This information provides a context for interpreting the ratings that follow.

    On the second page, the form provides an average rating for each of the 25 items, the average of all items within each domain, and mean ratings for your school. Ratings of "NA" are excluded from the averages. You can also see the breakdown of ratings for each item to determine whether most students agreed with one another or whether, for example, the average is derived from a split between low and high ratings. This helps you decide how to use the information when making changes to your teaching.

    When you receive your summary report, look first at your relative areas of strength and weakness as demonstrated by the average scores for each domain. Examine differences in your scores in the different kinds of courses you teach. Look for changes compared to previous courses you have taught. Compare your scores to school averages. You might want to create a chart that tracks your ratings by course over time. This can be useful in presenting your findings in annual reports or promotion and tenure documents.
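    The arithmetic behind these averages is simple enough to reproduce when building your own tracking chart. The sketch below (plain Python, with made-up ratings; the item names and domain grouping are hypothetical, not the official SES items) computes an item average that excludes "NA" responses and a per-domain average, in the way the summary report describes.

```python
# Hypothetical SES-style ratings on a 1-5 agreement scale; "NA"
# responses are excluded from the averages, as on the summary report.
def item_average(ratings):
    scores = [r for r in ratings if r != "NA"]
    return round(sum(scores) / len(scores), 2) if scores else None

def domain_average(items):
    # Average of the item averages within one domain.
    means = [m for m in (item_average(r) for r in items.values()) if m is not None]
    return round(sum(means) / len(means), 2)

# Made-up ratings for two items in one domain.
design_items = {
    "clear_objectives":    [5, 4, 4, "NA", 3, 5],
    "organized_materials": [4, 4, 5, 3, "NA", "NA"],
}

print(item_average(design_items["clear_objectives"]))  # 4.2 (the "NA" is excluded)
print(domain_average(design_items))                    # 4.1
```

    Repeating this per course and per semester would produce the longitudinal data for a chart of ratings over time.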

    Past university-wide reports of the Student Evaluation Survey are posted on the Duquesne intranet through DORI. These reports provide helpful benchmarking information within Duquesne University. Log into DORI using your multipass, and click on the "index" icon in the upper right menu, and then on academic affairs. Choose a recent report to use for analyzing your data; the university report is posted for fall each year.

    Then, using the data in the tables, compare your results to school average ratings. Each person's context is different. You may want to compare your results to those in the tables that present findings by required versus elective courses, undergraduate versus graduate courses, class sizes, effort reported, perceived difficulty level, and faculty rank.

    A major benefit of the detailed university-wide report is that faculty can examine their teaching ratings within the context of their particular course by comparing the data in different ways.


    3. Processing the Written Comments

    Recent research suggests that the qualitative feedback or written comments students provide can help in interpreting the quantitative scales frequently used to assess student perceptions of teaching and class experiences (Alhija & Fresko, 2009; Boysen, 2016). However, individual or aggregated comments are not consistent in detail or utility (Jordan, 2011), and positive and negative written comments often have substantially different foci for teaching improvement (Brockx, Van Roy, & Mortelmans, 2012). Nor is all feedback equal in content. For example, students may give feedback on the:

    • instructor (general comments on teaching practices or teaching interactions with students),
    • course (content difficulty or relevancy, or assignments), and/or
    • context (format/duration of course, or student composition) (Alhija & Fresko, 2009). 

    Consider the following process, based on suggestions from Buskist and Hogan (2010) and Lewis (2001), to organize the written feedback in meaningful and purposeful ways for analysis.

    1. Set aside comments that do not provide useful information about your teaching, such as "They need a haircut and a new pair of shoes."
    2. Set aside positive comments that lack detail, such as "Best class ever."
    3. Categorize written comments as positive or negative. Further separate negative comments into things you can change and things you cannot.
    4. Seek out a peer or colleague to process student interpretations that differ from your own.
    5. Emphasize comments that can guide improvement or reinforcement of teaching practices.
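    As a rough illustration of the first three steps, the sketch below (plain Python, with invented comments; the keyword lists are purely assumptions for the example) sorts each comment into a positive, negative, or set-aside bucket. It is a triage aid for organizing a large comment file, not a substitute for actually reading the comments.

```python
# Hypothetical triage of written comments following the steps above:
# set aside non-useful remarks, then separate positive from negative.
POSITIVE_WORDS = {"great", "best", "helpful", "clear"}      # assumption
NEGATIVE_WORDS = {"confusing", "unfair", "boring", "slow"}  # assumption

def triage(comments):
    buckets = {"positive": [], "negative": [], "set_aside": []}
    for comment in comments:
        words = set(comment.lower().split())
        if words & NEGATIVE_WORDS:
            buckets["negative"].append(comment)
        elif words & POSITIVE_WORDS:
            buckets["positive"].append(comment)
        else:
            buckets["set_aside"].append(comment)  # off-topic or too vague to act on
    return buckets

result = triage([
    "Lectures were clear and helpful",
    "The grading felt unfair",
    "They need a haircut",
])
```

    The negative bucket can then be split by hand into things you can change and things you cannot, per step 3.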


    • Alhija, F. N. A., & Fresko, B. (2009). Student evaluation of instruction: What can be learned from students' written comments? Studies in Educational Evaluation, 35(1), 37-44.
    • Boysen, G. A. (2016). Using student evaluations to improve teaching: Evidence-based recommendations. Scholarship of Teaching and Learning in Psychology, 2(4), 273.
    • Brockx, B., Van Roy, K., & Mortelmans, D. (2012). The student as a commentator: students' comments in student evaluations of teaching. Procedia-Social and Behavioral Sciences, 69, 1122-1133.
    • Buskist, C., & Hogan, J. (2010). She Needs a Haircut and a New Pair of Shoes: Handling Those Pesky Course Evaluations. Journal of Effective Teaching, 10(1), 51-56.
    • Jordan, D. W. (2011). Re-thinking student written comments in course evaluations: Text mining unstructured data for program and institutional assessment (Doctoral dissertation).
    • Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25-32.


    4. Faculty Behaviors That Impact Online Student Response

    Increasing response rates is one important action to take in order to alleviate concerns about the generalizability of the student evaluation of teaching (Goodman, Anson, & Belcheir, 2015). In a study at Brigham Young University, Johnson (2003) found that the way faculty communicate with students about the online survey influences the response rate:

    • Assigned students to complete online rating forms but did not give them points: 77% average response rate
    • Encouraged students to complete the online forms but did not make it a formal assignment: 32% average response rate
    • Did not mention the online student-rating forms to students: 20% average response rate

    Effective strategies include:

    1. Inform your students about the online survey procedures (see Student Evaluation Survey under Academic Policies).
    2. Work towards creating a climate of mutual respect, one where student opinions are respected and addressed and instructor needs are taken into consideration (Chapman & Joines, 2017).
    3. Discuss the importance of student ratings to you and your efforts to improve the course (Ballantyne, 2003; Linse, 2017).
    4. Note that student feedback will likely benefit future students (Linse, 2017).
    5. Start your next semester by discussing what you learned from the surveys and how you are adjusting your teaching or the course as a result.
    6. Provide non-point incentives (e.g., a class treat for reaching a certain completion percentage) to complete the evaluation (Goodman et al., 2015).
    7. Give time in class to complete the evaluation (Goodman et al., 2015). For example, allow time during class for students to complete the SES, either using their own devices or by holding the class in a computer lab; the course instructor needs to be absent while students complete it. Alternatively, make completion of the online ratings a course assignment (e.g., "Tonight, as part of your homework, please complete the online course evaluation on Blackboard or your smartphone."). This would be part of routine homework, not for points toward the final grade.
    8. Send multiple reminders to students (Linse, 2017).


      • Ballantyne, C. (2003). "Online Evaluations of Teaching: An Examination of Current Practice and Considerations for the Future." New Directions for Teaching and Learning 96, 103-112.
      • Chapman, D. D., & Joines, J. A. (2017). Strategies for Increasing Response Rates for Online End-of-Course Evaluations. International Journal of Teaching and Learning in Higher Education, 29(1), 47-60.
      • Goodman, J., Anson, R., & Belcheir, M. (2015). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 40(7), 958-970.
      • Johnson, T.D. (2003). "Online Student Ratings: Will Students Respond?" New Directions for Teaching and Learning 96, 49-59.
      • Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94-106.


      5. Impact of Early-Course Evaluation on End-of-Semester Evaluations

      Few studies over the past three decades have examined the impact of early-course or midsemester evaluations on teaching and course improvements; however, they agree consistently that evaluations conducted prior to end-of-semester evaluations have substantial promise (Cohen, 1980; Cook-Sather, 2009; Hartford, 2017; Lewis, 2001; McGowan & Osguthorpe, 2011). These early-course evaluations help promote teaching improvements during the student learning process rather than after instructional experiences have ended, a weakness of end-of-semester evaluations (Cook-Sather, 2009; Hartford, 2017). In addition, early-course feedback can act as corroborating evidence when interpreting end-of-semester evaluations (Hartford, 2017), which may strengthen teaching portfolios and tenure paperwork.

      Cohen's (1980) meta-analysis provided a foundation showing that early-course evaluation supports students' feeling part of the class environment and offers additional developmental assistance for faculty. He concluded that "instructors receiving mid-semester feedback averaged .16 of a point higher on end-of-semester overall ratings," which amounted to a 15% rating increase compared to those who did not receive such feedback.

      In a more recent study, McGowan and Osguthorpe (2011) showed that the impact of mid-course feedback on end-of-term ratings depends on what instructors do with the early-course evaluation. Faculty who read the student feedback but did not discuss it with their students saw a 2 percent improvement in their online student rating scores, while faculty who conducted the mid-course evaluation, read the feedback, discussed it with their students, and made changes saw a 9 percent improvement.

      Lewis (2001, pp. 38-39) offered practical strategies for implementing these early-course evaluations:

      1. Prepare your students ahead of time for what they are being asked to do and why you are asking.
      2. Ensure responses are as anonymous as possible.
      3. Ensure students understand the procedures for filling out the evaluations.
      4. Preview the response options and operationalize the ratings.
      5. Upon collecting the evaluations, read them immediately and respond to the feedback as soon as possible.
      6. Discuss the positive feedback and the points of student consensus.
      7. Discuss how you may respond to mixed feedback, where a plurality, but not a majority, of students requested a similar change.

      Some Early Course Evaluation Ideas:

      Pluses and Wishes
      "As this course progressed, I was able to get it back on track by using a mid-semester evaluation process called "pluses and wishes." Students divided the evaluation sheet in half and placed all the positives about the course on one side and suggestions for improvement on the other. For the most part, the students were satisfied with the course, but the one ‘wish' that was prevalent was to increase student interaction" (Ladson-Billings, 1996).

      Traffic Light Survey
      Nakpangi Johnson (Duquesne Pharmacy graduate) uses a one-minute Traffic Light Survey.

      More Early Course Evaluation Methods


      • Cohen, P. (1980). Effectiveness of Student-Rating Feedback for Improving College Instruction: A Meta-Analysis of Findings. Research in Higher Education 13 (4), 321-341.
      • Cook‐Sather, A. (2009). From traditional accountability to shared responsibility: The benefits and challenges of student consultants gathering midcourse feedback in college classrooms. Assessment & Evaluation in Higher Education, 34(2), 231-241.
      • Hartford, K. M. (2017). The Effect of Student Evaluations on Faculty Performance (Doctoral dissertation, Northeastern University).
      • Lewis, K. (2001). Using Midsemester Student Feedback and Responding to It. New Directions for Teaching and Learning 87, 33-44.
      • McGowan, W. R., & Osguthorpe, R. T. (2011). Student and Faculty Perceptions of Effects of Midcourse Evaluation. To Improve the Academy 29, 160-172.


      6. Myths and Realities about Student Evaluations

      Myth: Student evaluations are irrelevant because students don't know how to evaluate good teaching.

      Reality: According to Filak and Sheldon (2003), recent studies show that "student course evaluations are valid measures of instructional effectiveness." In other words, "students know what makes for a good educational experience and what makes for a bad one" (Filak & Sheldon, 2003).

      Myth: Student evaluations are a popularity contest with warm, friendly, humorous instructors receiving the highest scores.

      Reality: In a study of both written and objective evaluations, Aleamoni (1999) found that "students praised instructors for their warm, friendly, humorous manner in the classroom but frankly criticized them if their courses were not well organized or their methods of stimulating students to learn were poor." In other words, while students may rate a faculty person highly for building student rapport, good rapport does not preclude poor ratings in other areas such as instructional design, delivery, faculty availability, or student outcomes.

      Myth: Students are not truthful in answering the SESs.

      Reality: Marlin (1987) conducted surveys of undergraduates in economics courses at Western Illinois University and Appalachian State University where he asked the following question: "Do you feel that you are fair and accurate in your ratings of teachers and do you give adequate thought and effort to the rating process?" The percentage of responses is summarized in the following table:

      Institution          Almost Always   Most of the Time   Some of the Time   Almost Never
      Western Illinois     51.6%           39.1%              6.7%               1.0%
      Appalachian State    51.5%           42.2%              6.0%               0.3%

      In Marlin's study, the majority of students reported that they were truthful in their evaluations of faculty.
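      The majority claim can be checked directly from the table; the quick sketch below (plain Python, with the numbers copied from the table above) sums the "Almost Always" and "Most of the Time" columns for each institution.

```python
# Percentages reported by Marlin (1987), from the table above.
responses = {
    "Western Illinois":  {"almost_always": 51.6, "most_of_the_time": 39.1,
                          "some_of_the_time": 6.7, "almost_never": 1.0},
    "Appalachian State": {"almost_always": 51.5, "most_of_the_time": 42.2,
                          "some_of_the_time": 6.0, "almost_never": 0.3},
}

for school, r in responses.items():
    # Share of students reporting they are truthful most or all of the time.
    mostly_truthful = round(r["almost_always"] + r["most_of_the_time"], 1)
    print(f"{school}: {mostly_truthful}%")
```

      The sums, roughly 91% and 94%, are what support the statement that a large majority of students reported being truthful.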

      Myth: Grade inflation results in higher SES scores.

      Reality: This is one of the most controversial myths about student evaluations of teaching. Studies suggest a moderate correlation between teaching evaluations and students' anticipated grades. Researchers variously report the correlation at .20 (Centra and Creech, 1976), between .10 and .30 (Feldman, 1997), and, more recently, at .11 (Centra, 2003).

      While grade inflation is one hypothesis for the moderate correlation between expected grades and teaching evaluations, other possible reasons for the correlation include what Marsh (2007) calls the validity hypothesis and the prior student characteristic hypothesis. Marsh (2007) defines the various hypotheses as follows: 

      • "The grading leniency hypothesis proposes that instructors who give higher-than-deserved grades will be rewarded with higher-than-deserved SETs, and this constitutes a serious bias to SETs. According to this hypothesis, it is not grades per se that influence SETs, but the leniency with which grades are assigned."
      • The validity hypothesis proposes that better expected grades reflect better student learning and that a positive correlation between student learning and SETs supports the validity of SETs.
      • The prior student characteristics hypothesis proposes that preexisting student variables such as prior subject interest may affect student learning, student grades, and teaching effectiveness so that the expected-grade effect is spurious. (Marsh, 2007, 352-353)

      In Marsh's analysis of the three hypotheses, he concludes, "In summary, evidence from a variety of different studies clearly supports the validity and student characteristics hypotheses. Whereas a grading-leniency effect may produce some bias in SETs, support for this suggestion is weak, and the size of such an effect is likely to be insubstantial" (Marsh, 2007, 357). Centra (2003), one of the researchers who put forward the correlation between expected grades and teaching evaluations, similarly says, "To summarize, teachers will not likely improve their evaluations from students by giving higher grades and less course work. They will, however, improve their evaluations and probably their instruction if they respond to consistent student feedback about instructional practices."

      Myth: I can fix my teaching by only reading the SES results.

      Reality: Studies examining how student evaluations can contribute to better teaching suggest that reading your SES results is not enough to produce positive change. In an earlier analysis, Rotem and Glasman (1979) conclude by saying, "The main implication emerging from the present review is that feedback (alone) from student ratings (as was elicited and presented to teachers in the studies reviewed) does not seem to be effective for the purpose of improving performance of university teachers." More recently, Hativa (2000) concludes her study by saying, "These results suggest that self-reflection based on students' feedback is insufficient, on average, for self-improvement of instruction and that additional instructional development activities conducted by experts are necessary for achieving this improvement."

      The good news from the research is that significant teaching improvement occurs when teachers discuss their ratings with a consultant. Wilbert McKeachie (1997) says that "research shows that student ratings are more helpful if they are discussed with a consultant or peer." In Robert Wilson's study of how consultations help faculty to make changes, Wilson (1986) discovered that "the more behavioral, specific, or concrete a suggestion is, the more easily it can be implemented by a teacher and the more likely it is that it will affect students' perceptions of his or her teaching."


      • Aleamoni, L. (1999). "Student Rating Myths Versus Research Facts from 1924 to 1998." Journal of Personnel Evaluation in Education 13:2, 153-166.
      • Centra, J. (2003). "Will Teachers Receive Higher Student Evaluations by Giving Higher Grades and Less Course Work?" Research in Higher Education 44:5, 495-518.
      • Centra, J. A., & Creech, F. R. (1976). "The Relationship between Students, Teachers, and Course Characteristics and Student Ratings of Teacher Effectiveness" (Project Report 76-1), Princeton, NJ: Educational Testing Service.
      • Feldman, K. (1997). "Identifying Exemplary Teachers and Teaching: Evidence from Student Ratings." In Effective Teaching in Higher Education Research and Practice, eds. Raymond Perry and John Smart, 368-395.
      • Filak, V., & Sheldon, K. (2003). "Student Psychological Need Satisfaction and College Teacher-Course Evaluations." Educational Psychology 23: 3, 235-247.
      • Hativa, N. (2000). Teaching for Effective Learning in Higher Education. Dordrecht, Netherlands: Kluwer Academic.
      • Marlin, J. (1987). "Student Perception of End-of-Course Evaluations." Journal of Higher Education 58:6, 704-716.
      • Marsh, H. (2007). "Students' Evaluations of University Teaching: Dimensionality, Reliability, Validity, Potential Biases, and Usefulness." In The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective, eds. Raymond Perry and John Smart, 319-384.
      • McKeachie, W. (1997). "Student Ratings: The Validity of Use." American Psychologist 52:11, 1218-1225.
      • Wilson, R. (1986). "Improving Faculty Teaching: Effective Use of Student Evaluations and Consultants." The Journal of Higher Education 57:2, 196-211.


      7. Potential Biases Within Student Evaluations of Teaching

      Wachtel (1998) thoroughly summarized research at the time on "variables thought to influence student ratings" (p. 195). His analysis shows the variety of implicit biases within student evaluations of teaching and the need for ongoing research. Below, we highlight passages from Wachtel's study related to biases on course and instructor characteristics and raise awareness of additional research.

      Electivity & Subject Area

      Wachtel: "Researchers have found that teachers of elective or non-required courses receive higher ratings than teachers of required courses. More specifically, the 'electivity' of a class can be defined as the percentage of students in that class who are taking it as an elective (Feldman, 1978); a small to moderate positive relationship has been found between electivity of a class and ratings (Brandenburg et al., 1977; Feldman, 1978; McKeachie, 1979; Scherr & Scheft, 1990). This may be due to lower prior subject interest in required versus non-required courses" (pp. 195-196). "Researchers have found that subject matter area does indeed have an effect on student ratings (Ramsden, 1991), and furthermore, that ratings in mathematics and the sciences rank among the lowest (Cashin, 1990, 1992; Cashin & Clegg, 1987; Centra & Creech, 1976; Feldman, 1978)."

      Additional Research: Graduate students' negative perceptions about a lack of freedom to choose a course were related to lower ratings of the quality of instruction (Donnon, Delver, & Beran, 2010). A negative relationship also exists between students' perceptions that their professors are ideologically driven and their evaluations (Lazos, 2012).

      Level of Course & Course Size

      Wachtel: "Most studies have found that higher level courses tend to receive higher ratings (Feldman, 1978; Marsh, 1987, p. 324). However, no explanation for this relationship has been put forth, and Feldman also reports that the association between course level and ratings is diminished when other background variables such as class size, expected grade and electivity are controlled for. Therefore, the effect of course level on ratings may be direct, indirect, or both" (p. 196). "Considerable attention has been paid to the relationship between class size and student ratings. Most authors report that smaller classes tend to receive higher ratings (Feldman, 1978; Franklin et al., 1991; McKeachie, 1990). Marsh (1987, p. 314; Marsh & Dunkin, 1992) reports that the class size effect is specific to certain dimensions of effective teaching, namely group interaction and instructional rapport. He further argues that this specificity combined with similar findings for faculty self-evaluations indicated that class size is not a 'bias' to student ratings (see also Cashin, 1992). However, Abrami (1989b) in his review of Marsh's (1987) monograph counters that this argument cannot be used to support the validity of ratings, and instead demonstrates that interaction and rapport, being sensitive to class size, are dimensions which should not be used in summative decisions. Another hypothesis is that the relationship between class size and student ratings is not a linear one, but rather, a U-shaped or curvilinear relationship, with small and large classes receiving higher ratings than medium-sized ones (Centra & Creech, 1976; Feldman, 1978, 1984; Koushki & Kuhn, 1982)" (p. 196).

      Instructor Rank and Experience

      Wachtel: Where ratings of professors and teaching assistants have been compared, professors are rated more highly (Brandenburg et al., 1977; Centra & Creech, 1976; Marsh & Dunkin, 1992). First-year teachers receive lower ratings than those in later years (Centra, 1978). Feldman (1983) synthesized research at the time suggesting that when academic rank of teachers was associated with student ratings of teaching, it was positively associated, but warned that academic rank is not a proxy for instructor years of experience or age.

      Additional Research: Hoffmann and Oreopoulos (2009) found "subjective instructor evaluations have almost no correlation with instructor rank or salary, yet vary widely within these categories" (p. 24). For example:
      • The influence of instructor rank and salary also differs by students' high school grade quartile. Lecturers have a significant negative impact on subject interest for students among the lowest quartile, but a positive impact among students from the highest quartile.
      • Compared with full professors, students from the lowest grade quartile are less likely to be interested in a subject after taking an introductory course with an assistant or associate professor, or an adjunct or emeritus professor.
      • Highly paid professors, however, have a positive influence on subject interest among students with better high school grades.
      Kendall and Schussler (2013) identified key words such as "confident" and "strict" as having a positive correlation with number of years of teaching experience and "nervous" and "uncertain" as having a negative correlation.

      Gender and Race of Instructor

      Wachtel: Many authors contend that student ratings are biased against women instructors (for example, Basow, 1994; Basow & Silberg, 1987; Kaschak, 1978; Koblitz, 1990; Martin, 1984; Rutland, 1990). A few studies (Bennett, 1982; Kierstead et al., 1988) have found that female instructors need to behave in stereotypically feminine ways in order to avoid receiving lower ratings than male instructors. In view of this, Koblitz (1990) sees a difficulty for women instructors who need to adopt a 'get tough' approach.

      Additional Research: Kendall and Schussler (2013) identified "organized" as being more positively associated with women than men. Hamermesh and Parker (2005) illustrated within single-institution large data sets that women and racial minorities are rated slightly lower than their male and white colleagues. Smith and Hawkins (2011) found that although faculty members in their study were rated similarly on multidimensional items, they differed markedly on the two global items. This finding supports the contention of Black faculty that their student ratings are lower on the global items ("overall value of course" and "overall teaching ability").

      Physical Appearance of Instructor

      Wachtel: "A study by Buck and Tiene (1989) found that there was a significant interaction between gender, attractiveness, and authoritarianism; namely, teachers with an authoritarian philosophy were rated less negatively if they were attractive and female. Rubin (1995) found that students' judgments of teaching ability of non-native speaking instructors were affected by judgments of physical attractiveness" (p. 201).

      Additional Research: Hamermesh and Parker (2005) demonstrated that instructors who are viewed as better looking receive higher instructional ratings; however, the researchers raise continued concerns about how beauty is measured and about unrelated but positive correlations between attractiveness and other aspects of effective teaching (e.g., high levels of confidence affecting interpersonal relationships within classrooms).


        • Abrami, P. C. (1989). Book Reviews: SEEQing the Truth About Student Ratings of Instruction. Educational Researcher, 18(1), 43-45.
        • Basow, S. A., & Silberg, N. T. (1987). Student evaluations of college professors: Are female and male professors rated differently?. Journal of educational psychology, 79(3), 308.
        • Bennett, S. K. (1982). Student perceptions of and expectations for male and female instructors: Evidence relating to the question of gender bias in teaching evaluation. Journal of Educational Psychology, 74(2), 170.
        • Brandenburg, D. C., Slinde, J. A., & Batista, E. E. (1977). Student ratings of instruction: Validity and normative interpretations. Research in Higher Education, 7(1), 67-78.
        • Donnon, T., Delver, H., & Beran, T. (2010). Student and teaching characteristics related to ratings of instruction in medical sciences graduate programs. Medical teacher, 32(4), 327-332.
        • Feldman, K. A. (1978). Course characteristics and college students' ratings of their teachers: What we know and what we don't. Research in Higher Education, 9(3), 199-242.
        • Hamermesh, D. S., & Parker, A. (2005). Beauty in the classroom: Instructors' pulchritude and putative pedagogical productivity. Economics of Education Review, 24(4), 369-376.
        • Hoffmann, F., & Oreopoulos, P. (2009). Professor qualities and student achievement. The Review of Economics and Statistics, 91(1), 83-92.
        • Kendall, K. D., & Schussler, E. E. (2013). Evolving impressions: undergraduate perceptions of graduate teaching assistants and faculty members over a semester. CBE-Life Sciences Education, 12(1), 92-105.
        • Lazos, S. R. (2012). Are student teaching evaluations holding back women and minorities. Presumed incompetent: The intersections of race and class for women in academia, 164-185.
        • Marsh, H. W. (1987). Students' evaluations of university teaching: Research findings, methodological issues, and directions for future research. International journal of educational research, 11(3), 253-388.
        • Marsh, H. W., & Roche, L. A. (1997). Making students' evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American psychologist, 52(11), 1187.
        • McKeachie, W. J. (1979). Student ratings of faculty: A reprise. Academe, 65(6), 384-397.
        • Smith, B. P., & Hawkins, B. (2011). Examining student evaluations of Black college faculty: Does race matter?. Journal of Negro Education, 80(2).
        • Wachtel, H. K. (1998). Student evaluation of college teaching effectiveness: A brief review. Assessment & Evaluation in Higher Education, 23(2), 191-212.


        8. Consulting with CTE

        The Center for Teaching Excellence staff are available to consult with faculty and TAs concerning their teaching (412-396-5177). CTE personnel do not have access to evaluation results except through individuals who bring their own results to consultations. They do not play any role in the official evaluation of teaching, but rather provide feedback for use by individual instructors.
