Monday, December 03, 2012


Mary Beard on teaching evaluations from students:

On the forms I have been dutifully handing round to my audiences, we even ask the poor dears: "Do you have any difficulty hearing the lectures?" "Yes" or "No". An innocuous question maybe. But the rebel in me does think that if a group of highly intelligent 19-year olds have just dumbly sat through eight weeks of lectures without putting their hands up to say, "Err sorry, we can't hear you at the back", they hardly deserve to be at university.

The primary benefit of student evaluations is that they are highly reliable (i.e., students tend to agree with each other) and, given that, moderately valid for 'teaching effectiveness' (i.e., student evaluations tend to track, roughly, with other measures of teaching effectiveness). The former is less of a benefit than it is often made out to be -- a great deal of it seems to be due to the fact that the evaluations tend to be very generic questions answered very generically by people who do not have a deep familiarity with the field and don't have much of an opinion on the quality of the class (and often don't think that the evaluations will have any effect one way or another), but who do tend to agree generally on what is not stress-inducing. It's this agreement on what is not stress-inducing, for instance, that appears to be the reason why student evaluations exhibit a very measurable leniency bias -- favorable evaluation correlates very well with students expecting high grades -- and is related to their exhibiting a 'Fox Effect' (in which enthusiasm and confidence, rather than content or how much is learned, tend to be the major influence on evaluations of quality). One study showed that end-of-term student evaluations could be predicted with fairly good accuracy from how students judged thirty-second clips of professors on things like 'optimism' and 'confidence'. And it has been shown that students think they've learned more in a class with an animated instructor than in a class with a dull instructor, even if all objective measures suggest they have learned less.

Contrary to the usual talk on the subject, we do not know how to evaluate teaching effectiveness. There are a number of reasons for this: good teaching is often potentially controversial in its approach; we have no way of precisely comparing the difficulty of (say) the graph theory of a discrete mathematics course with the House of Fame section of a Chaucer course; what is effective for some students will not be so for others, so there are no general measures; there seem to be several very different things conflated together under the label 'teaching effectiveness'; etc. Students do not know how to assess it, contrary to what they feel; faculty do not know how to assess it, contrary to what they think (faculty assessments of teaching suffer from analogous problems, but are also far less consistent with each other than student assessments); and administrators certainly do not know how to assess it. Perhaps this is because we have not thought carefully enough about what it is.
