Saturday, January 17, 2015

Guest Writer: Quality Control and Student Evaluations by anonymousapril

I have heard horror stories.


In working at different colleges over the last ten years, I have seen my fair share of instructors exhibiting extremes of inappropriate behavior: everything from dating students, to sipping whiskey from coffee mugs, to cancelling classes on poetic whims. I once knew an instructor who forced all of his English 100 students to write their final persuasive essays about the Loch Ness monster, and then was shocked to find them underprepared for their final essay examinations. I personally had to step in and teach a class once after an adjunct got into a shouting match with a group of Cosmetology students and then just walked out. When our Department Head finally got wind of what happened, the instructor refused to teach her class at all, and the section was passed along to me. This woman was not pulled from the teaching roster. She was not given any training in behavior management after the incident. In fact, she kept her position without repercussion, and pulled a part-time salary more handsome than my own (due to her longevity at the institution).


So I get it. I get that we need quality control in teaching, and that things like student surveys are a window into each class, giving students an opportunity to make known some of the problematic teaching behaviors of unprofessional professors (as well as to celebrate the strengths of others). I get that we need standards, and consistency, and data. We have to evaluate. But I don't agree that having students complete surveys really does the job of evaluating what it is that we teachers do. And even if it could, based on everything we already know about teaching and learning, is this even a reasonable way to do it?

Certainly, a student may be able to tell viscerally that the teaching of old Nessy was a joke, or that his or her instructor smelt of booze. But students aren't trained in methodology, and asking them to honestly assess a teacher's methods or approach is actually sort of ridiculous the more one thinks about it. Many students are not aware of their own learning styles, nor do they understand the whole picture of what a facilitator might be doing. Sometimes the most challenging and strict of teachers can teach a person the most. Other times, creative teachers, or ones using Socratic questioning methods, might make students squirm. In a recent article, "Student Course Evaluations Get an 'F'," Kamenetz (2014) breaks down some of the many problems with evaluations by discussing a recent study from the University of California, Berkeley. The study illustrates that student surveys might not be effective at all in determining whether an educator is skilled in the classroom. As Kamenetz explains: "Say one professor gets 'satisfactory' across the board, while her colleague is polarizing: Perhaps he's really great with high performers and not too good with low performers. Are these two really equivalent?"


At some colleges, student satisfaction is paramount. Students are treated like consumers and customers rather than, well, students: apprentices in learning. A part-time teacher's workload might be made or broken by the scores on student surveys. At other colleges, the surveys are merely a shuffling game of papers that all end up untracked, unread, and recycled. There is a disconnect among institutions in what each does with the survey results; but even that isn't the whole problem. Beyond other statistical issues like sampling bias (students who do exceptionally well or poorly are more motivated to respond to surveys), there is also the response rate: fewer than half of students respond at all (Stark & Freishtat, 2014).


The idea of the traditional student survey also connects to our cultural norms in general; these days, consumers expect to evaluate anything that can be bought or sold, whether on Yelp, in reviews on Amazon, or, in the case of education, on the infamous Rate My Professor. Arguably, even these outside, student-initiated reviews epitomize the pitfalls that Stark and Freishtat describe: only the most motivated students on either end of the grading pendulum are likely to respond. Considering this, I wondered how Rate My Professor evaluations might stack up when comparing overall satisfaction scores from a community college, a large online university, and a famous private school. In this sampling from October 2014, the community college earned a student satisfaction score of 3.2 out of 5.0 (averaging student responses regarding reputation, location, internet access, food, opportunity, library, campus, clubs, social life, and happiness). The big online university earned a 3.7 (its highest mark was resoundingly for location, but it still beat the community college overall). And the expensive private school earned an embarrassing 3.4. According to students, schools like Harvard are nearly as unsatisfying as local community colleges! But what does any of this really mean?


The purpose of an evaluation is to judge a practitioner. But in the profession of teaching, things get complicated. This is not, after all, widget making; there are countless variables in teaching and learning, and in education overall. Although highly regulated these days, education is often a very personal, even private, journey for students. Some of us learn from mistakes, some of us learn from success, and all teachers come with personal styles and skills, just like their students. There are issues of timing, of work/life balance, of special needs, and of different rates of realization. A brilliant, organized, and consistent teacher can fall flat on his or her face in reviews simply because of personality conflicts or student preferences. A creative and innovative instructor might make the concrete, sequential style of learner wince.


In the end, I believe a better way to evaluate would be through professional community building: having instructors formally present their lessons, activities, and curriculum to one another and to leadership. Investing in collaborative content building and the sharing of materials and ideas takes time but pays off in spades. Using meetings, conferences, webinars, and the modeling of effective best practices keeps instructors motivated and at the top of their professional game. Building an ongoing conversation with teaching professionals, so that they are self-aware about their teaching methods, matters. After that engaging and supportive process, base the evaluation on the teacher's own materials and on student work artifacts. Berkeley's evaluators of evaluations agree with me: "Show me your stuff," Stark says. "Syllabi, handouts, exams, video recordings of class, samples of students' work. Let me know how your students do when they graduate. That seems like a much more holistic appraisal than simply asking students what they think" (as cited in Kamenetz, 2014).
References
Harry S. Truman College in Chicago, IL - RateMyProfessors.com. (2014, September 8). Retrieved October 22, 2014.


Harvard University in Cambridge, MA - RateMyProfessors.com. (2014, September 8). Retrieved October 22, 2014.


Kamenetz, A. (2014, September 26). Student course evaluations get an 'F'. Retrieved October 10, 2014.


Stark, P., & Freishtat, R. (2014). An evaluation of evaluations. Department of Statistics, University of California, Berkeley. Retrieved from http://www.stat.berkeley.edu/~stark/Preprints/evaluations14.pdf


University of Phoenix Online in Phoenix, AZ - RateMyProfessors.com. (2014, September 8). Retrieved October 22, 2014.
