Remember a few years ago when I told you that trying to create equations for the intangibles is calamitous for any profession, especially education? If think tanks prognosticate that the 21st century needs ideals like collaboration and transparency, then we’re doing a poor job of exemplifying that in schools.
On Saturday, for example, Commissioner John King released New York State’s mandated teacher – and principal – evaluation system because the United Federation of Teachers and NYC Department of Education couldn’t come to an agreement on their own. (N.B.: Mayor Mike Bloomberg came at the last minute and put a stop to the almost-finished negotiations, but that’s another story completely.) In his press release, King waxes poetic about students, saying “they’ve waited too long” for these reports to come out, an empty statement since students probably won’t read too many of these. To be fair, he also rebutted Bloomberg by saying we “can’t fire our way” towards improving the teaching profession, though one has to wonder how due process comes into play in this evaluation.
Then, he revealed the plan. The presser is here for your perusal, and it follows the pattern set across the country: 60% is based on observations – reduced to 55% if we include the 5% for student surveys – and 40% on state and local assessments. That is, unless a teacher is rated ineffective on the 40% – in which case the 40% effectively becomes 100%, and the teacher is automatically rated ineffective overall.
From that alone, we can reasonably conclude that teacher evaluations aren’t about the improvement and professionalization of teaching, but about the politics at play in distant office buildings, back rooms of city halls, and government floors. How “assessment” takes precedence over anything else in the school year is beyond me. While the United Federation of Teachers, NYC Schools Chancellor Dennis Walcott, and Bloomberg called the evaluation deal a win for each of their constituents (the jury’s still out on this), we can all agree that each of the percentages is so unstable that we can’t rely on anything we read in these documents.
Let’s look at them bit by bit:
State Growth: 20%
As I’ve discussed here, the most viable research on this stuff shows that the equations central offices have used to put a number on teachers can’t actually capture what teachers do with assessments. Our classes and the tests they’ve taken in the last decade vary from year to year in a way that has us comparing apples to oranges to cantaloupes to watermelons. By the time researchers get a stable number, the margin of error only gets down to a still-hefty 11%, something we wouldn’t accept in our local or national elections but seem to be OK with in our classrooms.
Local Assessments: 20%
School committees will have the choice to go with their own assessments or an assessment chosen by the city (most likely Acuity). Based on my understanding, if they go with the former, they’ll have to get their assessments approved by central offices anyway. If they go with the latter, that’s two to three more standardized tests students have to take throughout the year. While I prefer the former, I also have to consider that this 20% will look different for every school throughout the city, but I’ll get to that.
Principal Observations: 60% (maybe)
This is where things get tricky because principals have to look at all the dimensions of the Danielson framework, something I’m not opposed to. As UFT President Michael Mulgrew has said before, the framework has elements that speak to a more holistic evaluation of teachers instead of specific dimensions that must be mastered. What gets tricky is how people perceive those dimensions. Currently, we have Danielson experts helping schools to calibrate, as, from what I’ve heard, principals usually rated their teachers higher than the experts did.
With this new vision for how to conduct observations, too many administrators are still in the “gotcha” mindset, perilous for any teacher who hasn’t done their homework on the framework. Also, a handful of administrators might be tempted to rate teachers low early in the year and intentionally rate them higher as the year goes by, so they can look like they’re the ones making the change in the teacher.
Any number of possibilities can occur with the new observations, which was the case with the old observations. This points to a need for a cultural change on how we perceive teacher evaluations. Again.
Student Surveys: 5% (maybe)
This part really brought out the rancor of some people on Twitter. Frankly, it made some of my colleagues look like the job-thirsty authoritarians we can’t risk looking like. I openly questioned why some of our colleagues felt they were above student feedback. First, we should understand that, unlike state and local assessments, student surveys count for very little of our teacher evaluation scores. Also, putting a number on how students feel about our performance feels odd.
Generally, I’ve found that students can provide awesome feedback about the types of teachers that work for them and whether teachers actually do a good job or not, even if they can’t completely elaborate on the details. Yet, giving this feedback a number may alter the way students give feedback or, worse, how the student surveys get administered.
To summarize …
I see lots of potential for good discussion around teacher evaluation, and how we as teachers can get better feedback to improve our practice. I also don’t think putting a number to any of these pieces actually solves anything. Quantifying anything makes that thing susceptible to corruption. I’m not OK with the overemphasis on standardized testing, though it’s nice to see how my kids did sometimes. I’d like to have a master principal – a teacher of teachers in the truest sense – support my continual learning as a teacher. I’m also in favor of schools creating assessments if those assessments are better aligned to their curriculum than the city-sponsored ones are.
Overall, I’m suspicious about how these numbers get interpreted, especially when our media would love to grab these numbers and try to tell the world just how “bad” we are in their pages. We ought to consider the fact that putting numbers on anything puts us on a path where principals get tempted to rank their teachers and make assumptions about them without warrant. It picks apart school districts by assuring that top schools don’t share their secrets with anyone, and the “bottom” schools get one more label, and perhaps one more reason why they “must” be shut down instead of rehabilitated.
Perhaps if teacher evaluations meant, “We’re just getting a general sense of what this teacher is doing” instead of “We’re out to get rid of the ‘worst’ ones by using numbers,” then I’d render unto King what’s his. I just hope that people who get to see these reports come in with the understanding that teaching is hard, and the successes we have in our classroom, no matter how hard-won, are innumerable.
Jose, who’s back to writing for real …