A Future Too Big To Fail: Using Corporate Thinking Corrupts The Classroom

Jose Vilson

A few years ago, I read an article in Wired Magazine detailing the events that led up to the Market Crash of 2008-2009, pinning much of the blame on an elegant and seemingly infallible formula created by the world-renowned mathematician David X. Li. Recounting the events that led up to the crash, the Wired article reads like every other well-intentioned idea: the creator thinks he or she is solving a problem, assumes no one will tamper with the few good assumptions within the creation, and hands it to the people entrusted with its longevity.

Only for it to fall into the wrong hands and get adulterated for other, more vile purposes.

The formula Li created assumed, essentially, that the market would continue to grow, and that if it didn't, it wouldn't lose too much ground. It also meant that those who abused the formula, and didn't understand the math behind it, would continue to push it to validate selling and reselling pieces that essentially had no value or ground behind them.

That’s how VAM (value-added modeling) threatens to dice the teaching profession.

Proponents of VAM remind me of the proponents of the market's wild success before the market crash. The mathematicians may have had the best intentions for using test scores as a means of determining how much students are learning. The value-added model tries to control for external (socioeconomic) factors, which differentiates it from a strictly evaluative model (taking the average of all test scores as is). It also tries to emphasize the growth a student makes from year to year, another boon for proponents of VAM.
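To make the "growth" idea concrete, here is a deliberately toy sketch of what a value-added calculation looks like in spirit: predict each student's score from their prior year, then credit the teacher with the average gap between actual and predicted. The `expected_growth` parameter and the whole setup are my own simplification, not any district's actual model, which layers on far more (and far shakier) statistical machinery.

```python
# Toy illustration of the value-added idea (NOT any district's actual model):
# predict each student's score from their prior-year score plus an assumed
# expected growth, then average the gaps between actual and predicted scores.

def value_added(prior_scores, actual_scores, expected_growth=5.0):
    """Average of (actual - predicted), where predicted = prior + expected_growth."""
    gaps = [actual - (prior + expected_growth)
            for prior, actual in zip(prior_scores, actual_scores)]
    return sum(gaps) / len(gaps)

# A class whose students beat the assumed growth shows positive "value added".
print(round(value_added([60, 70, 80], [68, 77, 85]), 2))  # prints 1.67
```

Even in this stripped-down form, the result hinges entirely on the assumed growth target; change that one number and the same teacher flips from "adding value" to "subtracting" it.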

What we have then are governors, mayors, newspaper heads, corporatists, and education celebrities (who have come under fire for massive erasures at the schools they once supervised) trying to tell the public that this formula is the only way by which we can hold teachers accountable. Policies like No Child Left Behind and initiatives like Race To The Top both implicitly and explicitly push schools to use student test scores as the most accurate way to evaluate teachers in the classroom.

Yet it just doesn't work. By many accounts from statisticians, financiers, and other mathematicians invested in the question, students' test scores used this way are wildly unreliable across the board and riddled with error when treated as a continuous measure of teacher quality.

What really shocked me was the ridiculous margin of error: 35 percentile points over 4 years, 11 over 10 years. As a measure of the teacher, it means that those who make it to their 4th year of teaching (if they do) at the 47th percentile on their Teacher Data Report may be a terrible teacher at the 12th percentile, an excellent teacher at the 82nd, or anything in between. If they make it to their 10th year and climb to the 53rd percentile, they may still be a below-average teacher at the 42nd percentile, a pretty good teacher at the 64th, or anything in between. Let us guesstimate here and say that the margin of error by the 20th year is 5 points, the same margin of error as many major political polls. By then, frankly, where the teacher falls won't matter much, because the teacher may be ready to move on to another career or retire.
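The arithmetic above is simple enough to check by hand, but spelling it out makes the absurdity plain. The sketch below assumes the margin of error is symmetric around the reported percentile, which is my simplification of how these reports present uncertainty:

```python
# The intervals implied by the margins of error quoted above:
# 35 percentile points after 4 years, 11 points after 10 years,
# assuming a symmetric band around the reported percentile.

def percentile_interval(reported, margin):
    """Return the (low, high) percentile band, clamped to 0-100."""
    return max(0, reported - margin), min(100, reported + margin)

print(percentile_interval(47, 35))  # prints (12, 82): "terrible" to "excellent"
print(percentile_interval(53, 11))  # prints (42, 64): "below average" to "pretty good"
```

A rating instrument whose honest answer after four years is "somewhere between the 12th and 82nd percentile" is not distinguishing good teachers from bad ones; it is shrugging.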

How is using VAM a way to help a teacher as they grow in the profession with staggering numbers like that? For that matter, how does that help the school as a whole? It doesn’t.

It's the equivalent of someone showing up a few hours early or late when you asked them to meet you at 8pm, of a plane flying from Chicago to DC landing anywhere between Massachusetts and North Carolina (give or take), or of Ross Perot possibly winning the majority of votes in the 1992 presidential election, or no votes whatsoever. (He won 18.9% of the vote that year.)

It's also worth noting that many teachers don't stay in the exact same place, and neither does the neighborhood in which they work. With populations fluctuating in the places using these formulas, we can't rely on the same type of students staying there. How do we know that teachers aren't simply teaching students to master test-taking methods, or aren't getting "help" from certain individuals?

With dangerous elements like VAM, we're practically begging teachers to teach to the test, narrow the curriculum, and hope the child had breakfast that morning. We also have to limit creativity and ensure students get the right answer on a particular question instead of on all types of questions involving that learning standard. I believe our solution lies in multiple forms of assessment for teachers and students, mainly formative, without repercussions or punitive scare tactics. If we want real professionals, we should find more professional means of treating everyone in this business we call "teaching students."

Not that it’s a corporation. It’s a lifestyle for us.

Proponents of VAM couldn't possibly have read up on what happens when formulas are used for things they weren't intended to measure and still think they're benevolent. Public education is a future that's too big to fail.

Jose, who read the briefing papers and statistics, so you could get back to lesson planning …

P.S. – For more on this, please check this paper by Darling-Hammond, Ravitch, Baker, et al. …