Last week, the federal government either launched a program that could change higher education forever, or picked a fight with a giant marshmallow. It will take a while before we know which.
The U.S. Department of Education issued preliminary rules that will require states to develop a rating system for the higher education programs that train and prepare teachers for America’s classrooms. The new rules will be open for public comment for 60 days before they can be adopted and put into effect, but they have already set off stink bombs in the education fortresses around the country.
It all seems so simple and logical at the PowerPoint level. The DOE wants states to come up with a rating system for teacher training programs that measures the quality of their output — classroom teachers. It makes a lot of sense, given the importance of the work their graduates do and the amount of public resources and support provided to their training programs.
At ground level, though, this kind of measurement system is difficult to design and implement. We might wonder just how difficult it could really be, however, since organizations such as U.S. News and World Report, Forbes and Princeton Review have been publishing college rankings for years. Recently they have begun to rate specific undergraduate and graduate programs as well.
The popularity of the magazine-based rankings, government money and meddling, and the economic effects of big-time sports and student loans have pushed ranking systems onto the agenda of every college president.
Naturally, and unfortunately, colleges waste a lot of effort trying to “game the system,” reshaping campus priorities and resource allocation to fit the specific criteria and weighting systems of the magazine rankings. This isn’t all bad, but it isn’t good, either. At its worst it is dangerously close to running an educational system driven by opinion polls.
The purpose of the DOE rules is not to compete with the private sector in the college rankings business but to improve the quality of classroom teaching. Arne Duncan, the Secretary of Education, said that too many education programs set lower standards for acceptance and completion than other major fields of study do.
The result of these lower standards, he added, is that new teachers are not well prepared for what they will face in their classrooms. But that’s only the first whammy.
What so often happens is that their lower level of preparedness sooner or later affects the reputation of the college or university program where they studied. This means that the program’s graduates are not prime candidates for teaching jobs. Instead they are often hired to fill slots in the most difficult schools and classrooms — the jobs no one else wants. Not only are these graduates poorly prepared, then, but they also now face the second whammy: a more difficult job than the graduates of better training programs will face.
There will undoubtedly be some controversy and pushback on one aspect of the new rules: including student standardized test scores as part of teacher evaluations. More accurately, that provision fires up the existing controversy and puts the Obama administration on a collision course with the teachers unions, which strongly object to this use of student test results.
The fact is that student test scores are affected by things other than teachers’ initial classroom preparedness or even overall quality. It has been suggested that the effectiveness of teacher training programs might better be measured by job placement results. Those results are often used in the magazine rankings to give prospective students an idea of what their employment possibilities would be.
Job placement, though, is almost always based on survey results rather than “hard data” like standardized test scores and is affected by factors other than training program effectiveness. Post-graduation employment deserves a place in program evaluation but has definite limitations as a ranking measurement.
The overall problem with ranking systems is that they are ranking systems; they do not measure the quality of programs in any absolute sense. Therefore, no matter how much teacher training programs might improve, there will still be a bottom 20 percent. If we “cut off the air supply” of the bottom fifth by curtailing its federal grant money, the result may or may not be better teachers. We really don’t know.
The DOE’s new rules leave it to the individual states to develop this ranking system, and that is the cue for the change-absorbing giant marshmallow to enter. The state and local bureaucracies will most likely leap into inaction and pursue the “change without changing” strategy that has served them so well. The DOE’s new rules contain some good ideas and deserve better treatment than that.
James McCusker is a Bothell economist, educator and consultant. He also writes a column for the monthly Herald Business Journal.