When it comes to deciding where to send their kids to school, many parents do a bit of research. Like high school students choosing where to apply to college, they will familiarize themselves with the landscape by visiting, interviewing, and reading up on what the school has to offer. At some point in the process, they will probably happen across school rankings.
The current fixation with rankings is easy to understand. Rankings are quickly digestible, seemingly objective data.
But are rankings as clear and reliable as they seem?
“There’s no direct way to measure the quality of an institution — how well a college manages to inform, inspire, and challenge its students. So the ‘U.S. News’ algorithm relies instead on proxies for quality — and the proxies for education quality turn out to be flimsy at best,” wrote author Malcolm Gladwell.
Brian Yager, head of school at San Antonio’s Keystone School, agrees with Gladwell. Keystone was recently ranked the most rigorous private high school in the United States by The Washington Post. The ranking system, formulated by Post education columnist Jay Mathews, measures the number of AP tests taken against the size of the senior class.
Rather than doing what most would do, capitalizing on the good press and letting the PR team take over, Yager took public issue with the ranking system in his newsletter on Keystone’s website.
“Unfortunately, we are sheepish about the Post’s placement of Keystone at the top. For, the ranking is based on data that is an extremely poor indicator of school quality or challenge, and, frankly, the methodology behind the rankings is extremely troubling,” Yager stated.
While Yager agrees that Keystone is one of the most rigorous schools in the country, he contests Mathews’s criteria for measuring that status, and suggests his own, the “Keystone Index,” which measures how well students do on the tests, not simply how many took them.
Mathews responded, standing by his system on all counts.
The veteran education writer explained that his goal was to encourage schools to increase rigor. It wasn’t the AP scores that mattered so much as the fact that students were taking the tests, choosing the more difficult road.
“He is arguing for the standard school measure – average test scores. I created this list because I thought that was exactly wrong. I had written a book about Garfield High in East LA that had opened AP to all students, and found many of those average and below average kids did well, and proved the worth of exposing AP to as many as possible.
“Then I did a book about Mamaroneck High in an affluent NYC suburb, where the school barred many average students from taking AP if they had a bad grade in that subject in a prior course. This ignored the power of motivation, and the fact that kids mature and realize their junior years that they really want to stretch themselves. The hero of that book was the girl barred from AP US history who took the courses on her own, getting the homework from her friends, and took the test and got a 3. How stupid to bar kids like that.
“So the list is designed to make schools think twice before they restrict access, and I think it has had some good effect on that,” stated Mathews in an emailed response to Yager’s newsletter.
The use of ranking systems to motivate educators, however noble the goal in this case, gets at the heart of what troubles Yager. His concerns about rankings are varied, from the role rankings have come to play in the educational experience to the data itself.
Yager is concerned that the importance of being ranked incentivizes schools to game the system at the expense of actual education. Most ranking data is self-reported by the institutions being ranked. Yager has methodological concerns about self-reporting, but Mathews claims that this has not been a problem.
“I confess that when I started the list in 1998 (not ever dreaming I would still be doing it in 2015, just after I turned 70) I had the same concern as Mr. Yager does. But now I can say with great confidence that the numbers we get from the schools are true and accurate. We have never had a complaint about a school providing inaccurate figures to pump up its ratings,” Mathews stated.
Nonetheless, in Yager’s view, even if schools’ answers are honest, they know what they need to do to beef up their data. In the case of the Washington Post “Most Rigorous” list, they would need to sign more kids up to take AP tests, even if they knew those kids would fail.
In more troubling cases, Yager claims that it has led universities and colleges to court applications from under-qualified kids so that they can keep their acceptance rate low. Low acceptance rates are seen as a sign that the school is selective, and therefore challenging. So when high school sophomores are getting letters saying, “We want you to apply,” it’s true…but it doesn’t mean the school anticipates accepting them.
This sort of rankings manipulation can be counterproductive at worst, but even at best it’s a distraction, according to Yager. Schools feel that they have to chase high rankings because people pay attention to them.
Another common criterion, cited by Gladwell, is the resources available to students. This seemingly relevant and positive criterion contains some troubling data points, such as faculty salaries and the size of the school’s endowment. So wealthy schools rank high.
The rich get richer in other ways too, while they play the ranking game, according to Gladwell. His article cites a study that calls into question the “reputation” criteria that plays a large role in rankings. In the study, a group of lawyers were asked to rank 10 law schools in order of quality. The list included big names like Harvard, and smaller schools like John Marshall. It also included Penn State, which, at the time, did not have a law school. Still, on the reputation of the wider university, the non-existent Penn State law school was ranked near the middle by many of the lawyers.
The logic goes that if your institution appears near the top of one set of general rankings, then people will assume that each of its departments, colleges, or schools is also deserving of top-tier status. This is the classic fallacy of division: assuming that what is true of the whole is true of all its parts.
Yager’s philosophy, which permeates the halls and instruction at Keystone, is that education is about growth, which is hard to measure. As is enjoyment.
“Our secret sauce is our culture that enjoys learning,” Yager said.
Yager can’t offer much data regarding the unique intellectual development taking place at his school, because most of the proof is anecdotal.
However, when parents ask their child’s teacher how the student is doing in class, they are looking for anecdotal answers. They are looking for stories of how a student worked hard to overcome a hurdle. They want to know what subjects and topics seem to stir their child’s love of learning. They want to know if their student is more engaged and involved in class than she was at the beginning of the year.
Anecdotal evidence is what you get when teachers are involved with their students. Keystone produces a lot of it, and Yager takes note. Meanwhile, the quantitative data coming out of Keystone, where students do take AP and standardized tests, seems to be taking care of itself.
*Featured/top image: Keystone students in the lab. Courtesy photo.