UNSW News Room | 20 March 2013
With universities paying about $100,000 a year for full-time managers dedicated to liaising with ranking agencies, and with “clever reporting” rather than a surge in knowledge said to explain the surprisingly good results of the Excellence in Research for Australia quality audit, UNSW dean of science Merlin Crossley writes that the numbers used to measure performance in educational institutions generate a lot of discussion – and angst – because of their obvious imperfections.
National Assessment Program – Literacy and Numeracy scores in schools don’t measure creativity; Australian Tertiary Admission Rank cut-offs for university courses don’t reflect future potential; and using student feedback to rate teaching is regarded as little better than running a popularity contest.
Excellence in Research for Australia quality assessments and journal ranking scores do not recognise locally important research; journal citations and impact factors vary wildly between disciplines; and world university rankings are backward-looking and disadvantage newer institutions.
Then there are the collected metrics of the controversial MySchool and MyUniversity websites, which gather imperfect measures into tables, apparently compounding error and threatening the whole system.
There seems to be a general anxiety that people will blindly use these flawed but interesting numbers. In contrast, no one seems to worry about numbers in sport or other endeavours. Every game involves scoring and many sports have a ladder of some sort – it’s all good fun.
Films, hotels and restaurants are measured by stars and chefs’ hats, songs are ranked in charts, and books appear on bestseller lists. We all know these systems are imperfect but minimal time is spent campaigning against them.
Why is there so much discussion about educational scores? Numbers in education bother people a lot because they are seen as indelible measures of individual worth. Several years ago, when Mount Druitt High School in Sydney didn’t get a single student with a Higher School Certificate score of more than 50, it found itself on the front page of The Daily Telegraph and there was an outcry. The fear was that every student from the school – present, past and future – would suffer because of its poor reputation.
Conversely, students from Oxbridge or the US Ivy League universities may benefit from the quality aura that surrounds their degrees.
Judging individuals via institutional generalisations is a form of prejudgment or prejudice and we should be careful to oppose it. But does that mean we should throw out all educational metrics?
There is a great temptation to do so. Academics pride themselves on critical thinking and strive to identify even the tiniest flaws in otherwise useful metrics. But if everything that is imperfect were discarded our universe would be an empty place.
I don’t think it is possible to suppress scoring systems. With a global education landscape connected to feedback via the internet, the information will not go away. Should we instead embrace the new reality? Probably.
First, the different systems for measuring educational quality will almost certainly improve. We have seen this with university league tables and with research assessment exercises.
Second, the old argument that competition drives performance can hold true. It also can drive differentiation as institutions strive to be top in different disciplines or regions.
Third, exposure to an external and independent judge – be it the Academic Ranking of World Universities or the Rate My Professor website – has the advantage that low quality cannot be hidden.
Most important, numbers can break through prejudice. A refugee child with a high ATAR, or a student from a new university with a top-ranking astronomy department, can benefit considerably if the numbers are given their proper weight in their proper context.
Australia is justly proud of its universities and appears to be taking significant measures to ensure quality is maintained. The new Tertiary Education Quality and Standards Agency is building up an extensive workforce to carefully monitor quality across the entire tertiary sector.
These operations may be useful in maintaining minimum standards but won’t push quality at the top end.
It is often said that, while a lot of time and money can be spent on satisfying TEQSA, the more important thing is to do well in the Times Higher Education world league tables. What’s more, this latter exercise costs the Australian taxpayer nothing.
There are plenty of numbers out there. If they are carefully interpreted, they can positively drive performance. The recent low school scores in Australia are already focusing a much-needed debate on how best to teach science in our schools and have sparked a new discussion on entry scores for teaching. Chief Scientist Ian Chubb’s observation that Australia’s research is falling behind its peers in Europe and North America in terms of citations also demands policy action.
It is a truism that every metric has its limits, but when a doctor takes the temperature of a patient, or someone at the weather bureau takes the temperature of the planet, it is much better to acknowledge the facts and act than to dismiss the inconvenient truth because only one aspect of a complicated system has been measured.
This opinion piece first appeared in The Australian.