The Conversation | 15 June 2013
Since 2007, the Australian government has been piloting an aptitude test for prospective university students.
The test is meant to help universities identify students who may have the ability to undertake tertiary education but whose results at the end of high school don't show it.
The result is uniTEST, which claims to evaluate quantitative reasoning, critical reasoning, and verbal and plausible reasoning.
Before looking at the results of the pilot, it's worth quickly explaining what supplementary tests are and why universities use them.
Why more tests?
Supplementary tests, as the name suggests, help universities choose their students over and above other assessment mechanisms, the main one of which in Australia is the Australian Tertiary Admission Rank (ATAR).
There are various reasons why supplementary tests are used. One is to improve student retention and completion rates. For example, the US-based Educational Testing Service recently rolled out its SuccessNavigator assessment tool, which it claims will help universities both select the students most likely to succeed and manage those students throughout their degrees. It aims to do this by measuring four factors “critical” to a student’s success: academic skills, commitment, self-management and social support.
Another reason to use a supplementary test is to add a finer layer of detail when, for example, multiple students have the same entry score but there are not enough places for all of them. In Australia, the Undergraduate Medicine and Health Sciences Admission Test (UMAT) helps universities select students for highly competitive medicine, dentistry and health science degree programs.
But sometimes these tests are employed to introduce bias, for all the wrong reasons. For example, in the early 20th century the elite US Ivy League colleges used subjective assessments to test applicants for the “right stuff”. In fact, their tests for “leadership”, “character” and “personality” were designed to exclude (mostly) Jewish students, who were taking more and more of the available places purely on academic merit.
Improving access – the uniTEST results
Last month, researchers from the Australian Council for Educational Research (ACER) and the University of Melbourne released results of a pilot study of uniTEST.
Whilst uniTEST is not specifically designed for disadvantaged students, the researchers evaluated whether it could be used to increase enrolments of students from low socio-economic backgrounds, by giving them an alternative way of proving their merit. In the paper, the researchers argue the test has potential in this regard. However, there are qualifications.
The study was conducted by the same ACER researchers who were commissioned by the Australian government to create the test. Now, ACER is a leader in the field of educational assessment, the researchers have subjected their findings to peer review, and the data and information provided support the authors’ findings. Nonetheless, this creates a potential conflict of interest, and with so much at stake it is important that the test receive additional, external validation.
The sample group was also relatively small. Only six universities participated and 1,440 students sat the test. This means the results are not generalisable. Nonetheless, they are of interest and reinforce the argument that a student’s ATAR is neither the only nor necessarily the best way of selecting students.
However, the analysis still showed a strong correlation between students’ ATARs and their aptitude test scores. So if uniTEST merely validates the ATAR in the majority of cases, the costs of administering the test might not justify the very small number of disadvantaged students it ultimately benefits.
Finally, the universities involved in the study used the test scores to select students only after the first (i.e. majority) round of offers had been made. It can therefore be argued that disadvantaged students were largely competing for leftover places, rather than improving their chances in the primary round.
Should universities use them?
Absolutely. A well-designed test can help adjust for the long-term disadvantage many students experience in their formative years of education. Its success relies on three factors.
First, its form must follow its function: as the researchers acknowledged, uniTEST is not designed specifically to combat educational disadvantage. Second, the test must be valid and reliable, and its internal logic must be transparent. Finally, the test must be regularly audited to ensure it is still working properly, because over time student profiles change, people learn how to game the system and processes become distorted.
As the researchers of this study observed:
…there have been no national policy initiatives aimed at improving admissions processes to facilitate entry of a more diverse student population.
I am not sure yet whether uniTEST is the answer to this problem – but it’s encouraging that efforts are being made to improve student selection processes.
Tim Pitman does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.