ATAR

More Amazing Secrets of Band Six (part two ongoing until they fix the wretched thing)

EDITOR’S NOTE: 

When Simon Crook wrote The Amazing Secrets of band six last year for AARE, I had no idea it would become one of the all-time best-read posts of EduResearch Matters (now number 15 out of nearly 500, with a spike during the HSC results period). Those of you who read Amazing Secrets last year will be familiar with the important points raised in the last few days in the Sydney Morning Herald regarding Band 6s and measuring HSC success [1], [2], [3], [4]. With any luck, through the SMH, these issues will have a much wider audience and may provide incentive and leverage to key stakeholders to do something about the current state of play.

A Quick Recap

NSW is obsessed with HSC performance, particularly Band 6s. Every year, the SMH, Telegraph and other media outlets publish school ranks determined by numbers of Band 6s. The SMH also publishes the Honour Roll of those students who achieved Band 6 in each of their subjects. Yet it has already been shown you cannot compare Band 6s between different subjects, so you cannot tally total Band 6s and make a fair comparison between schools or students. In fact, some lower bands in more rigorous subjects actually contribute more to ATAR than Band 6s in less rigorous subjects. 

As previously described, the standards-based ‘Band Description’ model for the HSC was never designed for comparison between subjects. One of the creators and custodians of the HSC, Professor James Tognolini, reiterated last week that: 

“for better or worse there was no attempt to make the standards equivalent when the system was set up … in most subjects there was no attempt to align a band 6 performance in one subject with the band 6 performance in another. The purpose was to report what it is students know and can do, not make comparisons across subjects.” 

To answer Professor Tognolini’s ‘for better or worse’: it is for the worse. Whatever the original intentions, most of society assumes the bands are equivalent, that a ‘Band 6 is a Band 6’. The whole media, parental-choice and school-marketing system perpetuates this flawed metric of comparison. It is tempting to blame the media, and the SMH in particular, for their role in this mess, but they are only reporting what they are allowed to report. As I pointed out last year, and as the SMH articles highlighted, more and better comparative and value-add (growth) data should be reported to provide a fairer narrative of both school and student achievement. The CSNSW paper the SMH references makes some good suggestions in this regard, including several alternative measures that could be published:

  • Non-HSC data, such as vocational education completion rates and post-school outcomes 
  • Median ATAR (or a suitable proxy for scaled marks)
  • Growth or ‘value-add’ (as suggested last year)
  • Band distributions, “which better show the range of achievements within schools, and any shifts over time”.

In order for this to happen, someone high up needs to provide the requisite permission. 

But the issue is not solely about which school performance data can be published in the media. 

It is also time to start seriously talking about improving the HSC as a whole. I’m not talking about getting rid of the HSC, or even a massive overhaul of the assessment, but evolving it in line with the education landscape in NSW in 2022+, rather than continuing with the same model devised last millennium.

A new education landscape of accountability

In the past twenty-odd years, the status of the HSC has evolved from the local NSW matriculation qualification affecting university entry to an incredibly high-stakes commodity that can make or break a school/principal/teacher/student. NSW government high schools are now accountable to the School Success Model, with targets for increased Band 5s and 6s. Some of these school targets in particularly challenging local contexts are unlikely to be reached, setting schools and individual subjects up to fail, or unduly influencing their educational offerings (see Detrimental Effects below). Many non-government schools and school systems have similar blanket accountabilities and targets, which again set certain locally challenged schools and subjects up to fail. The HSC was never designed to be used this way, so it must evolve accordingly.

Detrimental Effects 

While the NSW HSC is a strong, established credential of quality assessment for NSW school leavers, one well-intended design feature has, over time, produced counterproductive consequences. These consequences are detrimental to teachers and students, particularly in critically important HSC subjects that are key to the Australian economy, such as the sciences and technical and vocational (STEM) subjects. The design feature of concern is the inconsistent definition and application of the HSC performance ‘Band Descriptions’ across different subjects.

There is extreme variation in the proportions of students allocated to each of the performance bands in different subjects. The 2021 band distributions by subject illustrate this starkly.

This is NOT a fair go for all. Under the current system, the science, technology and vocational subjects are essentially discriminated against. Despite this extreme variation, the band percentages are used as the primary measure of student and school achievement, including in merit lists and strategic targets. Thus bands have become the key driver of detrimental effects to teaching and learning:

  • Warped student subject choice: ‘able’ students are increasingly choosing (or being forced into) subjects with increased access to Band 6s, thereby prioritising access to Band 6s over academic rigour. This in turn negatively impacts future pathways, particularly for diverse cohorts, including female representation.
  • Reduced school subject offerings: many schools are axing critical subjects and skewing their hiring and investment towards subjects/faculties that yield more Band 6s, in order to game the system. This is further exacerbated, and even intrinsically encouraged, by the worsening skilled-teacher shortages in subjects such as mathematics and the sciences.
  • Accountabilities tied to Band 6s (see A new education landscape of accountability above).
  • Teacher performance measurement tied to Band 6s: blanket targets and teacher performance measures can have a devastatingly negative impact upon staff teaching subjects with low proportions in Band 6, contributing to the widely reported teacher shortage and retention problems in critical subjects, poor well-being and depleted morale, particularly alongside the existential threats of ‘dud ministers’.

As mentioned, the use of bands in this way was never part of the design remit for the new HSC in 2000. But over the years the performance bands have evolved into high-stakes features. High-stakes indicators must be strong, reliable and valid. The variation in Band Descriptions, and the proportions of students allocated to each band across subjects means they are no longer reliable or valid as high-stakes performance indicators. They must be open to scrutiny and reform. 

Evolving the HSC

There is one primary way to evolve the HSC: by strategic reforms to the bands. Reforming the bands needn’t be extensive, expensive, or threaten the HSC standards approach, or the ATAR. Bands could still allow for disciplinary differences, but with improved comparability and fairness. A loose band of academics and researchers and I have considered models that could be much simpler and cheaper than the current arrangements, yet strengthen the reliability and validity of bands as educational indicators. As a side benefit, they could also improve clarity on standards and exemplar material in the ‘Standards Packages’ to directly strengthen teaching and learning. We are currently making representations to key stakeholders to outline the details of these reforms.

We have used our collective expertise to develop possible pathways to reform bands and sustain the HSC into the future. Such reforms would counter the detrimental consequences of the current arrangements, mitigate emerging risks and ensure that the HSC remains a strong credential for the next generation of students in NSW. We need a fair go for all; it would be un-Australian to settle for anything less.

Dr Simon Crook is director of CrookED Science, a STEM education consultancy, and Honorary Associate at the School of Physics, University of Sydney. He works with primary and high school teachers and students around many aspects of science and STEM education, and assists the Sydney University Physics Education Research (SUPER) group with their work, including liaising with NESA regarding science syllabuses. His PhD research evaluated the impact of technology on student attainment in the sciences. Previously, Simon was a high school physics teacher.

The amazing secrets of band six (and what you should know)

New South Wales, Day 1, Term 1. A whole staff meeting to begin the school year. At some point after the Principal’s address, the Leader of Learning (or similar) starts their PowerPoint to go through the previous year’s HSC results subject by subject, particularly the number of Band 6s. In non-selective high schools, invariably the English, Humanities, Visual Arts, Music, PDHPE and even Maths teachers are patting themselves on the back and being lauded by the school leadership. This is in complete contrast to the responses from and directed towards the science teachers. Equally, when principals have their HSC performance review meetings with their own superiors, quite often it’s a tricky conversation regarding the ‘performance’ of the sciences.

What is the obsession with Band 6s? Band 6s sound elite, the very best. But the facts are that a Band 4 or 5 in a difficult subject such as Physics or Chemistry may make as big a contribution to the ATAR (Australian Tertiary Admission Rank) as a Band 6 in, say, Music, or even a bigger one (more on that later). Also, Band 6s are the only metric made publicly available and shared with the media.

Band 6s and exam results (raw and moderated, see Raw Results and Band 6s later) from NESA are not to be confused (but so often are) with ATARs, which are calculated by the Universities Admissions Centre (UAC). The ATAR is a rank, not a mark, indicating a student’s position relative to all the students in their age group in their state. It is calculated from an aggregate of ‘scaled’ marks across a student’s courses, which is totally different from ‘moderating’ by NESA in NSW (see later). Importantly (and often detrimentally) for teachers, only students are measured by ATARs; teachers are measured by Number of Band 6s. So students in subjects that scale well, such as Physics, will receive a good ATAR contribution if they perform reasonably well in that subject. But ‘reasonably well’, say high Band 4 to Band 5, doesn’t cut it for teachers measured by their Number of Band 6s. It is also interesting to note that last year a Band 1 (a fail!) in Physics could rank higher than a Band 4 in Visual Arts and a Band 3 in Legal Studies, Ancient History, Business Studies and PDHPE.

Last millennium in the UK, I was fortunate enough to see two-thirds of my A-Level Physics class attain ‘A’ grades in a non-selective government school. In NSW, it took me four years to achieve just one Band 6 across two non-selective schools. For the past few years in my work supporting schools in all sectors in the sciences, I regularly spend much of Term 1 advocating for science teachers, coordinators and principals who are feeling the heat for “poor Band 6 results”. I am constantly witnessing (and have suffered first-hand) the negative impacts of these judgements on teacher and principal morale and well-being. Student well-being is also being detrimentally impacted by similar unfair judgements and subject comparisons in Year 11 and HSC Trial (raw) exam results.

And what is the cause of all of this anxiety? The completely flawed metric of comparison that is the ‘Number of Band 6s’. So why is this overly blunt measure, that appears in all school marketing literature, on school billboards, in the Sydney Morning Herald, and is part of NSW vernacular, so flawed as a point of comparison? The answer is that, importantly, it was never intended to be a point of comparison in the first place, particularly between subjects.

Standards-based Assessment

The NSW HSC is a standards-based assessment. The whole construct of NSW HSC standards-based assessment was devised by Dr John Bennett, former Chief Executive of the Office of the Board of Studies NSW (now NESA), as part of his PhD thesis. Dr Bennett’s supervisor was Professor Jim Tognolini, now Director of the Centre for Educational Measurement and Assessment at The University of Sydney, who has been a senior advisor on educational measurement issues for every state and territory education department and examination board, including NESA. The whole premise of the standards-based model is to maintain the integrity and consistency of measuring standards year on year for an individual subject, i.e. that a Band 6 in Chemistry in 2021 is comparable with a Band 6 in Chemistry in 2020, with no mandate or mechanism to say that a Band 6 is comparable between subjects.

In this standards-based model, each subject had its own set of ‘Band Descriptors’ (now called Band Descriptions) describing typical student performance for each of Bands 2-6. These Band Descriptors were devised by the respective subject experts. They provide guidance for marking school-based assessments (although this raises an issue discussed later), but most importantly they provide the standards against which subject-specific ‘Judge Markers’ measure student HSC examination responses. This means that the Band Descriptors for, say, PDHPE were devised by experienced PDHPE teachers and are judged annually by experienced PDHPE teachers; the same can be said for Physics, or any subject. What this means is that the standards are different for each subject. If the Band Descriptors are different for each and every subject, and they are interpreted and judged differently in each subject, then we cannot use ‘Number of Band 6s’ as a comparison between subjects. It stands to reason: it is simply comparing apples with oranges.

Comparing Subjects

Following the release of the 2020 HSC results, in a quote in the SMH, Professor Tognolini reiterated

“we’ve never convinced the community that a band 6 in physics was not designed to be the same as a band 6 in biology or a band 6 in chemistry”

(This example is somewhat ironic since in the new science syllabuses every science subject has the same Band Descriptions, but the general point is being made by one of the original designers of the HSC itself). In the same article, Dr Timothy Wright, former Headmaster of Shore stated:

“it is really hard to get a Band 6 in say Chemistry and easier in say Business Studies”.

A few days prior, also in the SMH, a NESA spokesperson is quoted as saying:

“[the] number of Band 6s achieved in science courses can’t be compared with the number achieved in other courses”.

Consider the following examples comparing Band Descriptions between subjects:

Band 4

Business Studies

“demonstrates knowledge and some understanding of business functions and operations” 

Biology, Chemistry, Physics, Earth & Environmental Science, Investigating Science:

“demonstrates sound knowledge and understanding of scientific concepts”

How is “some understanding” comparable to “sound understanding”? It is not.

Band 6

Business Studies:

“demonstrates comprehensive knowledge and understanding of business functions and operations”

Biology, Chemistry, Physics, Earth & Environmental Science, Investigating Science:

“demonstrates an extensive knowledge and understanding of scientific concepts, including complex and abstract ideas”

Again, there is much greater rigour in the supposedly equivalent Band Description in the sciences compared to the non-science subject.

Band 5

Perhaps most telling of all is where the Band Description for a typical Band 5 student in any of the 2 unit science subjects is:

“applies knowledge and information to unfamiliar situations…”.

Applying knowledge to ‘unfamiliar’ situations doesn’t appear in any other subject apart from Mathematics (and there only for Band 6). If a student merely learns all of the content of a science, they cannot get above a Band 4 unless they can also apply their skills to unfamiliar situations, whereas this is not the case in any other subject.

Equity of Access to Band 6s?

In 2017, I attempted to publish an article in The Conversation entitled Battle of the Bands: HSC Physics and Chemistry bottom of the Band 6 charts (co-authored with my PhD supervisor and co-supervisor). The article was refined, approved by editors and ready to go, only to be pulled at the 11th hour for external reasons. The article looked at data for 25% of the State to determine the rate of access to Band 6s among all HSC subjects in high schools in NSW. What was important about our analysis was that rather than compare blunt total numbers of Band 6s (which are readily available on the NESA website), we made a ‘common-cohort comparison’ i.e. what was the relative access to Band 6s of individual students in one subject when compared with themselves in the same other subjects? 

The findings were staggering. Students in Physics and Chemistry (in non-selective schools) were only 26% and 27% as likely, respectively, to achieve a Band 6 as they were in the average of their other subjects. By way of comparison, students in PDHPE, Community & Family Studies and Society & Culture were twice (200%) as likely, and in Music 1 and Design & Technology two-and-a-half times (250%) as likely, to achieve a Band 6 as in the average of their other subjects. In extremis, this was a tenfold, or one order of magnitude, difference! That is hardly equitable access to Band 6s. Our findings confirmed what science teachers have been reporting for years: even though the most able students often studied Physics and Chemistry, the relatively low numbers of Band 6s awarded for the State in total, combined with the over-representation of selective schools in these subjects, left non-selective schools fighting over scraps in terms of access to Band 6s. Even in a school where, say, Physics performs well above the State average, it is still destined to be below average compared to the other subjects in the same school (by definition, some subjects, usually about half, have to be below average when compared against each other within the same school, yet we still persist with this type of in-house comparison).
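For readers curious about the mechanics, here is a minimal Python sketch of a ‘common-cohort comparison’. The student records below are invented purely for illustration (our actual analysis covered 25% of the State):

```python
# Sketch of a 'common-cohort comparison': how likely is a Band 6 in one
# subject relative to the SAME students' Band 6 rate in their other subjects?
# The records below are made up for illustration only.

from collections import defaultdict

# Each record: (student_id, subject, band awarded)
results = [
    (1, "Physics", 5), (1, "English Advanced", 6), (1, "Music 1", 6),
    (2, "Physics", 6), (2, "English Advanced", 6), (2, "PDHPE", 6),
    (3, "Physics", 4), (3, "English Advanced", 5), (3, "Music 1", 6),
]

def relative_band6_access(results, subject):
    """Ratio of the cohort's Band 6 rate in `subject` to the same students'
    Band 6 rate in their other subjects (1.0 = equal access)."""
    by_student = defaultdict(list)
    for sid, subj, band in results:
        by_student[sid].append((subj, band))
    in_subject, in_others = [], []
    for rows in by_student.values():
        subj_bands = [b for s, b in rows if s == subject]
        other_bands = [b for s, b in rows if s != subject]
        if subj_bands and other_bands:  # only students who sat both
            in_subject.append(sum(b == 6 for b in subj_bands) / len(subj_bands))
            in_others.append(sum(b == 6 for b in other_bands) / len(other_bands))
    return (sum(in_subject) / len(in_subject)) / (sum(in_others) / len(in_others))

print(relative_band6_access(results, "Physics"))  # 0.4
```

A ratio below 1.0 means students were less likely to earn a Band 6 in that subject than in their own other subjects, which is what the 26% and 27% figures for Physics and Chemistry capture.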

Gaming the System

So if you can’t compare Band 6s, and it is more difficult to get Band 6s in some subjects than others, yet schools are still being measured by their numbers of Band 6s, what can be done?

A genuine, yet morally wrong, short-term solution to maximise Band 6s is to guide students away from subjects with a low frequency of Band 6s. We know this happens already with subjects like English Standard: even though many students are better suited to English Standard, many schools push them into the higher Band 6 frequency English Advanced course. If this strategy is applied to the sciences, then schools simply stop offering the sciences. This is happening already in some quarters, not least with the compounding issue of the shortage of science teachers, let alone science-trained teachers. In the short term, this could genuinely address some of the shortfall of Band 6s in a school, but it is only a short-term solution. If a school stops offering any of the sciences, particularly the traditionally ‘rigorous’ ones such as Chemistry and Physics, then the school will ‘residualise’ as aspirational families reject such a school and attend elsewhere offering the full complement of sciences.

Raw Results and Band 6s

Further confusion and anxiety reign, with many schools, students and parents misunderstanding raw exam results and their relationship to performance bands. In every HSC exam, a student’s raw exam mark is internally moderated by NESA by subject, based on the Judges’ interpretation of that year’s exam in line with that subject’s Band Descriptions. For example, in one particular subject, a raw HSC exam mark of 76% might be moderated up to a 90, i.e. Band 6, and a raw exam mark of only 18% might be moderated up to a 50, i.e. a Band 2. Another subject might have 93% moderated to 90 (Band 6) and 52% moderated to 50 (Band 2). However, as mentioned, many people, particularly students, parents and sometimes school leadership, don’t understand this. They think that a raw exam mark directly and equally translates to a band, i.e. that a raw exam mark of 90+ is needed for a Band 6 in any subject. Following the example above, a Band 6-performing student in the first subject, with a raw mark of 76 in their Year 11 exams or Year 12 Trials, may incorrectly think they are only operating at Band 4, and the adults around them may equally think so. A statistically more commonplace example might be a student achieving only a 46% raw mark in the first subject in a school exam and interpreting that as a fail, whereas the same score may scrape a Band 4 when moderated in the HSC. This misunderstanding can lead to undue anxiety, misplaced self-deprecation, damaged self-efficacy, students dropping the wrong subjects and, yet again, flawed comparisons between subjects.
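To make the moderation arithmetic above concrete, here is a minimal Python sketch. The anchor points are the hypothetical raw/moderated pairs from the example in this post, not real NESA cut-offs, and NESA’s actual moderation is a judge-based process, not a simple interpolation:

```python
# Illustrative only: piecewise-linear mapping between hypothetical
# (raw cut-off, moderated mark) anchor points from the example above.

def moderate(raw, anchors):
    """Map a raw exam mark to a moderated mark by interpolating
    linearly between sorted (raw, moderated) anchor points."""
    anchors = sorted(anchors)
    if raw <= anchors[0][0]:
        return anchors[0][1]
    for (r0, m0), (r1, m1) in zip(anchors, anchors[1:]):
        if raw <= r1:
            return m0 + (m1 - m0) * (raw - r0) / (r1 - r0)
    return anchors[-1][1]

# Subject A: raw 18 -> 50 (Band 2 cut-off), raw 76 -> 90 (Band 6 cut-off)
subject_a = [(0, 0), (18, 50), (76, 90), (100, 100)]
# Subject B: raw 52 -> 50 (Band 2 cut-off), raw 93 -> 90 (Band 6 cut-off)
subject_b = [(0, 0), (52, 50), (93, 90), (100, 100)]

print(moderate(76, subject_a))  # 90.0: a Band 6 in subject A
print(moderate(76, subject_b))  # ~73.4: only a Band 4 in subject B
```

Under these made-up anchors, the same raw mark of 76 lands at 90 (Band 6) in one subject and about 73 (Band 4) in the other, which is exactly the confusion described above.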

Where to from here with ‘Number of Band 6s’?

So comparing the ‘Number of Band 6s’ between subjects is completely untenable. Does this knowledge help principals? Right now, not in the slightest. They don’t need me telling them the pressure they are under for Band 6s. What we need is for all stakeholders to spell this out publicly and commit to no longer comparing subjects (and ideally schools) by the numbers of Band 6s. This has to start with the media outlets responsible for publishing such league tables and contributing to this statewide obsession and very parochial NSW vernacular in the form of ‘Number of Band 6s’. Along with the media, we need NESA (not just anonymous spokespersons), all school sectors, principals’ associations, parent bodies, teacher associations and universities to formally declare and abide by not publishing, advertising or comparing between subjects (and schools) using Number of Band 6s. Only by formally deprecating ‘Number of Band 6s’ can we get to a point where we have “convinced the community”. As journalists associated with this blog told me when I discussed this with them: “it’s the only metric we have. We’ve asked many times for this to change. More diverse data would stop the league tables.”

Performance Measurement

However, schools and school systems still need to measure performance. Moving beyond Number of Band 6s should not be a problem. During these COVID times, we have finessed our metrics from the blunt, not-so-useful ‘Number of Cases’ to more pertinent measurements such as ‘Cases in the Community’. With 75,000+ students sitting HSC exams annually, there is more than enough data to measure statistically significant ‘value-add’ performance of every school, subject and even teacher (if class sizes are large enough). Mathematically, this is achieved by ‘multiple regression analysis’, controlling for other variables such as gender, socioeconomic status, school type etc. (see an example of multiple regression analysis here). Such in-house data sets are already in use: in the Department of Education with Scout, in Catholic schools with the CSNSW HSC Analysis Project, and in independent schools with various analysts. In Victoria, there has been a long-established effort to celebrate value-add through the Schools that Excel lists, though what is suggested above would be far more finessed.
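To illustrate the idea (this is a toy on synthetic data, not the Scout or CSNSW methodology), value-add can be read off as the residual left over once the control variables are regressed out:

```python
# Toy value-add via multiple regression on SYNTHETIC data. The variables,
# school effects and coefficients below are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 400
prior_score = rng.normal(60, 10, n)          # a prior-attainment measure
ses = rng.normal(0, 1, n)                    # socioeconomic index
female = rng.integers(0, 2, n).astype(float) # gender indicator
school = rng.integers(0, 4, n)               # four hypothetical schools
school_effect = np.array([0.0, 2.0, -1.5, 4.0])[school]  # 'true' value-add

# Simulated HSC-style outcome: controls plus the school's contribution
hsc = 0.8 * prior_score + 3.0 * ses + 1.0 * female + school_effect \
      + rng.normal(0, 3, n)

# Regress the outcome on the controls only (school identity withheld)
X = np.column_stack([np.ones(n), prior_score, ses, female])
coef, *_ = np.linalg.lstsq(X, hsc, rcond=None)
residual = hsc - X @ coef

# A school's mean residual estimates its value-add relative to expectation
for s in range(4):
    print(f"school {s}: value-add estimate {residual[school == s].mean():+.2f}")
```

Each school’s mean residual approximates its contribution over and above what the controls predict; a real analysis would add error margins and many more covariates.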

If the expectation is there for public comparison of all, the data is there, if all schools are willing to share. But there’s the catch. Given privacy laws covering individual student information, and copyright over the data, no detailed whole-state comparison (i.e. across all three sectors) of the kind suggested above can be published unless everyone signs up to it, which is unlikely. But does that really matter?

Publishable Performance Metrics

Stakeholders need to decide: are they willing to share all of their data so that true ‘value-add’ measurements (with error margins) can be reported fairly? Or is everyone happy to have league tables that rank schools only within individual subjects, taking into account all bands (Number of Band 6s alone is no longer an option)? Or do we even need public ranking/comparison at all? The first option, as mentioned, is unlikely to be agreed to, and would require a level of statistical numeracy across society that doesn’t exist, as was evident in dealing with COVID. The second option is completely achievable: schools could be ranked within individual subjects. This would eradicate the current inaccurate comparison between subjects, but would continue to perpetuate the anxiety induced by comparison between schools. So ultimately, do we even need to publicly publish relative performance metrics at all if they are essentially meaningless and harmful, or can we just keep these in-house to help monitor progress and improve our individual education of students? Either way, we must stop comparing subjects using the Number of Band 6s.

Dr Simon Crook is director of CrookED Science, a STEM education consultancy, and Honorary Associate at the School of Physics, The University of Sydney. He works with primary and high school teachers and students around many aspects of science and STEM education, and assists the Sydney University Physics Education Research (SUPER) group with their work, including liaising with NESA regarding science syllabuses. His PhD research evaluated the impact of technology on student attainment in the sciences. Previously, Simon was a high school physics teacher for 15 years.

ATAR is a university marketing tool: 4 reasons to stop obsessing about it

The recent ‘revelation’ that Australian universities are not sticking to their advertised course cut-offs has caused a ruckus. Some even see it as a scandal: universities are admitting students with much lower (gasp) than advertised Australian Tertiary Admissions Ranks (ATARs), even into ‘top’ courses.

I think it is time to look at some facts around ATARs. I have four important ones for you. I believe everyone concerned about or discussing ATARs should know these facts.

Fact 1: Most university place offers are not made on the basis of the published ATAR.

Around two-thirds of the university offers made in Australia each year go to students who do not have an ATAR. Almost 50 per cent of new university students are mature-age, international, vocationally qualified or will have come to university through a myriad of alternative entry schemes.

Direct entry to university is growing exponentially at some universities, with the ATAR bypassed altogether. Direct entry, mature-age and international students, and students who come through VET pathways make up the majority of the Australian university cohort.

In my own state, Victoria, most courses that make offers to students through the Victorian Tertiary Admissions Centre (VTAC) do not publish ATARs for those courses. Yes, that’s right, most courses. Of the minority that do publish an ATAR for a course, two-thirds made more than 30 per cent of their offers to students with lower ATARs than the published figure.

All universities award ATAR bonus points. These extra points and how they are determined are not regulated in any way, nor are they usually transparent. Universities can award bonus points as they wish and for whatever they wish. This furtive awarding of points is disguised as recognising “leadership”, “community-mindedness” and other qualities of applicants.

Fact 2: The ATAR is not a score.

The ATAR is a numerical, relative ranking derived from senior high-school performance and a complex series of scaling and other adjustments. In a relative ranking system, students in one year’s cohort are ranked against each other.

An ATAR of 49 does not mean a student has failed; it means the student is ranked at the 49th percentile of that year’s cohort in terms of their academic performance, as measured and scaled according to a complex series of mechanisms. In a cohort of, say, 45,000 students in one year, a student with an ATAR of 49 has an academic performance equal to or better than that of around 22,000 students that same year. Hardly a failure.

And similarly, no matter how bright they are, nor how hard they or their teachers work, no more than ten per cent of students’ ATAR rankings will be in the top ten per cent of rankings. That’s how ranking works.
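A toy Python sketch of how a pure rank behaves (the real ATAR first applies complex inter-subject scaling, and the cohort here is invented):

```python
# Illustration of a percentile-style rank: an 'ATAR' of 49 simply means
# 49 per cent of the cohort performed at or below that level.

def percentile_rank(score, cohort_scores):
    """Percentage of the cohort scoring at or below this score."""
    at_or_below = sum(s <= score for s in cohort_scores)
    return 100 * at_or_below / len(cohort_scores)

cohort = list(range(1, 45001))           # 45,000 students, distinct aggregates
student = cohort[22049]                  # the student 22,050th from the bottom
print(percentile_rank(student, cohort))  # 49.0
```

However the cohort performs in absolute terms, exactly the same fraction must land below any given rank, which is why no amount of hard work can put more than ten per cent of students in the top ten per cent.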

Fact 3: The ATAR is linked to socioeconomic status.

The evidence indicates that ATAR scores are correlated with socioeconomic status and social capital. To put it simply, the higher the socioeconomic status and capital of the student, the higher the ATAR is likely to be, and vice versa.

For example, poor people in rural areas generally have lower ATARs than rich people from metropolitan areas. But poor people are not stupid and do not compromise educational standards or outcomes. They just have less of the social and cultural capital that counts for school education outcomes (and, therefore, ATARs). No matter how tempting it is to think it: an ATAR rank is not a measure of intelligence, motivation, diligence, aptitude or ability.

Fact 4: The ATAR is now used primarily as a marketing tool to an under-informed public

The ATAR was more important when the supply of university places was limited and demand for these exceeded supply. Cut-offs were a useful strategy for allocating too few places. However, in our current demand-driven system of university places, where there are few limits on the number of students a university can enrol, the ATAR is used primarily as a marketing tool. Universities rely on folk believing that the higher the ATAR, the better the quality of the course and possibly, the better the university. But what is it better at?

Many assume, understandably but incorrectly, that the higher the ATAR needed to get into a course of study, the “better” the quality of the course. But the ATAR has no correlation with objective measures of course quality. The simple truth is that the higher the ATAR for a course, the more popular the course is among school leavers.

The public are currently being misled by what is essentially a clever marketing system using ATARs as proxies of quality of courses and institutions. It needs to stop and Peter Shergold, the head of the federal Higher Education Standards Panel, has recently announced that the Panel will begin to increase transparency around this issue.

It is time to stop obsessing about entry standards and start focusing on exit standards

What we should be focused on as a society is what happens to students, regardless of their entry method, during their university study and after graduation. Many students who have very high ATARs come unstuck at university when the intensive support and guidance, to which they had become accustomed, falls away.

As Tim Pitman from the National Centre for Student Equity in Higher Education has recently emphasised, the point of university education is not to validate entry standards but to educate, value-add and ensure high quality outcome standards. We all know that elements of effective university education and high quality learning outcomes go far beyond the supposed standard at which the students enter the university. Teaching quality, the curriculum, learning support and student support are just some of the most obvious.

All universities must put in place proactive support structures, processes and programs to ensure all the students to whom they give access can meet their potential and have the highest chance of success possible.

I often ask: When a university graduate seeks employment, how many sensible employers will ask them to reveal their ATAR from all those years ago? On the other hand, how many will be interested in what the graduate knows, can do, and can contribute?

The main priority should be to focus on exit standards and outcomes, where students end up, not where they started. If we restrict access to university only to those guaranteed to succeed based on previous education scores, we block a life-changing opportunity for scores of thousands of people every year.

It’s important to keep educating a wide range of students

University education is now open to more students than in the past when it was just available to white, upper-class men. This is good for students, their futures, their families, the economy and society. Successive governments of both sides have encouraged and supported increased access to university education for a larger number and broader range of people. The alternative is to have fewer people educated at the highest levels and subsequent reduced capacity to lead and innovate in a rapidly changing world.

Case studies at my own universities show that despite starting with very low ATARs, those who go on to successfully complete courses will graduate as qualified professionals and subsequently contribute to the economy, their communities and society in enhanced ways.

What matters most about university education is the quality of the education offered and the capacity and knowledge of graduates and whether they can do what governments and society expect of them, having had the privilege of access to education at that level.

If the purpose of university education is to contribute to an educated society, that treats its members and members of other societies with dignity, respect and kindness, while simultaneously advancing economic, environmental and other fronts, then we should unburden ourselves of outdated and inaccurate notions about the power of a single number.

I believe we need to focus more closely on how to facilitate success for the many, rather than the few.


Professor Marcia Devlin is a Professor of Learning Enhancement and Deputy Vice-Chancellor (Learning and Quality) at Federation University Australia.  @MarciaDevlin