Unpicking the A-Level Algoshambles

Acorn Aspirations
Sep 22, 2020


Hi Listeners!

It’s Anisah Rahman, your occasional Co-Host and full-time Social Media Manager for the Teens in AI Podcast. I will also be providing commentary through blog posts on topics that are highly relevant to our podcast episodes.

In light of our first upcoming podcast episode on AI and Education, today I am unpicking the A-Level Algoshambles with you!

In March, the government announced that all national exams, including SATs, GCSEs and A-Levels, had been cancelled this year in light of the COVID-19 pandemic. Ofqual, England’s exams regulator, decided to use a statistical algorithm to grade students instead.

How did the A-Level algorithm work?

The aim of the algorithm was to standardise the centre-assessed grades submitted by teachers, in order to avoid what Ofqual called an “unprecedented, implausible” rise in pass rates and top grades. The algorithm relied on three key pieces of information: each school or college’s exam results over the previous three years, the school’s postcode, and the rank order of pupils based on the centre-assessed grades. The algorithm understandably caused stress among state school students: by factoring in school postcodes and historical averages, the government and Ofqual effectively created a classist system that downgraded state school students more heavily than their grammar and private school counterparts, through no fault of their own.
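To make that concrete, here is a minimal sketch of how rank-based standardisation can work. This is my own illustration, not Ofqual’s published model: the pupil names, the grade shares and the simple quota rule are all assumptions. The idea is that the school’s historical grade distribution fixes how many of each grade are available, and pupils receive them in rank order.

```python
# Illustrative sketch only: NOT Ofqual's actual model.
# Assumption: a school's historical grade distribution is mapped
# directly onto this year's pupils in teacher-submitted rank order.

def standardise(ranked_pupils, historical_distribution):
    """Assign grades from the school's historical grade shares,
    walking down the pupils in rank order (best first)."""
    n = len(ranked_pupils)
    grades = {}
    i = 0
    for grade in ["A*", "A", "B", "C", "D", "E", "U"]:
        quota = round(historical_distribution.get(grade, 0.0) * n)
        for pupil in ranked_pupils[i:i + quota]:
            grades[pupil] = grade
        i += quota
    for pupil in ranked_pupils[i:]:  # anyone left over by rounding
        grades[pupil] = "U"
    return grades

# Hypothetical school that has produced no A* in three years:
# even its top-ranked pupil cannot be awarded one.
pupils = ["Amira", "Ben", "Chloe", "Dev"]  # rank order, best first
history = {"A": 0.25, "B": 0.5, "C": 0.25}
print(standardise(pupils, history))
# {'Amira': 'A', 'Ben': 'B', 'Chloe': 'B', 'Dev': 'C'}
```

Notice that under a rule like this, a top-ranked pupil can never receive a grade their school has not historically produced, no matter what their teacher submitted. That capping effect is at the heart of the lessons below.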

There are several key lessons that can be taken from the whole debacle:

1. Building Transparency. The government must be more transparent in developing and using algorithms. Distrust towards the government was rife amongst students, parents and teachers because the government failed to provide the public with enough information about the grading algorithm to assess its implications before the truly disastrous A-Level results. Failures like the A-Level algorithm will undoubtedly set back the drive to deploy algorithms in public sectors, including education. It is now abundantly clear that the government must promptly disclose the details of any decision-making algorithm it employs in the future, to avoid causing grave consequences in people’s lives again.

2. More Scrutiny. Algorithms are easily susceptible to bias, which makes them legally dubious because of the potential to discriminate against certain groups of people. Going back to the A-Level Algorithm Chaos, what was truly alarming was the absence of external independent scrutiny. Ofqual, the regulator responsible for moderating exam results, which falls within the remit of the government, ignored offers of expert help in developing the algorithm after statisticians refused to sign a highly restrictive non-disclosure agreement. Algorithms are already technically complex, so having fewer fresh eyes examine one makes bias more likely to slip through unchecked. The A-Level Algorithm Chaos highlighted a need for rigorous, expert and independent scrutiny of government algorithms before they are deployed, and it has even raised the question of whether we should shift towards external moderation instead. To hold our government accountable for these automated decisions, we also need robust legal and policy guidelines and structures to support this scrutiny.

3. Adequate Redress. Decision-making algorithms will inevitably get some cases wrong. If a decision-making algorithm has produced an incorrect result, the affected individual must have access to effective redress to rectify the error. The redress scheme the government employed for exam results this year, however, was problematic and unfair on the students of 2020. They could not even appeal their own grades; only their schools could challenge results on their behalf, and only if the school judged there was merit to an appeal. Initially, the appeals process would have cost individuals hundreds of pounds at the least, until a wave of public criticism led the government to make the process free instead.

4. Latent Algorithm Bias. The UK education system was unequal to begin with; the A-Level algorithm debacle only served to highlight this. The algorithm was biased and widened the educational attainment gap between state school students and private/grammar school students. By using schools’ existing performance data from the last three years, the A-Level algorithm entrenched and exacerbated existing social and educational inequalities. There were cases where high-performing students from under-performing schools were downgraded from an A* to a B, a C, a D and even a U. Students from lower socio-economic backgrounds essentially had their true grades capped by the results of previous students over the last three years, while students at higher-performing schools were awarded the higher grades achieved by their predecessors.

5. Socio-economic Algorithmic Bias. Because private and independent schools tended to have smaller classes, and thus fewer entries per subject, such schools were more likely to be awarded the centre-assessed grades submitted by their teachers, or those grades were given a stronger weighting within the algorithm (see the sketch after this list). This amplified socio-economic bias, because students from higher socio-economic backgrounds received higher, ‘teacher-inflated’ estimated grades. In some cases, according to the Guardian, the algorithm gave an 8 (equivalent to an A* at GCSE) to private and independent school students whose teachers had predicted them a 4.

6. We Cannot Blame the Algorithm. The government, Ofqual and various other politicians have chosen to blame the A-Level algorithm for the unjust results students were given. But is it really the algorithm’s fault? No, it is not. The fault lies with the people who designed and developed the algorithm; the algorithm just did what it was told to. Remember Microsoft’s racist chatbot, Tay? Bias keeps being built into automated systems, perpetuating biased results, because the teams developing these algorithms are not diverse and are not representative of the UK’s population as a whole.

7. The Algorithm Must Be Explainable. Trust in algorithms evaporates when people’s basic expectations of trust and confidence are violated. Keeping students and parents in the dark about how their algorithmic grades were produced was not a good first step; there was real uncertainty over how the model worked. If you cannot explain to students how a computed grade was awarded, chances are that the underlying process is not fair, something the government failed to see. The government should have consulted parents and students, asked them to rate the process, and measured levels of trust.
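As promised in lesson 5, here is a minimal sketch of how cohort size can tilt the outcome towards teacher judgement. The thresholds (5 and 15) and the linear taper are my own assumptions for illustration, not Ofqual’s published rules. The point is simply that small classes, more common in private schools, fall back on centre-assessed grades, while large cohorts are fully standardised.

```python
# Illustrative sketch only: the thresholds and the linear taper
# are assumptions, not Ofqual's published rules.

def blend_weight(cohort_size, small=5, large=15):
    """Weight given to the teacher's centre-assessed grade;
    the remaining weight goes to the statistical model."""
    if cohort_size <= small:
        return 1.0   # tiny class: teacher grades used as-is
    if cohort_size >= large:
        return 0.0   # big class: fully standardised by the model
    return (large - cohort_size) / (large - small)

print(blend_weight(4))   # 1.0  (small private-school class)
print(blend_weight(10))  # 0.5  (blend of teacher grade and model)
print(blend_weight(30))  # 0.0  (typical state-school cohort)
```

Under any rule of this shape, a class of four keeps its teacher-submitted grades untouched while a class of thirty is entirely at the mercy of the school’s history, which is exactly the asymmetry lesson 5 describes.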

Those are seven lessons I believe can be taken from the debacle. Artificial Intelligence is the future, but we need to use it for good, not to perpetuate bias against certain groups of people. The A-Level Algorithm Chaos highlights the importance of creating human-centric algorithms with ethics in mind. We need to ensure algorithms meet ethical standards by involving people in the development process from day one.

Want to learn more about AI and Education?

Listen to Episode 1 of the Teens In AI Podcast, due to be released on 24th September 2020.

Guests for our first episode include:

  • Professor Rose Luckin: Director of EDUCATE and Professor of Learner Centred Design at UCL
  • Carly Kind: International Human Rights Lawyer and Director of the Ada Lovelace Institute
  • Three young adults, including myself, Ayushman and Victoria

Be sure to also read the series of blog posts by TeensInAI alumni exploring AI and Education.

Written by Anisah Rahman, Young Adult

Occasional Co-Host and Full-Time Social Media Manager for the Teens In AI Podcast


Acorn Aspirations

Powering the Next Generation of Thought Leaders, Innovators and Technologists in #AI @teensinai #TeensInAI #GirlsinAI #MCStartup2016 Founder @elenasinel