ALGORITHMS: 14th November 2020
Talking about why we need to ensure fairness is central to our use of technology in The Herald today:
Algorithms are the new kid on the block for public disquiet.
Social media companies are facing a global backlash against the use of algorithms to let advertisers, including political campaigns, micro-target their users.
Governments across the UK abandoned the use of algorithms to award exam results to pupils after widespread public concern.
Put simply, an algorithm is a set of mathematical instructions or rules that help identify answers to a problem.
Their use is not new, and most of us have been affected by many algorithms over our lifetime.
Insurers use algorithms a lot: it is because an algorithm identifies them as high risk that young male drivers with customised vehicles face high premiums.
Algorithms and their limitations have increasingly entered public consciousness, and the withdrawal of exam results is not the only example.
A Human Rights Watch report highlighted that the UK Government put a flawed algorithm at the heart of Universal Credit.
Amos Toh, the report's author, said: "The Government's bid to automate the benefits system - no matter the human cost - is pushing people to the brink of poverty."
In many large organisations, an algorithmic pay-and-benefits system calculates the worth of each post, and therefore what its holder is paid.
Recently, SNP-controlled Glasgow City Council paid out over half a billion pounds in compensation to thousands of workers, mainly women.
This was because the pay-and-benefits system introduced by a previous Labour administration undervalued posts mainly filled by women.
Before the SNP took control, the council spent more than £2.5 million fighting off complaints that the pay-and-benefits system was discriminatory.
In a recent report, campaign group Liberty highlighted the importance of "automation bias": the tendency for decision-makers to over-rely on automated systems at the expense of human discretion. It is bad enough when algorithms produce unintended long-term consequences; but when everyone can see the damage being inflicted, and may suspect it has been designed in, surely something can be done.
In 2019, the Institute for the Future of Work set up an Equality Task Force to examine how algorithms and artificial intelligence impact on equality and fairness at work.
Their report, Mind the Gap, calls for legislative and regulatory action to prevent the use of technology to embed discrimination and proposes the passing of an Accountability for Algorithms Act.
They point out that a recruitment algorithm based on historic data may "assume" that tomorrow's employees from some race, class, family background, educational background, gender or neighbourhood will replicate the performance and results of "people like them".
Human choices about how to build and use data-driven technologies are never neutral; nor are the outputs "objective" or impartial.
Without regulatory intervention, data-driven technologies may offer unsound and profoundly anti-aspirational bases for decision-making.
Aiming to build tools that are merely "non-racist" or "non-sexist" may simply ensure that those tools reinforce existing patterns of racism and sexism.
By contrast, building tools that positively promote equality requires deliberate consideration of equalities when building decision-making systems.
This seems like new territory; in legal and regulatory terms it will certainly be a new challenge. However, the need to move forward on the issue of algorithms is grounded in previous advances. Already, public sector bodies must not just avoid discrimination.
Under the Public Sector Equality Duty, enshrined in the Equality Act 2010, they must have regard to the need to advance equality of opportunity and foster good relations between people who share a protected characteristic and those who do not.
The Scottish Parliament has extended this to include a duty to publish and monitor equality outcomes.
The Task Force recommends a principle-driven, human-centred approach to governance and regulation of algorithms, including those used in artificial intelligence and machine learning.
The choices made in designing and deploying such systems decide who will benefit and who may be harmed, which values the algorithms embed and enhance, and which they corrode.
The Task Force proposes that regulation of algorithms must bring those choices to the surface and subject them to oversight and accountability, with new duties on bodies to evaluate equality impacts and make adjustments to address any adverse effects.
Technological advances are proceeding apace - it is time for legal and regulatory frameworks, in Scotland and the UK, to catch up.