Artificial intelligence and algorithmic tools used by central government are to be published in a public register, following warnings that they may contain “deep-rooted” racism and bias.
Officials confirmed this weekend that details of the tools, which campaigners have challenged over concerns about secrecy and bias, will soon be published. The technology is used for a range of purposes, from detecting sham marriages to rooting out fraud and error in benefit claims.
The move is a victory for campaigners who have challenged the introduction of AI in central government as the technology is set to be rapidly adopted across the public sector. Caroline Selman, senior researcher at the access to justice charity the Public Law Project (PLP), said there had been a lack of transparency about the existence, details and deployment of the systems. “We need to ensure public bodies publish information about these tools that are being rapidly deployed. It is in everyone’s interest that the technology being adopted is lawful, fair and non-discriminatory.”
In August 2020, the Home Office agreed to stop using a computer algorithm to help sort visa applications after claims that it contained “deep-rooted racism and bias”. Officials suspended the algorithm following a legal challenge by the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove.
Foxglove argued that some nationalities were automatically given a “red” traffic-light risk score, making those applicants more likely to be refused a visa. The group said the process amounted to racial discrimination.
The department was also criticised last year over an algorithmic tool it uses to detect sham marriages entered into to subvert immigration controls. The PLP said the tool could discriminate against people from certain countries, and an equality assessment disclosed to the charity showed that people from Bulgaria, Greece, Romania and Albania were more likely to be flagged for investigation.
The government’s Centre for Data Ethics and Innovation (now the Responsible Technology Adoption Unit) warned in a November 2020 report that there were many examples of new technologies “perpetuating or amplifying historical biases, and even creating new forms of bias and inequity”.
In November 2021, the centre helped develop an algorithmic transparency recording standard for public bodies deploying AI and algorithmic tools. It proposed that models that interact with the public or have a significant influence on decisions should be published in a register, or “repository”, with details of how and why they were used.
To date, just nine records have been published in the repository over three years. None of them covers models run by the Home Office or the Department for Work and Pensions (DWP), which operate some of the most controversial systems.
The previous government said in its February consultation response on AI regulation that it would require departments to comply with the reporting standard. The Department for Science, Innovation and Technology (DSIT) confirmed this weekend that departments will now be required to report their use of AI tools under the standard.
A DSIT spokesman said: “Technology has great potential to improve public services, but we know it is important to maintain appropriate safeguards, including human oversight and other forms of governance where appropriate.”
“The algorithmic transparency recording standard is now mandatory across all departments, with a number of records due to be published shortly. We continue to explore ways of extending this across the public sector. We encourage all organisations to use AI and data in ways that build public trust, through our tools, guidance and standards.”
Departments are likely to be asked to provide further details of how their AI systems work and what steps they have taken to mitigate the risk of bias. The DWP is using AI to detect potential fraud in claims for Universal Credit advances, and is developing AI to catch fraud in other areas as well.
In its latest annual report, the DWP said it had carried out a “fairness” analysis of its use of AI on Universal Credit advance claims and that “no immediate concerns of discrimination arose”. It has not published details of the assessment, on the grounds that doing so could “allow fraudsters to understand how the model works”.
The PLP is supporting potential legal action against the DWP over its use of the technology, and is demanding details from the department about how it is used and what measures are in place to mitigate harm. The charity is building its own register of automated decision-making tools in government, and has tracked 55 such tools so far.