This project addresses key democratic challenges presented by the rapid implementation of AI and automation across government services without adequate transparency, accountability, or public debate and participation in decision-making about if, where and how these systems should be used. The project focuses on: 1) mapping and analysing where and how AI and automated systems are being used across public services; 2) employing case study investigations to better understand the impact of AI on decision-making processes and services; 3) drawing on these case studies to assess the limits of existing regulatory frameworks and oversight structures and to make recommendations to improve accountability; and 4) advancing and contributing to civic efforts and community mobilization around the implications of AI.
Globally, governments are increasing their use of artificial intelligence (AI) and automated decision support (ADS) systems across public services in an effort to be more efficient and better target limited resources. While Canada has been slower than other countries, including the UK and the US, to implement these systems in public services, government agencies in this country have piloted or implemented AI and ADS systems in the areas of immigration, housing, policing and benefits administration. Areas under consideration include children’s services, sentencing and corrections, public health, and employment.
There are significant concerns about how ‘datafied’ turns by government can threaten privacy rights and security, exacerbate discrimination, compel more data collection, and shift focus toward correlation rather than causation. To date, there has been little opportunity for the public to engage in decision-making about if, where and how AI and automated systems should be used. Civic decision-making is also limited because little public information is available about where and how AI is being used by businesses and government, or about the impacts of the systems already in place. There is a pressing need for stronger oversight, protection and redress mechanisms. The aim of this project is to address these challenges.
The Canadian TAG Register was first published in April 2024, identifying 303 uses of AI by government agencies in Canada. Research into provincial and municipal uses of AI continues, and we expect to update the register again in the fall of 2025.
The TAG Register has been discussed in a range of media outlets, including the following:
Since the Canadian TAG Register went live, team members have joined other scholars and researchers in calling for greater public participation in decision-making about if and how AI and automated systems should be used.
Joanna Redden, Alison Hearn, Valerie Steeves, Effie Sapuridis, Sananda Sahoo, Pinar Barlas
Special thanks to those who provided additional support for the TAG Register:
The first TAG Register was developed by researchers Tatiana Kazim and Mia Leslie from the Public Law Project, an independent national legal charity based in the UK that aims to improve access to public law remedies for marginalised individuals. The Canadian TAG Register was developed by Starling Centre researchers Joanna Redden, Sananda Sahoo, Effie Sapuridis, Janet Allen and JP Mann. We are grateful to the Public Law Project and Mia Leslie for allowing us to adapt their design and content for a Canadian register. We are also grateful to our colleagues Alison Hearn, Valerie Steeves, Fenwick McKelvey, and Luke Stark for their input into the content and design of the Canadian TAG Register. The Canadian TAG Register database code and design were adapted by Jamie Duncan and Alex Luscombe.