In 2020, the City of London, Ontario, introduced the Chronic Homelessness Artificial Intelligence (CHAI) model, an algorithm designed to predict which individuals are most at risk of becoming chronically homeless. While the system is intended to improve resource allocation and prevention, it also raises pressing questions about fairness, accountability, and the risks of embedding automated decision-making in social services.
London’s homelessness AI reflects a broader turn toward algorithmic governance — raising urgent questions about accountability, fairness, and justice
This project, led in part by Starling Co-Directors Luke Stark, Joanna Redden, and Alissa Centivany, in collaboration with Dan Lizotte and Melissa Adler, examines London’s CHAI model in its historical, social, political, legal, and policy contexts. London’s adoption of AI reflects a broader global trend of governments turning to predictive analytics in social care, a trend often accompanied by concerns about bias, surveillance, data privacy, and the reinforcement of inequality.
Through collaboration with city officials, service providers, civil society actors, and unhoused Londoners themselves, the research explores the social justice impacts of AI in social service delivery and asks how such systems may entrench or challenge existing inequities.