All are invited to join us for this Community Keynote on 25 September 2025 by Dr. Shion Guha (University of Toronto), entitled, “AI Won’t Fix Our Social Problems: Deconstructing Risk in Predictive Risk Models.”
This talk coincides with the launch of the Glass Room at London Public Library. For the next four months, members of the public can explore this interactive exhibition, which asks us to consider what impact digital technologies are having on our lives. One of the pillars of Starling is to foster community awareness of and engagement with emerging technologies, like artificial intelligence, as well as technologies already embedded in our everyday lives, like cell phones. The exhibition asks participants to consider: What happens when they rely on social media for information? How do they know if a picture is truthful? What data is being collected about them — and why?
Details at a Glance:
Where: London Public Library, Central Branch, 251 Dundas Street, London, ON
When: 25 September 2025
Schedule: 5:00pm light refreshments; 6:00pm welcome remarks; 6:15-7:45pm keynote address; 8:00pm closing remarks
Abstract:
“Predictive risk models” (PRMs) now steer frontline decisions across child welfare, criminal justice, homelessness services, and immigration; yet the object of prediction is often a moving target. What counts as “risk” is not a stable property of people or families. It is a construct produced by agency priorities, historical assessment practices, constrained resources, and the surveillance footprints those forces leave in administrative data. Drawing on multi-method empirical studies in child welfare, I show that many PRMs learn patterns of institutional response rather than the outcomes they claim to predict. Proxy labels tied to prior investigations, hotline calls, or service actions routinely stand in for harm. These choices create self-fulfilling feedback that inflates accuracy, hides distributional error, and shifts burden onto already over-observed communities.
I will trace how labeling decisions, data provenance, and workflow integration generate measurement bias, outcome leakage, survivorship bias, and harm displacement. I then outline a human-centered data science framework that I have helped develop, which couples statistical modeling with interpretive inquiry, participatory auditing with practitioners and affected communities, and design interventions that re-specify targets away from “risk of agency action” toward concrete safety and support objectives. The framework yields practical tests for label validity, governance checkpoints for deployment, and redesigns of triage that prioritize service eligibility and need over speculative risk scoring. Although the case material centers on child welfare, the analysis generalizes to adjacent domains where administrative action masquerades as ground truth. The talk closes with research and policy directions for scholars and collaborators across information, computing, law, and public policy who are ready to rethink what it means to predict “risk” in public services.
Bio:
Shion Guha is an Assistant Professor in the Faculty of Information at the University of Toronto, cross-appointed to the Department of Computer Science, and directs the Human-Centred Data Science Lab. He leads a research program that examines how data, measurement, and organizational practice interact when governments adopt algorithmic systems. His team works with agencies in child welfare, healthcare, and homelessness services to study model development and use, to audit outcomes with mixed methods, and to co-produce evaluation criteria that reflect service goals and community priorities. The program emphasizes practical artifacts that institutions can deploy immediately, including assessment protocols, procurement checklists, and implementation guides that link analytics to clear accountability paths. Guha is the author of Human-Centered Data Science: An Introduction (MIT Press, 2022). His research has been funded by the Canadian Institute for Advanced Research, the Natural Sciences and Engineering Research Council of Canada, the Canadian Institutes of Health Research, the Social Sciences and Humanities Research Council, and the U.S. National Science Foundation. He has been recognized with a Way-Klingler Early Career Award (2019), a Connaught New Researcher Award (2021), and a Schwartz-Reisman Institute for Technology and Society Faculty Fellowship (2023). He has partnered with Canadian municipal, provincial, and federal bodies on AI initiatives and provided expert input to U.S. state agencies and civil society organizations. His work has been covered by Newsweek, the Associated Press, ABC, NBC, and the Toronto Star. He holds an MS from the Indian Statistical Institute and a PhD from Cornell University.