One of the challenges arising from the ongoing surge of interest in AI is finding community and resources outside of the tech- and “innovation”-focused organizations that are often closely aligned with the political and commercial interests of “Big Tech”. The work of critical research communities, such as the one we are trying to foster here at Starling, is necessary to provide space and resources for the vast array of diverse scholarship and interdisciplinary approaches required to understand the ongoing and emerging social issues that coalesce around critical inquiry into technology. Here is a short list of other institutions that are currently producing exciting new research and providing resources for scholars invested in these areas.
https://www.dair-institute.org
Founded in 2021 by Timnit Gebru, a year after her ousting from the Google Ethical AI team, the Distributed AI Research Institute (DAIR) is committed to interdisciplinary and globally distributed AI research. DAIR roots its research projects in the belief that AI’s reach into every facet of society is not inevitable, that its harms are preventable, and that AI can be beneficial when diverse perspectives and deliberate processes shape its deployment. DAIR identifies the significant concentration of power in Silicon Valley and US-based technology companies and aims to centre the lived experiences and material conditions of historically marginalized groups. Many of DAIR’s current research projects use both quantitative and qualitative methods to help these groups advocate for change. These include a project that analyzes the impact of South African apartheid using computer vision techniques and satellite imagery, a natural language processing project that analyzes the impact of social media platforms on neglected countries and languages, and a formal inquiry into the conditions and dominant discourses that dictate the labour of data workers in Latin America, whose work is pivotal to the development of machine learning technologies. DAIR also hosts a variety of presentations and talks each month, including the live Mystery AI Hype Theater 3000, which aims to break down the latest AI hype and marketing.
https://logicmag.io
Logic(s) is the first Black, queer, and Asian publication exploring the intersection of technology and social impact, and it offers a radical shift from both tech journalism and dominant publishing models. It was born from Logic Magazine, a publication that ran for six years as part of the former Logic Foundation (now the Collective Action School) at Columbia University’s INCITE institute. The biannual print and digital magazine remains committed to the INCITE institute’s belief that creating knowledge for public action requires all forms of expertise and experience, supporting researchers, students, artists, activists, and many others from outside traditional academic institutions. There is sufficient cause for alarm and dread about the issues facing all of us, but the explicit aim of Logic(s)’ recent editorial pivot, to “do something very different than counting up falling sky Chicken Little style…switching out the stultifying dread and doomsday hand-wringing for Black joy and rhythmic plurinational freedom dreaming”, is sorely needed in tech research and publishing today. It is a space where critical research co-exists with poetry, short fiction, conversations, fashion, and visual art. In keeping with this mission, the Logic(s) team has successfully raised funds for its Fellowship for Palestinian Journalists, which will provide support and resources, especially in understanding cryptography, digital surveillance, secure communication, and internet infrastructure, for their crucial journalistic efforts.
https://incidentdatabase.ai
Developed in association with the Responsible AI Collaborative, the AI Incident Database is a growing repository of harms or near-harms caused by the deployment of artificial intelligence systems. Drawing on a similar archival tradition in aviation and computer security, the database is built on user-submitted incident reports. The hope is that archiving these incidents will allow developers and policymakers to make informed decisions about the use of artificial intelligence by learning from past harms. Users can submit news items and events as incidents, which are accepted or rejected based on a comprehensive but still growing list of criteria. While not exhaustive, it is a useful resource for researchers looking to stay up to date on news events and cases that may not have been documented by major outlets.