In September 2022, Apryl Williams, a Mozilla Fellow we met during one of our monthly monitoring meetings, invited us to present Stepford at the Algorithmic Reparation Workshop at the University of Michigan. Eager to learn about Algorithmic Reparation and to receive feedback on Stepford’s development, we jumped at the chance.
“Algorithmic Reparation is a response and alternative to Fair Machine Learning, one that centralizes rather than obviates levers of inequality in machine learning systems.”
Co-directed by Apryl Williams (University of Michigan) and Jennifer Davis (Australian National University), the Algorithmic Reparation Workshop was a two-day event convening researchers, technologists, students, and activists around the topic of inequality in Machine Learning and AI.
The event was hosted at the University of Michigan in Ann Arbor and featured panelists from organisations such as Meta, AI for the People, Harvard’s CyberLaw Clinic, and the University of Notre Dame’s Technology Ethics Center. Facilitators invited us to perform a live demo of Stepford and to receive critical feedback from a room of people with valuable experience working toward a more equitable AI landscape.
Participants pointed out some potential misuses of the tool, including:
- Disparities in cultural competency – Human artists sometimes use sexist language in ways that challenge cultural hierarchies. The tool may not be able to distinguish the intent behind particular language.
- System response to trans and non-binary people – Sexism is not an on/off switch, and gendered language can be weaponized against gender non-conforming people in ways the system may not be sophisticated enough to detect.
- Gamification – By giving a ‘score’, the system could invite users to game their input to achieve a high score rather than to eliminate harmful language (see the sketch after this list).
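To make the gamification concern concrete, here is a minimal, hypothetical sketch of a naive lexicon-based scorer. It is not Stepford’s actual implementation; the lexicon and scoring rule are toy assumptions. It shows how a user could trivially swap flagged words for unlisted synonyms to ‘win’ a perfect score while the underlying harm remains.

```python
# Hypothetical sketch of a naive lexicon-based scorer, for illustration only.
# FLAGGED_TERMS is a toy lexicon, not Stepford's actual word list.
FLAGGED_TERMS = {"bossy", "shrill", "hysterical"}

def sexism_score(text: str) -> float:
    """Return a 0-1 'score', where 1.0 means no flagged terms were found."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 1.0
    flagged = sum(1 for w in words if w in FLAGGED_TERMS)
    return 1.0 - flagged / len(words)

# Gaming the metric: a trivial synonym swap restores a perfect score
# without removing the underlying harm.
print(sexism_score("She is bossy and shrill."))         # penalised: 0.6
print(sexism_score("She is domineering and grating."))  # scores 1.0, still sexist
```

Any fixed scoring rule invites this kind of optimisation against the metric itself, which is why the workshop participants flagged it as a misuse risk rather than a bug to be patched.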
These points gave us a lot to consider about how we frame the tool and whether it should apply to human-written text or only to algorithmically generated text. Although we cannot control how people and institutions will use this or similar tools in the future, it is valuable to anticipate potential misuses and develop mitigation strategies.
The experience was especially beneficial for us as creative practitioners: it gave us perspective on how algorithms affect people’s lives within political systems, and on how an algorithmic reparation approach better accounts for existing injustices and their consequences.