Call for abstracts: Workshop on algorithmic injustice
University of Amsterdam, 26-27 June 2023
Artificial intelligence applications play an increasingly important role in our daily lives. But these technological advances come with serious societal risks. In recent years, for instance, we have seen many cases of machine learning applications that exhibit unfairly biased behavior towards particular groups or individuals. This has led to growing concerns about harmful discrimination and the reproduction of structural inequalities once these technologies become institutionalized in society.
As a result, considerable effort is currently being invested in identifying and resolving algorithmic discrimination. In AI research and policy, however, remedies for algorithmic discrimination are often narrowly framed as design problems rather than as complex, structural, social-political problems. This carries the risk of a highly technocentric, individualist approach to algorithmic discrimination.
In this workshop we aim to bring together researchers from various disciplines who work on the societal impact of AI applications. The goal is to share ideas and best practices, and to discuss how harmful behavior of AI applications should be addressed. We invite abstracts that approach this question from a broad range of perspectives.
Examples of topics of interest include but are not limited to:
- (Critical evaluations of) conceptualizations of algorithmic fairness
- The (societal and political) risks of algorithmic discrimination
- The advantages and/or challenges of technocratic decision-making regarding fair AI
- Systemic and structural injustice in AI
- Interventions to achieve fairness in AI beyond debiasing
- Law, ethics and regulations of algorithmic fairness
Dr. Su Lin Blodgett is a senior researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group at Microsoft Research Montréal.
Dr. Erin Beeghly is Associate Professor of Philosophy at the University of Utah. Her research interests lie at the intersection of ethics, social epistemology, feminist philosophy, and moral psychology.
Authors can submit an anonymous abstract of at most 800 words (excluding references), with an optional additional page for tables and figures. Accepted submissions will be allotted 30 minutes for presentation plus 15 minutes for discussion.
The deadline for the submission of abstracts is 30 April 2023. Authors will be notified of acceptance by 5 May 2023. Abstracts can be submitted via EasyChair.
Important dates:
Submission deadline: 30 April 2023
Notification of acceptance: 5 May 2023
Workshop: 26-27 June 2023