Artificial intelligence applications play an increasingly important role in our daily lives. But these technological advances come with serious societal risks. In this workshop we will bring together researchers from various disciplines who work on the societal impact of AI applications. The goal is to share ideas and best practices, and to discuss how harmful behavior of AI applications should be approached.
The workshop will be held at the University of Amsterdam on 26–27 June, 2023.
Location: The workshop takes place in room F0.01 of Bushuis, Kloveniersburgwal 48, 1012 CX Amsterdam. The room is part of the Humanities Lab of the University of Amsterdam. Zoom links will be available for those who would like to follow along remotely.
Programme: Please check here for the latest version of the programme. The Zoom link can be found there as well.
Dr. Su Lin Blodgett is a senior researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group at Microsoft Research Montréal. Dr. Blodgett is interested in examining the social and ethical implications of natural language processing technologies; she develops approaches for anticipating, measuring, and mitigating harms arising from language technologies, focusing on the complexities of language and language technologies in their social contexts, and on supporting NLP practitioners in their ethical work. She has also worked on using NLP approaches to examine language variation and change (computational sociolinguistics), for example developing models to identify language variation on social media.
Dr. Erin Beeghly is Associate Professor of Philosophy at the University of Utah. Her research interests lie at the intersection of ethics, social epistemology, feminist philosophy, and moral psychology. Her current book project, What's Wrong With Stereotyping? (under contract with OUP), examines the conditions under which judging people by group membership is wrong. She and Alex Madva are co-editors of the first philosophical introduction to implicit bias: An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge 2020). Beeghly also writes and teaches about topics within legal theory, including discrimination law.
Organizers and sponsors
The workshop is organised by Dr. Marjolein Lanzing and Dr. Katrin Schulz as part of their project The politics of bias in AI: challenging the technocentric approach. The project is funded by the RPA Human(e) AI of the University of Amsterdam.
The workshop is co-organised by the NWO project of Prof. Dr. Robert van Rooij.
We are very thankful for additional funding from the Institute for Logic, Language and Computation (ILLC) of the University of Amsterdam.