Dr. Marjolein Lanzing is Assistant Professor of Philosophy of Technology at the University of Amsterdam. Previously, she worked on the Googlization of Health as a postdoc on the ERC project 'Digital Good' (PI Tamar Sharon) at the Interdisciplinary Hub for Security, Privacy and Data Governance (Radboud University). She completed her PhD research 'The Transparent Self: A Normative Investigation of Changing Selves and Relationships in the Age of the Quantified Self' at the 4TU Center for Ethics and Technology (Eindhoven University of Technology).
Marjolein studies the ethical and political concerns raised by new technologies, in particular concerns regarding privacy and surveillance (autonomy, discrimination, manipulation and commodification), and what these mean for the way we understand ourselves and our social relationships.
Marjolein is a board member of Bits of Freedom, an NGO that protects online freedom and (digital) civil rights.
Dr. Katrin Schulz is an assistant professor (UD 1) in experimental methods for AI and logic at the Institute for Logic, Language and Computation (ILLC). A central theme in her research is the question of how we as humans make predictions. The ability to make predictions is key to our survival. Yet it is also deeply puzzling, because -- as we all agree -- we cannot know the future. So how do we make predictions? And why are we so successful at it? Katrin studies these questions from various angles: linguistics, philosophy, cognitive science and artificial intelligence.
As part of this research line, Katrin also works on stereotyping and bias, and on the role that new media and AI play in reinforcing them in society. She leads an NWO Open Competition Digitalisation SSH project on this topic, titled "The biased reality of online media - Using stereotypes to make media manipulation visible". Together with Leendert van Maanen (Utrecht University), Jelle Zuidema (University of Amsterdam) and two PhD students, she works on developing tools and methods for (i) measuring bias in computational language models, and (ii) using these measures to quantify the influence of media coverage on the beliefs of media consumers.
Together, Marjolein and Katrin study the conceptualisation of algorithmic injustice in AI. By clarifying the limitations of the current framing of this type of injustice, they aim to provide angles for more effective interventions against the harms caused by new AI technologies.