
Liven up your Usually Non-productive Consumption Hour

LUNCH is a seminar series on topics in philosophy, linguistics, cognitive science, mathematics, computer science and related fields of interest. Our target audience is everyone at the Institute for Logic, Language, and Computation.

The LUNCH seminar is intended to be a nice way to spend your lunch hour a few times per semester. The talks will be fairly high-level and interactive. Speakers will include members of the ILLC as well as guests from other universities and research institutions.

We will provide a light lunch for those attending, and you're more than welcome to bring along your own lunch from home.

For questions, comments, or compliments, please contact the organisers Sirin Botan & Zoi Terzopoulou.

Past Talks:

Wednesday February 5th: Arianna Betti (University of Amsterdam) — In AI We Trust?

How can we ensure trust in machines? In particular, how can computational text analysis, an important sector of AI, ensure trust in its algorithms? The sector is booming, and its real-life applications are ubiquitous. But how comfortable are you with having an AI assess whether your mum's calls to 112 are really urgent? Having your brother defended by a legal AI? Having software decide whether you'll get the next grant? I bet your answers vary from 'not very much' to 'not at all': what do you think should happen to remedy this situation? Is this something that we, the ILLC community, can substantially contribute to? If so, how, ideally?

Wednesday December 4th: Davide Grossi (University of Groningen) — Tales of Deliberation: Told by 1 Journalist, 1 Politician, 6 Comedians, 12 (Angry) Men... and 3 Sciences.

How do deliberating groups work? And can we design deliberative processes that guarantee well-informed decisions? In this talk I will introduce, in a light way, a number of features of deliberative processes that I consider central, show their relevance for research in logic, economics and linguistics, and highlight some challenges for the development of a science of deliberative processes.

Wednesday May 22nd: Iris van Rooij (Radboud University Nijmegen) — Why cognitive scientists should care about computational complexity

Computational complexity theory studies the computational resources (e.g., time, space, and randomness) required for solving computational problems. Its analytical tools are not yet commonly taught in cognitive science, and many researchers still go about their business without much concern for the computational resources presupposed by their theories and models. Yet there are good reasons for cognitive scientists to care more about computational complexity. In this talk I will explain how computational complexity theory provides useful analytical tools to guide and constrain computational-level theorizing.

Wednesday February 20th: Martha Lewis (University of Amsterdam) — Compositionality in vector space models of meaning

We interact with computers every day, often using something like human language. There is therefore a huge amount of research going into how to represent human language computationally. Modelling words as vectors has been one of the most successful approaches over recent years. However, it is not immediately clear how to combine word vectors together to make phrases and sentences. On the other hand, formal semantics gives a clear account of how to compose words, but it is not so obvious how to represent their meanings. I will give an overview of the model I work with, which shows how to combine word vectors using formal semantics. I will also describe its limitations, and would welcome ideas and questions.
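
Purely as an illustration of the general idea (and not the specific model discussed in the talk), here is a minimal Python sketch with made-up toy values: nouns are represented as vectors, a modifier is represented as a matrix (a linear map), and composition is either plain vector addition or matrix-vector multiplication guided by grammatical type.

    import numpy as np

    # Hypothetical 3-dimensional "meaning" vectors for two nouns (toy values).
    cat = np.array([1.0, 0.2, 0.0])
    dog = np.array([0.9, 0.3, 0.1])

    # Simplest composition: just add the word vectors.
    additive = cat + dog

    # A functional word as a linear map: the adjective "fluffy" is a matrix
    # that turns a noun vector into an adjective-noun vector.
    fluffy = np.array([[1.0, 0.5, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.2, 0.0, 1.0]])

    fluffy_cat = fluffy @ cat  # composition follows the grammatical type

    print(additive)     # vector for "cat and dog" under addition
    print(fluffy_cat)   # vector for "fluffy cat" under matrix composition

The point of the sketch is only the contrast: addition ignores grammar entirely, whereas matrix-based composition lets a word's grammatical role determine how its meaning combines with others.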

Friday January 25th: Catholijn Jonker (Delft University of Technology) — Shared mental models in the context of Explainable AI

Explainable AI is receiving a lot of attention these days. This is fantastic and important given the increasing use and impact of artificial intelligence, in particular Machine Learning. I think I have been working on Explainable AI for many years now, and I have always approached this from a Knowledge Representation point of view in which Shared Mental Models (and Team Mental Models) have played a big role. I would like to discuss this with you and come to some joint insights as to their possible roles in Explainable Machine Learning.

Thursday November 22nd: Federica Russo (University of Amsterdam) — Why going informational: a gentle introduction

Technology is pervasive in science and in everyday life. To be sure, it always has been. And yet, a special class of technologies, namely digital technologies, is making this pervasiveness radical: radical in the way we understand the world, ourselves, and the relation between the two. This, in a nutshell, reconstructs one of the goals of the philosophy of information (PI). In this talk I will spell out the stance PI takes towards digital technologies and examine its potential for two ideas: (i) that we have reasons to explore a process-based (rather than entity-based) ontology, and (ii) that technologies have a fundamental role in the construction of knowledge.