LUNCH is a newly launched seminar series on topics in philosophy, linguistics, cognitive science, mathematics, computer science, and related fields. Our target audience is everyone at the Institute for Logic, Language, and Computation.
The LUNCH seminar is intended to be a nice way to spend your lunch hour once a month. The talks will be fairly high-level and interactive. Speakers will include members of the ILLC as well as guests from other universities and research institutions.
We will provide a light lunch for those attending, but you're also more than welcome to bring along your own lunch from home.
For questions, comments, or compliments, please contact the organisers Sirin Botan & Zoi Terzopoulou.
To be announced.
Computational complexity theory studies the computational resources (e.g., time, space, and randomness) required to solve computational problems. Its analytical tools are not yet commonly taught in cognitive science, and many researchers still go about their business without much concern for the computational resources their theories and models presuppose. Yet there are good reasons for cognitive scientists to care more about computational complexity. In this talk I will explain how computational complexity theory provides useful analytical tools to guide and constrain computational-level theorizing.
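The kind of resource analysis the abstract alludes to can be illustrated with a deliberately naive sketch (the function, inputs, and numbers below are mine, not the speaker's): a brute-force search whose work doubles with every extra input item, which is the sense in which a theory presupposing such a search may be computationally intractable.

```python
from itertools import combinations

def brute_force_subset_sum(weights, target):
    """Check every subset of `weights` for one summing to `target`.

    There are 2**n subsets of n items, so in the worst case this
    examines exponentially many candidates.
    """
    checked = 0
    for r in range(len(weights) + 1):
        for subset in combinations(weights, r):
            checked += 1
            if sum(subset) == target:
                return True, checked
    return False, checked

# Worst case: target -1 is unreachable from positive weights,
# so every subset gets checked.
_, c10 = brute_force_subset_sum(list(range(1, 11)), -1)  # 2**10 subsets
_, c20 = brute_force_subset_sum(list(range(1, 21)), -1)  # 2**20 subsets
print(c10, c20)
```

Adding ten items multiplies the work by roughly a thousand; this kind of scaling argument is what lets complexity theory constrain computational-level theories without running any experiment.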
We interact with computers every day, often using something close to human language, and there is therefore a huge amount of research into how to represent human language computationally. Modelling words as vectors has been one of the most successful approaches in recent years. However, it is not immediately clear how to combine word vectors into phrases and sentences. Formal semantics, on the other hand, gives a clear account of how to compose words, but it is less obvious how to represent their meanings. I will give an overview of the model I work with, which shows how to combine word vectors using formal semantics. I will also describe its limitations, and ideas and questions will be very welcome.
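To give a flavour of what "combining word vectors" can mean, here is a toy numerical sketch in the spirit of tensor-based compositional models (not necessarily the speaker's exact model): nouns are vectors, a transitive verb is a matrix, and composition is matrix application. All vectors, the matrix, and the dimensionality are invented for illustration.

```python
import numpy as np

# Toy 3-dimensional "meaning space"; the numbers are made up.
cat = np.array([0.9, 0.1, 0.0])
dog = np.array([0.8, 0.3, 0.1])

# In tensor-based compositional models, a transitive verb can be
# represented as a matrix (more generally, a higher-order tensor)
# that combines its subject and object vectors.
chases = np.array([[0.5, 0.1, 0.0],
                   [0.2, 0.4, 0.1],
                   [0.0, 0.1, 0.3]])

# "dog chases cat": subject @ Verb @ object yields the sentence meaning
# (here a single number, since the sentence space is one-dimensional).
sentence_meaning = dog @ chases @ cat

def cosine(u, v):
    """Cosine similarity: the standard measure of word-vector similarity."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(sentence_meaning)
print(cosine(cat, dog))  # close to 1: the toy vectors are similar
```

The point of the sketch is the division of labour: the vectors carry distributional meaning, while the grammatical type of each word (noun vs. transitive verb) dictates the shape of its representation and how the pieces compose, mirroring the function-argument structure of formal semantics.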
Explainable AI is receiving a lot of attention these days. This is fantastic and important given the increasing use and impact of artificial intelligence, in particular Machine Learning. I think I have been working on Explainable AI for many years now, and I have always approached this from a Knowledge Representation point of view in which Shared Mental Models (and Team Mental Models) have played a big role. I would like to discuss this with you and come to some joint insights as to their possible roles in Explainable Machine Learning.
Technology is pervasive in science and in everyday life. To be sure, it always has been. And yet a special class of technologies – namely digital technologies – is making this pervasiveness also radical: radical in the way we understand the world, ourselves, and the relation between the two. This, in a nutshell, captures one of the goals of the philosophy of information (PI). In this talk I will spell out the stance PI takes towards digital technologies and explore its potential through two ideas: (i) that we have reasons to explore a process-based (rather than entity-based) ontology, and (ii) that technologies play a fundamental role in the construction of knowledge.