This page concerns the Lectures on Logic and its Mathematical Aspects (LLAMA) seminar series at the Institute for Logic, Language & Computation (ILLC) of the University of Amsterdam.
Kleene Algebra (KA) is a useful tool for proving that two programs are equivalent by reasoning equationally. But which models of KA are complete, in the sense that they satisfy exactly the provable equations? Kozen (1994) answered this question by characterizing KA in terms of its language model. Pratt (1980) observed that equations provable in KA also coincide with the equations that hold in any relational interpretation. A lesser-known result due to Palka (2005) says that provable equivalences coincide with equations satisfied by all finite KAs, or contrapositively: any unprovable equation is falsified by a finite KA, i.e., a finite model property holds. Both of these results can be derived from Kozen's theorem, but the implication also runs the other way: the statement of Palka's (resp. Pratt's) result implies Kozen's theorem. We embark on a study of the different complete models of KA, and the connections between them. This yields a fourth result subsuming those of Palka and Pratt, namely that KA is complete w.r.t. finite relational models. Next, we put an algebraic spin on Palka's techniques, which yields an elementary proof of the finite model property, and by extension, of Kozen's and Pratt's theorems. In contrast with earlier approaches, this proof relies not on minimality or bisimilarity, but rather on representing the regular expressions in terms of their transformation automata.
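To make the language model concrete, here is a rough sketch (in Python, purely illustrative; the function name is ours) that compares two regular expressions on all words up to a fixed length. The equation \((a+b)^* = (a^*b^*)^*\) is a typical KA-provable identity, and its two sides denote the same regular language:

```python
# Brute-force comparison of (a + b)* and (a* b*)* in the language model,
# checking all words over {a, b} up to a given length. This illustrates
# language equivalence; it is of course not a proof for all words.
import itertools
import re

lhs = re.compile(r'(a|b)*')
rhs = re.compile(r'(a*b*)*')

def first_disagreement(max_len):
    for length in range(max_len + 1):
        for word in map(''.join, itertools.product('ab', repeat=length)):
            if bool(lhs.fullmatch(word)) != bool(rhs.fullmatch(word)):
                return word  # a counterexample, if one exists
    return None

print(first_disagreement(8) is None)  # True: no disagreement up to length 8
```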
TBA
The Dutch Association for Logic (VvL) will hold an in-person joint seminar organized by Eindhoven University. The event is inspired by the departmental logic seminars that are organized at each university, and aims to unify the universities for a collaborative seminar. Besides hosting a main speaker, the seminar will also be the location of the award ceremony of the VvL MSc Thesis Prize winners, who will give a short presentation of their thesis. Details, including speakers, abstract and information about registration, can be found on the event website.
In contrast to Boolean algebras and distributive lattices, the variety of Heyting algebras is not locally finite, and in fact, none of its free finitely generated algebras are finite. The associated difficulty of understanding the free algebras has motivated a wealth of research into describing such algebras. In this line of study, Ghilardi constructed the free Heyting algebra generated by a finite distributive lattice, using Birkhoff duality. In this talk, I will present a recent generalization of this construction to the case of arbitrary distributive lattices, using Esakia duality. Categorically, this provides an explicit construction of a left adjoint to the inclusion of Heyting algebras in the category of distributive lattices. This has several applications, including the construction of colimits of Heyting algebras, as well as establishing new properties of the category of Heyting algebras, like co-distributivity. If time permits, I will also discuss some consequences of this for coalgebraic intuitionistic modal logic, and the relationship of positive and intuitionistic modal logic (this last part is joint work with Nick Bezhanishvili).
It is well known that topological spaces are often less well behaved in constructive mathematics than classically. For example, with the usual definition of compactness even the closed interval is not compact. Also, the usual notion of continuous maps is often too weak. There are generally two approaches to fixing these problems. First, we could use locales instead of topological spaces. The problem with this approach is that the category of locales is not concrete, so it is less convenient to work with them than with topological spaces. Another approach, originating with Bishop, uses ad-hoc definitions of compactness, continuous maps, and other notions. This approach works well for certain spaces, but it is less conceptual and it is difficult to generalize it to other spaces, especially non-locally compact ones.
In this talk, I propose another approach to solving these problems. I define a concrete topological category of so-called cover spaces. They can be used to conveniently reason about regular sober spaces. The notions of compactness and morphisms in this category are very natural and similar to the usual topological definitions. This category has a full reflective subcategory of complete spaces, which generalize the notion of completeness for metric and uniform spaces. The notion of a Cauchy sequence makes sense in a cover space, so it can be used to construct certain maps (such as the exponential map) using usual techniques from analysis.
This approach also neatly ties together the two approaches mentioned above. On the one hand, it specializes to the Bishop approach in the case of locally compact spaces. On the other hand, the category of complete cover spaces (with one small additional condition) is equivalent to a full subcategory of locales, which includes the real numbers and other complete metric locales. This shows that we can use simple point-set reasoning to construct and work with localic maps between such locales.
This talk is completely constructive in the sense that it applies in the internal language of an elementary topos.
This talk will be about the effect of additional set theoretic axioms on quotient structures. The focus will be on rigidity, measured in terms of existence (or non-existence) of suitably non-trivial automorphisms of the quotients in question. A textbook example for the study of this topic is the Boolean algebra P(N)/Fin, whose behavior provides a paradigm for the field: Forcing axioms imply that all of its automorphisms are trivial (roughly, the obvious ones that can be constructed without additional set-theoretic axioms and without too much thinking), while under the Continuum Hypothesis, CH, this rigidity fails and P(N)/Fin admits many non-trivial automorphisms. After introducing the setup (only a basic knowledge of set theory and model theory is required) and giving classical examples, I will discuss a work in progress with Will Brian and Saeed Ghasemi on rigidity for dynamical systems.
This talk will survey some of my recent results, both published and unpublished, on (un)decidability within modal and temporal logic, truthmaker semantics, team semantics, and relevant logic. Motivated by the question of what makes a logic undecidable, the aim is to identify traits of undecidable logics. As a consequence, the presentation will offer more breadth than depth.
To begin, we show that the modal logic of any class of semilattices containing \( (\mathcal{P}(\mathbb{N}), \cup )\) is undecidable. This result carries substantial implications, including resolving the open problem concerning the decidability of hyperboolean modal logic, as raised by Goranko and Vakarelov (1999).
Next, we compare this result with the decidability of team, truthmaker, and modal information logics, drawing heuristic insights into the factors contributing to the decidability or undecidability of logics.
Finally, we turn our attention to the semilattice relevant logic \(\mathbf{S}\), introduced by Urquhart (1972). While Urquhart (1984) showed that many relevant logics are undecidable, \(\mathbf{S}\) eluded these techniques. Eventually, this led Urquhart (2016) to conjecture that \(\mathbf{S}\) is decidable. Contrary to this conjecture, we show that \(\mathbf{S}\) is undecidable.
Classically, behavioural equivalences on reactive systems form spectra ranging from branching-time equivalences such as bisimilarity to linear-time ones such as trace equivalence. A similar situation has developed for quantitative notions of equivalence, that is, for behavioural metrics, which on systems carrying quantitative information such as weights or probabilities provide a more robust measure of comparison than two-valued equivalences. Like equivalences, behavioural metrics come in varying degrees of granularity ranging from branching time to linear time. The framework of graded semantics has emerged as a principled way of treating behavioural equivalences in a uniform manner by combining the by-now-established universal-coalgebraic modelling of reactive systems with a universal-algebraic modelling of semantic identifications beyond branching time. We give an introduction to the overall framework and then report on recent developments in adapting the graded semantics approach to the quantitative setting. We pay particular attention to quantitative modal logics that are characteristic for given behavioural metrics in the sense that logical and behavioural distance coincide, highlighting specific challenges of the quantitative setting. In particular, we show that trace distance on probabilistic metric transition systems does not admit a compositionally defined characteristic modal logic. On the positive side, we obtain a general criterion for a modal logic to be characteristic for a given quantitative graded semantics, and apply this criterion to obtain a new characteristic modal logic for trace distance on fuzzy metric transition systems.
In 1984, Robin Milner proposed a process theoretic semantics for regular expressions. Semantic equivalence in this context is bisimilarity of labelled transition systems, which does not coincide with language equivalence. He adapted an axiomatization of regular expressions modulo language equivalence, due to Arto Salomaa, to a sound axiomatization of regular expressions modulo bisimilarity. He then posed the question of whether this axiomatization is complete. This question was answered only recently, by Clemens Grabmayer. In this talk, we will look at a generalization of Milner's completeness question that subsumes similar completeness problems for a number of popular variations on regular expressions, including the recently introduced Guarded Kleene Algebra with Tests (GKAT) and its probabilistic extension (ProbGKAT). Further examples will also be discussed.
In the vast landscape of applications of logic and model theory to other mathematical subjects, one of the most popular directions is using model-theoretic methods to better understand fields enriched with valuations, objects classically studied by number theorists and algebraists. I plan to introduce valued fields, explain why they are important, and then go on a tour of some of the classes of valued fields that we understand well, including a peek at my own personal favourite: valued difference fields.
A classical theorem of Pólya shows that \(2^z\) is the smallest non-polynomial entire function that takes integer values at all positive integer inputs. In 2016, Wilkie established a similar characterization for integer-valued functions definable in a salient \(o\)-minimal expansion of the real field. We generalize Wilkie's classification by showing that if such a function has growth bounded by \(2^{(1+10^{-5})x}\), then it must eventually be given by \(P(x)+Q(x)2^x\), for some polynomials \(P\), \(Q\). Our result parallels a subsequent theorem of Pólya for entire functions, and in fact an improved version due to Selberg. We extend Wilkie's theorem also in another direction, by proving an \(o\)-minimal analog of a theorem by Pila for \(k\)-concordant entire functions. This is joint work with Raymond McCulloch, Nandagopal Ramachandran, and Katharine Woo.
Modal logic can be traced back to Lewis’ introduction of a modal connective for ‘strict implication’, also known as the Lewis arrow. Classically, the Lewis arrow is interdefinable with box, and so the connective has fallen into disuse over time. Over an intuitionistic base, however, the arrow is more expressive than the box.
With the aim of setting up a general framework for intuitionistic fixed point modal logics, we currently study an intuitionistic version of the \(\mu\)-calculus with the Lewis arrow as modal connective. Formulas are interpreted over bi-relational Kripke models, which satisfy a confluence condition to ensure monotonicity with respect to the intuitionistic order.
In this talk, I will introduce both algebraic and game semantics for the logic, and provide a complete non-wellfounded proof system for guarded formulas. Moreover, I will show how Ruitenburg’s theorem can be used to obtain that every formula is equivalent to a guarded one, thereby showing that this well-known result for the classical \(\mu\)-calculus still holds in the intuitionistic case.
This is joint work in progress with Bahareh Afshari.
The framework of graded semantics employs graded monads to capture behavioural equivalences of varying granularity, e.g. as found on the linear-time/branching-time spectrum for labelled transition systems, over general system types. In this talk, we are going to discuss a generic Spoiler-Duplicator style game for playing out notions of graded behavioural equivalence on coalgebras of an endofunctor on sets. Instances include the standard pebble games for simulation and bisimulation as well as games for more coarse-grained semantics like probabilistic trace equivalence on (generative) probabilistic systems. Graded monads endow this game with an algebraic flavour: Duplicator and Spoiler effectively play out an equational proof. On the other hand, our generic game arises from a determinization construction on coalgebras under the graded semantics at hand which we will also discuss.
This talk is based on joint work with Harsh Beohar, Barbara König, Stefan Milius, and Lutz Schröder.
In the 1930s, Gerhard Gentzen initiated the field of structural proof theory with his celebrated cut-elimination theorem. For this, he invented a formal system for the notation of mathematical proofs called the sequent calculus, and then showed that, in this calculus, applications of the cut rule—roughly corresponding to the use of lemmas in proofs—can always be eliminated.
Originally developed for classical and intuitionistic logic, Gentzen’s method has since been successfully applied to a large variety of nonclassical logics. Yet, there are some logics, such as the modal logic S5, where the method—at least in its original form—cannot be applied. The standard solution to this problem involves seeking structural extensions of the sequent calculus in which cut-elimination goes through. This program has been very successful and has given rise to many formalisms such as the hypersequent calculus and the nested sequent calculus. The price to pay is that proofs in these structural extensions can be quite complex even after cuts are removed.
In this talk, I discuss joint work with Agata Ciabattoni (TU Wien) and Revantha Ramanayake (RU Groningen) that deals with an alternative approach: Instead of pursuing cut-elimination in extensions of the sequent calculus, we retain the sequent calculus and examine which cuts are necessary to capture a logic. While there are some scattered results on this approach in the literature, it has been overshadowed by cut-elimination. As it turns out, sometimes the format of necessary cuts is quite simple and doesn’t really hamper the analysis of proofs. Beyond such practical considerations, we think that the study of such 'cut-restriction' results offers a valuable complementary outlook compared to the extension-focused view.
A variety of intuitionistic versions of modal logic K have been proposed in the literature. An apparent misconception is that all these logics coincide on their box-only (or diamond-free) fragment, suggesting some robustness of ‘box-only intuitionistic modal logic’.
However, we recently showed that this is not true, by consideration of negative translations from classical modal logic: Fischer Servi's IK proves strictly more diamond-free theorems than Fitch's CK, and indeed than iK, the minimal box-normal intuitionistic modal logic.
On the other hand, we showed that the smallest extension of iK by a normal diamond is in fact conservative over iK (over box-free formulas). To this end, we developed a novel proof calculus based on nested sequents for intuitionistic propositional logic due to Fitting.
This talk is based on joint work with Anupam Das, published in the proceedings of Tableaux'23.
Many-valued modal logic allows us to reason about modalities in the context of vagueness, that is, where statements may be assigned additional truth-degrees different from True or False. In this talk, we identify a class of algebraically well-behaved many-valued logics, and study their modal extensions from an algebraic and coalgebraic point of view. More specifically, these well-behaved logics are the ones whose algebras of truth-degrees are semi-primal; examples include finitely-valued Łukasiewicz logics, Post logics, four-valued Belnap-Dunn logic, and many others.
In more detail, in this talk we show how to lift classical algebra/coalgebra dualities (e.g., Jónsson-Tarski duality), as well as classical coalgebraic logics (e.g., normal or non-normal classical modal logic) to this many-valued level. This involves a canonical way to lift endofunctors on the variety of Boolean algebras to endofunctors on the variety generated by a semi-primal lattice-expansion. We show that completeness, expressivity and finite axiomatizability of a classical coalgebraic logic are preserved under this lifting, meaning that they automatically transfer to their many-valued counterparts. Furthermore, we provide concrete axiomatizations for some of the resulting varieties of many-valued modal algebras.
I will discuss several interesting examples of classes of structures for which there is a sensible first-order theory of "almost all" structures in the class, for certain notions of "almost all". These examples include the classical theory of almost all finite graphs due to Glebskij-Kogan-Liogon'kij-Talanov and Fagin (and many more examples from finite model theory), as well as more recent examples from the model theory of infinite fields: the theory of almost all algebraic extensions and the universal/existential theory of almost all completions of a global field (both joint work with Arno Fehm). Perhaps surprisingly, such asymptotic theories are sometimes quite well-behaved even when the base theories are not.
As many of the LLAMA participants know, the field of coalgebra is a lively area where category theory meets topics in theoretical computer science. In addition, it was recognized very early that this field might well have something to say to continuous mathematics. This talk will be a survey of what I know concerning this interaction. It will be 'more broad than deep'; that is, light on the proofs. Some of the topics will be:
The last topic will be applied in my talk to the LiRA seminar the following day, where the topic will be the policy improvement algorithm in Markov decision processes viewed coinductively.
Valuations arise as non-archimedean generalizations of absolute values, with examples coming from all over number theory and algebraic geometry. In many cases, henselian valuations turn out to be definable, meaning that their valuation ring is definable by a first-order formula in the language of rings. In the case where the residue characteristic of the so-called canonical henselian valuation is zero, Jahnke and Koenigsmann gave a full characterization of when a field admits a definable non-trivial henselian valuation. We extend their result to cases where the residue characteristic is positive. In my talk, I will explain how we can find a definable valuation using independent defect, as introduced by Kuhlmann and Rzepka.
This is joint work with Simone Ramello (Münster) and Piotr Szewczyk (Dresden).
We investigate Static Type Theory (STT), which is a type theory without any reduction rules. Removing reductions from type theory makes it easier to find models and reduces the complexity of type checking (from non-elementary or even undecidable to quadratic). Despite these simplifications, STT does not lose much deductive strength. By adding only two axioms to STT—binder extensionality and uniqueness of identity proofs—we can prove the same statements as Extensional Type Theory (ETT), which is a type theory with the maximal number of reductions.
We compare two categorical semantics for STT: comprehension categories, which closely follow the syntax and intricacies of type theory, and path categories, which are relatively simple and take inspiration from homotopy theory. Both are semantics for type theory only in a weak sense, because they have trouble modeling substitutions. However, it is known that we can turn comprehension categories into actual models. In our work we show that path categories are equivalent to comprehension categories, which means that we can also turn them into models. This gives us an easier semantics where we can use intuition from homotopy theory.
A result obtained by J. Derakhshan and A. Macintyre states that the theory of a mixed characteristic henselian valued field with finite ramification, and whose value group is a \(\mathbb{Z}\)-group, is model complete in the language of rings if the theory of the residue field is model complete in the language of rings. In this talk, we will see how this result can be generalized to mixed characteristic henselian valued fields with finite ramification, but with different value groups. We will address the case in which the value group is an ordered abelian group with finite spines, and (if time permits) the case in which it is elementarily equivalent to the lexicographic sum of \(\mathbb{Z}\) with a minimal positive element. In both cases, we give a one-sorted language (expansion of the language of rings) in which the theory of the valued field is model complete if the theory of the residue field is model complete in the language of rings.
Our goal is to define an algebraic language for reasoning about non-deterministic computations. Towards this goal, we introduce an algebra of string-to-string transductions. Specifically, it is an algebra of partial functions on words over the alphabet of relational structures over the same domain. The algebra has a two-level syntax, and thus two parameters to control its expressive power. The top level defines algebraic expressions, and the bottom level specifies atomic transitions. History-dependent choice functions resolve atomic non-determinism and make general relations functional. Equivalence classes of such functions serve as certificates for computational problems specified by algebraic terms. The algebra has an equivalent syntax in the form of a dynamic logic, where terms describing computational processes or programs appear inside the modalities.
We define a simple secondary logic for representing atomic transitions, which is a modification of conjunctive queries. With this logic, the algebra can represent both reachability and counting examples, which is not possible in Datalog. We analyze the data complexity of this logic, measured in the size of the input structure, and show that a restricted fragment of the logic captures the complexity class NP.
The logic can be viewed as a database query language, where atomic propagations are separated from control.
Utrecht University will organize the second edition of the annual seminar of the Dutch Association for Logic (VvL). The event is inspired by the departmental logic seminars that are organized at each university, and aims to unify the universities for a collaborative seminar. Besides hosting a main speaker, the seminar will also be the location of the award ceremony of the VvL MSc Thesis Prize winners, who will give a short presentation of their thesis. Details, including speakers, abstract and information about the post-seminar dinner, can be found on the event website.
A core question in the model theory of fields is to understand how combinatorial truth patterns and algebraic properties interact. An example of this is a well-known result by Kaplan, Scanlon and Wagner, which states that infinite \(\textrm{NIP}\) fields of characteristic \(p\) have no Artin-Schreier extension. This result has since been proven by Hempel to also hold for \(\textrm{NIP}_n\) fields. In fact, she conjectured that \(\textrm{NIP}_n\) fields are exactly the \(\textrm{NIP}\) fields. Using her results, we generalized Anscombe-Jahnke's classification of \(\textrm{NIP}\) henselian valued fields to \(\textrm{NIP}_n\) henselian valued fields. In this talk, we will survey results around \(\textrm{NIP}_n\) fields, and discuss the various conjectures around them.
Kleene algebra with tests (KAT) is a well-known algebraic framework for equational reasoning about program equivalence and correctness. KAT adds to Kleene algebra a negation operator confined to a Boolean subalgebra of tests. In this talk, we look into releasing the negation operator into the whole algebra. In particular, we formulate Kleene algebra with dynamic tests (KADT), where the confined negation of KAT is replaced by the dynamic negation known from Dynamic Predicate Logic. KADT can be seen as a one-sorted algebraic counterpart of Propositional Dynamic Logic (PDL), in contrast to two-sorted counterparts such as dynamic algebras or test algebras. It also extends various versions of Kleene algebra with domain (KAD). We prove two completeness results for KADT: a language completeness result relating KADT to models based on guarded languages, and a relational completeness result linking KADT to relational models in the style of PDL. The latter result can be used to show that the equational theory of KADT is identical to the equational theory of relational KADT and that it is EXPTIME-complete. We prove that these results can be transferred to fragments of KADT such as the various versions of KAD. Some of our proofs are based on work of Marco Hollenberg from the late 1990s.
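For orientation (standard KAT background, not specific to this talk), recall how imperative constructs are encoded as KAT terms, writing \(b\) for a test and \(\bar{b}\) for its complement:
\[
\mathbf{if}\ b\ \mathbf{then}\ p\ \mathbf{else}\ q \;=\; b\,p + \bar{b}\,q,
\qquad
\mathbf{while}\ b\ \mathbf{do}\ p \;=\; (b\,p)^{*}\,\bar{b},
\]
so that program equivalences and partial correctness claims become equations between such terms, derivable from the KAT axioms.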
In previous work with Fehm we found that the existential theory of an equicharacteristic henselian valued field is axiomatised using the existential theory of its residue field. From this we deduced a transfer of decidability: for a complete theory \(T\) of residue fields, the existential consequences of \(T\) are decidable if and only if the existential consequences of the theory \(H(T)\) are decidable, where \(H(T)\) is ‘equicharacteristic, henselian, and residue field models \(T\)’. In more recent work with Dittmann and Fehm we considered a similar problem in which \(H(T)\) is expanded to a theory that distinguishes a uniformizer, using an additional constant symbol. In this case Denef and Schoutens gave a transfer of existential decidability conditional on Resolution of Singularities. We introduce a consequence of Resolution and prove that it implies a similar transfer of existential decidability.
In this talk I’ll explain these results and describe ongoing work with Fehm in which we broaden the above setting to obtain versions of these transfer results that allow incomplete theories \(T\). Consequently we find several existential theories Turing equivalent to the existential theory of \(\mathbb{Q}\), including the existential theory of large fields.
Classical methods to transfer results between fields of different characteristics are the Lefschetz principle and the Ax-Kochen/Ershov theorem, which states that, asymptotically, the theories of the \(p\)-adic numbers \(\mathbb{Q}_p\) and of the power series fields \(\mathbb{F}_p((t))\) coincide. Tilting perfectoid fields, a recent approach developed by Scholze, gives a transfer principle between certain henselian fields of mixed characteristic and their positive characteristic counterparts and vice versa. In this talk, we survey various transfer principles and present a model-theoretic approach to tilting via ultraproducts, which allows us to transfer many first-order properties between a perfectoid field and its tilt (and conversely). A key ingredient in our approach is an Ax-Kochen/Ershov principle for perfectoid fields (and generalizations thereof). This is joint work with Konstantinos Kartas (Sorbonne Université).
Intuitionistic S4 (IS4) is one of the intuitionistic variants of modal logic S4. Its semantics was defined by Fischer Servi in terms of birelational models, and it comprises two reflexive and transitive accessibility relations: one corresponding to the accessibility relation of modal logic S4 and the other corresponding to the order relation of intuitionistic Kripke frames. The question of whether IS4 is decidable has been open for almost 30 years, since Simpson posed it in his PhD thesis. We show that IS4 is decidable by performing proof search in a fully labelled sequent calculus for IS4, closely matching the birelational semantics. Our search algorithm outputs either a proof or a finite counter-model, thus additionally establishing the finite model property for intuitionistic S4. In this talk, I will first introduce IS4 and discuss the strategies to establish decidability through proof search in the modal and intuitionistic case. Then, I will sketch the main ingredients of our proof search algorithm.
This talk is based on joint work with Roman Kuznets, Sonia Marin, Marianela Morales and Lutz Straßburger. A preprint is available.
I will give an overview of Dafny, a verification-aware programming language that is currently maintained by my colleagues and me at Amazon Web Services. Dafny is a mature language with support for common programming concepts such as inductive datatypes, lambda expressions, and classes with mutable state. It can be compiled to C#, Java, Javascript, Python, and Go, making integration with existing workflows possible. At the same time, Dafny offers an extensive toolbox for mathematical proofs about programs, including quantifiers, (co-)induction, and the ability to use and prove lemmas. Programs can be annotated in the style of Hoare logic, with pre- and post-conditions, loop invariants, as well as termination and read/write framing specifications. Proofs can be verified instantaneously, through compilation to the Boogie intermediate language, which uses the Z3 SMT solver to discharge proof obligations. After an initial introduction, I’ll give a live demo of Dafny. I’ll review some of its publicly known use-cases at Amazon, and highlight its potential for research efforts at the ILLC.
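As a rough illustration of the contract style described above (written here in Python with run-time assertions, not in Dafny syntax; in Dafny the analogous requires/ensures/invariant annotations are checked statically by the verifier):

```python
# Contract-style integer square root: the precondition, loop invariant and
# postcondition are written as executable assertions. In Dafny, analogous
# annotations would be discharged statically via Boogie and Z3.
def isqrt(n: int) -> int:
    assert n >= 0                               # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                       # loop invariant
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)       # postcondition
    return r

print(isqrt(10))  # 3
```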
Conditional Boolean algebras form a variety that provides algebraic semantics for Chellas’s basic conditional logic from [5]. Conditional logics trace their roots to the works of Lewis [6, 7] and Stalnaker [9] and have been used to model a wide range of conditional statements, including counterfactuals and conditional obligations. While most conditional logics studied in the literature are extensions of classical logic, recent research has also explored conditional logics that extend intuitionistic logic.
Any Boolean algebra can be turned into a conditional algebra by expanding it to a structure \(\mathfrak{B} = \langle B, \cdot, +, \rightarrow, -, 0, 1\rangle\) where \(\rightarrow\) is a binary operator that satisfies for all \(a, b, c \in B\) that (C1) \(a \rightarrow 1 = 1\) and (C2) \((a \rightarrow b) \cdot (a \rightarrow c) = a \rightarrow (b \cdot c)\). In this presentation, we focus on those conditional algebras that additionally satisfy (C3) \((a+b) \rightarrow c \leq (a \rightarrow c) \cdot (b \rightarrow c)\). The variety of conditional algebras with (C3) provides algebraic semantics for an extension of Chellas’s conditional logic.
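As a quick sanity check (a standard observation, not taken from the abstract): on any Boolean algebra the material implication \(a \rightarrow b := -a + b\) satisfies (C1)-(C3), since
\[
a \rightarrow 1 = -a + 1 = 1, \qquad (a \rightarrow b) \cdot (a \rightarrow c) = (-a + b) \cdot (-a + c) = -a + b \cdot c = a \rightarrow (b \cdot c),
\]
and
\[
(a + b) \rightarrow c = -(a+b) + c = (-a) \cdot (-b) + c = (-a + c) \cdot (-b + c) = (a \rightarrow c) \cdot (b \rightarrow c),
\]
so in this example (C3) even holds with equality.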
This variety contains two important subvarieties. On one hand, we have pseudo-subordination algebras [4], which are conditional algebras \(\mathfrak{B} = \langle B, \cdot, +, \rightarrow, -, 0, 1\rangle\) that satisfy (1) \(0 \rightarrow a = 1\) and (2) \((a + b) \rightarrow c = (a \rightarrow c) \cdot (b \rightarrow c)\) for all \(a, b, c \in B\). The variety of pseudo-subordination algebras is term-equivalent to the variety of Boolean algebras with a binary modal operator. The connection between pseudo-subordination algebras and subordination algebras has been established in [4] and [2].
On the other hand, we have the subvariety of Boolean weak Heyting algebras. Recall that a weak Heyting algebra is an algebra \(\langle B, \cdot, +, \rightarrow, 0, 1\rangle\) such that its reduct \(\langle B, \cdot, +, 0, 1\rangle\) is a bounded distributive lattice, and the binary arrow operator satisfies the following equations: (1) \(a \rightarrow a = 1\), (2) \((a \rightarrow b) \cdot (a \rightarrow c) = a \rightarrow (b \cdot c)\), (3) \((a + b) \rightarrow c = (a \rightarrow c) \cdot (b \rightarrow c)\), (4) \((a \rightarrow b) \cdot (b \rightarrow c) \leq a \rightarrow c\). The class of weak Heyting algebras was defined in [3] and it corresponds to the strict implication fragment of the normal modal logic K, which is also known as the subintuitionistic local consequence of the class of all Kripke models. A Boolean weak Heyting algebra is an algebra \(\langle B, \cdot, +, -, \rightarrow, 0, 1\rangle\) such that \(\langle B, \cdot, +, \rightarrow, 0, 1\rangle\) is a weak Heyting algebra and \(\langle B, \cdot, +, -, 0, 1\rangle\) is a Boolean algebra.
The core concept of conditionality is closely related to the notion of necessity. Specifically, conditionals can be seen as expressions of relative necessity: "if A, then B" means that the proposition expressed by B is, in some sense, necessary with respect to the one expressed by A. From an algebraic perspective, each element \(a \in B\) of the Boolean algebra gives rise to a normal necessity operator \([a]\), defined as \([a](b) := a \rightarrow b\). This yields a family of modal operators indexed by the elements of the algebra (see [8, 10, 5]). It is known that modal operators on B are in a one-to-one correspondence with modally definable subordinations on B, as discussed in [2]. In the same way, it can be proved that conditional algebras that satisfy condition (C3) are in one-to-one correspondence with a subclass of weak extended contact algebras, introduced in [1], which in turn are a generalization of extended contact algebras, and can be seen as Boolean algebras endowed with a family of subordinations.
Our aim is to discuss representation and duality theorems for conditional algebras and use them to study significant subvarieties within this framework. We will also make comparisons with existing duality theorems in the literature for precontact algebras. Additionally, we will use the duality to establish correspondence and canonicity results.
This is joint work with Sergio Celani and Rafał Gruszczyński.
A map \(f : M \to N\) between two structures \(M\) and \(N\) is said to be a homomorphism when for every atomic formula \(\phi(x_1, \ldots , x_n)\) and \(a_1, \ldots , a_n \in M\), \(M \models \phi(a_1, \ldots , a_n)\) implies \(N \models \phi(f(a_1), \ldots , f(a_n))\). Positive model theory is the part of model theory that deals with the formulas that are preserved by homomorphisms (see, e.g., [9, 10, 11, 14, 15, 16]). It is well known that these coincide with the positive formulas, i.e., formulas built from atomic formulas and \(\bot\) using only \(\exists\), \(\wedge\), and \(\vee\).
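For orientation, a standard example (not from the abstract): in the language of graphs with one binary relation \(E\), the positive sentence \(\exists x \exists y\, E(x,y)\) is preserved by every homomorphism, since \(M \models E(a,b)\) implies \(N \models E(f(a),f(b))\). By contrast, the non-positive sentence \(\exists x\, \neg E(x,x)\) need not be preserved: the homomorphism collapsing a single loop-free edge onto a one-point graph with a loop maps a model of it onto a non-model.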
The Keisler-Shelah Isomorphism Theorem states that two structures are elementarily equivalent if and only if they have isomorphic ultrapowers. This celebrated result was first proved by Keisler under the generalized continuum hypothesis (GCH) [4, Thm. 2.4]. This assumption was later shown to be redundant by Shelah [12, p. 244]. The aim of this talk is to prove a version of Keisler’s original theorem in the context of positive model theory.
To this end, we say that two structures are positively equivalent when they have the same positive theory. Then we introduce a generalization of the ultraproduct construction that captures positive equivalence. We term this construction a prime product because it is obtained by replacing the index set I of an ultraproduct by a poset \(\mathbb{X}\) and the ultrafilter over I by a prime filter of the bounded distributive lattice of upsets of the poset \(\mathbb{X}\). The case of traditional ultraproducts is then recovered by requiring the order of \(\mathbb{X}\) to be the identity relation.
Prime products and positive formulas are connected by a positive incarnation of Łoś's Theorem. As a consequence, prime products preserve not only positive formulas, but also the universal closure of the implications between them, known as basic h-inductive sentences [11]. This allows us to describe the classes of models of h-inductive theories as those closed under isomorphisms, prime products, and ultraroots.
Our main result states that under GCH two structures have the same positive theory if and only if they have isomorphic prime powers of ultrapowers. The same result holds without GCH, provided that prime powers are replaced by prime products in the statement. Notably, the presence of ultrapowers cannot be removed from these theorems, as there exist positively equivalent structures without isomorphic prime powers. The results of this talk have been collected in the paper [8].
This is joint work with J.J. Wannenburg and K. Yamamoto.
We shall present an overview of recent results on the minimal undecidable fragments of modal and superintuitionistic predicate logics. In particular, we shall prove that every sublogic of QKTB, QGL, and Grz, as well as every sublogic of QKC, is undecidable in languages with a single predicate letter and two individual variables. We shall also prove analogous results for predicate logics determined by the Kripke frames with finitely many possible worlds validating QKTB, QGL, Grz, and QKC. (Here, QL denotes the minimal predicate extension of a propositional logic L.)
This is joint work with Mikhail Rybakov.
In this talk I will introduce a new notion of the degree of the finite model property (the degree of FMP) for intermediate and modal logics. This notion resembles that of the degree of incompleteness of modal logics introduced by Fine (1974). I will show that any countable cardinal, as well as the continuum, can be realized as the degree of FMP for some intermediate and transitive modal logic.
This is joint work with Guram Bezhanishvili and Tommaso Moraschini.
Provability logic is a modal logic in the usual propositional modal language with unary modal operator \(\Box\), in which \(\Box A\) is interpreted as provability of \(A\) in a fixed first-order mathematical theory. An early celebrated result, due to Robert Solovay (1976), classifies the Gödel-Löb logic \(\sf GL\) as the provability logic of all sufficiently strong mathematical first-order theories.
The natural question of characterizing the provability logic of Heyting Arithmetic \(\sf HA\) has been considered by Albert Visser since 1980 and has been left open as a major open problem ever since.
In this talk we go through a complete characterization and the decidability of the provability logic of \(\sf HA\). As one might expect, some new subjects show up, namely unification, projectivity, and admissible rules. Also, a Kripke-style semantics, which appears to be new, is provided for the purpose of arithmetical completeness. In this semantics, possible worlds are augmented with derivability in a simple modal system.
There is a lively debate in the current literature on epistemology on which type of ignorance may provide a moral excuse. A good candidate is the one in which an agent has never thought about or considered as true a proposition \(p\). From a logical perspective, no formal analysis was provided for ignorance as an excuse. We fill this gap by providing an original logical setting for modelling this type of ignorance. In particular, we introduce a sound and complete logic in which ignorance is expressed as a primitive modality. Semantically, the logic is characterized by Kripke semantics with possibly incomplete worlds, or, equivalently, Kripke models equipped with a three-valued valuation function. Moreover, to consider the conditions of a possible change of an agent’s ignorance, we will extend the setting to public announcement logic. First, we show that standard update procedures are not suitable for an adequate representation of scenarios involving excusable ignorance. Secondly, we define an original update procedure, which modifies the initial models by creating new worlds and accessibilities, instead of eliminating them. We exemplify the fruitfulness of this procedure via an intuitive example.
Orthologic was introduced by Goldblatt (1974) as a formalism intended to capture some of the logical properties of ortholattices — algebras satisfying the same laws as Boolean algebras, except possibly distributivity. In the same paper, the author introduced a Kripke semantics for orthologic, and a sound and faithful translation of orthologic into the modal logic system KTB, having some striking similarities to the Gödel-McKinsey-Tarski (GMT) translation between intuitionistic logic and the modal logic system S4. Later work by Miyazaki further highlighted these connections by developing a theory of "modal companions", connecting extensions of orthologic with normal extensions of KTB.
In this talk I will show that despite these similarities, the Goldblatt translation does not enjoy most of the nice properties of the GMT translation. Using elementary observations, I show that there can be no isomorphism between the lattices of extensions. Using the duality for ortholattices introduced by Goldblatt (1974) and Bimbo (2004), I also show that a classical theory of modal companions cannot be developed. In the end I will remark on the role of implication in these results, and some further developments pursued in this direction. I also comment on the consequences of these results for a broader understanding of translations into modal logic. This talk reports on work done in my master's thesis, supervised by N. Bezhanishvili and T. Moraschini.
Non-normal modal logics are so called because they do not validate some axioms or rules of the basic normal modal logic K. They have been studied since the seminal works by C.I. Lewis, Lemmon, Kripke, Segerberg, Scott, and Montague, and are applied in many areas, such as deontic, epistemic, multi-agent, and probabilistic reasoning. In this talk, we present an original semantics and a sequent calculus for non-normal modal logics. We first analyse the properties of the calculus, then we show how semantical properties, like the complexity of the satisfiability problem and the bounded model property, can be derived from the properties of the calculus. In the last part of the talk, we apply these results to some specific systems. This talk is based on joint works with Charles Grellois, Nicola Olivetti, Björn Lellmann, Sara Negri, Elaine Pimentel and Gian Luca Pozzato.
We analyze modal definability and axiomatizability of modal reasoning in commutative groups and vector spaces, showing where standard modal notions apply and where the fit is less ideal. Our positive results include several completeness theorems using techniques from hybrid logic. In the process, we also identify broader connections with complex algebras, substructural logics, and logics of (in-)dependence.
This talk is based on a recently published article, found here.
Turing introduced progressions of theories obtained by iterating the process of extension of a theory by its consistency assertion. Generalized Turing progressions can be used to characterize the theorems of a given arithmetical theory of quantifier complexity level \(\Pi^0_n\), for any specific \(n\). Such characterizations reveal a lot of information about a theory, in particular, yield consistency proofs, bounds on provable transfinite induction and provably recursive functions.
The conservativity spectrum of an arithmetical theory is a sequence of ordinals characterizing its theorems of quantifier complexity levels \(\Pi_1\), \(\Pi_2\), etc. by iterated reflection principles. We describe the class of all such sequences and show that it bears a natural structure of a Kripke frame similar to the one for the variable-free fragment of the polymodal logic GLP introduced by K. Ignatiev. Furthermore, this structure can also be considered as a natural algebraic model of a strictly positive modal logic, the reflection calculus with conservativity modalities.
Dependence logic was introduced by Väänänen (2007) as a novel formalism for reasoning about dependence and independence relations. The logic adopts the team semantics of Hodges (1997). The basic idea of team semantics is that dependency properties can only manifest themselves in multitudes, and thus formulas of dependence logic are evaluated on sets of assignments (called teams) instead of single assignments as in the usual Tarskian semantics.
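As a minimal sketch of the team-semantic idea (assuming the standard dependence atom \(=\!(x,y)\), which holds in a team exactly when any two assignments agreeing on \(x\) also agree on \(y\); the Python function below is only illustrative):

```python
# A team is a set of assignments, here modelled as a list of dicts.
# The dependence atom =(x, y) holds in a team iff the value of y is
# functionally determined by the value of x across the team.
def satisfies_dependence(team, x, y):
    determined = {}
    for s in team:
        if s[x] in determined and determined[s[x]] != s[y]:
            return False
        determined[s[x]] = s[y]
    return True

team = [{'x': 0, 'y': 1}, {'x': 0, 'y': 1}, {'x': 1, 'y': 2}]
print(satisfies_dependence(team, 'x', 'y'))                       # True
print(satisfies_dependence(team + [{'x': 0, 'y': 3}], 'x', 'y'))  # False
```

Note that any single assignment satisfies the atom trivially, which is why dependence only becomes visible at the level of teams.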
Teams are essentially relations, which are second-order objects. Dependence logic is known to be equivalent to existential second-order logic, and thus cannot be effectively axiomatized in full. In this talk, I will survey some recent developments in finding partial axiomatizations for dependence logic. In particular, I will also discuss the role of “negation” in this project.
Hodges famously said "model theory = algebraic geometry minus fields". However, the current practice of model theory, to a large extent, is still "set-based", and lacks a functorial approach — the latter is essential to modern algebraic geometry, since it provides us with very powerful geometric tools to study algebraic varieties. In this talk, I will discuss how to adapt a functorial language to model theory, give several examples to indicate why a categorical approach will conceptualise many classical constructions and theorems in model theory, and show how such a language leads to a duality result for first-order logic, with some applications included. The content of this talk is largely expository with few genuinely new results, except possibly for some partial observations and a coherent categorical narrative. However, I see this as an invitation to the development of a functorial model theory.
Uniform interpolation is a strengthening of interpolation in which the interpolant only depends on the premise or the conclusion of the implication. There are well-known logics with uniform interpolation, such as K and IPC. But not in all logics does interpolation imply this stronger version. The modal logics K4 and S4 are examples of this.
Why some logics with interpolation have the stronger property while other, seemingly quite similar, logics do not is not well understood, neither technically nor conceptually. Especially in proof theory there is the phenomenon of logics for which there is a perfectly constructive proof-theoretic proof of interpolation that does not extend to a proof of uniform interpolation. In this talk it will be shown how, for reductive sequent calculi, a proof-theoretic proof of interpolation does imply that the corresponding logic has uniform interpolation. Here a calculus is reductive if proof search in the calculus is terminating and it has some other desirable properties.
The motivation behind this research in universal proof theory is the search for methods to establish which logics do and which do not have good sequent calculi, meaning calculi with good structural properties. In the talk the connection between the technical part and this general aim will be explained.
It is customary to think that descriptions of material reality by different sciences form "layers of abstraction": chemistry is more high-level than physics, biology more high-level than chemistry, and so on. We introduce layered props: a diagrammatic framework that is able to handle situations with multiple levels of description, as well as translations between them. To motivate the formalism, we express two examples of scientific modelling as layered props: electrical circuits and retrosynthetic analysis in chemistry. We conclude by showing that the syntactic description of a layered prop can be interpreted naturally as a certain subcategory of pointed profunctors.
This is joint work (in progress) with Fabio Zanasi (UCL) and Ella Gale (Bristol).
The Dutch Association for Logic (VvL) will hold an annual in-person joint seminar organized by the University of Groningen. The event is inspired by the departmental logic seminars that are organized at each university, and aims to unify the universities for a collaborative seminar. Besides hosting a main speaker, the seminar will also be the location of the award ceremony of the VvL MSc Thesis Prize winners, who will give a short presentation of their thesis.
The Goldblatt-Thomason theorem provides a characterization of modally definable classes of Kripke frames. There is a version of this theorem for topological spaces with respect to the topological semantics interpreting the modal diamond as the closure operator. In this talk, we investigate the definability problems for topological spaces with respect to the topological derived-set operator. This is joint work with N. Bezhanishvili and D. Fernandez Duque.
Starting in the Thirties, Bruno de Finetti laid the groundwork for the development of subjective (also called Bayesian) probability in the form of a betting game: in his setting, the probability of an event is the amount that a rational agent is willing to bet on it. de Finetti also proved that this point of view on probability is consistent with the widely used axiomatic approach. Indeed, it is possible to define a notion of coherence for the choices of the bets on the events and to prove that a book is coherent if, and only if, it can be extended to a probability measure on the Boolean algebra generated by the events. Coherence is therefore a bridge between subjective and objective, axiomatic, probability.
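A textbook illustration of incoherence (not specific to this talk): if the agent posts betting rate \(0.7\) on an event \(A\) and rate \(0.5\) on its complement, then selling the agent both unit bets makes them pay \(0.7 + 0.5 = 1.2\) while receiving exactly \(1\) whether or not \(A\) occurs, a sure loss of \(0.2\). Coherence rules out such Dutch books and forces the rates to extend to a (finitely additive) probability measure.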
Another point of view on probability theory is the so-called frequentist approach. In this case, the probability of an event is defined by the frequency of that event based on previous observations. In this setting, it is common to define a statistical model that fits the observed data, and to derive the properties of the hidden probability distribution in this way. Perhaps surprisingly, subjective probability is related to frequentist probability again via de Finetti’s work, and in particular via the notion of exchangeability that, loosely speaking, shows how statistical models appear in a Bayesian framework, and how probabilities can come from statistics.
In this talk, we discuss the results of [1, 2]. In particular, we discuss the notions of coherence and exchangeability, and give versions of de Finetti's celebrated theorems in a peculiar infinitary logic that expands the well-known Łukasiewicz logic.
The “happened-before” relation is an order relation over events playing a special role in formal verification. Pomset logic handles this relation logically, that is, as a proper connective of the logic. However, the proof-theoretical tools developed for this logic are limited to series-parallel orders.
In this talk I will present proof systems operating on graphs instead of formulas to overcome the restrictions coming from the use of in-line formulas. To this aim, I will recall some notions from graph theory and explain the proof theoretical methodologies required to prove analyticity and conservativity in this formalism.
Uniform interpolation is a property of logics that is stronger than the well-known Craig interpolation property, in the sense that a uniform interpolant only depends on the antecedent or the conclusion of the implication. It has close connections to so-called bisimulation quantification in Kripke models. Proof-theoretic methods proving uniform interpolation have been successfully applied to logics described by terminating sequent calculi. In this talk we turn to nested sequents. We reprove the uniform interpolation property for normal modal logics K, D, and T via terminating nested sequents. While our method of defining the interpolants is proof-theoretic, we use semantic notions including bisimulation modulo propositional variables to prove their correctness. We are also able to adjust our method to provide a direct proof of uniform interpolation for S5 using hypersequents.
This is joint work with Raheleh Jalali and Roman Kuznets.
A surprising result of Pitts (1992) says that propositional quantifiers are definable internally in intuitionistic propositional logic (IPC) - this result is commonly known as the uniform interpolation theorem for IPC. We recently completed a formalization of Pitts' result in the Coq proof assistant, and thus a verified implementation of the propositional quantifiers. Our Coq formalization in addition allowed us to extract an OCaml program that computes propositional formulas that realize intuitionistic versions of ∃p φ and ∀p φ from p and φ, and runs quickly on examples.
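Two standard toy instances of what such a program computes (not taken from the paper itself):
\[
\exists p\, \bigl(p \wedge (p \rightarrow q)\bigr) \;\equiv\; q,
\qquad
\forall p\, (p \vee q) \;\equiv\; q,
\]
where in each case the quantifier can be realized, up to IPC-equivalence, by the \(p\)-free formula \(q\).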
In this talk I will give some background for the result, and I will make some points about its proof that we clarified during its implementation, which also give some new insights about the "pen-and-paper" proof. In our implementation, as in Pitts' original proof, the propositional quantifiers are defined by a mutual induction based on Dyckhoff's proof calculus LJT. The correctness of the construction is then proved by an induction on the size of an LJT-proof. The techniques used and lessons learned from our implementation may also be of more general interest for verified implementations of other proof-theoretic results.
This is joint work with Hugo Férée (IRIF, Université Paris Cité).
The states of a deterministic finite automaton can be identified with collections of words, each state corresponding to the words arriving at that state when processed by the automaton. But words can be ordered, and among the many possible orders a very natural one is the co-lexicographic order, because it suggests a transfer of the order from words to the automaton's states. This suggestion is, in fact, concrete: in a number of papers, automata admitting a total ordering of states coherent with the co-lex ordering of the sets of words reaching them have been proposed, studied, and implemented. Such a class of ordered automata—the Wheeler automata—turned out to be efficiently storable and searchable; moreover, the sets of words reaching each of their states have been proved to be intervals of the co-lexicographic order, implying some nice properties on the languages accepted, determinization, and the like.
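Concretely, under one common convention the co-lexicographic order compares words from their last character backwards, i.e. it is the lexicographic order of the reversed words; a small illustrative sketch:

```python
# Co-lexicographic order: sort words by comparing them right-to-left,
# i.e. lexicographically on their reversals. Wheeler automata ask for a
# total order on states compatible with this order on the words reaching them.
words = ['ba', 'ab', 'aab', 'b', 'a', 'bb']
colex = sorted(words, key=lambda w: w[::-1])
print(colex)  # ['a', 'ba', 'b', 'ab', 'aab', 'bb']
```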
Unfortunately not all automata can be totally ordered as previously outlined. However, automata can always be partially ordered using the co-lexicographic order over words, and an intrinsic measure of their complexity can be defined and effectively determined, as the minimum width of one of their admissible partial orders. We will show that this new concept of width of an automaton has useful consequences.
This talk is an overview of several work done in collaboration with
Transitive Closure Logic (TCL) enriches the language of first order logic with a recursive operator, expressing the transitive closure of a binary relation. Cyclic sequent calculi proofs for this logic have been introduced in the literature. In this talk, I will present a new cyclic proof system for TCL, which employs hypersequents. The main motivation behind this work is that, differently from sequent TCL proofs, hypersequent proofs for TCL successfully simulate cyclic proofs in the transitive fragment of Propositional Dynamic Logic (PDL+), without using the non-analytic rule of cut.
After introducing the cyclic hypersequent calculus for TCL, I will present two main results. First, I will sketch a proof of soundness for the calculus, which requires a more involved argument than the one used in traditional cyclic proofs. Second, I will show how PDL+ cyclic proofs can be translated into TCL hypersequent proofs. This result allows us to lift the completeness result for cyclic PDL+ to the hypersequent calculus.
This talk is based on joint work with Anupam Das.
Cyclic proof systems permit derivations that are finite graphs rather than conventional derivation trees. The soundness of such proofs is ensured by a condition on the infinite paths through the derivation graph, known as the global trace condition. The usual proof strategy for showing soundness of a cyclic proof system is highly classical. If the cyclic proof system is that of an intuitionistic theory, such as Heyting arithmetic, a constructive soundness proof would be preferable.
We present ongoing work on a procedure which translates proofs in a cyclic proof system of Heyting arithmetic to proofs in a standard proof system of the same. We believe that this procedure is not only effective but also that its correctness can be verified constructively. If this is the case, combining it with the standard soundness proof for Heyting arithmetic would yield a constructive soundness proof for the cyclic proof system of Heyting arithmetic. Much of the procedure deals with combinatorial aspects of cyclic proofs, rather than Heyting arithmetic specifically, and should thus be transferable to other theories to obtain analogous results.
One of the foundational results of (classical) finite model theory, due to Ronald Fagin, shows that every sentence of a finite relational language is either almost surely true or almost surely false over finite structures. This has been called a 0-1 law. This talk intends to contribute to the finite model theory of many-valued logics by generalizing this 0-1 law to first-order finite models valued over finite MTL-chains. We will prove that for any first-order (or infinitary with finitely many variables) formula Phi, there is a truth-value that Phi takes almost surely in every finite many-valued model, and such that every other truth-value is almost surely not taken. This result also generalizes a theorem for Łukasiewicz logic due to Robert Kosik and Christian G. Fermüller. (This talk is based on joint work with Guillermo Badia.)
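To fix notation (mine, for illustration): writing \(\mu_n(\varphi)\) for the fraction of structures with universe \(\{1, \dots, n\}\) satisfying a sentence \(\varphi\), Fagin's classical 0-1 law states

\[
\lim_{n \to \infty} \mu_n(\varphi) \in \{0, 1\} \quad \text{for every first-order sentence } \varphi \text{ over a finite relational vocabulary.}
\]

The many-valued generalization described above replaces the two classical truth-values by the elements of a finite MTL-chain: for each sentence there is a single value that is taken with asymptotic probability 1.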
Stone’s representation theorem for Boolean algebras gives a bridge between algebra and topology in the form of a categorical duality. In his PhD thesis, de Vries generalized this duality to a duality between compact Hausdorff spaces and what are nowadays called de Vries algebras, which are structures that encode the set of regular open sets of a given compact Hausdorff space. One drawback of the category of de Vries algebras is that the composition of morphisms is not function composition. We propose to work with relations (rather than functions) as morphisms between de Vries algebras: this has the advantage that the composition of morphisms is usual relation composition. Moreover, this approach allows for an extension of de Vries duality to a duality for the category of compact Hausdorff spaces and closed relations between them.
We consider generalisations of Kripke's semantics for intuitionistic logic adequate for a range of constructive, contractionless logics. These semantics are obtained partly through algebraic insights gleaned from Jipsen and Montagna's work on poset products, as well as from more recent work by others, including e.g. Castiglioni and Zuluaga. We compare these approaches to other so-called "relational" approaches from the literature, and consider some problems left open in this line of research.
We introduce a cut-free sequent calculus for the alternation-free fragment of the modal mu-calculus. This system allows both for infinite and for finite, circular proofs and uses a simple focus mechanism to control the unravelling of fixpoints along infinite branches. We show that the proof system is sound and complete for the set of valid guarded formulas of the alternation-free mu-calculus. As an application we use the system to show that the alternation-free mu-calculus is closed under Craig interpolation.
(joint work with Yde Venema)
In this talk, I discuss Kleene algebra as a framework for program verification, and in particular focus on a concurrent extension of NetKAT (CNetKAT). NetKAT is an algebraic theory used to describe network behaviour, and CNetKAT extends it with operators for specifying and reasoning about concurrency in scenarios where multiple packets interact through state. More generally, CNetKAT can be understood as an algebraic framework for reasoning about programs with both local state (in packets) and global state (in a global store). The talk will cover some of the syntax and semantics of CNetKAT through an example, and present a soundness and completeness result. The remaining time will be spent highlighting some of the issues encountered in creating CNetKAT, and in particular the general issues surrounding extensions of KA with concurrency.
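To give a flavour of the plain (non-concurrent, dup-free) NetKAT fragment mentioned above, here is a small Python sketch of its standard packet semantics: a packet is a record of fields, a test keeps or drops it, a modification rewrites a field, and policies are combined by union and sequential composition (Kleene star is omitted). This is only a hedged illustration of textbook NetKAT, not of the CNetKAT extension with a global store; the helper names and the example policy are invented.

    # Packets are hashable tuples of (field, value) pairs;
    # a policy maps a packet to a set of packets.
    def get(pkt, field):
        return dict(pkt)[field]

    def set_field(pkt, field, value):
        d = dict(pkt)
        d[field] = value
        return tuple(sorted(d.items()))

    def test(field, value):        # f = n : keep the packet iff the field matches
        return lambda pkt: {pkt} if get(pkt, field) == value else set()

    def assign(field, value):      # f <- n : rewrite the field
        return lambda pkt: {set_field(pkt, field, value)}

    def union(p, q):               # p + q : nondeterministic choice
        return lambda pkt: p(pkt) | q(pkt)

    def seq(p, q):                 # p ; q : sequential composition
        return lambda pkt: {out for mid in p(pkt) for out in q(mid)}

    # Example: forward packets arriving at port 1 to port 2, drop everything else.
    policy = seq(test("port", 1), assign("port", 2))
    pkt = tuple(sorted({"dst": 10, "port": 1}.items()))
    print(policy(pkt))             # -> {(('dst', 10), ('port', 2))}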
In this talk, I will try to present several results concerning the monadic second-order (MSO) theory of (labelled) countable linear orders. This theory is known to be decidable since the results of Rabin (1969) and Shelah (1975). After setting up the more recent algebraic framework of countable monoids, I will give a high-level intuition of how it can be used to answer several questions about this theory.
Propositional provability logics capture, in a sense, the always-provable or always-true principles that concern the formalised provability predicate. Since Solovay we know that all sound theories of some minimal strength have the same logic: the PSPACE-complete logic GL (Gödel-Löb). The hope arose that something similar could be obtained for predicate provability logics. Through beautiful results by Montagna, Artemov, Vardanyan and McGee, among others, it turned out that the situation is as ghastly as it could possibly be and the resulting logics are highly undecidable or even non-arithmetical. Furthermore, various results by Vardanyan, Berarducci and Visser-de Jonge have shown that, in a sense, this wild behaviour is pretty stable, as the influential paper “No escape from Vardanyan’s Theorem” indeed suggests.
Recently we wrote a paper, “An escape from Vardanyan’s Theorem”, where we carve out a decidable fragment of quantified provability logic that is sound and complete in a certain sense. The key move is switching to so-called strictly positive fragments. In these fragments, we consider statements of the form “A entails B” where A and B allow only for predicate symbols, the logical constant true, conjunctions, universal quantifications and consistency statements. It is a known phenomenon that strictly positive fragments typically lower the complexity of the logic, mostly from PSPACE to PTIME. In our case we drop from Π2-complete all the way back to decidable. We will present the results and comment on some proof techniques. Near the end of the talk we will comment on work in progress where, instead of one consistency statement, we wish to allow for an arbitrary sequence of consistency statements of increasing strength.
This is joint work with Ana de Almeida Borges: https://arxiv.org/abs/2102.13091
Composing monads via distributive laws is tricky business, as too often small mistakes are overlooked. After failing to find a distributive law for the list monad over itself, I started proving that finding such a distributive law is impossible. At the same time, Dan Marsden was studying the famous counterexample by Gordon Plotkin that probability does not distribute over non-determinism. Together we developed an algebraic method to find and generalise such counterexamples, resulting in our no-go theorems. In this talk I will explain the main ideas behind our method, illustrated by a proof that 'plus does not distribute over times'. Then, I will highlight some crucial steps in our method, which tell us which types of monads are at "high risk" of failing to compose with other monads. Lastly (time permitting), I will say a few words about my current research on combining monads with guarded recursion.
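The no-go theorem itself is monad-theoretic, but the algebraic intuition behind 'plus does not distribute over times' can be stated in one line (an elementary analogue, not the theorem):

\[
a \times (b + c) = (a \times b) + (a \times c) \text{ holds in } \mathbb{N}, \quad \text{whereas} \quad a + (b \times c) = (a + b) \times (a + c) \text{ fails, e.g. } 1 + (2 \times 3) = 7 \neq 12 = (1 + 2) \times (1 + 3).
\]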
It is well-known that for any regular language there exists a unique minimal DFA accepting it. Establishing a similar result for NFA is more difficult, but of great practical importance, as NFA can be exponentially more succinct than DFA. The main issue is that a regular language can be accepted by several size-minimal NFA that are not isomorphic. A number of sub-classes of NFA have been identified in the literature to tackle this issue, which all admit canonical representatives. In this paper we provide a general categorical framework based on distributive laws that unifies constructions of canonical NFA and unveils new ones.
I present a generic partition refinement algorithm that quotients coalgebraic systems by behavioural equivalence, an important task in system analysis and verification. Coalgebraic generality allows us to cover not only classical relational systems but also, e.g., various forms of weighted systems, and furthermore to flexibly combine existing system types. Under assumptions on the type functor that allow representing its coalgebras in terms of nodes and edges, the algorithm runs in time O(m log n) for input coalgebras with n states and m edges. The generic complexity result and the possibility of combining system types yield a toolbox for efficient partition refinement algorithms. Instances of our generic algorithm match the run-time of the best known algorithms for unlabelled transition systems, Markov chains, deterministic automata, Segala systems, and for color refinement. For weighted tree automata, the instance of the generic algorithm even improves on the best known run-time. The algorithm is implemented in a tool (CoPaR) in full generality, which allows the composition of existing system types at run-time.
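For orientation, here is a naive quadratic partition-refinement sketch in Python that quotients an unlabelled transition system by bisimilarity; it is only a baseline illustration, not the O(m log n) algorithm of the talk, which additionally relies on a 'process the smaller half' strategy and on the coalgebraic encoding of the system type. The example system is invented.

    def refine(states, succ):
        # Start from the trivial partition: every state in the same block.
        block = {s: 0 for s in states}
        while True:
            # Signature of a state: the set of blocks its successors land in.
            sig = {s: frozenset(block[t] for t in succ[s]) for s in states}
            # Two states stay together iff their current block and signature agree.
            index, new_block = {}, {}
            for s in states:
                key = (block[s], sig[s])
                new_block[s] = index.setdefault(key, len(index))
            if new_block == block:
                return block            # partition is stable: this is bisimilarity
            block = new_block

    # Toy example: states 0 and 1 are bisimilar, state 2 (a deadlock) is not.
    print(refine([0, 1, 2], {0: [1], 1: [0], 2: []}))   # -> {0: 0, 1: 0, 2: 1}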
We consider Horn formula equations, which are special existential second-order formulas. Our main result states that Horn formula equations are valid iff there are solutions expressible in least fixed-point logic. These canonical solutions correspond to the least and greatest solution in the Horn and dual-Horn case, respectively. As applications and corollaries of the fixed-point theorem we obtain new results and simpler proofs of existing results in second-order quantifier elimination, program verification and proof theory.
This is joint work with Stefan Hetzl, see also https://arxiv.org/abs/2109.04633.
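A familiar special case from program verification (my toy illustration, not an example taken from the paper): the Horn clauses Init(x) → X(x), X(x) ∧ Trans(x,y) → X(y), and X(x) → ¬Bad(x) ask for an inductive invariant X; their least solution is the set of reachable states, and that solution is definable in least fixed-point logic, e.g.

\[
X_{\mathrm{lfp}}(y) \;=\; \bigl[\mathrm{lfp}_{X,x}\ \mathrm{Init}(x) \vee \exists z \, (X(z) \wedge \mathrm{Trans}(z, x))\bigr](y).
\]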
Recently, Baltag & van Benthem introduced a logic of functional dependence (LFD) with atomic dependence statements and dependence quantifiers, which is interpreted over generalised assignment models. Indeed, the logic is closely related to the logic of Cylindric Relativized Set Algebras (CRS): CRS quantifiers and dependence quantifiers are interdefinable. It follows that LFD can be seen as an extension of CRS with dependence atoms. Decidability was shown by Baltag & van Benthem by a detour via quasi-models. However, this essentially involved an infinite construction, leaving the finite model property (fmp) as an open problem. We solve this by showing that LFD has the fmp, using Herwig's theorem on extending partial isomorphisms. Furthermore, we introduce dependence bisimulations and prove a van Benthem-style characterization theorem, characterizing LFD as the largest fragment of FOL invariant under this notion.
(These results are the culmination of a project I did under the supervision of Alexandru Baltag.)
https://link.springer.com/article/10.1007/s10992-020-09588-z
https://arxiv.org/abs/2109.08313
I will first explain STV counting and the parlous state of the computer-counting code implemented by various Election Commissions around Australia. I will then explain how we used Coq to specify a "vanilla" version of STV as a proof calculus, and used it to extract a computer program which not only counts votes according to this specification but also produces a certificate during the count. The specification of the certificate is derived from the counting rules. We have proved, in Coq, that if the certificate is correct with respect to its specification, then the result it encapsulates must be correct with respect to the relevant specification of STV. The certificate is designed so that an average third-year computer science student could write a computer program to check its correctness. In particular, each political party could hire their own programmer to easily scrutinise the count produced by any computer program, including our own, that produces such certificates. The only caveat is that we require the publication of all ballots.
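For readers who have not seen STV before, the Python sketch below shows one heavily simplified counting step of a "vanilla" scheme: compute the Droop quota, elect anyone who reaches it, otherwise eliminate the weakest candidate. It is my own toy illustration, it ignores surplus transfers and tie-breaking (precisely the places where real-world STV variants differ and where certificates earn their keep), and it is not the Coq specification from the talk.

    from collections import Counter

    def droop_quota(num_ballots, seats):
        # Smallest number of votes such that at most `seats` candidates can reach it.
        return num_ballots // (seats + 1) + 1

    def count_step(ballots, seats):
        # Ballots are preference lists of candidate names; count first preferences.
        quota = droop_quota(len(ballots), seats)
        tallies = Counter(b[0] for b in ballots if b)
        elected = [c for c, v in tallies.items() if v >= quota]
        if elected:
            return ("elect", elected, quota)
        # Nobody reaches the quota: eliminate the candidate with the fewest votes.
        return ("eliminate", min(tallies, key=tallies.get), quota)

    ballots = [["A", "B"], ["A", "C"], ["B", "A"], ["C", "B"], ["A", "B"]]
    print(count_step(ballots, seats=2))   # -> ('elect', ['A'], 2)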
In this talk, we propose a general method to "decode" (propositional) intuitionistic logic and various intermediate logics as (dynamic) epistemic logics of knowing how. Our approach is inspired by some scattered ideas hidden in the vast literature of math, philosophy, CS, and linguistics about intuitionistic logic in the past 100 years, which echoed Heyting's initial (but forgotten?) conception of intuitionistic truth as "knowing how to prove". The core technical idea is to use a bundled know-how modality to unify the formalized Brouwer–Heyting–Kolmogorov interpretation and the Kripke semantics of intuitionistic logic. As we will see, this approach can reveal the complicated dynamic-epistemic information hidden behind the innocent-looking connectives, while making intuitionistic logic intuitive. With the natural but precise epistemic readings of intuitionistic formulas, various technical results about intuitionistic logic and intermediate logics become transparent, if not trivial. If time permits, we show how to decode inquisitive logic and some version of dependence logic as epistemic logics in order to demonstrate the power of this general approach, which could also be a meeting point of various research themes at ILLC.
(The talk is partly based on the joint work with Haoyu Wang and Yunsong Wang)
I will present recent results establishing decidability and complexity upper bounds for many axiomatic extensions of FLec and FLew. The substructural logics FLec and FLew are obtained from the sequent calculus for intuitionistic propositional logic (FLecw) by omitting the weakening rule and contraction rule respectively. This work can be seen as extending Kripke's decidability argument (1959) and Urquhart's upper bound (1999) for FLec. A notable consequence is a first upper bound for the basic fuzzy logic MTL.
Joint work with A. R. Balasubramanian and Timo Lang. See https://arxiv.org/abs/2104.09716 for details.
While modal logics are robustly decidable, first-order modal logics (FOML) are notoriously undecidable, in the sense that modal extensions of decidable fragments of first-order logic are undecidable. For example, the two-variable fragment of FOML is undecidable.
Despite such discouragement, this century has seen several positive results on new fragments, such as the monodic fragment, bundled fragment, term-modal logic etc. We give an overview of this exciting area of research, including our own results.
The results presented are joint with Anantha Padmanabha.
Various problems arising in the context of example-driven approaches to conjunctive query discovery turn out to be intimately related to basic structural properties of the homomorphism lattice of finite structures, such as density, or the existence of duals. I will discuss some of these connections, and highlight relevant results from recent work together with Victor Dalmau. See also https://arxiv.org/abs/2008.06824.
Cyclic and non-well-founded proofs have turned out to be very useful tools in the proof theory of modal fixpoint logics. However, most of the known results in this area only treat a particular modal fixpoint logic at once. This is in contrast to basic modal logic, where much more general proof-theoretic methods exist. In this talk I will outline a first attempt to combine these existing general methods with non-well-founded proof theory in order to uniformly treat multiple modal fixpoint logics.
More precisely, I will build on a technique presented by Ori Lahav (LICS 2013) for constructing cut-free hypersequent calculi for basic modal logics characterised by frames satisfying so-called 'simple' first-order conditions. I investigate the generalisation of this technique to a relatively inexpressive modal fixpoint language, namely unimodal logic with the master modality (a.k.a. the reflexive-transitive closure modality). Soundness is obtained for all simple frame conditions, completeness only for those that will be called 'equable'.
Talks from 2020 and earlier can also be found on the Algebra|Coalgebra seminar website.
The seminar is currently organized by Benno van den Berg, Nick Bezhanishvili, Marianna Girlando and Yde Venema. If you're interested in giving a talk at the seminar or have any questions regarding the seminar, please contact Marianna at m.girlando@uva.nl.
If you want to subscribe to or unsubscribe from the LLAMA seminar mailing list, you can do so here. Please heed the warning about not re-using a valuable password.
The LLAMA seminar, under the Institute for Logic, Language and Computation (ILLC), located at Science Park 107, 1098 XG Amsterdam, is responsible for the processing of personal data as shown in this privacy statement.
The LLAMA seminar does not process personal data via this website (https://events.illc.uva.nl/llama), because it is not possible to leave personal data behind. We also do not use social media plugins.
The LLAMA seminar does not take decisions based on automated processing related to this website on matters that can have (significant) consequences for individuals, e.g., decisions taken by computer programs or systems, without involving a person (for example an organiser of the LLAMA seminar).
The LLAMA seminar does not use cookies or similar techniques on this website.
The LLAMA seminar website has been set up by Tobias Kappé.