
Logical Characterizations of Recurrent GNNs

Graph neural networks (GNNs) are a popular formalism for machine learning on graphs. In 2019, Barceló et al. showed that, relative to first-order logic, the expressive power of GNNs with a constant number of iterations is identical to that of graded modal logic, and thus to the description logic ALCQ. In this talk, I will report on our NeurIPS 2024 paper with Veeti Ahvonen, Damian Heiman, and Antti Kuusisto, in which we consider recurrent GNNs and provide exact logical characterizations in two scenarios: (1) the setting with floating-point numbers and (2) the setting with real numbers. For floats, the formalism matching recurrent GNNs is a blend of ALCQ and Datalog, while for reals we use a suitable infinitary version of ALCQ. These results give exact matches between logics and GNNs in the recurrent setting without relativizing to a background logic, using only some natural assumptions about floating-point arithmetic.

We also prove that, relative to graph properties definable in monadic second-order logic (MSO), our infinitary and Datalog-based versions of ALCQ are equally expressive. This implies that recurrent GNNs with reals and floats have the same expressive power over MSO-definable properties, and it shows that, for such properties, recurrent GNNs with reals are also characterized by the Datalog-based version of ALCQ.
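To make the recurrent setting concrete, here is a minimal Python sketch of an aggregate-combine GNN that applies the same update rule in every round; sharing the update across rounds is what distinguishes recurrent GNNs from fixed-depth ones. This is purely illustrative and not code from the paper: the names, the choice of sum aggregation, and the truncated ReLU are my assumptions.

```python
from typing import Dict, List

Graph = Dict[int, List[int]]  # node -> list of neighbours

def matvec(m: List[List[float]], v: List[float]) -> List[float]:
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def recurrent_gnn(graph: Graph,
                  init: Dict[int, List[float]],
                  w_self: List[List[float]],
                  w_nbr: List[List[float]],
                  rounds: int) -> Dict[int, List[float]]:
    """Apply the SAME aggregate-combine update in every round; the
    parameters w_self and w_nbr do not depend on the round number."""
    dim = len(next(iter(init.values())))
    state = dict(init)
    for _ in range(rounds):
        new_state = {}
        for v, nbrs in graph.items():
            # Sum-aggregate neighbour states; counting neighbours is what
            # the graded modalities of ALCQ capture on the logic side.
            agg = [sum(state[u][k] for u in nbrs) for k in range(dim)]
            pre = [a + b for a, b in zip(matvec(w_self, state[v]),
                                         matvec(w_nbr, agg))]
            # Truncated ReLU keeps feature values inside [0, 1].
            new_state[v] = [min(1.0, max(0.0, x)) for x in pre]
        state = new_state
    return state

# Example: a 3-cycle where node 0 starts "marked"; with identity
# self/neighbour weights, the mark spreads to all nodes round by round.
cycle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
features = {0: [1.0], 1: [0.0], 2: [0.0]}
print(recurrent_gnn(cycle, features, [[1.0]], [[1.0]], rounds=2))
```

Because the number of rounds is an input rather than a fixed constant, the matching logics need a recursion mechanism as well, which is where the Datalog-based and infinitary versions of ALCQ come in.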

This post is licensed under CC BY 4.0 by the author.