Graph Neural Networks (GNNs) are deep learning models for graphs and networks. Despite the widespread popularity of GNNs in practice, the mathematical principles underlying the design and analysis of these models are still not well understood. This naturally calls for a first-principles approach to machine learning on graphs, rooted in classical graph theory, as opposed to adapting existing machine learning techniques developed for other data domains, such as vectors, text, and images.
In this talk, we describe some of our key results toward establishing graph-theoretic foundations for graph learning. We begin with our earlier results, which showed, for the first time, a deep connection between the expressive power of message-passing GNNs and the fundamental Graph Isomorphism problem. We highlight the key role played by mathematical logic, in particular counting logics on graphs, in extending this connection to obtain scalable GNN models. Finally, we describe our work on graph homomorphism numbers, a much broader framework for interpreting learning on graphs. This connection forms the starting point for analyzing the generalization properties of GNNs.
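For orientation, the classical equivalences underlying these connections can be sketched as follows; this is the standard chain of results from the literature, not necessarily the exact formulation used in the talk:

\[
  G \equiv_{\mathrm{MPNN}} H
  \;\Longleftrightarrow\;
  G \equiv_{1\text{-}\mathrm{WL}} H
  \;\Longleftrightarrow\;
  G \equiv_{C^2} H
  \;\Longleftrightarrow\;
  \hom(T, G) = \hom(T, H) \ \text{for every tree } T,
\]

where $\equiv_{\mathrm{MPNN}}$ denotes indistinguishability by all message-passing GNNs, $1$-WL is the one-dimensional Weisfeiler-Leman (color refinement) test, $C^2$ is two-variable first-order logic with counting quantifiers, and $\hom(T, G)$ is the number of homomorphisms from $T$ to $G$.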