Decision trees are an effective and concise way of conveying information, readily understood by virtually everyone regardless of background. Given the recent interest in explainable AI and related fields, decision trees stand out as a popular choice. From the algorithmic side, the unique structure of decision trees is interesting since it may be exploited to obtain algorithms that are far more efficient than structure-oblivious approaches.

In this talk, I will give an overview of our research on leveraging the decision tree structure from three distinct angles: 1) machine learning with constraints, where the goal is to construct the optimal regression/decision tree representing tabular data whilst respecting different types of constraints, such as fairness; 2) controller/policy verification, where the aim is to verify that a decision tree controller satisfies desired properties in continuous time; and 3) explaining the unsatisfiability of a combinatorial optimisation problem on a graph, by representing proofs of unsatisfiability as a tree using graph-specific concepts. We show that for each of these problems, exploiting the decision tree structure is key to obtaining orders-of-magnitude runtime improvements and/or improved interpretability.

The talk summarises about half a dozen of our papers (AAAI’21/24, JMLR’22, NeurIPS’22/23, ICML’23/24) and is meant to be accessible to all backgrounds, with plenty of time for discussion!