
From Theory to Practice: Explaining Logical (Non-)Inferences for OWL Ontologies

In theory, logic-based AI is explainable by design, since every inference can be explained using a logical proof or a counterexample. In practice, understanding the inferences performed by an automated reasoner can still be challenging for end-users. This talk focuses on reasoning with ontologies formulated in OWL and description logics, and discusses several problems related to explanations: in particular, how to compute “nice” proofs that explain logical entailments, and how to perform signature-based abduction to explain missing entailments. We will look at the theoretical complexity of these problems as well as at practical aspects, and see how these and other types of explanations are implemented in our tool Evee, presented at this year’s KR conference.
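To make the setting concrete, here is a minimal sketch (not Evee’s own interface) of explaining an OWL entailment with the OWL API, the HermiT reasoner, and the OWL API’s black-box justification tools: it checks whether a subsumption is entailed, and then computes justifications, i.e. minimal sets of ontology axioms from which the entailment follows, by reducing the subsumption `Professor ⊑ Employee` to the unsatisfiability of `Professor ⊓ ¬Employee`. The ontology file and class IRIs are hypothetical placeholders.

```java
import java.util.Set;

import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

import com.clarkparsia.owlapi.explanation.BlackBoxExplanation;
import com.clarkparsia.owlapi.explanation.HSTExplanationGenerator;

public class JustificationDemo {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager man = OWLManager.createOWLOntologyManager();
        // Hypothetical ontology document; replace with your own.
        OWLOntology ont = man.loadOntologyFromOntologyDocument(
                IRI.create("file:university.owl"));
        OWLDataFactory df = man.getOWLDataFactory();

        // Hypothetical class names, for illustration only.
        String ns = "http://example.org/uni#";
        OWLClass professor = df.getOWLClass(IRI.create(ns + "Professor"));
        OWLClass employee  = df.getOWLClass(IRI.create(ns + "Employee"));

        ReasonerFactory rf = new ReasonerFactory(); // HermiT
        OWLReasoner reasoner = rf.createReasoner(ont);

        // Step 1: is Professor ⊑ Employee entailed by the ontology?
        OWLAxiom entailment = df.getOWLSubClassOfAxiom(professor, employee);
        System.out.println("Entailed: " + reasoner.isEntailed(entailment));

        // Step 2: justifications for the entailment. The subsumption holds
        // iff Professor ⊓ ¬Employee is unsatisfiable, so we explain that.
        BlackBoxExplanation single = new BlackBoxExplanation(ont, rf, reasoner);
        HSTExplanationGenerator gen = new HSTExplanationGenerator(single);
        OWLClassExpression unsat = df.getOWLObjectIntersectionOf(
                professor, df.getOWLObjectComplementOf(employee));
        Set<Set<OWLAxiom>> justifications = gen.getExplanations(unsat);
        justifications.forEach(j -> System.out.println("Justification: " + j));
    }
}
```

A justification names the responsible axioms but not the reasoning steps between them; the “nice” proofs discussed in the talk go further by arranging such steps into a proof that a user can follow.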

This post is licensed under CC BY 4.0 by the author.