Nash equilibria have been central to game theory since von Neumann, but how fast can we compute them? For zero-sum matrix games, the best known upper bound is achieved by letting a minimizing and a maximizing online learning algorithm play against each other. But is there a better way? I will present recent insights into why all existing techniques for proving lower bounds must fail, as well as the first non-trivial (but still far from optimal) lower bound.
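The self-play upper bound can be sketched concretely. Below is a minimal illustration (not the construction from the paper) in which two multiplicative-weights learners play a zero-sum matrix game using only first-order feedback, and their time-averaged strategies approximate a Nash equilibrium; the step size `eta` and horizon `T` are illustrative choices.

```python
import numpy as np

def mw_self_play(A, T=5000, eta=0.05):
    """Two multiplicative-weights learners play the zero-sum game with payoff
    matrix A (row player minimizes x^T A y, column player maximizes it).
    Returns the time-averaged strategies, which approximate a Nash equilibrium."""
    m, n = A.shape
    x = np.ones(m) / m          # row player's mixed strategy
    y = np.ones(n) / n          # column player's mixed strategy
    x_sum, y_sum = np.zeros(m), np.zeros(n)
    for _ in range(T):
        x_sum += x
        y_sum += y
        # each player only sees first-order feedback: the gradient A y or A^T x
        loss_x = A @ y           # row player wants these entries small
        gain_y = A.T @ x         # column player wants these entries large
        x = x * np.exp(-eta * loss_x); x /= x.sum()
        y = y * np.exp(eta * gain_y); y /= y.sum()
    return x_sum / T, y_sum / T

def exploitability(A, x, y):
    """Duality gap: total amount the players could gain by best-responding.
    This is zero exactly at a Nash equilibrium."""
    return (A.T @ x).max() - (A @ y).min()
```

For example, on rock-paper-scissors (payoff matrix with entries in {-1, 0, 1}) the averaged strategies come out close to the uniform Nash equilibrium, with exploitability shrinking at the standard no-regret rate.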

This talk is based on the following joint paper:

H. Hadiji, S. Sachs, T. van Erven and W. M. Koolen. Towards Characterizing the First-order Query Complexity of Learning (Approximate) Nash Equilibria in Zero-sum Matrix Games. Advances in Neural Information Processing Systems (NeurIPS), vol. 36, pp. 13356-13373, 2023.