Flows of probability measures for mean-field optimization problems
Abstract
This thesis studies flows of probability measures and discrete-time iterative schemes
for solving mean-field optimization problems, including minimization tasks and
min-max games, with applications in machine learning. In the single-agent optimization setting, we introduce discrete-time proximal descent schemes with linear
convergence rates in the Wasserstein space, without relying on geodesic convexity. In the min-max setting, we propose a Fisher-Rao gradient flow and
prove its exponential convergence to the mixed Nash equilibrium (MNE) of an
entropy-regularized convex-concave game with continuous strategy spaces. We further analyze Mirror Descent-Ascent (MDA) algorithms, demonstrating that sequential MDA, where players move in turn, converges faster than simultaneous MDA,
providing theoretical support for sequential training in Generative Adversarial Networks. Additionally, we introduce the Mean-Field Best Response (MF-BR) flow, an
optimization method that characterizes MNEs via a fixed-point property, proving its
exponential convergence to the MNE of the regularized game. These contributions
integrate infinite-dimensional convex optimization, gradient flow theory, optimal
transport, and game theory.
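
As a purely illustrative sketch of the kind of dynamics the abstract refers to (the notation below is assumed, not taken from the thesis), a Fisher-Rao gradient flow for a regularized payoff functional $F_\sigma$ over probability measures $(\mu_t, \nu_t)$ can be written as a descent-ascent system driven by linear functional derivatives:

\begin{align*}
\partial_t \mu_t &= -\mu_t \left( \frac{\delta F_\sigma}{\delta \mu}(\mu_t, \nu_t) - \int \frac{\delta F_\sigma}{\delta \mu}(\mu_t, \nu_t)\,\mathrm{d}\mu_t \right),\\
\partial_t \nu_t &= +\nu_t \left( \frac{\delta F_\sigma}{\delta \nu}(\mu_t, \nu_t) - \int \frac{\delta F_\sigma}{\delta \nu}(\mu_t, \nu_t)\,\mathrm{d}\nu_t \right),
\end{align*}

where the subtracted integral terms keep $\mu_t$ and $\nu_t$ normalized as probability measures; the minimizing player descends in $\mu$ while the maximizing player ascends in $\nu$. The precise functional $F_\sigma$, regularization, and convergence statements are those developed in the thesis itself.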