Proximal gradient descent on the smoothed duality gap to solve saddle point problems
Abstract
In this paper, we minimize the self-centered smoothed gap, a recently introduced optimality measure, in order to solve convex-concave saddle point problems. The self-centered smoothed gap can be computed as the sum of a convex, possibly nonsmooth function and a smooth, weakly convex function. Although this quantity is not convex, we propose an algorithm that minimizes it, effectively reducing convex-concave saddle point problems to a minimization problem. Its worst-case complexity is comparable to that of the restarted and averaged primal-dual hybrid gradient method, and the algorithm enjoys linear convergence in favorable cases.
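The abstract describes minimizing a sum of a convex, possibly nonsmooth function and a smooth (weakly convex) function by a proximal gradient scheme. The sketch below is a generic proximal gradient iteration, not the paper's specific algorithm for the self-centered smoothed gap; the objective, function names, and step size choice are illustrative assumptions.

```python
import numpy as np

def prox_gradient_descent(grad_f, prox_g, x0, step, n_iter=1000):
    """Generic proximal gradient iteration: x_{k+1} = prox_{step*g}(x_k - step*grad_f(x_k)).

    grad_f : gradient of the smooth (possibly weakly convex) part
    prox_g : proximal operator of the convex, possibly nonsmooth part
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative instance: minimize 0.5*||A x - b||^2 + lam*||x||_1
# (smooth quadratic part + nonsmooth l1 part)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1

grad_f = lambda x: A.T @ (A @ x - b)
# prox of t*lam*||.||_1 is soft-thresholding at level t*lam
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of grad_f

x_star = prox_gradient_descent(grad_f, prox_g, np.zeros(10), step)
```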