Delay-tolerant distributed Bregman proximal algorithms
Abstract
Many problems in machine learning can be written as the minimization of a sum of individual loss functions over the training examples. These functions are usually differentiable but, in some cases, their gradients are not Lipschitz continuous, which rules out standard (proximal) gradient algorithms. Fortunately, changing the geometry and using Bregman divergences can alleviate this issue in several applications, such as Poisson linear inverse problems.
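For context, the following definitions are standard and not specific to this paper; the symbols $f$ (smooth part), $g$ (nonsmooth part), $h$ (reference function), and $\gamma$ (step size) are notation introduced here for illustration. Given a strictly convex reference function $h$, the Bregman divergence it induces and the associated Bregman proximal-gradient step read:

$$
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle ,
$$

$$
x^{k+1} = \operatorname*{arg\,min}_{x} \Big\{ g(x) + \langle \nabla f(x^k),\, x - x^k \rangle + \tfrac{1}{\gamma}\, D_h(x, x^k) \Big\} .
$$

For Poisson linear inverse problems, a classical choice is Burg's entropy $h(x) = -\sum_i \log x_i$, under which the Kullback-Leibler data-fit term is smooth *relative* to $h$ even though its gradient is not Lipschitz.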
However, the Bregman geometry makes aggregating several points and gradients more involved, which hinders distributing the computations for such problems.
In this paper, we propose an asynchronous variant of the Bregman proximal-gradient method that adapts to any centralized computing system. In particular, we prove that the algorithm copes with arbitrarily long delays, and we illustrate its behavior on distributed Poisson inverse problems.
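To make the centralized (master/worker) setting concrete, below is a minimal, hypothetical sketch of a delay-tolerant Bregman gradient loop. It is *not* the paper's algorithm: the Burg entropy kernel, the toy per-worker Kullback-Leibler loss, the fixed step size `gamma`, and all names are illustrative assumptions. The master applies an update as soon as any worker answers, so each gradient may be evaluated at an arbitrarily old iterate.

```python
# Hedged sketch of a delay-tolerant master/worker Bregman gradient loop.
# Assumptions (not from the paper): Burg entropy kernel h(x) = -sum(log x),
# a toy KL data-fit loss per worker, and a fixed step size gamma.
import random
import numpy as np

class Worker:
    """Holds one shard (A_i, b_i) of a Poisson linear inverse problem."""
    def __init__(self, A, b):
        self.A, self.b = A, b

    def gradient(self, x):
        # Gradient of f_i(x) = sum(A_i x - b_i * log(A_i x)).
        Ax = self.A @ x
        return self.A.T @ (1.0 - self.b / Ax)

def bregman_step(x, g, gamma):
    # Bregman (mirror) gradient step with Burg's entropy: since
    # grad h(x) = -1/x, the update solves -1/x_new = -1/x - gamma * g.
    return 1.0 / (1.0 / x + gamma * g)

def async_loop(x0, workers, gamma=1e-2, n_iter=500):
    # The master updates the iterate with whichever (possibly stale)
    # gradient arrives first, then ships the fresh iterate back.
    x = x0.copy()
    last_seen = [x0.copy() for _ in workers]   # iterate each worker last saw
    for _ in range(n_iter):
        i = random.randrange(len(workers))     # worker whose answer arrives
        g = workers[i].gradient(last_seen[i])  # gradient at a delayed iterate
        x = bregman_step(x, g, gamma)
        last_seen[i] = x.copy()                # worker i now sees the new x
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = rng.uniform(0.5, 1.5, size=10)
    workers = []
    for _ in range(4):
        A = rng.uniform(0.1, 1.0, size=(20, 10))
        workers.append(Worker(A, A @ x_true))  # noiseless Poisson means
    x_hat = async_loop(np.ones(10), workers)
    print("relative error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The multiplicative structure of the Burg-entropy step keeps the iterates positive under a small enough step size, which is why this kernel is a natural fit for the nonnegative variables of Poisson problems.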