Cycles in adversarial regularized learning

Abstract: Regularized learning is a fundamental technique in online optimization, machine learning, and many other fields of computer science. A natural question that arises in this context is how regularized learning algorithms behave when matched against one another. We study a natural formulation of this problem by coupling regularized learning dynamics in zero-sum games. We show that the system's behavior is Poincaré recurrent, implying that almost every trajectory revisits any (arbitrarily small) neighborhood of its starting point infinitely often. This cycling behavior is robust to the agents' choice of regularization mechanism (each agent could be using a different regularizer) and to positive-affine transformations of the agents' utilities, and it persists in the case of networked competition (zero-sum polymatrix games).
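To make the cycling concrete, here is a minimal illustrative sketch (in Python, not from the paper): it integrates the coupled replicator dynamics, the continuous-time limit of regularized learning with an entropic regularizer, in Matching Pennies, a 2x2 zero-sum game, and checks that the trajectory returns arbitrarily close to its starting point, as Poincaré recurrence predicts. The game, step size, horizon, and all names below are illustrative choices, not the authors' code.

import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # Matching Pennies: payoffs of player 1 (zero-sum)

def replicator(z):
    """Vector field of the coupled replicator dynamics; z = (x, y) stacked."""
    x, y = z[:2], z[2:]
    u1 = A @ y             # player 1's payoff vector against y
    u2 = -A.T @ x          # player 2's payoff vector against x (zero-sum)
    dx = x * (u1 - x @ u1)
    dy = y * (u2 - y @ u2)
    return np.concatenate([dx, dy])

def rk4_step(z, h):
    """One fourth-order Runge-Kutta step of size h."""
    k1 = replicator(z)
    k2 = replicator(z + 0.5 * h * k1)
    k3 = replicator(z + 0.5 * h * k2)
    k4 = replicator(z + h * k3)
    return z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Start away from the interior Nash equilibrium (1/2, 1/2) of both players.
z0 = np.array([0.9, 0.1, 0.2, 0.8])
z = z0.copy()
h, T = 0.01, 50_000

min_dist = np.inf
for t in range(1, T + 1):
    z = rk4_step(z, h)
    if t > 1000:  # let the orbit leave the starting neighborhood first
        min_dist = min(min_dist, np.linalg.norm(z - z0))

# Poincare recurrence: the orbit comes back arbitrarily close to z0.
print(f"closest return to the initial point: {min_dist:.4f}")

Along the exact flow the orbit is a closed cycle around the interior equilibrium, so the printed return distance is small; only discretization error keeps it from being exactly zero.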
Document type:
Conference paper
SODA '18 - Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, Jan 2018, New Orleans, United States. pp. 2703-2717

https://hal.archives-ouvertes.fr/hal-01643338
Contributor: Panayotis Mertikopoulos
Submitted on: Tuesday, October 9, 2018 - 18:13:09
Last modified on: Thursday, November 8, 2018 - 14:28:02

File

1709.02738.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01643338, version 1

Citation

Panayotis Mertikopoulos, Christos H. Papadimitriou, Georgios Piliouras. Cycles in adversarial regularized learning. SODA '18 - Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, Jan 2018, New Orleans, United States. pp. 2703-2717. ⟨hal-01643338⟩
