Journal article, Signal Processing, 2023

Joint Majorization-Minimization for Nonnegative Matrix Factorization with the $\beta$-divergence

Abstract

This article proposes new multiplicative updates for nonnegative matrix factorization (NMF) with the $\beta$-divergence objective function. Our new updates are derived from a joint majorization-minimization (MM) scheme, in which an auxiliary function (a tight upper bound of the objective function) is built for the two factors jointly and minimized at each iteration. This is in contrast with the classic approach, in which a majorizer is derived for each factor separately. Like that classic approach, our joint MM algorithm also results in multiplicative updates that are simple to implement. However, they yield a significant reduction in computation time (for equally good solutions), in particular for some $\beta$-divergences of important applicative interest, such as the quadratic loss and the Kullback-Leibler or Itakura-Saito divergences. We report experimental results using diverse datasets: face images, an audio spectrogram, hyperspectral data and song play counts. Depending on the value of $\beta$ and on the dataset, our joint MM approach can yield CPU time reductions from about $13\%$ to $86\%$ in comparison to the classic alternating scheme.
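For context, the "classic alternating scheme" that the abstract compares against majorizes the $\beta$-divergence in $\mathbf{W}$ and $\mathbf{H}$ separately, giving the well-known multiplicative updates. The sketch below implements that classic baseline (not the paper's new joint updates, which are not given in the abstract); the function names and initialization are illustrative, and the updates are guaranteed to decrease the objective for $\beta \in [1, 2]$.

```python
import numpy as np

def beta_divergence(V, W, H, beta):
    """Sum of elementwise beta-divergences D_beta(V | WH).

    beta = 0: Itakura-Saito, beta = 1: Kullback-Leibler,
    beta = 2: quadratic loss (up to the usual 1/2 factor).
    """
    Vhat = W @ H
    if beta == 0:
        return np.sum(V / Vhat - np.log(V / Vhat) - 1)
    elif beta == 1:
        return np.sum(V * np.log(V / Vhat) - V + Vhat)
    else:
        return np.sum((V**beta + (beta - 1) * Vhat**beta
                       - beta * V * Vhat**(beta - 1)) / (beta * (beta - 1)))

def nmf_classic_mu(V, K, beta=1.0, n_iter=200, seed=0):
    """Classic alternating multiplicative updates for beta-NMF.

    Each factor is updated with the other held fixed, using the
    standard MM-derived update (monotone for beta in [1, 2]).
    """
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + 1e-3  # strictly positive init
    H = rng.random((K, N)) + 1e-3
    for _ in range(n_iter):
        Vhat = W @ H
        H *= (W.T @ (V * Vhat**(beta - 2))) / (W.T @ Vhat**(beta - 1))
        Vhat = W @ H  # recompute after the H update
        W *= ((V * Vhat**(beta - 2)) @ H.T) / (Vhat**(beta - 1) @ H.T)
    return W, H
```

The paper's contribution is to replace the two separate majorizers in the loop body with a single auxiliary function built for $(\mathbf{W}, \mathbf{H})$ jointly, which it shows reduces CPU time while retaining the simplicity of multiplicative updates.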

Dates and versions

hal-03799284, version 1 (09-05-2023)

Identifiers

Cite

Arthur Marmin, José Henrique de M Goulart, Cédric Févotte. Joint Majorization-Minimization for Nonnegative Matrix Factorization with the $\beta$-divergence. Signal Processing, 2023, 209, pp.109048. ⟨10.1016/j.sigpro.2023.109048⟩. ⟨hal-03799284⟩