Algorithms for nonnegative matrix factorization with the beta-divergence




Authors C. Févotte & J. Idier
Journal/Conference Name Neural Computation
Paper Abstract This paper describes algorithms for nonnegative matrix factorization (NMF) with the beta-divergence (beta-NMF). The beta-divergence is a family of cost functions parametrized by a single shape parameter beta that takes the Euclidean distance, the Kullback-Leibler divergence, and the Itakura-Saito divergence as special cases (beta = 2, 1, 0, respectively). The proposed algorithms are based on a surrogate auxiliary function (a local majorization of the criterion function). We first describe a majorization-minimization (MM) algorithm that leads to multiplicative updates, which differ from standard heuristic multiplicative updates by a beta-dependent power exponent. The monotonicity of the heuristic algorithm can however be proven for beta in (0, 1) using the proposed auxiliary function. Then we introduce the concept of a majorization-equalization (ME) algorithm, which produces updates that move along constant level sets of the auxiliary function and lead to larger steps than MM. Simulations on synthetic and real data illustrate the faster convergence of the ME approach. The paper also describes how the proposed algorithms can be adapted to two common variants of NMF: penalized NMF (i.e., when a penalty function of the factors is added to the criterion function) and convex-NMF (when the dictionary is assumed to belong to a known subspace).
Date of publication 2011
Code Programming Language MATLAB
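To illustrate the MM multiplicative updates described in the abstract, below is a minimal NumPy sketch (the linked code itself is MATLAB). It implements the general beta-divergence and multiplicative updates raised to a beta-dependent power exponent, as the abstract describes; the function names, random initialization, and iteration count are my own choices for illustration, not the authors' implementation.

```python
import numpy as np

def mm_exponent(beta):
    """Beta-dependent power exponent used in the MM multiplicative updates."""
    if beta < 1:
        return 1.0 / (2.0 - beta)
    if beta > 2:
        return 1.0 / (beta - 1.0)
    return 1.0  # for beta in [1, 2] the exponent is 1 (standard updates)

def beta_divergence(V, Vhat, beta, eps=1e-12):
    """Beta-divergence d_beta(V | Vhat); beta = 2, 1, 0 give EUC, KL, IS."""
    V = V + eps
    Vhat = Vhat + eps
    if beta == 0:  # Itakura-Saito divergence
        return np.sum(V / Vhat - np.log(V / Vhat) - 1.0)
    if beta == 1:  # generalized Kullback-Leibler divergence
        return np.sum(V * np.log(V / Vhat) - V + Vhat)
    return np.sum((V**beta + (beta - 1.0) * Vhat**beta
                   - beta * V * Vhat**(beta - 1.0)) / (beta * (beta - 1.0)))

def beta_nmf(V, K, beta=1.0, n_iter=100, seed=0, eps=1e-12):
    """Factorize nonnegative V (F x N) as W @ H with MM multiplicative updates."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    g = mm_exponent(beta)
    for _ in range(n_iter):
        Vhat = W @ H + eps
        H *= ((W.T @ (Vhat**(beta - 2.0) * V)) / (W.T @ Vhat**(beta - 1.0)))**g
        Vhat = W @ H + eps
        W *= (((Vhat**(beta - 2.0) * V) @ H.T) / (Vhat**(beta - 1.0) @ H.T))**g
    return W, H
```

Because the updates are derived from a majorizing auxiliary function, each iteration is guaranteed not to increase the beta-divergence, so more iterations from the same initialization can only improve (or preserve) the fit.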

Copyright Researcher II 2022