Majorization-minimization and distance-to-set penalties for constrained statistical learning

  • Jason Xu (Duke University)
E1 05 (Leibniz-Saal)


We consider a penalty framework based on regularizing the squared distance to set-based constraints for several core statistical tasks. These distance-to-set penalties convert constrained optimization problems into more tractable unconstrained forms, and are more flexible than many existing algebraic and regularization penalties. We will see that they often avoid drawbacks that arise from popular alternatives such as shrinkage. We discuss a general strategy for deriving effective algorithms in this framework using majorization-minimization (MM), a principle that transfers a difficult problem to a sequence of more manageable subproblems through the use of surrogate functions. Methods derived from this perspective feature monotonicity, are often amenable to acceleration, and come with global convergence guarantees. We showcase new progress on classical problems including constrained generalized linear models and sparse covariance estimation using this approach, and discuss recent connections to Bayesian perspectives on constraint relaxation.
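To make the idea concrete, the following is a minimal sketch of an MM iteration with a squared distance-to-set penalty, for the illustrative case of least squares under a nonnegativity constraint. The function name `proximal_distance_nnls` and its parameters (`rho`, `iters`) are hypothetical choices for this sketch, not notation from the talk; the key MM step is that the squared distance to the constraint set C is majorized at the current iterate by the squared distance to its projection onto C, yielding a simple ridge-like update.

```python
import numpy as np

def proximal_distance_nnls(A, b, rho=10.0, iters=500):
    """Sketch: minimize 0.5*||Ax - b||^2 + (rho/2)*dist(x, C)^2
    for C = {x : x >= 0}, via majorization-minimization.

    At iterate x_k, dist(x, C)^2 is majorized by ||x - P_C(x_k)||^2,
    where P_C is the projection onto C (here, clipping at zero).
    Minimizing the surrogate gives a linear system each iteration.
    """
    n = A.shape[1]
    x = np.zeros(n)
    G = A.T @ A + rho * np.eye(n)   # fixed surrogate Hessian
    Atb = A.T @ b
    for _ in range(iters):
        proj = np.maximum(x, 0.0)   # projection onto the constraint set
        x = np.linalg.solve(G, Atb + rho * proj)
    return np.maximum(x, 0.0)       # final projection onto C
```

Each surrogate is easy to minimize exactly, and the surrogate-based descent guarantees the penalized objective decreases monotonically, which is the MM property highlighted in the abstract.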

Contact: Katharina Matschke, MPI for Mathematics in the Sciences