"Iterative
regularization for convex regularizers"
Abstract:
Iterative regularization exploits the implicit bias of an optimization algorithm to regularize ill-posed problems. Constructing algorithms with such built-in regularization mechanisms is a classic challenge in inverse problems, but also in modern machine learning, where it provides both a new perspective on algorithm analysis and significant speed-ups compared to explicit regularization. In this talk, we propose and study the first iterative regularization procedure able to handle biases described by nonsmooth and non-strongly-convex functionals, which are prominent in low-complexity regularization. Our approach is based on a primal-dual algorithm whose convergence and stability properties we analyze, even in the case where the original problem is infeasible. The general results are illustrated on the special case of sparse recovery with the ℓ1 penalty. Our theoretical results are complemented by experiments showing the computational benefits of our approach.
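To make the idea concrete, the following is a minimal, purely illustrative Python sketch (not the speaker's algorithm or code) of iterative regularization for ℓ1 sparse recovery: a standard Chambolle-Pock primal-dual iteration for the basis pursuit problem, where early stopping at the noise level plays the role of the regularization parameter. All names and problem sizes (primal_dual_l1, noise_level, etc.) are hypothetical.

```python
# Illustrative sketch: iterative regularization via a primal-dual iteration.
# Recover a sparse x* from noisy measurements b = A x* + noise by running the
# Chambolle-Pock iteration for  min ||x||_1  s.t.  Ax = b  and stopping early,
# once the residual reaches the (assumed known) noise level.
import numpy as np


def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def primal_dual_l1(A, b, noise_level, max_iter=5000):
    """Primal-dual iteration for min ||x||_1 s.t. Ax = b, with early stopping.

    The iteration count acts as the regularization parameter: we stop as soon
    as ||A x_k - b|| drops below the noise level (discrepancy principle).
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2)        # operator norm of A
    tau = sigma = 1.0 / L           # step sizes satisfying tau * sigma * L^2 <= 1
    x = np.zeros(n)
    x_bar = np.zeros(n)
    y = np.zeros(m)
    for k in range(max_iter):
        # Dual ascent step for the equality constraint Ax = b.
        y = y + sigma * (A @ x_bar - b)
        # Primal descent step followed by the l1 proximal map.
        x_new = soft_threshold(x - tau * (A.T @ y), tau)
        # Extrapolation (theta = 1).
        x_bar = 2.0 * x_new - x
        x = x_new
        # Early stopping = implicit regularization.
        if np.linalg.norm(A @ x - b) <= noise_level:
            return x, k + 1
    return x, max_iter


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, s = 200, 80, 5                      # hypothetical problem sizes
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    noise = 0.01 * rng.standard_normal(m)
    b = A @ x_true + noise
    x_hat, iters = primal_dual_l1(A, b, noise_level=np.linalg.norm(noise))
    rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"stopped after {iters} iterations, relative error {rel_err:.3f}")
```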
We encourage in-person participation. Should you be unable to come, here is the link to the event on Teams:
The seminar is part of the Excellence Project Math@TOV.
You can find a schedule with the next events at the following link: https://www.mat.uniroma2.it/~rds/events.php .