Dear colleagues,
apologizing for any duplicate postings, I am announcing the next two seminars of the UMI-Prisma series, which will take place online on the Teams platform next Monday, 7 February, from 4 to 6 pm:
-------------------------------------------------------------------------------------------------------------------
Speaker: Ernesto De Vito, Dipartimento di Matematica e MaLGa Center, Università di Genova
Title: Empirical risk minimization: old and new results
Abstract: The first part of the talk is devoted to a brief introduction to supervised learning, focusing on regularized empirical risk minimization (ERM) on reproducing kernel Hilbert spaces. Though ERM achieves optimal convergence rates [1], it requires huge computational resources on high-dimensional datasets. The second half of the talk discusses some recent ideas in which the hypothesis space is a low-dimensional random space. This approach naturally leads to computational savings, but the question is whether the corresponding learning accuracy is degraded. When the random subspace is spanned by a random subset of the data, the statistical-computational trade-off was first explored for the least squares loss [2,3], then for self-concordant loss functions [4], such as the logistic loss, and, quite recently, for non-smooth convex Lipschitz loss functions [5], such as the hinge loss.
References:
[1] Caponnetto, A. and De Vito, E. (2007). Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368.
[2] Rudi, A., Calandriello, D., Carratino, L., and Rosasco, L. (2018). On fast leverage score sampling and optimal learning. In Advances in Neural Information Processing Systems, pages 5672–5682.
[3] Rudi, A., Camoriano, R., and Rosasco, L. (2015). Less is more: Nyström computational regularization. In Advances in Neural Information Processing Systems, pages 1657–1665.
[4] Marteau-Ferey, U., Ostrovskii, D., Bach, F., and Rudi, A. (2019). Beyond least-squares: Fast rates for regularized empirical risk minimization through self-concordance. arXiv preprint arXiv:1902.03046.
[5] Della Vecchia, A., Mourtada, J., De Vito, E., and Rosasco, L. (2020). Regularized ERM on random subspaces. arXiv preprint arXiv:2006.10016.
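For readers who would like a concrete picture of the random-subspace idea before the talk, here is a minimal numerical sketch in Python/NumPy of plain Nyström-regularized least squares in the spirit of [3]: the estimator is restricted to the span of m training points sampled uniformly at random, so only an m-by-m linear system is solved instead of the n-by-n system of full kernel ridge regression. The Gaussian kernel, the toy data, and all parameter values are illustrative choices, not the speaker's setup.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def nystrom_krr(X, y, m, lam, sigma=1.0, seed=None):
    # Regularized ERM restricted to the random subspace spanned by
    # m points sampled uniformly from the data (plain Nystrom subsampling).
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    Xm = X[idx]
    Knm = gaussian_kernel(X, Xm, sigma)   # n x m
    Kmm = gaussian_kernel(Xm, Xm, sigma)  # m x m
    # Solve (Knm^T Knm + n*lam*Kmm) alpha = Knm^T y: an m x m system,
    # in place of the n x n system of exact kernel ridge regression.
    alpha = np.linalg.solve(Knm.T @ Knm + n * lam * Kmm + 1e-10 * np.eye(m),
                            Knm.T @ y)
    return lambda Xtest: gaussian_kernel(Xtest, Xm, sigma) @ alpha

# Toy usage: regression on noisy sine data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
f = nystrom_krr(X, y, m=50, lam=1e-3, sigma=0.5, seed=0)
print(np.mean((f(X) - y) ** 2))  # training error of the subspace estimator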
Speaker: Alessandro Rudi, ENS Paris
Title: "Representing non-negative function with applications to non-convex optimization and beyond"
Abstract: In this talk we present a rather flexible and expressive model for non-negative functions. We will show direct applications in probability representation and non-convex optimization. In particular, the model makes it possible to derive an algorithm for non-convex optimization that is adaptive to the degree of differentiability of the objective function and achieves optimal rates of convergence. Finally, we show how to apply the same technique to other interesting problems in applied mathematics that can easily be expressed in terms of inequalities.
References:
Marteau-Ferey, U., Bach, F., and Rudi, A. (2020). Non-parametric Models for Non-negative Functions. https://arxiv.org/abs/2007.03926
Rudi, A., Marteau-Ferey, U., and Bach, F. (2020). Finding Global Minima via Kernel Approximations. https://arxiv.org/abs/2012.11978
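As a complement, here is a minimal sketch, again in Python/NumPy and with illustrative choices throughout, of the kind of non-negative model studied in the first reference above: f(x) = k(x, anchors)^T B k(x, anchors) with a positive semidefinite matrix B, which is non-negative everywhere by construction. How to fit B from data (a convex problem) is the subject of the paper and is not shown here.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def psd_model(anchors, B, sigma=1.0):
    # f(x) = k(x, anchors)^T B k(x, anchors) with B PSD:
    # non-negative at every x by construction.
    def f(Xtest):
        K = gaussian_kernel(Xtest, anchors, sigma)  # n_test x m
        return np.einsum('ij,jk,ik->i', K, B, K)    # quadratic form per row
    return f

rng = np.random.default_rng(0)
anchors = rng.standard_normal((20, 2))   # illustrative anchor points
C = rng.standard_normal((20, 5))
B = C @ C.T                              # any PSD parameter matrix works
f = psd_model(anchors, B, sigma=1.0)
x = rng.standard_normal((1000, 2))
print(f(x).min())                        # >= 0 up to round-off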
------------------------------------------------------------------------------------------------------------------------------
The Teams link to join is the following:
https://teams.microsoft.com/l/meetup-join/19%3a667d2414be564c5d8fba30acffeb8...
Thanks and best regards, Domenico Marinucci