Dear all,
On Wednesday, February 1st, at 14h00 in Aula Dal Passo at Roma Tor Vergata, RoMaDS (https://www.mat.uniroma2.it/~rds/about.php) will host Giovanni Conforti (École Polytechnique) with the seminar
"A probabilistic approach to exponential convergence of Sinkhorn's algorithm"
Abstract: The entropic optimal transport problem (EOT) is obtained by adding an entropic regularisation term to the cost function of the Monge-Kantorovich problem, and it is nowadays regularly employed in machine learning applications as a more tractable and numerically more stable version of the optimal transport problem. On the other hand, E. Schrödinger asked back in 1931 the question of finding the most likely evolution of a cloud of independent Brownian particles conditionally on observations. The mathematical formulation of his question through large deviations theory is known as the Schrödinger problem and turns out to be fully equivalent to EOT. In this talk, I shall illustrate both viewpoints and then move on to sketch the ideas of a probabilistic method for showing exponential convergence of Sinkhorn's algorithm, whose application is at the heart of the recent successful applications of EOT in statistical machine learning and beyond. In particular, we shall discuss how the proposed method opens new perspectives for showing exponential convergence for marginal distributions that are not compactly supported.
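For readers unfamiliar with Sinkhorn's algorithm, below is a minimal NumPy sketch of the standard alternating-scaling iterations for discrete entropic optimal transport. It is only a textbook illustration, not the method analysed in the talk; the names (sinkhorn, mu, nu, C, eps) and the random example data are placeholders chosen here for the sketch.

    import numpy as np

    def sinkhorn(mu, nu, C, eps=0.1, n_iter=500, tol=1e-9):
        """Alternating-scaling Sinkhorn iterations for discrete EOT.
        For very small eps one would use a log-domain stabilised variant."""
        K = np.exp(-C / eps)            # Gibbs kernel
        u = np.ones_like(mu)
        v = np.ones_like(nu)
        for _ in range(n_iter):
            u_prev = u
            u = mu / (K @ v)            # rescale to match the first marginal
            v = nu / (K.T @ u)          # rescale to match the second marginal
            if np.max(np.abs(u - u_prev)) < tol:
                break
        return u[:, None] * K * v[None, :]   # entropic transport plan diag(u) K diag(v)

    # Toy example: two uniform discrete distributions on 50 sample points
    rng = np.random.default_rng(0)
    x = rng.normal(size=(50, 1))
    y = rng.normal(loc=1.0, size=(50, 1))
    mu = np.full(50, 1 / 50)
    nu = np.full(50, 1 / 50)
    C = (x - y.T) ** 2                  # squared-distance cost matrix
    P = sinkhorn(mu, nu, C)
    print(P.sum(axis=1)[:3], P.sum(axis=0)[:3])  # both rows print ~1/50: marginals matched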
We encourage in-person participation. Should you be unable to come, here is the link to the event on Teams:
https://teams.microsoft.com/l/meetup-join/19%3arfsL73KX-fw86y1YnXq2nk5VnZFwP...
The seminar is part of the Excellence Project Math@TOV.
You can find a schedule with the next events at the following link: https://www.mat.uniroma2.it/~rds/events.php .
Dear all,
On Wednesday, February 8th, at 14h00 in Aula Dal Passo at Roma Tor Vergata, RoMaDS (https://www.mat.uniroma2.it/~rds/about.php) will host Francesco Tudisco (GSSI - Gran Sasso Science Institute) with the seminar
"Efficient training of low-rank neural networks"
Abstract: Neural networks have achieved tremendous success in a variety of applications. However, their memory footprint and computational demand can render them impractical in application settings with limited hardware or energy resources. At the same time, overparametrization seems to be necessary in order to overcome the highly nonconvex nature of the optimization problem. An optimal trade-off is then to be found in order to reduce networks' dimensions while maintaining high performance. Popular approaches in the literature are based on pruning techniques that look for "winning tickets", smaller subnetworks achieving approximately the initial performance. However, these techniques are not able to reduce the memory footprint of the training phase and can be unstable with respect to the input weights. In this talk, we will present a training algorithm that looks for "low-rank lottery tickets" by interpreting the training phase as a continuous ODE and by integrating it within the manifold of low-rank matrices. The low-rank subnetworks and their ranks are determined and adapted during the training phase, allowing the overall time and memory resources required by both training and inference phases to be reduced significantly. We will illustrate the efficiency of this approach on a variety of fully connected and convolutional networks. The talk is based on: S. Schotthöfer, E. Zangrando, J. Kusch, G. Ceruti, F. Tudisco, "Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations", NeurIPS 2022, https://arxiv.org/pdf/2205.13571.pdf
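As a rough illustration of the memory argument, here is a minimal NumPy sketch of gradient descent on a linear model whose weight matrix is kept in factored form W ≈ U V^T, so only (n + m) r numbers are stored and updated instead of n m. This fixed-rank toy is an assumption-laden simplification made for this announcement; it is not the rank-adaptive matrix-ODE integrator of the cited paper.

    import numpy as np

    # Toy low-rank training: fit a rank-r target with factors U (n x r), V (m x r).
    # NOT the DLRT method of Schotthöfer et al.; only the memory idea is illustrated.
    rng = np.random.default_rng(0)
    n, m, r = 64, 32, 4                                # output dim, input dim, rank
    W_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, m)) / np.sqrt(n)
    X = rng.normal(size=(512, m))                      # synthetic inputs
    Y = X @ W_true.T                                   # noiseless targets

    U = 0.01 * rng.normal(size=(n, r))                 # low-rank factors, (n + m) * r parameters
    V = 0.01 * rng.normal(size=(m, r))
    lr, N = 0.05, len(X)
    for step in range(2000):
        err = X @ V @ U.T - Y                          # residual, shape (N, n)
        grad_U = err.T @ (X @ V) / N                   # gradient of the squared loss w.r.t. U
        grad_V = X.T @ err @ U / N                     # gradient of the squared loss w.r.t. V
        U -= lr * grad_U
        V -= lr * grad_V

    rel = np.linalg.norm(U @ V.T - W_true) / np.linalg.norm(W_true)
    print(f"relative reconstruction error: {rel:.2e}")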
We encourage in-person participation. Should you be unable to come, here is the link to the event on Teams:
https://teams.microsoft.com/l/meetup-join/19%3arfsL73KX-fw86y1YnXq2nk5VnZFwP...
The seminar is part of the Excellence Project Math@TOV.
You can find a schedule with the next events at the following link: https://www.mat.uniroma2.it/~rds/events.php .