Dear all,
as in previous years (and in particular the non-pandemic ones), we would
like to try to organize an "end of seminars" dinner; more than anything
it would be an excuse to see each other, all the more so this year,
since we have (almost) never met in person.
I have set up a Doodle where you can mark your availability,
if you intend to participate:
https://doodle.com/poll/z2xw2k6u5zzfybn3
Everyone is welcome (from full professors to PhD students, as well as
friends of the department, etc.). I would only ask you to mark your
attendance as soon as possible, so that we can organize everything properly.
The venue has not been chosen yet, but suggestions are always welcome;
we will probably also have to choose it based on the number of
participants.
See you soon! -- Leonardo Robol.
Speaker: Stefano Cipolla
Affiliation: University of Edinburgh
Time: Friday, 18/06/2021, 16:00
Title: Random multi-block ADMM: an ALM based view for the QP case
Because of its wide versatility and applicability in multiple fields,
the n-block alternating direction method of multipliers (ADMM) for
solving nonseparable convex minimization problems has recently
attracted the attention of many researchers [1, 2, 4]. When the n-block
ADMM is used for the minimization of quadratic functions, it consists
of a cyclic update of the primal variables x_i for i = 1,...,n in
Gauss-Seidel fashion and a dual ascent-type update of the dual variable
μ. Although the connections between ADMM and Gauss-Seidel are quite
well known, to the best of our knowledge an analysis from the purely
numerical linear algebra point of view is lacking in the literature.
The aim of this talk is to present a series of very recent results on
this topic which shed further light on basic issues such as convergence
and efficiency [3].
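For readers who want to fix notation before the talk, here is a minimal
sketch of the scheme described above (the notation is ours, not
necessarily the speaker's). For the problem

  min f(x_1, ..., x_n)   subject to   \sum_{i=1}^n A_i x_i = b,

with augmented Lagrangian

  L_\beta(x_1, ..., x_n; \mu) = f(x_1, ..., x_n)
      - \mu^T (\sum_i A_i x_i - b) + (\beta/2) ||\sum_i A_i x_i - b||^2,

one sweep of the n-block ADMM reads

  x_i^{k+1} = \arg\min_{x_i} L_\beta(x_1^{k+1}, ..., x_{i-1}^{k+1},
                                     x_i, x_{i+1}^k, ..., x_n^k; \mu^k),
              i = 1, ..., n,
  \mu^{k+1} = \mu^k - \beta (\sum_i A_i x_i^{k+1} - b),

i.e. a Gauss-Seidel sweep over the primal blocks followed by a dual
ascent step on \mu.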
[1] Chen, C., Li, M., Liu, X., Ye, Y. (2019). Extended ADMM and BCD for
nonseparable convex minimization models with quadratic coupling terms:
convergence analysis and insights. Mathematical Programming, 173(1-2),
37-77.
[2] Chen, C., He, B., Ye, Y., Yuan, X. (2016). The direct extension of
ADMM for multi-block convex minimization problems is not necessarily
convergent. Mathematical Programming, 155(1-2), 57-79.
[3] Cipolla, S., Gondzio, J. (2020). ADMM and inexact ALM: the QP case.
arXiv:2012.09230.
[4] Sun, R., Luo, Z. Q., Ye, Y. (2020). On the efficiency of random
permutation for ADMM and coordinate descent. Mathematics of Operations
Research, 45(1), 233-271.
https://www.dm.unipi.it/webnew/it/seminari/random-multi-block-admm-alm-base…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Dear all,
An online colloquium of the Department of Mathematics at UniPi is
scheduled for June 16 at 17:00. You are all welcome!
The speaker is Volker Mehrmann (TU Berlin).
Title:
Energy based modeling, simulation, and control. Mathematical theory and
algorithms for the solution of real world problems.
Abstract:
Energy-based modeling via port-Hamiltonian systems is a relatively new
paradigm in all areas of science and engineering. These systems have
wonderful analytic, geometric, and algebraic properties, and they are
also well suited to numerical algorithms for space-time discretization,
model reduction, and control. We will introduce the model class and its
mathematical properties, and we will illustrate its usefulness with
several real-world applications.
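For those who have not met the model class before, a standard linear
time-invariant example (our notation, not taken from the abstract) is

  \dot{x} = (J - R) Q x + B u,   y = B^T Q x,

with J = -J^T, R = R^T \succeq 0 and Q = Q^T \succ 0. The Hamiltonian
H(x) = (1/2) x^T Q x then satisfies the dissipation inequality
(d/dt) H(x(t)) \le y(t)^T u(t), which encodes the energy-based
structure the abstract refers to.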
Google Meet link: https://meet.google.com/qii-tcks-rrr
Dear all,
the next GSSI Math Colloquium will be held on *Thursday June 17 at 5pm*
(please note **the time is *5pm* instead of the usual 3pm**).
The speaker is Lars Ruthotto, with a lecture connecting numerical
methods for differential equations and deep learning architectures. More
details below.
Lars Ruthotto is Associate Professor of Mathematics and Computer Science
at Emory University (Atlanta, USA).
To attend the talk please use the following Zoom link:
https://us02web.zoom.us/j/85179454721?pwd=TjA0V2M3L3lVTk1NNEdVcGpQcXlTdz09
Please feel free to distribute this announcement as you see fit.
Looking forward to seeing you all on Thursday!
Paolo Antonelli, Stefano Marchesani, Francesco Tudisco and Francesco Viola
---------------------
Title:
Numerical Methods for Deep Learning motivated by Partial Differential
Equations
Abstract:
Understanding the world through data and computation has always formed
the core of scientific discovery. Amid many different approaches, two
common paradigms have emerged. On the one hand, primarily data-driven
approaches—such as deep neural networks—have proven extremely successful
in recent years. Their success is based mainly on their ability to
approximate complicated functions with generic models when trained using
vast amounts of data and enormous computational resources. But despite
many recent triumphs, deep neural networks are difficult to analyze and
thus remain mysterious. Most importantly, they lack the robustness,
explainability, interpretability, efficiency, and fairness needed for
high-stakes decision-making. On the other hand, increasingly realistic
model-based approaches—typically derived from first principles and
formulated as partial differential equations (PDEs)—are now available
for various tasks. One can often calibrate these models—which enable
detailed theoretical studies, analysis, and interpretation—with
relatively few measurements, thus facilitating their accurate
predictions of phenomena.
In this talk, I will highlight recent advances and ongoing work to
understand and improve deep learning by using techniques from partial
differential equations. I will demonstrate how PDE techniques can yield
better insight into deep learning algorithms, more robust networks, and
more efficient numerical algorithms. I will also expose some of the
remaining computational and numerical challenges in this area.
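As background, one of the best-known instances of this connection is
the observation that a residual network is a forward Euler
discretization of an ODE: a residual layer x_{k+1} = x_k + h f(x_k,
\theta_k) is one Euler step for dx/dt = f(x(t), \theta(t)). A minimal
Python sketch (our notation and parameter choices, purely illustrative,
not the speaker's code):

  import numpy as np

  def velocity(x, K, b):
      # layer right-hand side f(x; K, b) = tanh(K x + b)
      return np.tanh(K @ x + b)

  def resnet_forward(x, params, h=0.1):
      # forward Euler for dx/dt = f(x; theta(t)):
      # each residual block performs x <- x + h * f(x; K, b)
      for K, b in params:
          x = x + h * velocity(x, K, b)
      return x

  # usage: three residual layers on a 4-dimensional feature vector
  rng = np.random.default_rng(0)
  params = [(rng.standard_normal((4, 4)), rng.standard_normal(4))
            for _ in range(3)]
  print(resnet_forward(rng.standard_normal(4), params))

In this view, training the network becomes an optimal control problem
for the underlying differential equation, which is one way techniques
from differential equations enter the picture.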
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Speaker: Margherita Porcelli
Affiliation: Università di Bologna
Time: Friday, 04/06/2021, 16:00
Title: Relaxed interior point methods for low-rank semidefinite
programming problems
In this talk we will discuss a relaxed variant of interior point
methods for semidefinite programming problems (SDPs). We focus on
problems in which the primal variable is expected to be low-rank at
optimality. Such situations are common in relaxations of combinatorial
optimization problems, for example in maximum cut problems, as well as
in matrix completion problems. We exploit the structure of the sought
solution and relax the rigid structure of IPMs for SDP. In
anticipation of convergence to a low-rank primal solution, a special
nearly low-rank form of all primal iterates is imposed. To accommodate
such a (restrictive) structure, the first order optimality conditions
have to be relaxed and are therefore approximated by solving an
auxiliary least-squares problem. The relaxed interior point framework
opens up numerous possibilities for how approximate primal and dual
Newton directions can be computed. In particular, it admits the
application of both first- and second-order methods in this context. In
this talk we will focus on second-order approaches and discuss the
difficulties arising in the linear algebra phase. A prototype
implementation is presented, together with computational results on
matrix completion problems. In particular, we will consider mildly
ill-conditioned and noisy random problems, as well as problems arising
in diverse applications in which the matrix to be recovered represents
city-to-city distances, a grayscale image, or game parameters in a
basketball tournament.
This is joint work with S. Bellavia (Unifi) and J. Gondzio (Univ.
Edinburgh).
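For reference, the generic problem class is (notation ours)

  min_X \langle C, X \rangle   s.t.   \mathcal{A}(X) = b,   X \succeq 0,

where the optimal X is expected to have rank r \ll n. The nearly
low-rank iterates mentioned above can then be pictured as
X \approx U U^T with U \in R^{n x r}, which is the structure the
relaxed IPM exploits instead of carrying a full-rank primal matrix.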
https://www.dm.unipi.it/webnew/it/seminari/relaxed-interior-point-methods-l…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4