Dear all,
as in previous years (and in particular the non-pandemic ones), we would
like to try to organize an "end-of-seminars" dinner; more than anything,
it would be an excuse to see each other, all the more so this year, when
we have (almost) never met in person.
I have set up a Doodle where you can mark your availability, if you
intend to participate:
https://doodle.com/poll/z2xw2k6u5zzfybn3
Everyone is welcome (from full professors to PhD students, as well as
friends of the group, etc.). I would just ask you to mark your attendance
as soon as possible, so that we can organize everything properly.
The venue has not been chosen yet, but suggestions are always welcome;
we will probably also have to choose it based on the number of
participants.
See you soon! -- Leonardo Robol.
Speaker: Stefano Cipolla
Affiliation: University of Edinburgh
Time: Friday, 18/06/2021, 16:00
Title: Random multi-block ADMM: an ALM based view for the QP case
Because of its wide versatility and applicability in multiple fields,
the n-block alternating direction method of multipliers (ADMM) for
solving nonseparable convex minimization problems has recently
attracted the attention of many researchers [1, 2, 4]. When the n-block
ADMM is used for the minimization of quadratic functions, it consists
of a cyclic update of the primal variables x_i, for i = 1,...,n, in
Gauss-Seidel fashion and a dual ascent type update of the dual variable
μ. Although the connections between ADMM and Gauss-Seidel are quite
well known, to the best of our knowledge an analysis from a purely
numerical linear algebra point of view is lacking in the literature.
The aim of this talk is to present a series of very recent results
obtained on this topic which shed further light on basic issues such as
convergence and efficiency [3].
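The cyclic primal sweep and dual ascent step described above can be sketched in a few lines of NumPy for an equality-constrained QP. This is an illustration only: the problem data, the two-block splitting, and the penalty parameter beta below are arbitrary choices, not the setting analyzed in [3].

```python
import numpy as np

# Problem: minimize 0.5 x^T Q x + c^T x  subject to  A x = b,
# with the primal variable split into two blocks (illustrative data).
Q = np.array([[4.0, 1.0, 0.5, 0.0],
              [1.0, 3.0, 0.0, 0.5],
              [0.5, 0.0, 3.0, 1.0],
              [0.0, 0.5, 1.0, 4.0]])      # SPD Hessian with inter-block coupling
c = np.array([1.0, -1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0, 2.0]])
b = np.array([1.0, 0.0])

beta = 1.0                                # augmented Lagrangian penalty
x, mu = np.zeros(4), np.zeros(2)
blocks = [np.array([0, 1]), np.array([2, 3])]

for _ in range(2000):
    for idx in blocks:                    # cyclic (Gauss-Seidel) primal sweep
        oth = np.setdiff1d(np.arange(4), idx)
        Ai, Ao = A[:, idx], A[:, oth]
        # exact minimizer of the augmented Lagrangian in block idx,
        # with the other block frozen at its latest value
        H = Q[np.ix_(idx, idx)] + beta * Ai.T @ Ai
        g = (c[idx] + Q[np.ix_(idx, oth)] @ x[oth]
             + Ai.T @ (mu + beta * (Ao @ x[oth] - b)))
        x[idx] = np.linalg.solve(H, -g)
    mu += beta * (A @ x - b)              # dual ascent step

# the iterates approach the solution of the KKT system of the QP
K = np.block([[Q, A.T], [A, np.zeros((2, 2))]])
kkt = np.linalg.solve(K, np.concatenate([-c, b]))
```

On this small strongly convex example the iteration converges linearly to the KKT point; the talk's viewpoint is precisely that this scheme can be read as an inexact augmented Lagrangian method.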
[1] Chen, C., Li, M., Liu, X., Ye, Y. (2019). Extended ADMM and BCD for
nonseparable convex minimization models with quadratic coupling terms:
convergence analysis and insights. Mathematical Programming, 173(1-2),
37-77.
[2] Chen, C., He, B., Ye, Y., Yuan, X. (2016). The direct extension of
ADMM for multi-block convex minimization problems is not necessarily
convergent. Mathematical Programming, 155(1-2), 57-79.
[3] Cipolla, S., Gondzio, J. (2020). ADMM and inexact ALM: the QP case.
arXiv:2012.09230.
[4] Sun, R., Luo, Z. Q., Ye, Y. (2020). On the efficiency of random
permutation for ADMM and coordinate descent. Mathematics of Operations
Research, 45(1), 233-271.
https://www.dm.unipi.it/webnew/it/seminari/random-multi-block-admm-alm-base…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Dear all,
An online colloquium of the Department of Mathematics at UniPi is
planned for June 16 at 17:00. You are all welcome!
The speaker is Volker Mehrmann (TU Berlin).
Title:
Energy-based modeling, simulation, and control. Mathematical theory and
algorithms for the solution of real-world problems.
Abstract:
Energy-based modeling via port-Hamiltonian systems is a relatively new
paradigm in all areas of science and engineering. These systems have
wonderful analytic, geometric, and algebraic properties, and they are
also well suited to numerical algorithms for space-time discretization,
model reduction, and control. We will introduce the model class and its
mathematical properties, and we will illustrate its usefulness with
several real-world applications.
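To make the energy-based structure concrete, here is a minimal NumPy sketch (our illustration, not material from the talk) of a linear port-Hamiltonian system x' = (J - R) Q x and a time discretization that respects its dissipation inequality; the matrices and step size are arbitrary choices.

```python
import numpy as np

# Linear port-Hamiltonian system  x' = (J - R) Q x  with energy
# H(x) = 0.5 x^T Q x, J skew-symmetric, R symmetric positive semidefinite.
# Along exact solutions dH/dt = -(Qx)^T R (Qx) <= 0: energy never increases.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
R = np.diag([0.1, 0.0])                   # psd dissipation
Q = np.eye(2)                             # energy Hessian

h, steps = 0.01, 1000
A = (J - R) @ Q
# the implicit midpoint rule reproduces the discrete energy balance
# H(x_{k+1}) - H(x_k) = -h (Q x_mid)^T R (Q x_mid) <= 0
M = np.linalg.solve(np.eye(2) - 0.5 * h * A, np.eye(2) + 0.5 * h * A)

x = np.array([1.0, 0.0])
energies = [0.5 * x @ Q @ x]
for _ in range(steps):
    x = M @ x
    energies.append(0.5 * x @ Q @ x)
energies = np.array(energies)
```

The computed energy sequence is monotonically non-increasing, mirroring the structural property that makes these models attractive for discretization and model reduction.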
Google Meet link: https://meet.google.com/qii-tcks-rrr
Dear all,
the next GSSI Math Colloquium will be held on *Thursday June 17 at 5pm*
(please note **the time is *5pm* instead of the usual 3pm**).
The speaker is Lars Ruthotto, with a lecture connecting numerical
methods for differential equations and deep learning architectures. More
details below.
Lars Ruthotto is Associate Professor of Mathematics and Computer Science
at Emory University (Atlanta, USA).
To attend the talk please use the following Zoom link:
https://us02web.zoom.us/j/85179454721?pwd=TjA0V2M3L3lVTk1NNEdVcGpQcXlTdz09
Please feel free to distribute this announcement as you see fit.
Looking forward to seeing you all on Thursday!
Paolo Antonelli, Stefano Marchesani, Francesco Tudisco and Francesco Viola
---------------------
Title:
Numerical Methods for Deep Learning motivated by Partial Differential
Equations
Abstract:
Understanding the world through data and computation has always formed
the core of scientific discovery. Amid many different approaches, two
common paradigms have emerged. On the one hand, primarily data-driven
approaches—such as deep neural networks—have proven extremely successful
in recent years. Their success is based mainly on their ability to
approximate complicated functions with generic models when trained using
vast amounts of data and enormous computational resources. But despite
many recent triumphs, deep neural networks are difficult to analyze and
thus remain mysterious. Most importantly, they lack the robustness,
explainability, interpretability, efficiency, and fairness needed for
high-stakes decision-making. On the other hand, increasingly realistic
model-based approaches—typically derived from first principles and
formulated as partial differential equations (PDEs)—are now available
for various tasks. One can often calibrate these models—which enable
detailed theoretical studies, analysis, and interpretation—with
relatively few measurements, thus facilitating their accurate
predictions of phenomena.
In this talk, I will highlight recent advances and ongoing work to
understand and improve deep learning by using techniques from partial
differential equations. I will demonstrate how PDE techniques can yield
better insight into deep learning algorithms, more robust networks, and
more efficient numerical algorithms. I will also expose some of the
remaining computational and numerical challenges in this area.
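The connection between architectures and time discretizations that the abstract alludes to can be shown in a toy example: a residual layer x_{k+1} = x_k + h f(x_k) is exactly one forward Euler step for the ODE x'(t) = f(x(t)). The widths, depth, weights, and tanh layer below are illustrative choices of ours, not Ruthotto's networks.

```python
import numpy as np

# A residual network as an explicit Euler discretization of an ODE:
# forward propagation through the layers = time stepping from t=0 to t=1.
rng = np.random.default_rng(1)
d, depth = 4, 50
h = 1.0 / depth                       # step size = 1 / number of layers
W = rng.standard_normal((depth, d, d)) * 0.5
b = rng.standard_normal((depth, d)) * 0.1

def f(x, Wk, bk):
    # a simple layer function; tanh keeps the dynamics bounded
    return np.tanh(Wk @ x + bk)

x = rng.standard_normal(d)
for k in range(depth):                # one Euler step per residual layer
    x = x + h * f(x, W[k], b[k])
```

Viewing depth as time opens the PDE/ODE toolbox (stability analysis, adaptive stepping, multilevel training) for studying such networks, which is the theme of the talk.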
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Speaker: Margherita Porcelli
Affiliation: Università di Bologna
Time: Friday, 04/06/2021, 16:00
Title: Relaxed Interior point methods for low-rank semidefinite
programming problems
In this talk we will discuss a relaxed variant of interior point
methods for semidefinite programming problems (SDPs). We focus on
problems in which the primal variable is expected to be low-rank at
optimality. Such situations are common in relaxations of combinatorial
optimization problems, for example in maximum cut problems, as well as
in matrix completion problems. We exploit the structure of the sought
solution and relax the rigid structure of IPMs for SDPs. In
anticipation of converging to a low-rank primal solution, a special
nearly low-rank form of all primal iterates is imposed. To accommodate
such a (restrictive) structure, the first-order optimality conditions
have to be relaxed and are therefore approximated by solving an
auxiliary least-squares problem. The relaxed interior point framework
opens up numerous possibilities for how approximate primal and dual
Newton directions can be computed. In particular, it admits the
application of both first- and second-order methods in this context. In
this talk we will focus on second-order approaches and discuss the
difficulties arising in the linear algebra phase. A prototype
implementation is shown, as well as computational results on matrix
completion problems. In particular, we will consider mildly
ill-conditioned and noisy random problems, as well as problems arising
in diverse applications in which the matrix to be recovered represents
city-to-city distances, a grayscale image, or game parameters in a
basketball tournament.
This is joint work with S. Bellavia (Unifi) and J. Gondzio (Univ.
Edinburgh).
https://www.dm.unipi.it/webnew/it/seminari/relaxed-interior-point-methods-l…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Speaker: Alfredo Buttari
Affiliation: CNRS, IRIT, Toulouse
Time: Friday, 14/05/2021, 16:00
Title: Reducing the complexity of linear systems solvers through the
block low-rank format
Direct linear system solvers are commonly regarded as robust methods
for computing the solution of linear systems of equations. Nonetheless,
their complexity makes handling very large problems difficult or
infeasible due to excessive execution time or memory consumption. In
this talk we discuss the use of low-rank approximation techniques that
allow for reducing this complexity at the price of a loss in precision
which can be reliably controlled. To take advantage of these low-rank
approximations, we have designed a format called block low-rank (BLR),
whose objective is to achieve a favorable compromise between complexity
and efficiency of operations thanks to its regular structure. We will
present the basic BLR format as well as more advanced variants and the
associated algorithms; we will analyze their theoretical properties and
discuss the issues related to their efficient implementation on
parallel computers. We will specifically focus on the use of BLR for
the solution of sparse linear systems. The combination of this format
with sparse direct methods, such as the multifrontal one, leads to
efficient parallel solvers with scalable complexity. These can either
be used as standalone direct solvers or in combination with other
techniques such as iterative or multigrid methods. We will present
experimental results on real-life problems obtained by integrating the
BLR format within the MUMPS parallel sparse direct solver.
https://www.dm.unipi.it/webnew/it/seminari/reducing-complexity-linear-syste…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Speaker: Stefano Pozza
Affiliation: Charles University, Prague
Time: Friday, 07/05/2021, 16:00
Title: The Short-term Rational Lanczos Method and Applications
Rational Krylov subspaces have become a reference tool in dimension
reduction procedures for several application problems. When data
matrices are symmetric, a short-term recurrence can be used to generate
an associated orthonormal basis. In the past this procedure was
abandoned because it requires twice as many linear system solves
per iteration as the classical long-term method. We propose an
implementation that allows one to obtain key rational subspace matrices
without explicitly storing the whole orthonormal basis, with a moderate
computational overhead associated with sparse system solves. Several
applications are discussed to illustrate the advantages of the proposed
procedure.
https://www.dm.unipi.it/webnew/it/seminari/short-term-rational-lanczos-meth…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Speaker: Philipp Birken
Affiliation: Lund University
Time: Friday, 23/04/2021, 16:00
Title: Conservative iterative solvers in computational fluid dynamics
The governing equations in computational fluid dynamics, such as the
Navier-Stokes or Euler equations, are conservation laws. Finite volume
methods are designed to respect this, and the Lax-Wendroff theorem
underscores its importance. It roughly states that for a nonlinear (!)
scalar conservation law in 1D, if the numerical method with explicit
Euler time integration is consistent and (locally) conservative, then
in case of convergence, the numerical method converges to a weak
solution. When using implicit time integration, the widespread belief
in the community is that conservation is lost. This is, however, not
necessarily due to the time integration, but due to the use of
iterative solvers. We first present a catalogue of iterative solvers
that preserve the weaker property of global conservation, to identify
candidates that preserve the local conservation used in the
Lax-Wendroff theorem. We then proceed to prove an extension of the
Lax-Wendroff theorem for the situation in which we perform a fixed
number of steps of a so-called pseudo-time iteration per time step. It
turns out that in this case the numerical method converges to a weak
solution of the conservation law with a modified propagation speed.
This can be exploited to improve the performance of the iterative
method.
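The role of local conservation can be seen in a small sketch: a finite volume scheme with explicit Euler time stepping conserves the total mass exactly on a periodic grid, because the interface fluxes telescope. The Burgers flux and grid parameters below are illustrative choices, not from the talk.

```python
import numpy as np

# Finite volume scheme for u_t + f(u)_x = 0 with f(u) = u^2/2 (Burgers)
# on a periodic grid, explicit Euler in time, local Lax-Friedrichs flux.
n, dx, dt = 100, 1.0 / 100, 0.002        # dt/dx = 0.2 keeps the CFL number < 1
x = (np.arange(n) + 0.5) * dx
u = np.sin(2 * np.pi * x) + 1.5

def flux(ul, ur):
    # local Lax-Friedrichs numerical flux for f(u) = u^2 / 2
    a = np.maximum(np.abs(ul), np.abs(ur))
    return 0.5 * (ul**2 / 2 + ur**2 / 2) - 0.5 * a * (ur - ul)

mass0 = u.sum() * dx
for _ in range(200):
    F = flux(u, np.roll(u, -1))          # F[i] = flux at interface i+1/2
    u = u - dt / dx * (F - np.roll(F, 1))  # locally conservative cell update
mass = u.sum() * dx
```

Since every interface flux enters two neighboring cell updates with opposite signs, the total mass is preserved to rounding error; an iterative solver inside an implicit step can break exactly this cancellation.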
https://www.dm.unipi.it/webnew/it/seminari/conservative-iterative-solvers-c…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Speaker: Jie Meng
Affiliation: Università di Pisa
Time: Friday, 09/04/2021, 16:00
Title: Geometric means of quasi-Toeplitz matrices
We study means of geometric type of quasi-Toeplitz matrices, that is,
semi-infinite matrices A = (a_{i,j}), i,j = 1,2,..., of the form A =
T(a) + E, where E represents a compact operator and T(a) is a
semi-infinite Toeplitz matrix associated with the function a, with
Fourier series \sum_{l} a_l e^{ilt}, in the sense that (T(a))_{i,j} =
a_{j-i}. If a is real valued and essentially bounded, then these
matrices represent bounded self-adjoint operators on l^2. We consider
the case where a is a continuous function, where quasi-Toeplitz
matrices coincide with a classical Toeplitz algebra, and the case where
a is in the Wiener algebra, that is, has an absolutely convergent
Fourier series. We prove that if a_1, ..., a_p are continuous and
positive functions, or are in the Wiener algebra with some further
conditions, then means of geometric type, such as the ALM, the NBMP and
the Karcher mean of quasi-Toeplitz positive definite matrices
associated with a_1, ..., a_p, are quasi-Toeplitz matrices associated
with the geometric mean (a_1 ... a_p)^{1/p}, differing only by a
compact correction. We show by numerical tests that these operator
means can be practically approximated.
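To fix the notation, here is a short sketch constructing finite sections of T(a) from the Fourier coefficients of a symbol, using the illustrative choice a(t) = 2 + cos t; since a is positive, the sections are symmetric positive definite with spectrum inside the range of a.

```python
import numpy as np

# Finite section of the Toeplitz matrix T(a) with symbol
# a(t) = 2 + cos t = 2 + (e^{it} + e^{-it}) / 2, i.e. a_0 = 2, a_{±1} = 1/2.
# By definition (T(a))_{i,j} = a_{j-i}: coefficient a_l fills diagonal l.
n = 50
a = {0: 2.0, 1: 0.5, -1: 0.5}    # Fourier coefficients of the symbol
T = np.zeros((n, n))
for l, al in a.items():
    T += al * np.eye(n, k=l)     # place a_l on the l-th diagonal

# a(t) takes values in [1, 3], so the sections are SPD with spectrum in (1, 3)
eigs = np.linalg.eigvalsh(T)
```

Positive definiteness of these matrices for positive symbols is what makes geometric means such as ALM, NBMP, and the Karcher mean well defined in this setting.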
https://www.dm.unipi.it/webnew/it/seminari/geometric-means-quasi-toeplitz-m…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4