Speaker: Stefano Cipolla
Affiliation: University of Edinburgh
Time: Friday, 18/06/2021, 16:00
Title: Random multi-block ADMM: an ALM based view for the QP case
Because of its wide versatility and applicability in multiple fields,
the n-block alternating direction method of multipliers (ADMM) for
solving nonseparable convex minimization problems has recently
attracted the attention of many researchers [1, 2, 4]. When the n-block
ADMM is used for the minimization of quadratic functions, it consists
of a cyclic, Gauss-Seidel-style update of the primal variables x_i,
i = 1, ..., n, followed by a dual-ascent-type update of the dual
variable μ. Although the connections between ADMM and Gauss-Seidel are
quite well known, to the best of our knowledge an analysis from a
purely numerical linear algebra point of view is lacking in the
literature. The aim of this talk is to present a series of very recent
results on this topic which shed further light on basic issues such as
convergence and efficiency [3].
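For orientation, a schematic form of the updates just described (our notation, for minimizing a quadratic f(x_1, ..., x_n) subject to A_1 x_1 + ... + A_n x_n = b, with penalty parameter beta > 0) is

  x_i^{k+1} = argmin_{x_i} L_beta(x_1^{k+1}, ..., x_{i-1}^{k+1}, x_i, x_{i+1}^k, ..., x_n^k, μ^k),   i = 1, ..., n,
  μ^{k+1} = μ^k + beta (A_1 x_1^{k+1} + ... + A_n x_n^{k+1} - b),

where L_beta(x, μ) = f(x) + μ^T (\sum_i A_i x_i - b) + (beta/2) ||\sum_i A_i x_i - b||_2^2 is the augmented Lagrangian; the connection of this Gauss-Seidel sweep with an inexact augmented Lagrangian method is the ALM-based view suggested by the title and by [3].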
[1] Chen, C., Li, M., Liu, X., Ye, Y. (2019). Extended ADMM and BCD for
nonseparable convex minimization models with quadratic coupling terms:
convergence analysis and insights. Mathematical Programming, 173(1-2),
37-77.
[2] Chen, C., He, B., Ye, Y., Yuan, X. (2016). The direct extension of
ADMM for multi-block convex minimization problems is not necessarily
convergent. Mathematical Programming, 155(1-2), 57-79.
[3] Cipolla, S., Gondzio, J. (2020). ADMM and inexact ALM: the QP case.
arXiv:2012.09230.
[4] Sun, R., Luo, Z. Q., Ye, Y. (2020). On the efficiency of random
permutation for ADMM and coordinate descent. Mathematics of Operations
Research, 45(1), 233-271.
https://www.dm.unipi.it/webnew/it/seminari/random-multi-block-admm-alm-base…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Dear all,
An online colloquium of the Department of Mathematics at UniPi is
planned for June 16 at 17:00. You are all welcome!
The speaker is Volker Mehrmann (TU Berlin).
Title:
Energy based modeling, simulation, and control. Mathematical theory and
algorithms for the solution of real world problems.
Abstract:
Energy-based modeling via port-Hamiltonian systems is a relatively new
paradigm in all areas of science and engineering. These systems have
wonderful mathematical properties, both in terms of their analytic,
geometric and algebraic structure and with respect to their use in
numerical algorithms for space-time discretization, model reduction and
control. We will introduce the model class and its mathematical
properties, and we will illustrate its usefulness with several
real-world applications.
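As a point of reference (the model class discussed in the talk may be more general), a common finite-dimensional port-Hamiltonian system takes the form

  \dot{x} = (J - R) \nabla H(x) + B u,    y = B^T \nabla H(x),

with a skew-symmetric interconnection matrix J = -J^T, a symmetric positive semidefinite dissipation matrix R, and an energy function (Hamiltonian) H >= 0. The structure immediately gives the energy balance d/dt H(x(t)) = -\nabla H(x)^T R \nabla H(x) + y^T u <= y^T u, one of the properties that structure-aware discretization, model reduction and control methods aim to preserve.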
Google Meet link: https://meet.google.com/qii-tcks-rrr
Dear all,
the next GSSI Math Colloquium will be held on *Thursday June 17 at 5pm*
(please note **the time is *5pm* instead of the usual 3pm**).
The speaker is Lars Ruthotto, with a lecture connecting numerical
methods for differential equations and deep learning architectures. More
details below.
Lars Ruthotto is Associate Professor of Mathematics and Computer Science
at Emory University (Atlanta, USA).
To attend the talk please use the following Zoom link:
https://us02web.zoom.us/j/85179454721?pwd=TjA0V2M3L3lVTk1NNEdVcGpQcXlTdz09
Please feel free to distribute this announcement as you see fit.
Looking forward to seeing you all on Thursday!
Paolo Antonelli, Stefano Marchesani, Francesco Tudisco and Francesco Viola
---------------------
Title:
Numerical Methods for Deep Learning motivated by Partial Differential
Equations
Abstract:
Understanding the world through data and computation has always formed
the core of scientific discovery. Amid many different approaches, two
common paradigms have emerged. On the one hand, primarily data-driven
approaches—such as deep neural networks—have proven extremely successful
in recent years. Their success is based mainly on their ability to
approximate complicated functions with generic models when trained using
vast amounts of data and enormous computational resources. But despite
many recent triumphs, deep neural networks are difficult to analyze and
thus remain mysterious. Most importantly, they lack the robustness,
explainability, interpretability, efficiency, and fairness needed for
high-stakes decision-making. On the other hand, increasingly realistic
model-based approaches—typically derived from first principles and
formulated as partial differential equations (PDEs)—are now available
for various tasks. One can often calibrate these models—which enable
detailed theoretical studies, analysis, and interpretation—with
relatively few measurements, thus facilitating their accurate
predictions of phenomena.
In this talk, I will highlight recent advances and ongoing work to
understand and improve deep learning by using techniques from partial
differential equations. I will demonstrate how PDE techniques can yield
better insight into deep learning algorithms, more robust networks, and
more efficient numerical algorithms. I will also expose some of the
remaining computational and numerical challenges in this area.
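One concrete and well-known instance of this connection (recalled here for orientation; the talk may use different or more general formulations) is the reading of a residual network as a numerical integrator: the layer update x_{k+1} = x_k + h f(x_k, theta_k) is exactly a forward Euler step, with step size h, for the ODE \dot{x}(t) = f(x(t), theta(t)), so that notions such as stability of the time integrator translate into robustness properties of the network.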
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Speaker: Margherita Porcelli
Affiliation: Università di Bologna
Time: Friday, 04/06/2021, 16:00
Title: Relaxed Interior point methods for low-rank semidefinite
programming problems
In this talk we will discuss a relaxed variant of interior point
methods for semidefinite programming problems (SDPs). We focus on
problems in which the primal variable is expected to be low-rank at
optimality. Such situations are common in relaxations of combinatorial
optimization problems, for example in maximum cut problems, as well as
in matrix completion problems. We exploit the structure of the sought
solution and relax the rigid structure of IPMs for SDP. In
anticipation of converging to a low-rank primal solution, a special
nearly low-rank form of all primal iterates is imposed. To accommodate
such a (restrictive) structure, the first-order optimality conditions
have to be relaxed and are therefore approximated by solving an
auxiliary least-squares problem. The relaxed interior point framework
opens up numerous possibilities for how approximate primal and dual
Newton directions can be computed. In particular, it admits the
application of both first- and second-order methods in this context.
In this talk we will focus on second-order approaches and discuss the
difficulties arising in the linear algebra phase. A prototype
implementation is shown, as well as computational results on matrix
completion problems. In particular, we will consider mildly
ill-conditioned and noisy random problems, as well as problems arising
in diverse applications where the matrix to be recovered represents
city-to-city distances, a grayscale image, or game parameters in a
basketball tournament.
This is joint work with S. Bellavia (Unifi) and J. Gondzio (Univ.
Edinburgh)
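For context, in standard notation the SDP primal-dual pair behind the talk reads min <C, X> s.t. A(X) = b, X positive semidefinite, and max b^T y s.t. A^*(y) + S = C, S positive semidefinite, with perturbed first-order optimality conditions

  A(X) = b,    A^*(y) + S = C,    X S = mu I,    mu -> 0.

The relaxation sketched in the abstract keeps the primal iterate in a nearly low-rank form (for instance a low-rank term U U^T plus a small correction; the precise parametrization is the one presented in the talk) and satisfies the above conditions only approximately, through the auxiliary least-squares problem mentioned above. The notation here is ours.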
https://www.dm.unipi.it/webnew/it/seminari/relaxed-interior-point-methods-l…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Speaker: Alfredo Buttari
Affiliation: CNRS, IRIT, Toulouse
Time: Friday, 14/05/2021, 16:00
Title: Reducing the complexity of linear systems solvers through the
block low-rank format
Direct linear system solvers are commonly regarded as robust methods
for computing the solution of linear systems of equations. Nonetheless,
their complexity makes the handling of very large problems
difficult or infeasible due to excessive execution
time or memory consumption. In this talk we discuss the use of low-rank
approximation techniques that allow for reducing this complexity at the
price of a loss in precision which can be reliably controlled. To take
advantage of these low-rank approximations, we
have designed a format called block low-rank (BLR) whose objective is
to achieve a favorable compromise between complexity and efficiency of
operations thanks to its regular structure. We will present the basic
BLR format as well as more advanced variants and the associated
algorithms; we will analyze their theoretical properties and discuss
the issues related to their efficient implementation on parallel
computers. We will specifically focus on the use of BLR for the
solution of sparse linear systems. The combination of this format with
sparse direct methods, such as the multifrontal one, leads to efficient
parallel solvers with scalable complexity. These can either be used as
standalone direct solvers or in combination with other techniques such
as iterative or multigrid methods. We will present experimental results
on real-life problems obtained by integrating the BLR format within the
MUMPS parallel sparse direct solver.
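As a toy illustration of the compression idea behind BLR (a minimal sketch with names of our choosing, using one truncated SVD per off-diagonal tile; the actual BLR kernels in MUMPS are far more elaborate):

  import numpy as np

  def blr_compress(A, block=64, tol=1e-8):
      # Split A into square tiles; store off-diagonal tiles in low-rank form when that saves memory.
      n = A.shape[0]
      tiles = {}
      for i in range(0, n, block):
          for j in range(0, n, block):
              B = A[i:i+block, j:j+block]
              if i == j:
                  tiles[(i, j)] = ("dense", B.copy())            # diagonal tiles are kept dense
                  continue
              U, s, Vt = np.linalg.svd(B, full_matrices=False)
              r = int(np.sum(s > tol * s[0])) if s[0] > 0 else 0
              if r * (B.shape[0] + B.shape[1]) < B.size:         # low-rank storage pays off
                  tiles[(i, j)] = ("lowrank", (U[:, :r] * s[:r], Vt[:r, :]))  # B ~ X @ Y
              else:
                  tiles[(i, j)] = ("dense", B.copy())
      return tiles

Storing an admissible tile as two rank-r factors costs r * (rows + cols) entries instead of rows * cols, which is the source of the complexity reduction discussed above.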
https://www.dm.unipi.it/webnew/it/seminari/reducing-complexity-linear-syste…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Speaker: Stefano Pozza
Affiliation: Charles University, Prague
Time: Friday, 07/05/2021, 16:00
Title: The Short-term Rational Lanczos Method and Applications
Rational Krylov subspaces have become a reference tool in dimension
reduction procedures for several application problems. When data
matrices are symmetric, a short-term recurrence can be used to generate
an associated orthonormal basis. In the past this procedure was
abandoned because it requires twice as many linear system solves
per iteration as the classical long-term method. We propose an
implementation that allows one to obtain key rational subspace matrices
without explicitly storing the whole orthonormal basis, with a moderate
computational overhead associated with sparse system solves. Several
applications are discussed to illustrate the advantages of the proposed
procedure.
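For reference, the rational Krylov subspace in question is usually defined, given poles sigma_1, ..., sigma_{m-1} outside the spectrum of A, as

  Q_m(A, b) = q_{m-1}(A)^{-1} span{ b, A b, ..., A^{m-1} b },   q_{m-1}(z) = \prod_{j=1}^{m-1} (z - sigma_j),

and when A is symmetric an orthonormal basis of Q_m(A, b) can be generated by a short-term recurrence, which is the setting exploited in the talk.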
https://www.dm.unipi.it/webnew/it/seminari/short-term-rational-lanczos-meth…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Speaker: Philipp Birken
Affiliation: Lund University
Time: Friday, 23/04/2021, 16:00
Title: Conservative iterative solvers in computational fluid dynamics
The governing equations in computational fluid dynamics, such as the
Navier-Stokes or Euler equations, are conservation laws. Finite volume
methods are designed to respect this, and the Lax-Wendroff theorem
underscores its importance. It roughly states that for a nonlinear (!)
scalar conservation law in 1D, if the numerical method with explicit
Euler time integration is consistent and (locally) conservative, then
in case of convergence, the numerical method converges to a weak
solution. When using implicit time integration, the widespread belief
in the community is that conservation is lost. This, however, is not
necessarily due to the time integration, but due to the use of
iterative solvers. We first present a catalogue of iterative solvers
that preserve the weaker property of global conservation, to identify
candidates that preserve the local conservation used in the
Lax-Wendroff theorem. We then proceed to prove an extension of the
Lax-Wendroff theorem for the situation in which we perform a fixed
number of steps of a so-called pseudo-time iteration per time step. It
turns out that in this case the numerical method converges to a weak
solution of the conservation law with a modified propagation speed.
This can be exploited to improve the performance of the iterative method.
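To fix ideas, the (local) conservation property entering the Lax-Wendroff theorem refers to finite volume updates in flux-difference form; with explicit Euler in 1D,

  u_i^{n+1} = u_i^n - (Delta t / Delta x) (F_{i+1/2}^n - F_{i-1/2}^n),

where F_{i+1/2} = F(u_i, u_{i+1}) is a numerical flux consistent with the physical flux, F(u, u) = f(u). The question addressed in the talk is which iterative solvers keep this telescoping flux structure when the time integration is implicit and the resulting systems are solved only approximately.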
https://www.dm.unipi.it/webnew/it/seminari/conservative-iterative-solvers-c…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Speaker: Jie Meng
Affiliation: Università di Pisa
Time: Friday, 09/04/2021, 16:00
Title: Geometric means of quasi-Toeplitz matrices
We study means of geometric type of quasi-Toeplitz matrices, which are
semi-infinite matrices A = (a_{i,j}), i,j = 1,2,..., of the form
A = T(a) + E, where E represents a compact operator and T(a) is a
semi-infinite Toeplitz matrix associated with the function a, with
Fourier series \sum_{l} a_l e^{ilt}, in the sense that
(T(a))_{i,j} = a_{j-i}. If a is real-valued and essentially bounded,
then these matrices represent bounded self-adjoint operators on l^2.
We consider the case where a is
a continuous function, where quasi-Toeplitz matrices coincide with a
classical Toeplitz algebra, and the case where a is in the Wiener
algebra, that is, has absolutely convergent Fourier series. We prove
that if a_1, ... , a_p are continuous and positive functions, or are in
the Wiener algebra with some further conditions, then means of
geometric type, such as the ALM, the NBMP and the Karcher mean of
quasi-Toeplitz positive definite matrices associated with a_1, ..., a_p
, are quasi-Toeplitz matrices associated with the geometric mean (a_1
... a_p)^{1/p}, which differ only by the compact correction. We show by
numerical tests that these operator means can be practically
approximated.
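As a reminder, for two positive definite matrices all of the means of geometric type mentioned above reduce to the usual geometric mean

  A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2},

whereas for three or more matrices the ALM, NBMP and Karcher means are different (in general) ways of extending the scalar geometric mean (a_1 ... a_p)^{1/p} to the noncommutative setting.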
https://www.dm.unipi.it/webnew/it/seminari/geometric-means-quasi-toeplitz-m…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Dear all,
You are all invited to this week's NOMADS seminar at GSSI.
The seminar will take place tomorrow *March 30 at 18:00 (CET)*.
The speaker is Ivan Markovsky from Vrije Universiteit Brussel (Belgium)
who will give a talk on Data-driven dynamic interpolation and
approximation. Abstract and more info below.
The seminar will be given via Zoom. To attend the seminar please use the
following link:
https://us02web.zoom.us/j/85393475759?pwd=ckNDOGNGY0d0bTBZVXBmd1FibXJVUT09
Further info about past and future meetings is available at the
webpage: https://num-gssi.github.io/seminar/
Please feel free to distribute this announcement as you see fit.
Hope to see you all tomorrow!
Francesco and Nicola
-------------------------------------------------
Data-driven dynamic interpolation and approximation
Behavioral system theory gives a theoretical foundation for
nonparametric representations of linear time-invariant systems based on
Hankel matrices constructed from data. These data-driven representations
led in turn to new system identification, signal processing, and control
methods. In particular, data-driven simulation and linear quadratic
tracking control problems were solved using the new approach [1,2]. This
talk shows how the approach can be used further on for solving
data-driven interpolation and approximation problems (missing data
estimation) and how it can be generalized to some classes of nonlinear
systems. The theory leads to algorithms that are both general (can deal
simultaneously with missing, exact, and noisy data of multivariable
systems) and simple (require existing numerical linear algebra methods
only). This opens a practical computational way of doing system theory
and signal processing directly from data, without first identifying a
transfer function or a state-space representation and then doing
model-based design.
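Concretely, the Hankel matrix of depth L built from an observed trajectory w = (w(1), ..., w(T)) is

  H_L(w) = [ w(1)   w(2)   ...  w(T-L+1)
             w(2)   w(3)   ...  w(T-L+2)
             ...
             w(L)   w(L+1) ...  w(T)     ],

and, under suitable persistency-of-excitation assumptions, its column span contains every length-L trajectory of the underlying linear time-invariant system (the fundamental lemma of Willems et al.), which is what makes the nonparametric representation above possible.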
References:
[1] I. Markovsky and P. Rapisarda. Data-driven simulation and control.
Int. J. Control, 81(12):1946-1959, 2008.
[2] I. Markovsky. A missing data approach to data-driven filtering and
control. IEEE Trans. Automat. Contr., 62:1972--1978, April 2017.
[3] I. Markovsky and F. Dörfler. Data-driven dynamic interpolation and
approximation. Technical report, Vrije Universiteit Brussel, 2021.
Available from http://homepages.vub.ac.be/~imarkovs/publications/ddint.pdf
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Speaker: Alice Cortinovis
Affiliation: EPFL
Time: Friday, 19/03/2021, 16:00
Title: Randomized trace estimates for indefinite matrices with an application to determinants
Randomized trace estimation is a popular technique to approximate the
trace of a large-scale matrix A by computing the average of quadratic
forms x^T * A * x for many samples of a random vector x. We show new
tail bounds for randomized trace estimates in the case of Rademacher
and Gaussian random vectors, which significantly improve existing
results for indefinite matrices. Then we focus on the approximation of
the determinant of a symmetric positive definite matrix B, which can be
done via the relation log(det(B)) = trace(log(B)), where the matrix
log(B) is usually indefinite. We analyze the convergence of the Lanczos
method to approximate quadratic forms x^T * log(B) * x by exploiting
its connection to Gauss quadrature. Finally, we combine our tail bounds
on randomized trace estimates with the analysis of the Lanczos method
to improve and extend an existing result on log determinant
approximation to not only cover Rademacher but also Gaussian random
vectors.
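A minimal sketch of the estimator being analyzed (Hutchinson-type, here with Rademacher vectors; the function names are ours):

  import numpy as np

  def hutchinson_trace(matvec, n, num_samples=100, rng=None):
      # Estimate trace(A) as the average of x^T A x over random sign vectors x;
      # matvec(x) must return A @ x, so A never needs to be formed explicitly.
      rng = np.random.default_rng() if rng is None else rng
      total = 0.0
      for _ in range(num_samples):
          x = rng.choice([-1.0, 1.0], size=n)   # Rademacher sample
          total += x @ matvec(x)
      return total / num_samples

  # Example: log-determinant of an SPD matrix via log(det(B)) = trace(log(B)), as in the abstract
  # (log(B) is formed densely here for illustration; the talk uses Lanczos quadrature instead).
  B = np.eye(100) + 0.1 * np.ones((100, 100))
  w, V = np.linalg.eigh(B)
  logB = (V * np.log(w)) @ V.T
  print(hutchinson_trace(lambda x: logB @ x, 100), np.log(np.linalg.det(B)))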
https://www.dm.unipi.it/webnew/it/seminari/randomized-trace-estimates-indef…
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Good morning everyone,
This is just a gentle reminder about *today's seminar by Eugene
Tyrtyshnikov* (MSU and INM-RAS).
The seminar will take place *at 18:00* via the Zoom meeting:
https://us02web.zoom.us/j/84101660726?pwd=TDhrWlFKdnhQVnBTZFdMWmw3Q3J4QT09
Hope to see you all there!
Francesco and Nicola
----------------
Tikhonov's solution to a class of linear systems equivalent within
perturbations
A standard approach to incorrect (ill-posed) problems suggests that the
problem of interest be reformulated using some additional a-priori
information. This can be done by several well-known regularization
techniques. Many practical problems are successfully solved in this way.
What still does not look completely satisfactory is that the new,
reformulated problem seems to appear rather implicitly in the very
process of its solution.
In 1980, A. N. Tikhonov proposed a reformulation [1] that arises
explicitly before any discussion of solution methods. He suggested a
notion of normal solution to a family of linear algebraic systems
described by a given individual system and its vicinity of perturbed
systems, under the assumption that the class contains compatible
systems regardless of whether the given individual system is itself
compatible. Tikhonov proved that the normal solution exists and is
unique. However, a natural question about the correctness of the
reformulated problem was not answered. In this talk we address the
question of correctness of such reformulated incorrect problems, which
seems to have been missed in all previous considerations. The main
result is a proof of correctness for Tikhonov's normal solution.
Possible generalizations and difficulties will also be discussed.
[1] A. N. Tikhonov, Approximate systems of linear algebraic equations,
USSR Computational Mathematics and Mathematical Physics, vol. 20, issue
6 (1980)
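For readers less familiar with the terminology: one of the "well-known regularization techniques" alluded to above is classical Tikhonov regularization, which replaces an ill-posed system Ax = b by the well-posed problem

  min_x ||A x - b||^2 + alpha^2 ||x||^2,   alpha > 0.

The 1980 proposal discussed in the talk is different in spirit: the reformulation, via the normal solution of a whole class of systems equivalent within perturbations, is stated explicitly before any solution method is chosen.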
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Dear all,
You are all invited to this week's NOMADS seminar at GSSI.
*The seminar schedule has changed and seminars will now run on Tuesdays
at 18:00 (CET)* most of the time. This week's seminar is on *March 09
at 18:00 (CET)*.
The speaker is Eugene Tyrtyshnikov from Moscow University and INM-RAS
(Russia). The talk will be focused on the Tikhonov solution to a class of
linear system problems. Please find abstract and title below.
The seminar will be given via Zoom. To attend the seminar please use the
following link:
https://us02web.zoom.us/j/84101660726?pwd=TDhrWlFKdnhQVnBTZFdMWmw3Q3J4QT09
Further info about past and future meetings is available at the webpage:
https://num-gssi.github.io/seminar/
Please feel free to distribute this announcement as you see fit.
Hope to see you all on Tuesday!
Francesco and Nicola
------
Tikhonov's solution to a class of linear systems equivalent within
perturbations
A standard approach to incorrect (ill-posed) problems suggests that the
problem of interest be reformulated using some additional a-priori
information. This can be done by several well-known regularization
techniques. Many practical problems are successfully solved in this way.
What still does not look completely satisfactory is that the new,
reformulated problem seems to appear rather implicitly in the very
process of its solution.
In 1980, A. N. Tikhonov proposed a reformulation [1] that arises
explicitly before any discussion of solution methods. He suggested a
notion of normal solution to a family of linear algebraic systems
described by a given individual system and its vicinity of perturbed
systems, under the assumption that the class contains compatible
systems regardless of whether the given individual system is itself
compatible. Tikhonov proved that the normal solution exists and is
unique. However, a natural question about the correctness of the
reformulated problem was not answered. In this talk we address the
question of correctness of such reformulated incorrect problems, which
seems to have been missed in all previous considerations. The main
result is a proof of correctness for Tikhonov's normal solution.
Possible generalizations and difficulties will also be discussed.
[1] A. N. Tikhonov, Approximate systems of linear algebraic equations, USSR
Computational Mathematics and Mathematical Physics, vol. 20, issue 6 (1980)
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Speaker: Davide Bianchi
Affiliation: University of Insubria
Time: Friday, 05/03/2021, 16:00
Title: Compatibility, embedding and regularization of non-local random walks on graphs
Several variants of the graph Laplacian have been introduced to model
non-local diffusion processes, which allow a random walker to “jump” to
non-neighborhood nodes, most notably the path graph Laplacians and the
fractional graph Laplacian, see [2, 3]. From a rigorous point of view,
this new dynamics is made possible by replacing the original graph G
with a weighted complete graph G' on the same node set, which depends
on G and wherein the presence of new edges allows a direct passage
between nodes that were not neighbors in G.
A natural question arises: are the dynamics of the “old” walks along
the edges of G compatible with the new dynamics? Indeed, it would be
desirable to introduce long-range jumps while preserving at the same
time the original dynamics when moving along the edges of G. In other
words, for any time interval in which no long-range jump takes place, a
random walk on G' should be indistinguishable from the original random
walk on G. One can easily picture this with a simple but clarifying
example: suppose that our random walker is surfing the Net (the
original graph G), and, just for the sake of simplicity, suppose that
the Net is undirected. The walker can then move towards linked web
pages with a probability that is either uniform over the total number
of links or dependent on some other parameters. Suppose now that we
allow the walker to jump from one web page to non-linked web pages by
just typing a URL address in the navigation bar, so that he can
virtually reach directly any possible web page on the Net (the induced
graph G'). If at any moment, for any reason, the walker is forced again
to surf the Net by just following the links, then we should see him
moving exactly as he used to, namely, the probability that he moves to
the next linked web page has to be the same as before.
Unfortunately, in general, the induced complete graph G', defined
according to the proposals in the literature, breaks that
compatibility, and the new models cease to be expressions of the
original model G.
In this talk, we will present some of the main results obtained in [1].
We will first introduce a rigorous definition of compatibility and
embedding, which stem from a probabilistic and a purely analytical
point of view, respectively. Secondly, we will propose a regularization
method that guarantees such compatibility while preserving at the same
time all the nice properties granted by G'.
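For concreteness, one of the non-local operators mentioned above, the fractional graph Laplacian, is defined spectrally from the ordinary graph Laplacian L = \sum_k lambda_k u_k u_k^T as

  L^alpha = \sum_k lambda_k^alpha u_k u_k^T,   0 < alpha < 1,

which is in general a dense matrix even when L is sparse; loosely speaking, its off-diagonal entries supply the weights of the new edges in the complete graph G' discussed above.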
Meeting link: https://hausdorff.dm.unipi.it/b/leo-xik-xu4
Good morning everyone,
This is just a gentle reminder about *today's seminar* on "Learning from
signals on graphs with unobserved edges" by Michael Schaub (RWTH Aachen
University). Abstract below.
Please note that the seminar talk has been pushed back by one hour and
*will take place at 18:00*.
The Zoom link for the talk is:
https://us02web.zoom.us/j/87171939595
Hope to see you all there!
Francesco and Nicola
----------------------------------------------
Speaker:
Michael Schaub, RWTH Aachen University
https://michaelschaub.github.io/
Title:
Learning from signals on graphs with unobserved edges
Abstract:
In many applications we are confronted with the following system
identification scenario: we observe a dynamical process that describes
the state of a system at particular times. Based on these observations
we want to infer the (dynamical) interactions between the entities we
observe. In the context of a distributed system, this typically
corresponds to a "network identification" task: find the (weighted)
edges of the graph of interconnections. However, often the number of
samples we can obtain from such a process is far too small to identify
the edges of the network exactly. Can we still reliably infer some
aspects of the underlying system?
Motivated by this question we consider the following identification
problem: instead of trying to infer the exact network, we aim to recover
a (low-dimensional) statistical model of the network based on the
observed signals on the nodes. More concretely, here we focus on
observations that consist of snapshots of a diffusive process that
evolves over the unknown network. We model the (unobserved) network as
generated from an independent draw from a latent stochastic blockmodel
(SBM), and our goal is to infer both the partition of the nodes into
blocks, as well as the parameters of this SBM. We present simple
spectral algorithms that provably solve the partition and parameter
inference problems with high accuracy.
We further discuss some possible variations and extensions of this
problem setup.
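For orientation (the talk's exact observation model may differ in detail), a canonical example of such a diffusive process is the heat equation on a graph with Laplacian L,

  \dot{x}(t) = -L x(t),   i.e.   x(t) = exp(-t L) x(0),

so each snapshot is a spectrally smoothed version of an unknown initial condition, and the inference task is to recover the block structure and parameters of the latent SBM from a few such smoothed signals.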
This talk is part of NOMADS — Numerical ODEs, Matrix Analysis and Data
Science — seminar at GSSI:
https://num-gssi.github.io/seminar/
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Dear all,
as usual, we will soon start with the NumPi seminars for the second
semester. As in the first semester, they will be online.
In order to minimize the overlap with teaching activities and other
commitments, we have prepared a Doodle where (if you wish to attend the
seminars) you can choose your preferred time slot(s).
Please note that the Doodle is for this week, but it is intended for a
"generic week" of this semester. The first seminar will likely be at
the beginning of March.
Doodle link: https://doodle.com/poll/a4fpmfwat7q62rnu
P.S.: In case you wish to propose some speakers (or yourself as a
speaker), feel free to drop us an e-mail.
Best wishes, -- Fabio Durastante and Leonardo Robol.
Dear all,
You are all invited to this week's NOMADS seminar at GSSI.
The seminar will be on Wednesday February 17 at 17:00 (CET) by Michael
Schaub from RWTH Aachen University (Germany).
The talk will be focused on a method for learning the structure of a
network given few observations of a diffusive process on the unknown graph.
Title and abstract are below.
To attend the seminar please use the following link:
https://us02web.zoom.us/j/87171939595
Further info about past and future meetings is available at the webpage:
https://num-gssi.github.io/seminar/
Hope to see you all on Wednesday! And, please feel free to distribute
this announcement as you see fit.
Francesco and Nicola
--------
Title: Learning from signals on graphs with unobserved edges
In many applications we are confronted with the following system
identification scenario: we observe a dynamical process that describes
the state of a system at particular times. Based on these observations
we want to infer the (dynamical) interactions between the entities we
observe. In the context of a distributed system, this typically
corresponds to a "network identification" task: find the (weighted)
edges of the graph of interconnections. However, often the number of
samples we can obtain from such a process is far too small to identify
the edges of the network exactly. Can we still reliably infer some
aspects of the underlying system?
Motivated by this question we consider the following identification
problem: instead of trying to infer the exact network, we aim to recover
a (low-dimensional) statistical model of the network based on the
observed signals on the nodes. More concretely, here we focus on
observations that consist of snapshots of a diffusive process that
evolves over the unknown network. We model the (unobserved) network as
generated from an independent draw from a latent stochastic blockmodel
(SBM), and our goal is to infer both the partition of the nodes into
blocks, as well as the parameters of this SBM. We present simple
spectral algorithms that provably solve the partition and parameter
inference problems with high accuracy.
We further discuss some possible variations and extensions of this
problem setup.
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Good morning everyone,
This is just a gentle reminder about today's seminar "Large-scale
regression with non-convex loss and penalty" by Lothar Reichel (Kent
State University, USA). Abstract below.
The seminar is at 17:00 (CET). To attend please use the zoom link:
https://us02web.zoom.us/j/89724684523
Please feel free to distribute this announcement as you see fit.
Hope to see you there!
Francesco and Nicola
----------
Title:
Large-scale regression with non-convex loss and penalty
Description:
We do non-convex optimization with application to image restoration and
regression problems for which a sparse solution is desired.
----------
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Dear all,
You are all invited to this week's NOMADS seminar at GSSI.
The seminar will be given on *Thursday* (not Wednesday as usual)
*February 4 at 17:00 (CET)* by *Lothar Reichel* from Kent State
University (USA).
---
Title:
Large-scale regression with non-convex loss and penalty
Description:
We do non-convex optimization with application to image restoration and
regression problems for which a sparse solution is desired.
---
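For orientation, a representative instance of the problem class named in the title (our guess at a typical formulation, not necessarily the one used in the talk) is

  min_x (1/q) ||A x - b||_q^q + (mu/p) ||x||_p^p,   0 < p, q <= 2,

which is non-convex whenever p < 1 or q < 1, with small p promoting sparse solutions.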
To attend the seminar please use the following link:
https://us02web.zoom.us/j/89724684523
Further info about past and future meetings is available at the webpage:
https://num-gssi.github.io/seminar/
Please feel free to distribute this announcement as you see fit.
Hope to see you all on Thursday!
Francesco and Nicola
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
I am forwarding this announcement for potentially interested readers who are not GNCS members.
Have a good weekend,
-federico
-------- Forwarded Message --------
Subject: Summer School plus Conference on “Mathematics for Nonstationary Signals and applications in Geophysics and other fields” - L'Aquila (Italy) and online, July 2021
Date: Sat, 30 Jan 2021 10:29:15 +0100
From: Ruggiero Valeria <valeria.ruggiero(a)unife.it>
To: gncs-aderenti(a)altamatematica.it
Dear Colleagues,
we kindly inform you that a Summer School plus Conference on
“Mathematics for Nonstationary Signals and applications in Geophysics and other fields”
will take place at the Università degli Studi dell'Aquila, L'Aquila, Italy, and online on July 19-24, 2021.
The event will be hybrid, giving everyone the opportunity to join either in person or virtually.
During the Summer School young researchers and PhD students will have a chance to learn and deepen
their knowledge on Mathematics of Signal Processing, in particular on new data analysis tools/techniques
for non-stationary time series and their theoretical foundation.
The summer school will take place during the first 4 days and it will consist of three courses of 8 hours each.
Confirmed Lecturers:
Patrick Flandrin - ENS Lyon
Yang Wang - HKSTU
Hau-tieng Wu - Duke University
At the end of the school there will be a two-and-a-half-day Conference and Poster Session,
during which the speakers will both show applications of these techniques to real-life data
and present the current frontiers of theoretical research.
Some slots for contributed talks and posters are still available.
Contributed talks will be 30 minutes long (25+5 for questions).
Submission deadline is April 30, 2021.
Applications for prospective students of the Summer School,
as well as speakers of the conference and poster session are now open.
Financial support is available for a limited number of participants.
For more information and to apply please visit www.cicone.com/NoSAG21.html
Best regards,
The local organizing committee
Antonio Cicone - DISIM - Università degli Studi dell'Aquila - L'Aquila
Giulia D'Angelo - INAF - Istituto di Astrofisica e Planetologia Spaziali - Roma
Enza Pellegrino - DIIIE - Università degli Studi dell'Aquila - L'Aquila
Mirko Piersanti - INFN - Universita di Roma "Tor Vergata" - Roma
Angela Stallone - INGV - Istituto Nazionale di Geofisica e Vulcanologia - Roma
Dear all,
the next GSSI Math Colloquium will be held on *Thursday January 28 at 3pm* (Italian time).
The speaker is Anders Hansen (http://www.damtp.cam.ac.uk/research/afha/anders/),
with a lecture connecting computational mathematics with deep learning
and AI. More details below.
Anders Hansen is Associate Professor at the University of Cambridge, where
he leads the Applied Functional and Harmonic Analysis group, and Full
Professor of Mathematics at the University of Oslo.
To attend the talk please use the following *Zoom link*:
https://us02web.zoom.us/j/84038062394
Please feel free to distribute this announcement as you see fit.
Looking forward to seeing you all on Thursday!
Paolo Antonelli, Stefano Marchesani, Francesco Tudisco and Francesco Viola
---------------------
Title: On the foundations of computational mathematics, Smale's 18th
problem and the potential limits of AI
Abstract:
There is profound optimism about the impact of deep learning (DL) and AI
in the sciences, with Geoffrey Hinton concluding that 'They should stop
training radiologists now'. However, DL has an Achilles heel: it is
universally unstable so that small changes in the initial data can lead
to large errors in the final result. This has been documented in a wide
variety of applications. Paradoxically, the existence of stable neural
networks for these applications is guaranteed by the celebrated
Universal Approximation Theorem; however, the stable neural networks are
never computed by the current training approaches. We will address this
problem and the potential limitations of AI from a foundations point of
view. Indeed, the current situation in AI is comparable to the situation
in mathematics in the early 20th century, when David Hilbert’s optimism
(typically reflected in his 10th problem) suggested no limitations to
what mathematics could prove and no restrictions on what computers could
compute. Hilbert’s optimism was turned upside down by Goedel and Turing,
who established limitations on what mathematics can prove and which
problems computers can solve (however, without limiting the impact of
mathematics and computer science).
We predict a similar outcome for modern AI and DL, where the
limitations of AI (the main topic of Smale’s 18th problem) will be
established through the foundations of computational mathematics. We
sketch the beginning of such a program by demonstrating how there exist
neural networks approximating classical mappings in scientific
computing; however, no algorithm (even a randomised one) can compute such
a network to even 1-digit accuracy (with probability better than 1/2). We
will also show how instability is inherent in the methodology of DL,
demonstrating that there is no easy remedy given the current
methodology. Finally, we will demonstrate basic examples in inverse
problems where there exist (untrained) neural networks that can easily
compute a solution to the problem; however, current DL techniques
would need 10^80 data points in the training set to get even a 1% success rate.
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Good morning,
I am forwarding this announcement, which may be of interest to some
subscribers of this list.
**Postdoc Position, Krylov Methods, Charles Univ, Czech Rep**
A postdoc position is available within the framework of the Primus
Research Programme "A Lanczos-like Method for the Time-Ordered
Exponential" at the Faculty of Mathematics and Physics, Charles
University, Prague.
The appointment period is one year, with the possibility of
extension. The postdoc will start before the end of 2021. The start
date is negotiable.
We are looking for candidates with a strong background in numerical
linear algebra. In particular, we seek applicants with expertise in
matrix function approximation and Krylov subspace methods. The
applicant must hold a Ph.D. degree by the start date.
Application deadline: March 15, 2021.
More information and application instructions:
https://www.starlanczos.cz/open-positions
--
--federico poloni
Dipartimento di Informatica, Università di Pisa
https://www.di.unipi.it/~fpoloni/ tel:+39-050-2213143
Dear all,
on January 19, 3 pm, Dario Bini will give a talk on "Solving Structured
Matrix Equations Encountered in the Analysis of Stochastic Processes".
The talk is part of the NEPA seminar series, and many more talks will
take place in the next weeks. Participation is free, but a registration
is required to obtain the Zoom link [1].
[1] https://sites.google.com/unisa.it/nepaseminars
Best wishes, -- Leonardo.
Good morning everyone,
This is just a gentle reminder about today's seminar "Numerical
integrators for dynamical low-rank approximation" by Gianluca Ceruti
(Uni Tuebingen). Abstract below.
The seminar is at 17:00 (CET). To attend please use the zoom link:
https://us02web.zoom.us/j/82131676880
Hope to see you there!
Francesco and Nicola
-----------
Gianluca Ceruti (https://na.uni-tuebingen.de/~ceruti/) - University of Tuebingen
Numerical integrators for dynamical low-rank approximation
Discretization of time-dependent high-dimensional PDEs suffers from an
undesired effect known as the curse of dimensionality: the amount of
data to be stored and treated grows exponentially and exceeds the
standard capacity of common computational devices.
In this setting, time-dependent model order reduction techniques are
desirable.
In the present seminar, together with efficient numerical integrators,
we present a recently developed approach: dynamical low-rank approximation.
Dynamical low-rank approximation for matrices will be presented first,
and a numerical integrator with two remarkable properties will be
introduced: the matrix projector splitting integrator.
Based upon this numerical integrator, we will construct two equivalent
extensions for tensors (multi-dimensional arrays) in Tucker format, a
high-order generalization of the SVD for matrices. These extensions are
proven to preserve the excellent qualities of the matrix integrator.
To conclude, via a novel compact formulation of the Tucker integrator,
we will further extend the matrix and Tucker projector splitting
integrators to the most general class of Tree Tensor Networks.
Important examples belonging to this class and of interest for
applications include, but are not restricted to, Tensor Trains.
This seminar is based upon a joint work with Ch. Lubich and H. Walach.
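For reference, dynamical low-rank approximation constrains the approximation Y(t) of a large time-dependent matrix A(t) to the manifold M_r of rank-r matrices and determines it by projecting the dynamics onto the tangent space,

  \dot{Y}(t) = P_{T_{Y(t)} M_r} ( \dot{A}(t) ),   Y(t) in M_r

(analogously with a right-hand side F(Y) when A solves a matrix ODE). The matrix projector splitting integrator mentioned above integrates this projected system by splitting the tangent-space projector into three sequentially solvable substeps, and it is this splitting structure that carries over to the Tucker and tree tensor network extensions described in the abstract.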
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io
Dear all,
We hope you all had joyful holidays and wish you all a great start for the
new year!
We are ready to start with this year's NOMADS seminar at GSSI and would
like to invite you to this week's talk.
The seminar will be given on Wednesday January 13 at 17:00 (CET) by
Gianluca Ceruti from University of Tuebingen.
Title, abstract and zoom link are below.
Further info about past and future meetings is available at the webpage:
https://num-gssi.github.io/seminar/
Please feel free to distribute this announcement as you see fit.
Hope to see you all on Wednesday!
Francesco and Nicola
================================
Speaker: Gianluca Ceruti, University of Tuebingen
https://na.uni-tuebingen.de/~ceruti/
Zoom link:
https://us02web.zoom.us/j/82131676880
Numerical integrators for dynamical low-rank approximation
Discretization of time-dependent high-dimensional PDEs suffers from an
undesired effect known as the curse of dimensionality: the amount of
data to be stored and treated grows exponentially and exceeds the
standard capacity of common computational devices.
In this setting, time-dependent model order reduction techniques are
desirable.
In the present seminar, together with efficient numerical integrators,
we present a recently developed approach: dynamical low-rank approximation.
Dynamical low-rank approximation for matrices will be presented first,
and a numerical integrator with two remarkable properties will be
introduced: the matrix projector splitting integrator.
Based upon this numerical integrator, we will construct two equivalent
extensions for tensors (multi-dimensional arrays) in Tucker format, a
high-order generalization of the SVD for matrices. These extensions are
proven to preserve the excellent qualities of the matrix integrator.
To conclude, via a novel compact formulation of the Tucker integrator,
we will further extend the matrix and Tucker projector splitting
integrators to the most general class of Tree Tensor Networks.
Important examples belonging to this class and of interest for
applications include, but are not restricted to, Tensor Trains.
This seminar is based upon a joint work with Ch. Lubich and H. Walach.
—
Francesco Tudisco
Assistant Professor
School of Mathematics
GSSI Gran Sasso Science Institute
Web: https://ftudisco.gitlab.io