-------- Forwarded Message --------
Dear all,
the next GSSI Math Colloquium will be held on Thursday, January 28, at 3pm (Italian time). The speaker is Anders Hansen, with a lecture connecting computational mathematics with deep learning and AI. More details below.
Anders Hansen is Associate Professor at the University of Cambridge, where he leads the Applied Functional and Harmonic Analysis group, and Full Professor of Mathematics at the University of Oslo.
To attend the talk, please use the following Zoom link:
https://us02web.zoom.us/j/84038062394
Please feel free to distribute this announcement as you see fit.
Looking forward to seeing you all on Thursday!
Paolo Antonelli, Stefano Marchesani, Francesco Tudisco and Francesco Viola
---------------------
Title: On the foundations of computational mathematics, Smale's
18th problem and the potential limits of AI
Abstract:
There is profound optimism about the impact of deep learning (DL)
and AI in the sciences, with Geoffrey Hinton concluding that 'They
should stop training radiologists now'. However, DL has an
Achilles heel: it is universally unstable, so that small changes
in the initial data can lead to large errors in the final result.
This has been documented in a wide variety of applications.
Paradoxically, the existence of stable neural networks for these
applications is guaranteed by the celebrated Universal
Approximation Theorem; however, the stable neural networks are
never computed by the current training approaches. We will address
this problem and the potential limitations of AI from a
foundations point of view. Indeed, the current situation in AI is
comparable to the situation in mathematics in the early 20th
century, when David Hilbert's optimism (typically reflected in his
10th problem) suggested no limitations on what mathematics could
prove and no restrictions on what computers could compute.
Hilbert's optimism was turned upside down by Gödel and Turing,
who established limitations on what mathematics can prove and
which problems computers can solve (without, however, limiting the
impact of mathematics and computer science).
We predict a similar outcome for modern AI and DL, where the
limitations of AI (the main topic of Smale’s 18th problem) will be
established through the foundations of computational mathematics.
We sketch the beginning of such a program by demonstrating that
there exist neural networks approximating classical mappings in
scientific computing, yet no algorithm (even a randomised one) can
compute such a network to even one-digit accuracy (with
probability better than 1/2). We will also show that instability
is inherent in the methodology of DL, demonstrating that there is
no easy remedy given the current methodology. Finally, we will
present basic examples in inverse problems where there exist
(untrained) neural networks that can easily compute a solution to
the problem, yet current DL techniques would need 10^80 data
points in the training set to achieve even a 1% success rate.
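
For readers unfamiliar with the instability phenomenon the abstract
refers to, here is a minimal, self-contained sketch (not from the
talk; the toy architecture, the weight scale of 50, and the
perturbation size of 0.01 are illustrative assumptions) of how a
network with a large Lipschitz constant turns a tiny input change
into a large output change:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy one-hidden-layer ReLU network with deliberately large
    # weights, standing in for an unstably trained model (an
    # assumption for illustration, not the networks from the talk).
    W1 = 50.0 * rng.standard_normal((64, 2))
    W2 = 50.0 * rng.standard_normal((1, 64))

    def f(x):
        # Scalar network output for a 2-dimensional input.
        return (W2 @ np.maximum(W1 @ x, 0.0)).item()

    x = rng.standard_normal(2)

    # Estimate the gradient by finite differences and perturb
    # along it, the same idea behind adversarial examples.
    eps = 1e-6
    g = np.array([(f(x + eps * e) - f(x)) / eps for e in np.eye(2)])
    delta = 1e-2 * g / np.linalg.norm(g)  # perturbation of norm 0.01

    print("input perturbation size:", np.linalg.norm(delta))
    print("output change          :", abs(f(x + delta) - f(x)))
    # The output typically moves by orders of magnitude more than
    # 0.01: a small change in the input yields a large output error.

With this construction the printed output change is typically orders
of magnitude larger than the 0.01 input perturbation; their ratio is
a local Lipschitz estimate, which is exactly the quantity that blows
up for unstable networks.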