Abstract:
Quantum neural networks are the quantum counterpart of deep neural networks. These models are based on quantum circuits: they generate functions given by the expectation value of a quantum observable measured on the output of a circuit made of parametric one-qubit and two-qubit gates. The parameters of the circuit encode both the input data and the parameters of the model itself. The model parameters are typically optimized by gradient descent, which iteratively adjusts them to minimize a cost function and thereby improve the performance of the circuit in processing and analyzing data. Significant progress has been made on the question of whether training can perfectly fit the training examples while simultaneously avoiding overfitting.
A fundamental breakthrough has been the proof that, in the limit of infinite width, the probability distribution of the function generated by a deep neural network trained on a supervised learning problem converges to a Gaussian process. This result has inspired renewed interest in quantum machine learning, raising the question of whether quantum neural networks exhibit analogous properties. In this presentation, I will explore some of the recent advances in this area, highlighting key insights and findings. This talk is based on joint work with Giacomo de Palma, Filippo Girardi, and Davide Pastorello.
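As a worked-out version of the setup described above, in my own notation (not taken from the abstract): the network outputs the expectation value of an observable O measured on the state prepared by a parametric circuit U(x, theta) acting on the all-zero state, where x is the input and theta collects the trainable parameters, and training updates theta by gradient descent on an empirical cost built from training examples (x_i, y_i), a loss ell, and a learning rate eta:

\[
  f_{\theta}(x) \;=\; \langle 0\cdots 0 |\, U(x,\theta)^{\dagger}\, O\, U(x,\theta)\, | 0\cdots 0 \rangle,
  \qquad
  \theta \;\leftarrow\; \theta \;-\; \eta\, \nabla_{\theta} \frac{1}{N} \sum_{i=1}^{N} \ell\big( f_{\theta}(x_i),\, y_i \big).
\]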
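A second, purely illustrative aside: the following standard NumPy snippet (a toy example of my own, not code from the talk, and concerning only a classical network at random initialization rather than the trained networks discussed above) checks numerically the simplest instance of the statement quoted above: the outputs of a random one-hidden-layer network at a few fixed inputs have a covariance given by a kernel, and their joint distribution approaches a Gaussian as the width grows.

import numpy as np

rng = np.random.default_rng(0)

def network_outputs(inputs, width):
    # One random one-hidden-layer network with i.i.d. N(0, 1) weights and
    # 1/sqrt(width) output scaling: f(x) = v . tanh(W x) / sqrt(width).
    W = rng.normal(size=(width, inputs.shape[1]))
    v = rng.normal(size=width)
    return np.tanh(inputs @ W.T) @ v / np.sqrt(width)

inputs = np.array([[1.0, 0.0], [0.5, 0.5]])  # two fixed input points x1, x2
for width in (10, 100, 10_000):
    draws = np.array([network_outputs(inputs, width) for _ in range(3000)])
    # The empirical covariance of (f(x1), f(x2)) over random networks stays close to the
    # kernel K(xi, xj) = E[tanh(w . xi) tanh(w . xj)], and by the central limit theorem
    # the joint distribution of (f(x1), f(x2)) approaches a Gaussian as the width grows.
    print(width, np.cov(draws, rowvar=False).round(3))

The printed 2x2 covariance matrices are essentially the same at every width (with this scaling the kernel does not depend on the width); what improves with the width is the Gaussianity of the joint law of the outputs.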
Zoom link:
Polimi seminar link:
-----
Prof. Luca Scarpa, PhD
Associate Professor in Probability
Department of Mathematics
Politecnico di Milano
Via E. Bonardi 9
20133 Milano, Italy