Next Webinar PRISMA - April 13, 2026 - Speakers: Jodi Dianetti & Giorgio Ferrari
Webinar of the UMI team PRISMA (http://www.umi-prisma.polito.it/)

PRISMA webinars follow a colloquium-style format designed to foster exchange and discussion within the Italian probability and statistics community. Each session features two speakers, who give two closely connected 30-minute talks, providing the community with a perspective on their research area. Over the past few years, recordings of the seminars have been made available on the UMI YouTube channel: https://youtube.com/playlist?list=PLmySpc-jrtAMq84VH71evyqPc1hl6eEQb

The next event is scheduled for Monday, April 13, 2026. The speakers will be Jodi Dianetti (Università di Roma Tor Vergata) and Giorgio Ferrari (Bielefeld University), who will speak on:

Singular Stochastic Controls in Reinforcement Learning and Mean-field Problems

The schedule is as follows:

16:00 1st seminar
16:30 Break and discussions
16:45 2nd seminar
17:15 Conclusions and discussions

The abstract can be found below. The seminars will be streamed on Teams at the following link: https://teams.microsoft.com/meet/336647845045160?p=BZrq2TCGqLcOqxzVsv

We look forward to seeing many of you there!

Luciano Campi, Maurizia Rossi

%%%%%%%%%%%%%%%%%%%%%%%%%%%%

SPEAKERS: Jodi Dianetti (Università di Roma Tor Vergata) and Giorgio Ferrari (Bielefeld University)

TITLE: Singular Stochastic Controls in Reinforcement Learning and Mean-field Problems

ABSTRACT: This talk is divided into two parts, both devoted to stochastic control problems with singular controls. In both cases, a connection with optimal stopping plays a key role, although it arises in different ways and from different perspectives.

In the first part, we consider optimal stopping problems from a reinforcement learning perspective.
We formulate the stopping decision through randomized stopping times, modeled by bounded, nondecreasing, càdlàg control processes, and introduce an entropy regularization that promotes exploration. The resulting problem can be rewritten as a degenerate, finite-fuel singular stochastic control problem. Using dynamic programming, we characterize the optimal exploratory policy, obtain semi-explicit solutions in a real-option setting, analyze the vanishing-entropy limit, and propose a policy-iteration reinforcement learning algorithm with convergence guarantees.

In the second part, we turn to mean-field control problems with singular controls over a finite horizon, allowing for general dependence on the distribution of the state. We show that these problems can be linked to a mean-field game with singular controls, and that equilibria of this game yield optimal controls for the original problem. For a mean-field version of the classical monotone follower problem, the associated equilibrium is characterized by exploiting its connection with optimal stopping together with a Kakutani-Fan-Glicksberg fixed-point argument, leading to a complete characterization of the optimal control in terms of a moving free boundary that uniquely solves a nonlinear integral equation.

The talk is based on (ongoing) joint works with Andrea Amato, Federico Cannerozzi, and Renyuan Xu.