Dear all,
We are glad to invite you to the *seminar* that will take place on the *24th of October at 14.30*, in person in Room *Aula Seminari DiSMeQ 4026, Building U7 Civitas, 4th floor*, *University of Milano-Bicocca*, Via Bicocca degli Arcimboldi 8, 20126 Milano.
*Erik Lindström* (https://www.maths.lu.se/staff/erik-lindstroem/) from *Lund University* will present a seminar on “*Feature Selection in Jump Models*” (see abstract below).
The seminar is also available online at the following link:
https://unimib.webex.com/unimib/j.php?MTID=m7d32c71febb43ee95aabde1ca5428861
You are invited to forward this announcement to your students, PhD students, and colleagues who may be interested in the seminar.
Kind Regards,
Fulvia Pennoni
On behalf of the Department of Statistics and Quantitative Methods
/--------------/
*Speaker*: Erik Lindström from Lund University
*Title*: /Feature Selection in Jump Models/
*Abstract*: Hidden Markov models are a popular choice for inferring the hidden state of financial markets, as the states are often interpreted as market regimes. This interpretation implicitly assumes that the underlying state sequence has a certain level of persistence. However, when a hidden Markov model is misspecified or misestimated, it often produces unrealistically rapid switching dynamics.
We propose a novel estimation approach based on clustering temporal features while penalizing jumps. The advantages of the proposed jump estimator include that it learns the hidden state sequence and model parameters simultaneously while providing control over the transition rate, that it is less sensitive to initialization, that it performs better as the number of states increases, and that it is robust to misspecified conditional distributions. In addition, feature selection is necessary in high-dimensional settings where the number of features is large compared to the number of observations and the underlying states differ only with respect to a subset of the features.
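For attendees curious about the flavor of such an estimator before the talk, here is a minimal, illustrative Python sketch of jump-penalized clustering. It is not the speaker's implementation: the squared-error fit cost, the k-means-style mean update, and the names fit_jump_model and lam are assumptions made for illustration. The state decoding is a standard Viterbi-style dynamic program with a constant penalty per jump.

    import numpy as np

    def fit_jump_model(Y, K, lam, n_iter=20, seed=0):
        """Alternate between a k-means-style update of the state means and a
        Viterbi-style dynamic program that charges a fixed penalty `lam`
        for every jump between consecutive states (illustrative sketch)."""
        Y = np.asarray(Y, dtype=float)
        rng = np.random.default_rng(seed)
        T, p = Y.shape
        mu = Y[rng.choice(T, K, replace=False)]      # initial state means
        s = np.zeros(T, dtype=int)                   # state sequence
        for _ in range(n_iter):
            # Decode states: per-(time, state) fit cost plus jump penalties.
            cost = ((Y[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (T, K)
            V = np.empty((T, K)); back = np.zeros((T, K), dtype=int)
            V[0] = cost[0]
            for t in range(1, T):
                trans = V[t - 1][:, None] + lam * (1 - np.eye(K))    # staying is free
                back[t] = trans.argmin(axis=0)
                V[t] = cost[t] + trans.min(axis=0)
            s[-1] = V[-1].argmin()
            for t in range(T - 2, -1, -1):           # backtrack the optimal sequence
                s[t] = back[t + 1, s[t + 1]]
            # Update each state's mean over the observations assigned to it.
            for k in range(K):
                if (s == k).any():
                    mu[k] = Y[s == k].mean(axis=0)
        return s, mu

Increasing lam decodes fewer regime switches, which is one way to read the "control over the transition rate" mentioned above.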
We develop and implement a coordinate descent algorithm that alternates between selecting the features and estimating the model parameters and state sequence, and that scales to large data sets with large numbers of (noisy) features. We demonstrate the usefulness of the proposed framework by comparing it with a number of other methods on both simulated and real data in the form of financial returns, protein sequences, and text. The resulting sparse jump model outperforms all other methods considered and is remarkably robust to noise.
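In the same spirit, a rough sketch of such an alternating scheme, reusing fit_jump_model from the sketch above. The feature-weight update shown here is a sparse-k-means-style soft-thresholding step (in the spirit of Witten and Tibshirani), not necessarily the exact update presented in the talk; sparse_jump_model and kappa are hypothetical names.

    import numpy as np

    def soft_threshold(x, c):
        return np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

    def sparse_jump_model(Y, K, lam, kappa, n_outer=10):
        """Alternate between (i) fitting the jump model on feature-weighted
        data and (ii) re-weighting features by their between-state
        dispersion, soft-thresholded so that uninformative features
        receive weight zero (illustrative sketch, not the talk's method)."""
        Y = np.asarray(Y, dtype=float)
        T, p = Y.shape
        w = np.full(p, 1.0 / np.sqrt(p))             # start from uniform weights
        for _ in range(n_outer):
            # (i) states and means on weighted features (sketch above)
            s, mu = fit_jump_model(Y * np.sqrt(w), K, lam)
            # (ii) per-feature between-state dispersion given the states
            total = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
            within = np.zeros(p)
            for k in range(K):
                if (s == k).any():
                    within += ((Y[s == k] - Y[s == k].mean(axis=0)) ** 2).sum(axis=0)
            a = total - within                       # large = informative feature
            w = soft_threshold(a, kappa)             # kappa drives the sparsity
            if not w.any():                          # avoid an all-zero weight vector
                w = a
            w = w / np.linalg.norm(w)                # unit L2 norm, sparse-k-means style
        return s, w

Here kappa plays the role of the sparsity level: raising it zeroes out more feature weights, so the states are driven by the subset of features along which they actually differ.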