1st OWABI Talk: 1-1.30pm UK time
Title: Approximate Bayesian Fusion
Abstract: Bayesian Fusion is a powerful approach that enables distributed inference while maintaining exactness. However, the approach is computationally expensive. In this work, we propose a novel method that incorporates numerical approximations to alleviate the most computationally expensive steps, thereby achieving substantial reductions in runtime. Our approach retains the flexibility to approximate the target posterior distribution to an arbitrary degree of accuracy, and is scalable with respect to both the size of the dataset and the number of computational cores. Our method offers a practical and efficient alternative for large-scale Bayesian inference in distributed environments.
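For context, a minimal sketch of the usual fusion target in generic notation (not necessarily the talk's exact formulation): data y are partitioned across C cores, with core c holding the subset y_c and, under the common convention of tempering the prior, targeting the subposterior

    f_c(\theta) \propto \pi(\theta)^{1/C} L(y_c \mid \theta),

so that the full posterior is recovered exactly as the product

    f(\theta) \propto \prod_{c=1}^{C} f_c(\theta).

Fusion methods sample from f using only draws from the individual f_c's; this unification step is typically the expensive part that approximate variants aim to speed up.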
2nd OWABI Talk: 1.30-2pm UK time
Title: GANs Secretly Perform Approximate Bayesian Model Selection
Abstract: Generative Adversarial Networks (GANs) are popular models achieving impressive performance in various generative modeling tasks. In this work, we aim to explain the undeniable success of GANs by interpreting them as probabilistic generative models. In this view, GANs transform a distribution over latent variables Z into a distribution over inputs X through a function parameterized by a neural network, which is usually referred to as the generator. This probabilistic interpretation enables us to cast the GAN adversarial-style optimization as a proxy for marginal likelihood optimization. More specifically, it is possible to show that marginal likelihood maximization with respect to model parameters is equivalent to minimization of the Kullback-Leibler (KL) divergence between the true data-generating distribution and the one modeled by the GAN. By replacing the KL divergence with other divergences and integral probability metrics, we obtain popular variants of GANs such as f-GANs, Wasserstein-GANs, and Maximum Mean Discrepancy (MMD)-GANs. This connection has profound implications because of the desirable properties associated with marginal likelihood optimization, such as (i) a lack of overfitting, which explains the success of GANs, and (ii) support for model selection, which opens up the possibility of obtaining parsimonious generators through architecture search.
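For readers who want the stated equivalence spelled out, here is the standard derivation in generic notation (the symbols below are illustrative, not taken from the talk). The generator induces a marginal distribution over inputs,

    p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz,

and if p^*(x) denotes the true data-generating distribution, then in expectation over the data

    \mathbb{E}_{p^*}[\log p_\theta(x)] = -\mathrm{KL}(p^* \,\|\, p_\theta) - H(p^*),

where the entropy H(p^*) is constant in \theta. Maximizing the expected marginal likelihood over \theta is therefore equivalent to minimizing \mathrm{KL}(p^* \| p_\theta); substituting another divergence or integral probability metric for the KL term gives the GAN variants listed above.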
------
Dr. Massimiliano Tamborrino
Reader (Associate Professor) and WIHEA Fellow
Department of Statistics