Description
Normalizing flows (NF) are generative models that transform a simple prior distribution into a desired target distribution. However, they require the design of an invertible mapping whose Jacobian determinant must be tractable to compute. The recently introduced Neural Hamiltonian Flows (NHF) are flows based on Hamiltonian dynamics; being continuous, volume-preserving, and invertible, they are natural candidates for robust NF architectures. In particular, their similarity to classical mechanics could make the learned mapping easier to interpret. In this presentation, I will detail the NHF architecture and show that it can still pose a challenge to interpretability. To address this, I will introduce a fixed-kinetic-energy version of the model. Inspired by physics, this approach improves interpretability and requires fewer parameters than the original model. I will discuss the robustness of the NHF architecture, especially its fixed-kinetic version, on a simple 2D problem, and present first results in higher dimensions. Finally, I will show how to adapt NHF to the context of Bayesian inference and illustrate the method with an example from cosmology.
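For readers who want the background in symbols, here is a minimal sketch (the notation is my own, not taken from the talk). A normalizing flow pushes a base density p_Z through an invertible map f, with the target density given by the change-of-variables formula; a Hamiltonian flow generated by a separable Hamiltonian H(q, p) = K(p) + V(q) is volume-preserving, so the Jacobian-determinant term is exactly one:

    p_X(x) = p_Z\!\left(f^{-1}(x)\right) \left| \det \frac{\partial f^{-1}}{\partial x} \right|,
    \qquad
    \dot q = \frac{\partial H}{\partial p}, \quad
    \dot p = -\frac{\partial H}{\partial q},
    \qquad
    \left| \det \frac{\partial (q_t, p_t)}{\partial (q_0, p_0)} \right| = 1.

In a fixed-kinetic variant of the kind mentioned above, one could for instance fix K(p) = \tfrac{1}{2}\lVert p \rVert^2 (an illustrative choice, not necessarily the one used in the talk), so that only the potential V remains to be learned.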
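As a concrete illustration of why Hamiltonian dynamics yields an exactly invertible, volume-preserving map, below is a short, self-contained sketch of a leapfrog integrator with the fixed kinetic energy K(p) = ½‖p‖². This is not the NHF architecture itself (there, the potential would be a neural network); grad_V, the step size, and the number of steps are placeholders chosen for the example.

    import numpy as np

    def leapfrog(q, p, grad_V, step=0.05, n_steps=20):
        # Symplectic (hence volume-preserving) integrator for
        # H(q, p) = |p|^2 / 2 + V(q).  grad_V stands in for the
        # gradient of a learned potential in an NHF-style model.
        q, p = q.copy(), p.copy()
        p -= 0.5 * step * grad_V(q)      # initial half kick
        for _ in range(n_steps - 1):
            q += step * p                # drift: dq/dt = p
            p -= step * grad_V(q)        # kick:  dp/dt = -grad V(q)
        q += step * p                    # final drift
        p -= 0.5 * step * grad_V(q)      # final half kick
        return q, p

    def leapfrog_inverse(q, p, grad_V, step=0.05, n_steps=20):
        # Exact inverse: flip the momentum, run the same dynamics, flip back.
        q, p = leapfrog(q, -p, grad_V, step, n_steps)
        return q, -p

    # Toy check with a quadratic potential V(q) = |q|^2 / 2.
    rng = np.random.default_rng(0)
    grad_V = lambda q: q
    q0, p0 = rng.standard_normal((2, 100, 2))
    q1, p1 = leapfrog(q0, p0, grad_V)
    q2, p2 = leapfrog_inverse(q1, p1, grad_V)
    assert np.allclose(q0, q2) and np.allclose(p0, p2)

Each kick and drift is a shear in phase space with unit Jacobian determinant, which is what lets Hamiltonian-flow models evaluate densities without computing any determinant.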