Model and Inference Combinators for Deep Probabilistic Programming
Probabilistic programs with dynamic computation graphs can define measures over sample spaces with unbounded dimensionality. Owing to the generality of this model class, inference often relies on "black-box" Monte Carlo methods that are generally unable to exploit optimizations based on conditional independence and exchangeability, which have historically been the cornerstones of efficient inference. Here we seek to develop a "middle ground" between probabilistic models with fully dynamic and fully static computation graphs. To this end, we introduce a combinator library for the Probabilistic Torch framework.
Combinators are functions that accept probabilistic programs and return probabilistic programs. We define combinators for both model and inference composition. Model combinators alter the measure that a program denotes. Inference combinators leave the measure invariant but alter the evaluation strategy. Because combinators can be applied before a program is evaluated on data, they allow us to define a static computation graph at a coarsened level of representation. In this graph, individual nodes correspond to primitive model components, which may themselves have dynamic computation graphs and are treated as black boxes for the purposes of inference.
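The abstract's central idea (combinators as program-to-program functions, with model combinators changing the denoted measure and inference combinators changing only the evaluation strategy) can be sketched in plain Python. This is a hypothetical illustration, not the actual Probabilistic Torch API: all names (`compose`, `importance_resample`, the `(value, log_weight)` program convention) are invented for this example.

```python
import math
import random

# Illustrative convention (an assumption, not the library's): a
# "program" is a function from an input to (output, log_weight).

def gaussian_model(mu):
    # Primitive program: sample x ~ Normal(mu, 1); prior samples
    # carry log-weight 0.
    x = random.gauss(mu, 1.0)
    return x, 0.0

def compose(f, g):
    """Model combinator: sequencing two programs yields a new program
    denoting a different measure; log-weights accumulate."""
    def composed(x):
        y, w_f = f(x)
        z, w_g = g(y)
        return z, w_f + w_g
    return composed

def importance_resample(program, k=10):
    """Inference combinator: the target measure is unchanged, but the
    evaluation strategy now draws k weighted samples and resamples one
    in proportion to its weight."""
    def resampled(x):
        samples = [program(x) for _ in range(k)]
        weights = [math.exp(w) for _, w in samples]
        value, _ = random.choices(samples, weights=weights)[0]
        # The resampled particle carries the log of the mean weight.
        return value, math.log(sum(weights) / k)
    return resampled

# Combinators apply before the program sees data, so the coarse graph
# (compose -> importance_resample) is static even though each
# primitive may be dynamic inside.
model = compose(gaussian_model, gaussian_model)
inference = importance_resample(model, k=5)
value, log_w = inference(0.0)
```

Note how `inference` and `model` denote the same measure: the resampling step only changes how samples are produced, which is what lets inference combinators be applied freely at the coarsened graph level.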
Tue 15 Jan (times in Belfast time zone)
11:00 - 12:30
BLAFI at Sala VI
Chair(s): Steven Holtzen University of California, Los Angeles
|The Geometry of Bayesian Programming|
Ugo Dal Lago University of Bologna, Italy / Inria, France, Naohiko Hoshino Kyoto University
|Model and Inference Combinators for Deep Probabilistic Programming|
Eli Sennesh Northeastern University, Adam Ścibior University of Cambridge and MPI Tuebingen, Hao Wu Northeastern University, Jan-Willem van de Meent Northeastern University
|Server-side Probabilistic Programming|
David Tolpin PUB+