Abstracts: Astronomy & Space Science

Accelerating Bayesian inference in computationally expensive computer models using local and global approximations

by Patrick Raymond Conrad

Institution: MIT
Department: Department of Aeronautics and Astronautics
Year: 2014
Keywords: Aeronautics and Astronautics.
Record ID: 2026773
Full text PDF: http://hdl.handle.net/1721.1/90599


Computational models of complex phenomena are an important resource for scientists and engineers. However, many state-of-the-art simulations of physical systems are computationally expensive to evaluate and are black boxes, meaning that they can be run, but their internal workings cannot be inspected or changed. Directly applying uncertainty quantification algorithms, such as those for forward uncertainty propagation or Bayesian inference, to these types of models is often intractable because the analyses require many evaluations of the model. Fortunately, many physical systems are well behaved, in the sense that they may be efficiently approximated with a modest number of carefully chosen samples. This thesis develops global and local approximation strategies that can be applied to black-box models to reduce the cost of forward uncertainty quantification and Bayesian inference.

First, we develop an efficient strategy for constructing global approximations using an orthonormal polynomial basis. We rigorously construct a Smolyak pseudospectral algorithm, which uses sparse sample sets to efficiently extract information from loosely coupled functions. We provide a theoretical discussion of the behavior and accuracy of this algorithm, concluding that it has favorable convergence characteristics. We make this strategy efficient in practice by introducing a greedy heuristic that adaptively identifies and explores the important input dimensions, or combinations thereof. When the approximation is used within Bayesian inference, however, it is difficult to translate the theoretical behavior of the global approximations into practical controls on the error induced in the resulting posterior distribution.
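To make the global-approximation idea concrete, the following is a minimal sketch of a one-dimensional pseudospectral projection onto an orthonormal (Legendre) polynomial basis using Gauss-Legendre quadrature. This is only the building block that a Smolyak construction combines across dimensions; the full sparse, adaptive algorithm of the thesis is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def pseudospectral_coeffs(f, order):
    """Project f on [-1, 1] onto normalized Legendre polynomials via
    Gauss-Legendre quadrature (a 1D pseudospectral projection).

    With order+1 nodes, the quadrature is exact for polynomials up to
    degree 2*order+1, so low-degree coefficients are computed exactly.
    """
    nodes, weights = np.polynomial.legendre.leggauss(order + 1)
    fvals = f(nodes)
    coeffs = []
    for k in range(order + 1):
        # Standard Legendre P_k evaluated at the quadrature nodes,
        # rescaled by sqrt((2k+1)/2) so the basis is orthonormal on [-1, 1].
        Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
        norm = np.sqrt((2 * k + 1) / 2.0)
        coeffs.append(np.sum(weights * fvals * Pk) * norm)
    return np.array(coeffs)

# Example: f(x) = x^2 is exactly a degree-2 polynomial, so its expansion
# terminates; odd-degree coefficients should vanish.
c = pseudospectral_coeffs(lambda x: x**2, 4)
```

Because the model is only sampled at the quadrature nodes, this treats `f` as a black box, which is the property the Smolyak algorithm exploits when combining such projections over sparse multi-dimensional sample sets.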
Thus, the second part of this thesis introduces a new framework for accelerating MCMC algorithms by constructing local surrogates of the computational model within the Metropolis-Hastings kernel, borrowing ideas from deterministic approximation theory, optimization, and experimental design. Exploiting useful convergence characteristics of local approximations, we prove the ergodicity of our approximate Markov chain and show that it samples asymptotically from the exact posterior distribution of interest. Our theoretical results reinforce the key observation underlying this work: when the likelihood has some local regularity, the number of model evaluations per MCMC step can be greatly reduced, without incurring significant bias in the Monte Carlo average. We illustrate that the inference framework is robust and extensible by describing variations that use different approximation families, MCMC kernels, and computational environments. Our numerical experiments demonstrate order-of-magnitude reductions in the number of forward model evaluations used in representative ODE or PDE inference problems, in both real and synthetic data examples. Finally, we demonstrate the local approximation algorithm by performing parameter inference for the ice-ocean coupling in Pine Island Glacier, Antarctica. This problem constitutes a challenging…
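The local-surrogate idea above can be sketched in a few lines: cache every true model evaluation, and answer most Metropolis-Hastings queries with a local polynomial fit to nearby cached points. This toy version omits the refinement criteria, experimental-design rules, and ergodicity machinery that the thesis develops (the surrogate here is frozen after a few evaluations, and all names are illustrative), but it shows the central mechanism by which model evaluations per MCMC step are reduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_log_post(theta):
    # Stand-in for an expensive forward model: a standard Gaussian log-density.
    return -0.5 * theta ** 2

class LocalSurrogate:
    """Caches true model evaluations; answers queries with a local
    quadratic fit to the k nearest cached points."""
    def __init__(self, f, k=5):
        self.f, self.k = f, k
        self.xs, self.ys = [], []

    def __call__(self, x):
        if len(self.xs) < self.k:
            # Too few samples: run the true model and cache the result.
            self.xs.append(x)
            self.ys.append(self.f(x))
            return self.ys[-1]
        xs, ys = np.array(self.xs), np.array(self.ys)
        idx = np.argsort(np.abs(xs - x))[: self.k]
        coef = np.polyfit(xs[idx], ys[idx], 2)  # local quadratic fit
        return np.polyval(coef, x)

def metropolis(logpost, n_steps=2000, step=1.0):
    """Plain random-walk Metropolis targeting exp(logpost)."""
    theta, lp = 0.0, logpost(0.0)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

surr = LocalSurrogate(expensive_log_post)
chain = metropolis(surr)
```

In this toy run the chain makes thousands of posterior queries but only `k` true model evaluations; the thesis's contribution is showing how to trigger further evaluations adaptively so that the approximate chain still samples asymptotically from the exact posterior.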