**JAMES JOHNDROW**

*Department of Statistics*

*Stanford University*

**“Bayesian Computation in Large Scale and High Dimensional Problems”**

**ABSTRACT**

An important motivation for Bayesian analysis in large-scale (large *p* and/or *n*) settings is uncertainty quantification via the posterior distribution. MCMC provides a default computational algorithm, but it can be unacceptably slow for large-scale problems. The computational complexity of MCMC is a function of two factors: the cost of one step of the Markov kernel and the rate at which the spectral gap converges to zero in *n* and *p*. The first part of this talk will consider reducing the per-step computational cost by using approximations to the exact MCMC transition kernel, focusing on error bounds and optimality. The second part of the talk will consider a case where commonly used Gibbs samplers have spectral gaps that converge to zero rapidly in *n*, while an alternative Metropolis-Hastings sampler has a spectral gap that is nearly independent of *n*. New algorithms are proposed that have superior empirical performance in large-scale cases where existing MCMC performs poorly.
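To make the per-step cost concrete, the following is a minimal, generic sketch (not the speaker's algorithm) of a random-walk Metropolis-Hastings sampler for the mean of Gaussian data with a flat prior. Each step requires a full pass over the *n* observations to evaluate the log-likelihood, which is exactly the O(*n*) per-step cost that approximate transition kernels aim to reduce. All function and variable names here are illustrative assumptions.

```python
import random
import math

def log_likelihood(theta, data):
    # Gaussian log-likelihood with unit variance, up to an additive constant.
    # Cost is O(n) per evaluation: this is the per-step bottleneck for large n.
    return -0.5 * sum((x - theta) ** 2 for x in data)

def metropolis_hastings(data, n_steps=5000, step_size=0.1, seed=0):
    """Random-walk Metropolis with a flat prior on theta."""
    rng = random.Random(seed)
    theta = 0.0
    ll = log_likelihood(theta, data)
    samples = []
    for _ in range(n_steps):
        proposal = theta + rng.gauss(0.0, step_size)
        ll_prop = log_likelihood(proposal, data)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if math.log(rng.random()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        samples.append(theta)
    return samples

# Synthetic data centred near 2.0.
rng = random.Random(42)
data = [2.0 + rng.gauss(0.0, 1.0) for _ in range(500)]

samples = metropolis_hastings(data)
burned = samples[1000:]  # discard burn-in
posterior_mean = sum(burned) / len(burned)
```

With a flat prior, the posterior concentrates near the sample mean of the data, so `posterior_mean` should land close to it. Total cost is (number of steps) x O(*n*); a slowly mixing chain (small spectral gap) forces more steps, so both factors named in the abstract multiply together.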

*Friday, March 31, 2017, 11:30 AM, BLOC 113*