Texas A&M - Labors of Lab
A new spotlight in "Labors of Lab" shines brightly on statistics Ph.D. candidate Minsuk Shin, Class of 2016. Studying under the advisement of Department Head Dr. Valen Johnson, Minsuk is passionate about his research in Bayesian statistics and the power of data. "Everything in Bayesian statistics can be explained by one single theorem, Bayes' theorem," Shin said. "I found that it's kind of beautiful, and also, it has very strong coherence."
Texas A&M statistician Cliff Spiegelman has been recognized with the San Antonio Chapter of the American Statistical Association's Don Owen Award and will be honored with a special issue of the journal Chemometrics and Intelligent Laboratory Systems for his career contributions to statistical science.
Texas A&M statistician Valen E. Johnson is one of three faculty members campus-wide appointed as 2016 University Distinguished Professors, a perpetual title representing the highest level of faculty achievement at Texas A&M. He is the fourth statistics professor to be appointed to the prestigious title.
3:00 PM to 4:00 PM | Blocker Building (BLOC), Room 113 | 979-845-3141
Department of Statistics
University of Warwick
"Bayesian Model Selection for Mixtures and Orthogonal Regression"
We will overview two pieces of work on Bayesian model selection. First we consider variable selection when the Gram matrix X'X is block-orthogonal, e.g. as in principal components or wavelet regression. Conditional on the residual variance (phi), most posterior quantities of interest are available in closed form, but integrating out phi to duly account for uncertainty has proven challenging: in principle it requires a sum over 2^p models, which has led to a number of ad hoc solutions in the literature. We solved this bottleneck with a fast expression to integrate phi exactly (e.g. O(p) operations when X'X is diagonal), avoiding MCMC or other costly iterative schemes. Coupled with an efficient model search and other tricks, the framework delivers fast, exact computation for large p, as we show in our examples.
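To see why orthogonality is the key to avoiding the 2^p enumeration, the following sketch (an illustrative spike-and-slab Gaussian setup with orthonormal design, not the speaker's actual method; `tau` and `prior_incl` are assumed hyperparameters) verifies numerically that, conditional on phi, the sum over all 2^p models factorizes into a product of p per-coordinate terms:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
p, n = 6, 50

# Orthonormal design => X'X = I (diagonal), the setting where the
# factorization applies.
X, _ = np.linalg.qr(rng.normal(size=(n, p)))
beta = np.array([2.0, -1.5, 0.0, 0.0, 1.0, 0.0])
y = X @ beta + rng.normal(scale=1.0, size=n)

phi = 1.0         # residual variance, held fixed (conditioned on)
tau = 1.0         # assumed prior variance scale for included coefficients
prior_incl = 0.5  # assumed prior inclusion probability

z = X.T @ y       # sufficient statistics; decouple across j when X'X = I

def log_marginal(gamma):
    """log m(y | gamma, phi) up to a constant shared by all models.
    Included coordinates use the slab beta_j ~ N(0, tau*phi)."""
    out = 0.0
    for j, g in enumerate(gamma):
        if g:
            out += -0.5 * np.log(1 + tau) \
                   + 0.5 * z[j] ** 2 * tau / (phi * (1 + tau))
    return out

# Brute force: sum over all 2^p models (exponential cost).
logs = []
for gamma in itertools.product([0, 1], repeat=p):
    lp = sum(np.log(prior_incl) if g else np.log(1 - prior_incl)
             for g in gamma)
    logs.append(log_marginal(gamma) + lp)
brute = np.logaddexp.reduce(logs)

# Factorized: one term per coordinate (O(p) cost).
fact = 0.0
for j in range(p):
    m1 = -0.5 * np.log(1 + tau) + 0.5 * z[j] ** 2 * tau / (phi * (1 + tau))
    fact += np.logaddexp(np.log(1 - prior_incl), np.log(prior_incl) + m1)

print(np.isclose(brute, fact))  # True: the 2^p sum collapses to p terms
```

The same decoupling is what makes posterior quantities cheap conditional on phi; the contribution described in the abstract is an exact expression for the remaining integral over phi, which this toy example fixes rather than integrates out.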
The second topic is the use of non-local priors (NLPs) to select the number of components in mixture models. We carefully define NLPs in this non-standard setting, show that they lead to faster asymptotic parsimony rates and more interpretable solutions (by avoiding components that overlap or have negligible weight), and, via a couple of simple observations, derive computational algorithms that extend directly from those for standard priors and incur essentially no extra cost. The work has connections with state-of-the-art approaches such as repulsive mixtures, but there are fundamental differences between the frameworks (essentially, shrinkage vs. point-mass priors) and, as illustrated in our examples, their performance can also differ substantially. We also show that local priors often fail to enforce sufficient parsimony, whereas the classical BIC exhibits a troubling lack of detection power (which is explained by recent results from the algebraic statistics literature).
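The defining feature of an NLP is that it vanishes on the "smaller" model, which for mixtures means it assigns zero density to configurations where components overlap. A minimal sketch (a MOM-style pairwise penalty on component locations, chosen for illustration and not necessarily the speaker's exact prior; `tau` is an assumed scale) makes this concrete:

```python
import numpy as np

def mom_prior(mu, tau=1.0):
    """Unnormalized MOM-style non-local prior on component locations:
    a Gaussian base density multiplied by squared pairwise separations,
    so it vanishes whenever two components coincide."""
    mu = np.asarray(mu, dtype=float)
    pen = 1.0
    for i in range(len(mu)):
        for j in range(i + 1, len(mu)):
            pen *= (mu[i] - mu[j]) ** 2 / tau
    base = np.prod(np.exp(-0.5 * mu ** 2 / tau) / np.sqrt(2 * np.pi * tau))
    return pen * base

print(mom_prior([0.0, 0.0]))   # 0.0: coinciding components get zero mass
print(mom_prior([-1.0, 1.0]) > mom_prior([-0.1, 0.1]))  # True
```

A local prior would keep positive density at `mu[0] == mu[1]`, so an overfitted mixture can hide a redundant component there at little cost; the vanishing penalty is what drives the faster parsimony rates mentioned in the abstract.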