Statistics Undergraduate Students Turned on the Charm at Aggieland Saturday 2019!
Aggieland Saturday is an annual campus-wide open house for prospective students and their families. These bright students dazzled potential Aggies while showcasing our Undergraduate Program. Prospects were able to meet current students, attend departmental presentations, tour the dorms, visit libraries and computer labs, and see everything that Texas A&M has to offer. Students could also learn more about colleges and majors, get information about admissions, and pick up some financial aid and scholarship information, too.
Showing their school spirit is what they do best! Pictured here are current undergraduate students. GIG 'EM!
I Am Texas A&M Science - Irina Gaynanova
Even for the most seasoned academic, career inspiration starts somewhere, and it all begins with a story. Here's one in our I Am Texas A&M Science series from Texas A&M statistician Irina Gaynanova, who, despite not exactly loving her academic field as an undergraduate, grew to appreciate working with data enough to make a career out of developing statistical methodology for high-dimensional data problems.
Jianhua Huang, a respected researcher and educator in statistical machine learning, computational statistics and statistical methods for big data sets, has been appointed as associate director of the Texas A&M Institute of Data Science.
Texas A&M statistician Derya Akleman will work closely with each of the college's five departments and across the university in her new administrative role as assistant dean for diversity and college climate in the College of Science.
11:30 AM / 12:30 PM Blocker Building (BLOC), Room 113 979-845-3141
Department of Econometrics and Statistics
University of Chicago Booth School of Business
On Theory for BART
Since their inception in the 1980s, regression trees have been one of the more widely used non-parametric prediction methods. Tree-structured methods yield a histogram reconstruction of the regression surface, where the bins correspond to terminal nodes of recursive partitioning. Trees are powerful, yet susceptible to overfitting. Strategies against overfitting have traditionally relied on pruning greedily grown trees. The Bayesian framework offers an alternative remedy against overfitting through priors. Roughly speaking, a good prior charges smaller trees, where overfitting does not occur. While the consistency of random histograms, trees and their ensembles has been studied quite extensively, a theoretical understanding of the Bayesian counterparts has been missing. In this paper, we take first steps towards understanding why and when Bayesian trees and forests do not overfit. To address this question, we study the speed at which the posterior concentrates around the true smooth regression function. We propose a spike-and-tree variant of the popular Bayesian CART prior and establish new theoretical results showing that regression trees (and forests) (a) are capable of recovering smooth regression surfaces (with smoothness not exceeding one), achieving optimal rates up to a log factor, (b) can adapt to the unknown level of smoothness and (c) can perform effective dimension reduction when p > n. Going further, we also establish semi- and non-parametric Bernstein-von Mises theorems showing that BART is fundamentally justified from a frequentist point of view. These results provide a piece of missing theoretical evidence explaining why Bayesian trees (and additive variants thereof) have worked so well in practice.
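As background for the abstract's starting point, a regression tree's "histogram reconstruction" comes from recursive binary partitioning: each terminal node predicts the mean response in its bin. The sketch below is a minimal pure-Python CART-style fit with a depth limit as a crude guard against overfitting; it is not BART (no prior and no posterior sampling), and all function names and parameters here are illustrative assumptions, not the speaker's code.

```python
# Minimal 1-D regression tree: recursive binary partitioning.
# Terminal nodes are the "bins" of the histogram reconstruction.
# NOT BART: plain greedy CART, with max_depth as an overfitting guard.

def fit_tree(x, y, depth=0, max_depth=3, min_leaf=2):
    """Greedily split at the threshold minimizing total squared error."""
    n = len(x)
    mean = sum(y) / n
    if depth >= max_depth or n < 2 * min_leaf:
        return ("leaf", mean)  # terminal node = one histogram bin
    order = sorted(range(n), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]
    best = None
    for k in range(min_leaf, n - min_leaf + 1):
        left, right = ys[:k], ys[k:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, (xs[k - 1] + xs[k]) / 2, k)
    _, thr, k = best
    return ("split", thr,
            fit_tree(xs[:k], ys[:k], depth + 1, max_depth, min_leaf),
            fit_tree(xs[k:], ys[k:], depth + 1, max_depth, min_leaf))

def predict(tree, xi):
    """Walk the tree to the terminal node containing xi."""
    while tree[0] == "split":
        tree = tree[2] if xi <= tree[1] else tree[3]
    return tree[1]

# Example: a step function is recovered by a shallow tree.
x = [0, 1, 2, 3, 10, 11, 12, 13]
y = [0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 5.0, 5.0]
tree = fit_tree(x, y, max_depth=2)
```

Bayesian CART and BART replace the greedy split search and depth cap with a prior over tree structures; the talk's results concern how fast the resulting posterior concentrates around the true regression function.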
Friday, 3/1/2019, 11:30 AM, BLOC 113
11:30 AM / 12:30 PM Blocker Building (BLOC), Room 113 979-845-3141
Department of Industrial & Systems Engineering
Texas A&M University
Uncertainty Quantification with Gaussian Processes: Uniform Error Bounds and Convergence Properties
Kriging based on Gaussian random fields is widely used in reconstructing unknown functions. The kriging method has pointwise predictive distributions that are computationally simple. However, in many applications one would like to predict over a range of untried points simultaneously. In this work we obtain error bounds for the kriging predictor under the uniform metric. The bounds hold for a scattered set of input points in an arbitrary dimension, and also cover cases where the covariance function of the Gaussian process is misspecified. These results lead to a better understanding of the rate of convergence of kriging under the Gaussian or Matérn correlation functions, the relationship between space-filling designs and kriging models, and the robustness of the Matérn correlation functions.
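For readers unfamiliar with the pointwise predictor the abstract refers to: given training data and a correlation function k, the kriging mean at a new point x* is k*^T K^{-1} y, where K is the kernel matrix over the training inputs. Below is a minimal pure-Python sketch under a Gaussian (squared-exponential) correlation function; the lengthscale, nugget value, and all names are illustrative assumptions, not the speaker's implementation.

```python
# Minimal kriging (Gaussian-process regression) sketch: pointwise
# predictive mean under a Gaussian correlation function.
# Illustrative only; lengthscale `ell` and `nugget` are assumed values.
import math

def gauss_kernel(a, b, ell=1.0):
    """Squared-exponential correlation between scalar inputs a and b."""
    return math.exp(-((a - b) ** 2) / (2 * ell ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def krig_mean(xtrain, ytrain, xstar, ell=1.0, nugget=1e-8):
    """Kriging predictive mean k*^T K^{-1} y at a single point xstar."""
    K = [[gauss_kernel(xi, xj, ell) + (nugget if i == j else 0.0)
          for j, xj in enumerate(xtrain)]
         for i, xi in enumerate(xtrain)]
    alpha = solve(K, ytrain)  # alpha = K^{-1} y
    return sum(a * gauss_kernel(xi, xstar, ell)
               for a, xi in zip(alpha, xtrain))
```

With a negligible nugget the predictor interpolates the training data, e.g. `krig_mean([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.0)` returns approximately 1.0. The talk's uniform error bounds concern how far such predictions can stray from the true function over all untried points at once, rather than at one point at a time.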
Friday, 3/8/2019, 11:30 AM, BLOC 113