Statistical methodology for massive datasets and model selection

G. Jogesh Babu, James P. McDermott

Research output: Contribution to journal › Conference article › Peer-review


Abstract

Astronomy is facing a revolution in the collection, storage, analysis, and interpretation of large datasets. The data volumes here are several orders of magnitude larger than what astronomers and statisticians are used to dealing with, and the old methods simply do not work. The National Virtual Observatory (NVO) initiative has recently emerged in recognition of this need, to federate numerous large digital sky archives, both ground-based and space-based, and to develop tools to explore and understand these vast volumes of data. In this paper, we address some of the critically important statistical challenges raised by the NVO. In particular, a low-storage, single-pass, sequential method for simultaneous estimation of multiple quantiles for massive datasets will be presented. Density estimation based on this procedure and a multivariate extension will also be discussed. The NVO also requires statistical tools to analyze moderate-size databases. Model selection is an important issue for many astrophysical databases. We present a simple likelihood-based 'leave one out' method to select the best among several possible models. The performance of the method is compared with that of methods based on the Akaike Information Criterion and the Bayesian Information Criterion.
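The abstract's exact quantile procedure is not reproduced here, but a minimal sketch of the general idea, a single-pass, low-storage, stochastic-approximation (Robbins-Monro type) estimator of several quantiles at once, might look like the following. The function name, the step-size schedule `c / n`, and the choice of initialization are illustrative assumptions, not the authors' method:

```python
import random


def sequential_quantiles(stream, probs, c=1.0):
    """Single-pass estimates of several quantiles of a data stream.

    For each target probability p, the estimate q_p is nudged by a
    Robbins-Monro step after every observation x:

        q_p <- q_p + (c / n) * (p - 1{x <= q_p})

    Storage is O(len(probs)), independent of the stream length, so the
    stream is never held in memory.
    """
    q = None
    n = 0
    for x in stream:
        n += 1
        if q is None:
            # Initialize every estimate at the first observed value.
            q = [x] * len(probs)
            continue
        step = c / n
        for i, p in enumerate(probs):
            # Indicator pushes the estimate down when x falls below it,
            # up otherwise, in proportion p : (1 - p) on average.
            q[i] += step * (p - (1.0 if x <= q[i] else 0.0))
    return q


# Illustrative usage: quartiles of a Uniform(0, 1) stream, whose true
# quantiles are simply 0.25, 0.5, and 0.75.
random.seed(0)
uniform_stream = (random.random() for _ in range(200_000))
estimates = sequential_quantiles(uniform_stream, [0.25, 0.5, 0.75])
```

Simultaneous estimation in one pass is what makes a density estimate feasible downstream: a grid of quantiles recovered this way outlines the distribution without a second scan of the data.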

Original language: English (US)
Pages (from-to): 228-237
Number of pages: 10
Journal: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 4847
DOIs
State: Published - Dec 1 2002
Event: Astronomical Data Analysis II - Waikoloa, HI, United States
Duration: Aug 27 2002 – Aug 28 2002

All Science Journal Classification (ASJC) codes

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering
