Two-stage sampled learning theory on distributions

Zoltán Szabó, Arthur Gretton, Barnabás Póczos, Bharath Kumar Sriperumbudur

Research output: Contribution to journal › Conference article

9 Citations (Scopus)

Abstract

We focus on the distribution regression problem: regressing to a real-valued response from a probability distribution. Although there exist a large number of similarity measures between distributions, very little is known about their generalization performance in specific learning tasks. Learning problems formulated on distributions have an inherent two-stage sampled difficulty: in practice, only samples from sampled distributions are observable, and one has to build an estimate on similarities computed between sets of points. To the best of our knowledge, the only existing method with consistency guarantees for distribution regression requires kernel density estimation as an intermediate step (which suffers from slow convergence in high dimensions) and requires the domain of the distributions to be a compact Euclidean space. In this paper, we provide theoretical guarantees for a remarkably simple algorithmic alternative for distribution regression: embed the distributions into a reproducing kernel Hilbert space, and learn a ridge regressor from the embeddings to the outputs. Our main contribution is to prove the consistency of this technique in the two-stage sampled setting under mild conditions (on separable, topological domains endowed with kernels). For a given total number of observations, we derive convergence rates as an explicit function of the problem difficulty. As a special case, we answer a 15-year-old open question: we establish the consistency of the classical set kernel [Haussler, 1999; Gärtner et al., 2002] in regression, and cover more recent kernels on distributions, including those of [Christmann and Steinwart, 2010].
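The pipeline described in the abstract is simple enough to sketch in code. Below is a minimal NumPy illustration (not the authors' implementation; the function names, the Gaussian kernel choice, and the parameters sigma and lam are illustrative assumptions): each distribution is observed only through a bag of samples (the two-stage sampling), each bag is represented by its empirical mean embedding, and kernel ridge regression is run with the induced set kernel, i.e., the average pairwise kernel value between two bags, which equals the RKHS inner product of the two empirical mean embeddings.

    # A sketch under assumed choices (Gaussian kernel, ad-hoc sigma/lam); not the paper's code.
    import numpy as np

    def gaussian_gram(X, Y, sigma=1.0):
        # Pairwise kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
        sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

    def set_kernel(bags_a, bags_b, sigma=1.0):
        # K[i, j] = mean of k(x, y) over x in bag i, y in bag j: the inner
        # product of the two bags' empirical mean embeddings in the RKHS of k.
        K = np.empty((len(bags_a), len(bags_b)))
        for i, A in enumerate(bags_a):
            for j, B in enumerate(bags_b):
                K[i, j] = gaussian_gram(A, B, sigma).mean()
        return K

    def fit(bags, y, lam=1e-3, sigma=1.0):
        # Kernel ridge regression on the set kernel; returns dual coefficients.
        K = set_kernel(bags, bags, sigma)
        return np.linalg.solve(K + lam * len(bags) * np.eye(len(bags)), y)

    def predict(train_bags, alpha, test_bags, sigma=1.0):
        return set_kernel(test_bags, train_bags, sigma) @ alpha

    # Toy usage: each bag is sampled from N(m, I); the response is m's first coordinate.
    rng = np.random.default_rng(0)
    means = rng.uniform(-2, 2, size=(60, 2))
    bags = [m + rng.standard_normal((50, 2)) for m in means]  # two-stage sampling
    alpha = fit(bags[:40], means[:40, 0])
    print(np.abs(predict(bags[:40], alpha, bags[40:]) - means[40:, 0]).mean())

The double sampling is visible in the toy data: the regressor never sees the distributions (here, their means), only finite bags drawn from them; the paper's contribution is to prove that this estimator is nonetheless consistent and to quantify its rate.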

Original language: English (US)
Pages (from-to): 948-957
Number of pages: 10
Journal: Journal of Machine Learning Research
Volume: 38
State: Published - Jan 1 2015
Event: 18th International Conference on Artificial Intelligence and Statistics, AISTATS 2015 - San Diego, United States
Duration: May 9, 2015 - May 12, 2015

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

Cite this

Szabó, Zoltán; Gretton, Arthur; Póczos, Barnabás; Sriperumbudur, Bharath Kumar. Two-stage sampled learning theory on distributions. In: Journal of Machine Learning Research. 2015; Vol. 38, pp. 948-957.
@article{38ec10b1a15f4c4aa0f0e35b527a34c2,
title = "Two-stage sampled learning theory on distributions",
author = "Szab{\'o}, Zolt{\'a}n and Gretton, Arthur and P{\'o}czos, Barnab{\'a}s and Sriperumbudur, {Bharath Kumar}",
year = "2015",
month = "1",
day = "1",
language = "English (US)",
volume = "38",
pages = "948--957",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",
}
