On quasi likelihood equations with non-parametric weights

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

To apply the quasi likelihood method one needs both the mean and the variance functions to determine its optimal weights. If the variance function is unknown, then the weights must be estimated from the data. One way to do so is adaptive estimation, which involves non-parametric estimation of the variance function. Adaptation, however, also introduces noise that limits its improvement for moderate sample sizes. In this paper we introduce an alternative method based not on estimation of the variance function, but on penalized minimization of the asymptotic variance of the estimator. By doing so we retain a restricted optimality under the smoothness condition, however strong that condition may be. This is important because for moderate sample sizes we need to impose a strong smoothness constraint to damp the noise, often stronger than would be adequate for the adaptive method. We give a rigorous development of the related asymptotic theory and provide simulation evidence for the advantage of this method.
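
For readers unfamiliar with the setup, the sketch below recalls the standard weighted quasi likelihood estimating equation on which the paper builds; the notation (responses $y_i$, mean function $\mu_i(\beta)$, variance function $V$, weights $w_i$) is generic and not necessarily the paper's own.

For independent responses $y_i$ with mean $E(y_i) = \mu_i(\beta)$ and variance $\mathrm{Var}(y_i) = \phi\, V(\mu_i(\beta))$, a weighted quasi likelihood estimator of $\beta$ solves

\[ \sum_{i=1}^{n} \frac{\partial \mu_i(\beta)}{\partial \beta}\; w_i \,\{\, y_i - \mu_i(\beta) \,\} = 0 , \]

and the asymptotic variance of the solution is minimized by the weights $w_i = 1 / V(\mu_i(\beta))$. When $V$ is unknown, the adaptive approach replaces it with a non-parametric estimate $\hat V$; the method studied in this paper instead selects the weight function directly, by minimizing the asymptotic variance of the resulting estimator subject to a smoothness penalty.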

Original language: English (US)
Pages (from-to): 577-602
Number of pages: 26
Journal: Scandinavian Journal of Statistics
Volume: 28
Issue number: 4
DOI: 10.1111/1467-9469.00256
State: Published - Jan 1 2001


All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

Cite this

@article{c35404373b694b62814a2b8bd9161648,
title = "On quasi likelihood equations with non-parametric weights",
abstract = "To apply the quasi likelihood method one needs both the mean and the variance functions to determine its optimal weights. If the variance function is unknown, then the weights should be acquired from the data. One way to do so is by adaptive estimation, which involves non-parametric estimation of the variance function. Adaptation, however, also brings in noise that hampers its improvement for moderate samples. In this paper we introduce an alternative method based not on the estimation of the variance function, but on the penalized minimization of the asymptotic variance of the estimator. By doing so we are able to retain a restricted optimality under the smoothness condition, however strong that condition may be. This is important because for moderate sample sizes we need to impose a strong smoothness constraint to damp the noise - often stronger than would be adequate for the adaptive method. We will give a rigorous development of the related asymptotic theory, and provide the simulation evidence for the advantage of this method.",
author = "Bing Li",
year = "2001",
month = "1",
day = "1",
doi = "10.1111/1467-9469.00256",
language = "English (US)",
volume = "28",
pages = "577--602",
journal = "Scandinavian Journal of Statistics",
issn = "0303-6898",
publisher = "Wiley-Blackwell",
number = "4",

}
