Quickly Assessing Contributions to Input Uncertainty

Eunhye Song, Barry L. Nelson

Research output: Contribution to journal › Article

22 Citations (Scopus)

Abstract

"Input uncertainty" refers to the (often unmeasured) variability in simulation-based performance estimators that is a consequence of driving the simulation with input models (e.g., fully specified univariate distributions of i.i.d. inputs) that are based on real-world data. In 2012 Ankenman and Nelson presented a quick-and-easy diagnostic experiment to assess the overall effect of input uncertainty on simulation output. When their method reveals that input uncertainty is substantial, then the natural next questions are which input distributions contribute the most to input uncertainty, and from which input distributions would it be most beneficial to collect more data? They proposed a possibly lengthy sequence of additional diagnostic experiments to answer these questions. In this paper we provide a method that obtains an estimator of the overall variance due to input uncertainty, the relative contribution to this variance of each input distribution, and a measure of the sensitivity of overall uncertainty to increasing the real-world sample-size used to fit each distribution, all from a single diagnostic experiment. Our approach exploits a metamodel that relates the means and variances of the input distributions to the mean response of the simulation output, and bootstrapping of the real-world data to represent input-model uncertainty. Further, we investigate whether and how the simulation outputs from the nominal and diagnostic experiments may be combined to obtain a better performance estimator. For the case when the analyst obtains additional real-world data, refines the input models, and runs a follow-up experiment, we analyze whether and how the simulation outputs from all three experiments should be combined. Numerical illustrations are provided.
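The core idea in the abstract — bootstrapping the real-world data, refitting the input models, and rerunning the simulation to see how much the output moves — can be illustrated with a minimal sketch. This is not the authors' algorithm (their method uses a metamodel and a single designed diagnostic experiment); it is only a toy bootstrap loop under illustrative assumptions: exponential input data, a sample size of 100, and a trivial "simulation" that averages draws from the fitted distribution.

```python
import random
import statistics

random.seed(42)

# "Real-world" data: n observations used to fit the input model.
n = 100
real_data = [random.expovariate(1.0) for _ in range(n)]

def simulate(rate, reps=50):
    """Toy stand-in for a simulation: mean of `reps` exponential draws
    generated from the fitted input model."""
    return statistics.fmean(random.expovariate(rate) for _ in range(reps))

# Bootstrap loop: resample the data, refit the input model, rerun the
# simulation. The spread of the outputs across bootstrap samples reflects
# how much the performance estimator varies with the fitted input model.
B = 200
boot_outputs = []
for _ in range(B):
    resample = random.choices(real_data, k=n)
    fitted_rate = 1.0 / statistics.fmean(resample)  # MLE for the exponential rate
    boot_outputs.append(simulate(fitted_rate))

total_var = statistics.variance(boot_outputs)
print(f"bootstrap variance of the simulation output: {total_var:.4f}")
```

In the paper's setting this bootstrap variance confounds input uncertainty with ordinary simulation noise; separating the two, and attributing the input-uncertainty part to individual input distributions, is what the metamodel-based diagnostic experiment accomplishes.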

Original language: English (US)
Pages (from-to): 893-909
Number of pages: 17
Journal: IIE Transactions (Institute of Industrial Engineers)
Volume: 47
Issue number: 9
DOIs: 10.1080/0740817X.2014.980869
State: Published - Sep 2 2015

All Science Journal Classification (ASJC) codes

  • Industrial and Manufacturing Engineering

Cite this

@article{bc7e77c24b7443f09da1a5a304f9c2ef,
title = "Quickly Assessing Contributions to Input Uncertainty",
abstract = "{"}Input uncertainty{"} refers to the (often unmeasured) variability in simulation-based performance estimators that is a consequence of driving the simulation with input models (e.g., fully specified univariate distributions of i.i.d. inputs) that are based on real-world data. In 2012 Ankenman and Nelson presented a quick-and-easy diagnostic experiment to assess the overall effect of input uncertainty on simulation output. When their method reveals that input uncertainty is substantial, then the natural next questions are which input distributions contribute the most to input uncertainty, and from which input distributions would it be most beneficial to collect more data? They proposed a possibly lengthy sequence of additional diagnostic experiments to answer these questions. In this paper we provide a method that obtains an estimator of the overall variance due to input uncertainty, the relative contribution to this variance of each input distribution, and a measure of the sensitivity of overall uncertainty to increasing the real-world sample-size used to fit each distribution, all from a single diagnostic experiment. Our approach exploits a metamodel that relates the means and variances of the input distributions to the mean response of the simulation output, and bootstrapping of the real-world data to represent input-model uncertainty. Further, we investigate whether and how the simulation outputs from the nominal and diagnostic experiments may be combined to obtain a better performance estimator. For the case when the analyst obtains additional real-world data, refines the input models, and runs a follow-up experiment, we analyze whether and how the simulation outputs from all three experiments should be combined. Numerical illustrations are provided.",
author = "Song, Eunhye and Nelson, {Barry L.}",
year = "2015",
month = sep,
day = "2",
doi = "10.1080/0740817X.2014.980869",
language = "English (US)",
volume = "47",
pages = "893--909",
journal = "IISE Transactions",
issn = "2472-5854",
publisher = "Taylor and Francis Ltd.",
number = "9",

}

Quickly Assessing Contributions to Input Uncertainty. / Song, Eunhye; Nelson, Barry L.

In: IIE Transactions (Institute of Industrial Engineers), Vol. 47, No. 9, 02.09.2015, p. 893-909.

TY - JOUR

T1 - Quickly Assessing Contributions to Input Uncertainty

AU - Song, Eunhye

AU - Nelson, Barry L.

PY - 2015/9/2

Y1 - 2015/9/2

N2 - "Input uncertainty" refers to the (often unmeasured) variability in simulation-based performance estimators that is a consequence of driving the simulation with input models (e.g., fully specified univariate distributions of i.i.d. inputs) that are based on real-world data. In 2012 Ankenman and Nelson presented a quick-and-easy diagnostic experiment to assess the overall effect of input uncertainty on simulation output. When their method reveals that input uncertainty is substantial, then the natural next questions are which input distributions contribute the most to input uncertainty, and from which input distributions would it be most beneficial to collect more data? They proposed a possibly lengthy sequence of additional diagnostic experiments to answer these questions. In this paper we provide a method that obtains an estimator of the overall variance due to input uncertainty, the relative contribution to this variance of each input distribution, and a measure of the sensitivity of overall uncertainty to increasing the real-world sample-size used to fit each distribution, all from a single diagnostic experiment. Our approach exploits a metamodel that relates the means and variances of the input distributions to the mean response of the simulation output, and bootstrapping of the real-world data to represent input-model uncertainty. Further, we investigate whether and how the simulation outputs from the nominal and diagnostic experiments may be combined to obtain a better performance estimator. For the case when the analyst obtains additional real-world data, refines the input models, and runs a follow-up experiment, we analyze whether and how the simulation outputs from all three experiments should be combined. Numerical illustrations are provided.

UR - http://www.scopus.com/inward/record.url?scp=84931572956&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84931572956&partnerID=8YFLogxK

U2 - 10.1080/0740817X.2014.980869

DO - 10.1080/0740817X.2014.980869

M3 - Article

AN - SCOPUS:84931572956

VL - 47

SP - 893

EP - 909

JO - IISE Transactions

JF - IISE Transactions

SN - 2472-5854

IS - 9

ER -