"Input uncertainty" refers to the (often unmeasured) variability in simulation-based performance estimators that is a consequence of driving the simulation with input models (e.g., fully specified univariate distributions of i.i.d. inputs) that are based on real-world data. In 2012 Ankenman and Nelson presented a quick-and-easy diagnostic experiment to assess the overall effect of input uncertainty on simulation output. When their method reveals that input uncertainty is substantial, then the natural next questions are which input distributions contribute the most to input uncertainty, and from which input distributions would it be most beneficial to collect more data? They proposed a possibly lengthy sequence of additional diagnostic experiments to answer these questions. In this paper we provide a method that obtains an estimator of the overall variance due to input uncertainty, the relative contribution to this variance of each input distribution, and a measure of the sensitivity of overall uncertainty to increasing the real-world sample-size used to fit each distribution, all from a single diagnostic experiment. Our approach exploits a metamodel that relates the means and variances of the input distributions to the mean response of the simulation output, and bootstrapping of the real-world data to represent input-model uncertainty. Further, we investigate whether and how the simulation outputs from the nominal and diagnostic experiments may be combined to obtain a better performance estimator. For the case when the analyst obtains additional real-world data, refines the input models, and runs a follow-up experiment, we analyze whether and how the simulation outputs from all three experiments should be combined. Numerical illustrations are provided.
Original language: English (US)
Number of pages: 17
Journal: IIE Transactions (Institute of Industrial Engineers)
State: Published - Sep 2 2015
All Science Journal Classification (ASJC) codes
- Industrial and Manufacturing Engineering