In the aftermath of several recent catastrophic intelligence failures, the U.S. Government commissioned a series of studies to evaluate analytic methods and tradecraft. Those reports made specific recommendations to address consistent and systematic errors, known as judgment biases, found in all forms of analysis: predictive, estimative, and forensic. A small number of methodological improvements have been suggested to correct for bias, but there is little experimental evidence validating their impact on analytic quality. This lack of support motivates our present work, which seeks significant improvements in analytic performance by identifying common biases that emerge during analytic tasks and by measuring the effects of corresponding corrective measures (a.k.a. "debiasing" techniques), which we refer to as analytic multipliers. This effort requires an experimental protocol suitable for studying the effects of many types of biases and debiasing techniques on realistic analytic problems. This paper presents our game-based paradigm for studying decision biases and developing analytic multipliers, and includes a description and results of a pilot game we developed to validate the approach.