Rapid advances in visualization technology and virtual-world tools provide opportunities for improvements in multisensor data fusion. These technologies can re-engage the human user in the fusion process: improving multi-analyst collaboration, enhancing data understanding by engaging the analyst's visual pattern-recognition capabilities, and providing new mechanisms for hypothesis generation and understanding. Virtual-world environments can leverage gaming concepts to provide rich storytelling capabilities. Much as cases or logical templates have traditionally been used for target identification or event/activity detection, gaming concepts involving the characterization of characters and world views can assist in the formulation and evaluation of hypotheses for non-traditional targets. As new requirements emerge for fusion systems to support asymmetric warfare and non-traditional operations, these technologies become increasingly important. This paper provides a perspective on these concepts and argues for a systematic, theory-driven approach to exploring these enhancements to data fusion, grounded in human-in-the-loop experiments.