This paper reports on a novel approach to the design and implementation of a spoken dialogue system. A human subject, or wizard, is presented with input of the sort intended for the dialogue system and selects from among a set of pre-defined actions. The wizard has access to hypotheses generated by noisy automatic speech recognition and uses them to query a database via partial matching. In the ambitious study reported here, different wizards exhibited different behaviors, elicited different degrees of caller affinity for the system, and achieved different accuracy in retrieving the requested items. Our data show that wizards did not trust speech recognition hypotheses that could not lead to a correct database match, and instead asked informed questions. The wealth of data and the richness of the interactions make this a valuable resource with which to model expert wizard behavior.
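The retrieval step described above — matching a noisy speech recognition hypothesis against database entries via partial matching — can be sketched as follows. This is purely an illustrative assumption: the paper does not specify its matcher or database, and the `catalog`, `partial_match`, and cutoff value below are hypothetical. The sketch uses approximate string matching from Python's standard `difflib` module as a stand-in.

```python
import difflib

# Hypothetical database of retrievable items (not from the paper).
catalog = [
    "Beethoven Symphony No. 5",
    "Brandenburg Concerto No. 3",
    "Rhapsody in Blue",
]

def partial_match(asr_hypothesis, entries, cutoff=0.6):
    """Return entries that approximately match a noisy ASR hypothesis,
    or an empty list when nothing clears the similarity cutoff."""
    lowered = {e.lower(): e for e in entries}
    hits = difflib.get_close_matches(
        asr_hypothesis.lower(), list(lowered), n=3, cutoff=cutoff
    )
    return [lowered[h] for h in hits]

# A noisy but recognizable hypothesis still retrieves the intended item.
print(partial_match("beethoven simphony no 5", catalog))

# An unusable hypothesis yields no match -- the situation in which,
# per the study, wizards asked informed questions instead of trusting it.
print(partial_match("xyzzy plugh", catalog))
```

The empty-result branch is the interesting case here: it models the condition under which wizards in the study stopped trusting the recognizer's output.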