Spoken language is a natural and important way for people to communicate with computers. Nonetheless, habitable, reliable, and efficient human-machine dialogue remains difficult to achieve. This paper describes a multi-threaded, semisynchronous architecture for spoken dialogue systems, focusing on its utterance interpretation module. Unlike most architectures for spoken dialogue systems, this one is designed to be robust to noisy speech recognition through earlier reliance on context, a mixture of rationales for interpretation, and fine-grained use of confidence measures. We report on a pilot study that demonstrates the architecture's robust understanding of users' objectives, and we compare it with our earlier spoken dialogue system, which was implemented in a traditional pipeline architecture. Substantial improvements appear at all tested levels of recognizer performance.