Mixture of experts regression modeling by deterministic annealing

Ajit V. Rao, David Miller, Kenneth Rose, Allen Gersho

    Research output: Contribution to journal › Article › peer-review


    Abstract

    We propose a new learning algorithm for regression modeling. The method is especially suitable for optimizing neural network structures that are amenable to a statistical description as mixture models. These include mixtures of experts, hierarchical mixtures of experts (HME), and normalized radial basis functions (NRBF). Unlike recent maximum likelihood (ML) approaches, we directly minimize the (squared) regression error. We use the probabilistic framework as a means to define an optimization method that avoids many shallow local minima on the complex cost surface. Our method is based on deterministic annealing (DA), where the entropy of the system is gradually reduced, with the expected regression cost (energy) minimized at each entropy level. The corresponding Lagrangian is the system's "free energy," and this annealing process is controlled by variation of the Lagrange multiplier, which acts as a "temperature" parameter. The new method consistently and substantially outperformed the competing methods for training NRBF and HME regression functions over a variety of benchmark regression examples.
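    To make the annealing loop concrete, the following is a minimal sketch of the DA core described in the abstract: Gibbs association probabilities computed at a temperature T (the minimizer of the free energy, i.e., expected squared error minus T times entropy), expert parameters re-fit under those soft associations, and T lowered on a geometric schedule. The function name, the choice of simple linear experts with implicit uniform gating, and the schedule constants are all illustrative assumptions; the paper itself trains parametric NRBF and HME structures, which are not reproduced here.

```python
import numpy as np

def da_moe_regression(x, y, n_experts=4, T0=1.0, T_min=1e-3,
                      cooling=0.9, n_inner=20, seed=0):
    """Sketch of deterministic-annealing (DA) training for a mixture of
    linear experts (illustrative simplification, not the paper's
    NRBF/HME implementation)."""
    rng = np.random.default_rng(seed)
    N = len(x)
    X = np.stack([x, np.ones(N)], axis=1)              # expert j: f_j(x) = a_j * x + b_j
    theta = 0.01 * rng.standard_normal((n_experts, 2)) # (a_j, b_j) per expert
    theta[:, 1] += y.mean()
    T = T0
    while T > T_min:
        for _ in range(n_inner):
            preds = X @ theta.T                        # (N, J) expert predictions
            d = (y[:, None] - preds) ** 2              # per-expert squared error
            # Gibbs associations p(j|x_i) proportional to exp(-d_ij / T):
            # the minimizer of the free energy F = <sq. error> - T * entropy
            logits = -d / T
            logits -= logits.max(axis=1, keepdims=True)  # numerical stability
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            # expert step: weighted least squares under the soft associations
            for j in range(n_experts):
                w = p[:, j]
                A = X.T @ (w[:, None] * X) + 1e-8 * np.eye(2)
                theta[j] = np.linalg.solve(A, X.T @ (w * y))
        T *= cooling                                   # anneal: reduce entropy
    return theta, p

# Toy usage: a piecewise-linear target that two linear experts can fit.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)
y = np.where(x < 0.0, -x, 2.0 * x) + 0.05 * rng.standard_normal(200)
theta, p = da_moe_regression(x, y, n_experts=2)
```

    As T decreases, the association probabilities harden toward a single expert per sample and the expected cost approaches the actual squared regression error, which is the sense in which DA tracks the minimum from high temperature down rather than descending directly into a shallow local minimum.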

    Original language: English (US)
    Pages (from-to): 2811-2820
    Number of pages: 10
    Journal: IEEE Transactions on Signal Processing
    Volume: 45
    Issue number: 11
    DOIs
    State: Published - 1997

    All Science Journal Classification (ASJC) codes

    • Signal Processing
    • Electrical and Electronic Engineering
