Computational comparisons of dual conjugate gradient algorithms for strictly convex networks

Chih Hang Wu, Jose A. Ventura, Sharon Browning

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a Lagrangian dual conjugate gradient algorithm for solving a variety of nonlinear network problems with strictly convex, differentiable, and separable objective functions. The proposed algorithm belongs to an iterative dual scheme that converges to a point whose relative residual error lies within a given tolerance. By exploiting the special structure of the network constraints, the approach can solve large problems with minimal computer memory requirements because no matrix operations are necessary. An extensive computational study has been carried out using different direction generators, line search procedures, and restart schemes. Two conjugate gradient direction formulas, the Polak-Ribière and the memoryless BFGS, have been compared. In addition, two restart methods, Beale's restart and the gradient restart, have been tested for their effectiveness, and a Newton line search has been compared against a bisection line search. Computational comparisons using statistical analysis are also reported to illustrate the effectiveness of each combination of these procedures. The proposed method works well for network problems with quadratic, entropy, cubic, polynomial, and logarithmic objective functions. Our computational experiments indicate that the most effective Lagrangian dual algorithm combines the Polak-Ribière conjugate gradient formula, the Newton line search, and the gradient restart.
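
The paper's own implementation is not reproduced here; the following is a minimal sketch, in Python with NumPy, of the general pattern the abstract describes: conjugate gradient ascent on the Lagrangian dual using the Polak-Ribière direction formula with a gradient restart. The abstract recommends pairing these with a Newton line search; to stay self-contained, the sketch uses the simpler bisection search, which the paper also tests. The names `pr_dual_cg`, `bisection_search`, and the toy equality-constrained quadratic model at the end are illustrative assumptions, not the authors' code.

```python
import numpy as np

def bisection_search(dual_grad, u, d, hi=1.0, iters=50):
    """Bisection line search on the directional derivative of the dual.

    Stands in for the paper's Newton line search; both locate the step
    length where the derivative d . dual_grad(u + a*d) crosses zero.
    """
    lo = 0.0
    for _ in range(60):                   # expand until a bracket holds
        if d @ dual_grad(u + hi * d) <= 0:
            break
        hi *= 2.0
    for _ in range(iters):                # halve the bracket
        mid = 0.5 * (lo + hi)
        if d @ dual_grad(u + mid * d) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pr_dual_cg(dual_grad, u0, tol=1e-6, max_iter=1000):
    """Conjugate gradient ascent on the Lagrangian dual.

    dual_grad(u) returns the dual gradient at multipliers u; for a
    separable network problem this is the constraint residual of the
    primal point minimizing the Lagrangian, so no matrix operations
    are required.
    """
    u = np.asarray(u0, dtype=float).copy()
    g = dual_grad(u)
    d = g.copy()                          # start with steepest ascent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol * (1.0 + np.linalg.norm(u)):
            break                         # relative residual within tolerance
        alpha = bisection_search(dual_grad, u, d)
        u_next = u + alpha * d
        g_next = dual_grad(u_next)
        beta = g_next @ (g_next - g) / (g @ g)   # Polak-Ribiere coefficient
        d = g_next + beta * d
        if g_next @ d <= 0:               # gradient restart: fall back to
            d = g_next.copy()             # steepest ascent if not ascending
        u, g = u_next, g_next
    return u

# Hypothetical toy model: minimize 0.5*||x - c||^2 subject to A x = b.
# The Lagrangian minimizer is x(u) = c + A.T @ u, so the dual gradient
# is the constraint residual b - A @ x(u).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
b = np.array([1.0, 0.0])
c = np.array([0.2, 0.1, 0.7])

def dual_grad(u):
    return b - A @ (c + A.T @ u)

u_star = pr_dual_cg(dual_grad, np.zeros(2))
x_star = c + A.T @ u_star                 # recover the primal flows
print(np.round(x_star, 4), np.round(A @ x_star - b, 8))
```

On this strictly convex quadratic the dual is concave, so conjugate gradient ascent with an accurate line search reaches the optimal multipliers in at most as many steps as there are constraints; the gradient restart is the safeguard the abstract names, replacing the combined direction with steepest ascent whenever it fails to be an ascent direction.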

Original language: English (US)
Pages (from-to): 333-349
Number of pages: 17
Journal: Computers and Operations Research
Volume: 25
Issue number: 4
DOIs
State: Published - Apr 1998

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Modeling and Simulation
  • Management Science and Operations Research
