Computational comparisons of dual conjugate gradient algorithms for strictly convex networks

Chih Hang Wu, Jose A. Ventura, Sharon Browning

Research output: Contribution to journal › Article

Abstract

This paper presents a Lagrangian dual conjugate gradient algorithm for solving a variety of nonlinear network problems with strictly convex, differentiable, and separable objective functions. The proposed algorithm belongs to an iterative dual scheme that converges to a point within a given tolerance on the relative residual error. By exploiting the special structure of the network constraints, the approach can solve large problems with minimal computer memory because no matrix operations are required. An extensive computational study has been carried out using different direction generators, line search procedures, and restart schemes. Two conjugate gradient direction formulas, the Polak-Ribière and the memoryless BFGS, are compared. In addition, two restart methods, Beale's restart and the gradient restart, are tested for their effectiveness, and a Newton line search is tested against a bisection line search. Computational comparisons supported by statistical analysis are also reported to illustrate the effectiveness of each combination of these procedures. The proposed method works well for network problems with quadratic, entropy, cubic, polynomial, and logarithmic objective functions. Our computational experiments indicate that the most effective Lagrangian dual algorithm combines the Polak-Ribière conjugate gradient formula, the Newton line search, and the gradient restart.
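
To give a concrete picture of the dual scheme described above, the sketch below applies the same ingredients, an arc-wise Lagrangian dual gradient (no matrix operations), the Polak-Ribière direction formula, a Newton line search (exact for a quadratic objective), and a gradient restart, to a small randomly generated quadratic network flow instance. The problem data, the quadratic objective, and the Powell-style restart threshold are illustrative assumptions made for this example only; they are not the authors' implementation or test problems.

```python
import numpy as np

# Minimal sketch of a Lagrangian dual conjugate gradient method for a
# separable, strictly convex network flow problem (quadratic case assumed):
#   minimize  sum_j 0.5*w_j*x_j**2 + c_j*x_j   subject to  A x = b,
# where A is the node-arc incidence matrix.  For fixed multipliers pi the
# Lagrangian minimizer is x_j(pi) = ((pi_tail - pi_head) - c_j) / w_j and
# grad q(pi) = b - A x(pi); both are evaluated arc by arc, so no matrix is
# ever formed.  The dual q is maximized with Polak-Ribière directions, an
# exact Newton step along each direction, and a gradient restart.

rng = np.random.default_rng(0)
n_nodes, n_arcs = 6, 12
tail = rng.integers(0, n_nodes, n_arcs)                        # arc j leaves node tail[j]
head = (tail + 1 + rng.integers(0, n_nodes - 1, n_arcs)) % n_nodes  # and enters head[j]
w = rng.uniform(1.0, 3.0, n_arcs)                              # strictly convex arc weights
c = rng.normal(size=n_arcs)
b = rng.normal(size=n_nodes)
b -= b.mean()                                                  # supplies/demands must balance

def primal_point(pi):
    """Arc flows minimizing the Lagrangian for fixed multipliers pi."""
    return ((pi[tail] - pi[head]) - c) / w

def dual_gradient(pi):
    """grad q(pi) = b - A x(pi), accumulated arc by arc."""
    x = primal_point(pi)
    g = b.copy()
    np.subtract.at(g, tail, x)
    np.add.at(g, head, x)
    return g

def newton_step(g, d):
    """Exact maximizer of q(pi + t d); q is a concave quadratic here."""
    Atd = d[tail] - d[head]                                    # A'd, again without a matrix
    curvature = np.sum(Atd ** 2 / w)                           # = -d' (dual Hessian) d
    return 0.0 if curvature == 0.0 else float(np.dot(g, d) / curvature)

pi = np.zeros(n_nodes)
g = dual_gradient(pi)
d = g.copy()                                                   # start with steepest ascent
for k in range(200):
    t = newton_step(g, d)
    pi = pi + t * d
    g_new = dual_gradient(pi)
    if np.linalg.norm(g_new) < 1e-8:                           # primal feasibility reached
        g = g_new
        break
    # gradient restart when conjugacy is lost (Powell-style test, assumed here)
    if abs(np.dot(g_new, g)) >= 0.2 * np.dot(g_new, g_new):
        d = g_new.copy()
    else:
        beta = np.dot(g_new, g_new - g) / np.dot(g, g)         # Polak-Ribière formula
        d = g_new + beta * d
    g = g_new

x = primal_point(pi)                                           # flows recovered from the multipliers
print("iterations:", k + 1, " feasibility residual ||b - Ax||:", float(np.linalg.norm(g)))
print("arc flows:", np.round(x, 3))
```

For non-quadratic separable objectives the same structure applies, except that the inner minimization and the Newton step along the direction would each require their own one-dimensional solves, which is where the Newton versus bisection line search comparison in the paper becomes relevant.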

Original language: English (US)
Pages (from-to): 333-349
Number of pages: 17
Journal: Computers and Operations Research
Volume: 25
Issue number: 4
ISSN: 0305-0548
DOIs: 10.1016/S0305-0548(97)00056-7
State: Published - Apr 1998

Fingerprint

Conjugate Gradient Algorithm
Restart
Strictly Convex
Line Search
Nonlinear networks
Conjugate Gradient
Newton-Raphson method
Objective function
Statistical methods
Gradient
Dual Algorithm
Entropy
Polynomials
Bisection
Data storage equipment
Logarithm
Newton Methods
Computational Experiments
Statistical Analysis
Differentiable

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Modeling and Simulation
  • Management Science and Operations Research

Cite this


Wu, Chih Hang; Ventura, Jose A.; Browning, Sharon. Computational comparisons of dual conjugate gradient algorithms for strictly convex networks. In: Computers and Operations Research, Vol. 25, No. 4, Apr. 1998, p. 333-349. https://doi.org/10.1016/S0305-0548(97)00056-7
