### Abstract

Matrix scaling is an operation on nonnegative matrices with nonzero permanent. It multiplies the rows and columns of a matrix by positive factors such that the resulting matrix is (approximately) doubly stochastic. Scaling is useful at a preprocessing stage to make certain numerical computations more stable. Linial, Samorodnitsky and Wigderson have developed a strongly polynomial time algorithm for scaling. Furthermore, these authors have proposed to use this algorithm to approximate permanents in deterministic polynomial time. They have also noticed an intriguing possibility of attacking the notorious parallel matching problem: if scaling could be done efficiently in parallel, then it would approximate the permanent sufficiently well to solve the bipartite matching problem. As a first step toward this goal, we propose a scaling algorithm that is conjectured to run much faster than any previous scaling algorithm. It is shown that this algorithm converges quadratically for strictly scalable matrices. We interpret this as a hint that the algorithm might always be fast. All previously known approaches to matrix scaling can result in linear convergence at best.
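For illustration of the scaling operation the abstract describes, here is a minimal sketch of the classical alternating-normalization (Sinkhorn-style) approach, which repeatedly rescales rows and columns until the matrix is approximately doubly stochastic. This is *not* the paper's quadratically convergent algorithm; the function name and iteration count are illustrative, and classical alternation of this kind converges only linearly in general:

```python
def sinkhorn_scale(A, iters=200):
    """Alternately normalize row sums and column sums of a nonnegative
    square matrix A (assumed to have nonzero permanent), driving it
    toward an approximately doubly stochastic matrix."""
    n = len(A)
    B = [row[:] for row in A]
    for _ in range(iters):
        # Scale each row so it sums to 1.
        for i in range(n):
            s = sum(B[i])
            B[i] = [x / s for x in B[i]]
        # Scale each column so it sums to 1.
        for j in range(n):
            s = sum(B[i][j] for i in range(n))
            for i in range(n):
                B[i][j] /= s
    return B

B = sinkhorn_scale([[1.0, 2.0], [3.0, 4.0]])
row_sums = [sum(row) for row in B]
col_sums = [sum(B[i][j] for i in range(2)) for j in range(2)]
```

After enough iterations, both `row_sums` and `col_sums` are close to 1 for strictly positive matrices; the row and column scaling factors can be accumulated separately if the diagonal scaling matrices themselves are needed.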

Original language | English (US) |
---|---|

Title of host publication | Proceedings of the Sixth Workshop on Algorithm Engineering and Experiments and the First Workshop on Analytic Algorithms and Combinatorics |

Editors | L. Arge, G.F. Italiano, R. Sedgewick |

Pages | 216-223 |

Number of pages | 8 |

State | Published - 2004 |

Event | Proceedings of the Sixth Workshop on Algorithm Engineering and Experiments and the First Workshop on Analytic Algorithms and Combinatorics - New Orleans, LA, United States Duration: Jan 10 2004 → Jan 10 2004 |

### Other

Other | Proceedings of the Sixth Workshop on Algorithm Engineering and Experiments and the First Workshop on Analytic Algorithms and Combinatorics |
---|---|

Country | United States |

City | New Orleans, LA |

Period | 1/10/04 → 1/10/04 |

### All Science Journal Classification (ASJC) codes

- Engineering(all)

### Cite this

Fürer, M. (2004). Quadratic convergence for scaling of matrices. In L. Arge, G. F. Italiano, & R. Sedgewick (Eds.), *Proceedings of the Sixth Workshop on Algorithm Engineering and Experiments and the First Workshop on Analytic Algorithms and Combinatorics* (pp. 216-223).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Quadratic convergence for scaling of matrices

AU - Fürer, Martin

PY - 2004

Y1 - 2004


UR - http://www.scopus.com/inward/record.url?scp=8344267579&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=8344267579&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:8344267579

SN - 0898715644

SN - 9780898715644

SP - 216

EP - 223

BT - Proceedings of the Sixth Workshop on Algorithm Engineering and Experiments and the First Workshop on Analytic Algorithms and Combinatorics

A2 - Arge, L.

A2 - Italiano, G.F.

A2 - Sedgewick, R.

ER -