Matrix scaling is an operation on nonnegative matrices with nonzero permanent. It multiplies the rows and columns of a matrix by positive factors such that the resulting matrix is (approximately) doubly stochastic. Scaling is useful as a preprocessing step to make certain numerical computations more stable. Linial, Samorodnitsky, and Wigderson have developed a strongly polynomial-time algorithm for scaling. Furthermore, these authors have proposed using this algorithm to approximate permanents in deterministic polynomial time. They have also noticed an intriguing possibility for attacking the notorious parallel matching problem: if scaling could be done efficiently in parallel, it would approximate the permanent well enough to solve the bipartite matching problem. As a first step towards this goal, we propose a scaling algorithm that is conjectured to run much faster than any previous scaling algorithm. We show that this algorithm converges quadratically for strictly scalable matrices. We interpret this as a hint that the algorithm might always be fast. All previously known approaches to matrix scaling achieve at best linear convergence.
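To make the scaling operation concrete, the following sketch implements the classic Sinkhorn iteration, which alternately normalizes row and column sums. This is a baseline of the kind the abstract refers to as converging at best linearly; it is not the algorithm proposed here, and the function name `sinkhorn_scale` and its parameters are illustrative choices.

```python
import numpy as np

def sinkhorn_scale(A, tol=1e-9, max_iters=10000):
    """Alternately normalize rows and columns of a nonnegative matrix A.

    Returns positive vectors r, c and the scaled matrix
    B = diag(r) @ A @ diag(c), which is approximately doubly stochastic
    when A is scalable. Illustrative baseline only (linear convergence),
    not the algorithm proposed in the paper.
    """
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    r = np.ones(n)
    c = np.ones(m)
    B = A.copy()
    for _ in range(max_iters):
        r = 1.0 / (A @ c)               # choose row factors so each row sums to 1
        c = 1.0 / (A.T @ r)             # choose column factors so each column sums to 1
        B = (A * r[:, None]) * c[None, :]
        # stop once every row sum is within tol of 1
        if np.abs(B.sum(axis=1) - 1.0).max() < tol:
            break
    return r, c, B
```

For a strictly positive matrix such as `[[1, 2], [3, 4]]`, the iteration drives both the row and the column sums of `B` to 1.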