When eigenvalues of symmetric matrices and singular values of general matrices are computed in finite-precision arithmetic, the standard error bound is proportional to the product of the machine precision and the norm of the matrix. In particular, tiny eigenvalues and singular values are usually not computed to high relative accuracy. There are, however, some important classes of matrices for which much stronger bounds hold, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices and all symmetric positive-definite matrices that can be consistently ordered (and thus all symmetric positive-definite tridiagonal matrices). For such matrices, the singular values and eigenvalues are determined to high relative accuracy, independent of their magnitudes, and there are algorithms that compute them to that accuracy. The eigenvectors are also determined more accurately than for general matrices, and can be computed more accurately as well. This work extends results of Kahan and Demmel on bidiagonal and tridiagonal matrices.
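As an illustrative sketch (not from the paper itself): the high-relative-accuracy claim for bidiagonal matrices can be probed numerically. For the 2×2 upper-bidiagonal matrix B = [[1, a], [0, 1]], det B = 1 and ‖B‖_F² = 2 + a², so both singular values have a closed form; NumPy's SVD (which reduces to LAPACK's bidiagonal solvers) typically recovers even the tiny singular value, roughly 16 orders of magnitude below ‖B‖, with small relative error. The matrix choice and tolerances here are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical test matrix: strongly graded 2x2 upper-bidiagonal B.
# Its singular values s1 >= s2 satisfy s1*s2 = det(B) = 1 and
# s1^2 + s2^2 = ||B||_F^2 = 2 + a^2, giving a closed-form reference.
a = 1.0e8
B = np.array([[1.0, a],
              [0.0, 1.0]])

t = 2.0 + a * a
s1_exact = np.sqrt((t + np.sqrt(t * t - 4.0)) / 2.0)  # largest singular value, ~1e8
s2_exact = 1.0 / s1_exact                              # tiny singular value, ~1e-8

s = np.linalg.svd(B, compute_uv=False)  # descending order: s[0] >= s[1]

# The tiny singular value is about 1e-16 times the norm of B, yet its
# *relative* error typically stays near machine precision, consistent
# with the bidiagonal results the abstract describes.
rel_err_small = abs(s[1] - s2_exact) / s2_exact
rel_err_large = abs(s[0] - s1_exact) / s1_exact
print(rel_err_small, rel_err_large)
```

Under the standard norm-wise bound alone, an absolute error of order eps·‖B‖ ≈ 1e-8 would be permitted, which would wipe out the small singular value entirely; the point of the relative-accuracy theory is that much more is actually delivered for this class of matrices.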
All Science Journal Classification (ASJC) codes
- Numerical Analysis
- Computational Mathematics
- Applied Mathematics