The non-symmetric eigenvalue problem has two different formulations: finding vectors x such that Ax = λx, and finding vectors y such that y^H A = λy^H (y^H denotes the conjugate transpose of y). Vector x is a right eigenvector and vector y is a left eigenvector corresponding to the eigenvalue λ, which is the same for both eigenvectors.
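As an illustration of the two formulations (using SciPy rather than ALGLIB, since SciPy can return both eigenvector families in one call):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# left=True / right=True requests both left and right eigenvectors
w, vl, vr = eig(A, left=True, right=True)

i = 0
x = vr[:, i]   # right eigenvector: A x = w[i] x
y = vl[:, i]   # left eigenvector:  y^H A = w[i] y^H

assert np.allclose(A @ x, w[i] * x)
assert np.allclose(y.conj().T @ A, w[i] * y.conj().T)
```

Both checks use the same eigenvalue w[i], matching the statement that λ is shared by the left and right eigenvectors.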
As opposed to the symmetric problem, the eigenvectors of a non-symmetric matrix do not form an orthogonal system. Moreover, the eigenvectors may not even form a linearly independent system (this is possible, although not inevitable, in the case of multiple eigenvalues: an eigenvalue of multiplicity k may correspond to an eigenspace of dimension less than k).
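A small NumPy sketch of this degenerate case: the matrix below has the eigenvalue 1 with algebraic multiplicity 2, but only a one-dimensional eigenspace, so the two eigenvector columns returned by the solver are numerically parallel.

```python
import numpy as np

# Defective matrix: eigenvalue 1 has multiplicity 2,
# but the eigenspace is only one-dimensional.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

w, v = np.linalg.eig(A)
assert np.allclose(w, [1.0, 1.0])

# The two returned eigenvector columns point in the same direction:
c0 = v[:, 0] / np.linalg.norm(v[:, 0])
c1 = v[:, 1] / np.linalg.norm(v[:, 1])
assert abs(abs(c0 @ c1) - 1.0) < 1e-8
```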
The second distinction from the symmetric problem is that a non-symmetric matrix cannot be reduced to a tridiagonal matrix or another equally compact form. Because of the asymmetry, nulling all the elements below the first subdiagonal (using an orthogonal transformation) leaves the upper triangle dense, so we only get a matrix in upper Hessenberg form. This slows the algorithm down, because the whole upper triangle of the matrix has to be updated after each iteration of the QR algorithm.
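The Hessenberg reduction can be demonstrated with SciPy (again as an illustration, not ALGLIB's internal routine): the result is zero below the first subdiagonal, while the upper triangle stays dense.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

# Orthogonal reduction to upper Hessenberg form: A = Q H Q^T
H, Q = hessenberg(A, calc_q=True)

# H is zero below the first subdiagonal...
assert np.allclose(np.tril(H, -2), 0.0)
# ...and Q is orthogonal, with A = Q H Q^T
assert np.allclose(Q @ Q.T, np.eye(5))
assert np.allclose(Q @ H @ Q.T, A)
```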
Finally, the third distinction is that the eigenvalues of a non-symmetric matrix can be complex (as can their corresponding eigenvectors).
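The simplest example is a rotation matrix, which is real but has a purely imaginary conjugate pair of eigenvalues:

```python
import numpy as np

# 90-degree rotation: real matrix, complex eigenvalues ±i
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w = np.linalg.eigvals(A)
assert np.allclose(sorted(w, key=lambda z: z.imag), [-1j, 1j])
```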
The non-symmetric eigenvalue problem is solved in several steps. In the first step, the matrix is reduced to upper Hessenberg form by an orthogonal transformation. In the second step, which takes most of the time, the matrix is reduced to upper Schur form by an orthogonal transformation. If only the eigenvalues are needed, this step is the last, because the eigenvalues are located in the diagonal blocks of the quasi-triangular matrix of the canonical Schur form. If the eigenvectors are needed as well, a backward substitution involving the quasi-triangular matrix and the Schur vectors must be performed (in essence, solving a system of linear equations; the backward substitution itself takes little time, but the need to save all the transformations makes the algorithm roughly twice as slow).
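The Schur step can be sketched with SciPy (an illustration of the decomposition itself, not of ALGLIB's implementation): the real Schur form T is quasi-triangular, with 1x1 diagonal blocks holding real eigenvalues and 2x2 blocks holding complex conjugate pairs.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Real Schur form: A = Z T Z^T with Z orthogonal, T quasi-triangular
T, Z = schur(A, output='real')

assert np.allclose(Z @ T @ Z.T, A)
# The diagonal blocks of T carry the eigenvalues of A:
assert np.allclose(np.sort_complex(np.linalg.eigvals(T)),
                   np.sort_complex(np.linalg.eigvals(A)))
```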
The Schur decomposition, which takes most of the time, is performed using the QR algorithm with multiple shifts. The algorithm is taken from the LAPACK library and is a block-matrix analog of the ordinary QR algorithm with double shift. Like all other block-matrix algorithms, it requires tuning to achieve optimal performance.
You can tune the value of NS (an internal parameter of the InternalSchurDecomposition subroutine), which defines the number of shifts performed in one iteration. As the number of shifts increases, performance rises, reaching its maximum somewhere between NS=4 and NS=16. For NS greater than 16 the algorithm slows down considerably. The exact range can differ from system to system, but the overall picture is the same. The default value of NS provides good performance on most systems, but if performance is critical, it is worth calibrating this parameter. Note that the optimal value depends both on the system characteristics and on the properties of the matrices being processed.
The RMatrixEVD subroutine finds the eigenvalues and, if needed, the eigenvectors (right and/or left) of a general matrix. The subroutine takes matrix A as input and returns the array of eigenvalues (real and imaginary parts) and the array of eigenvectors (the array layout is described in detail in the subroutine comments).
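As a rough functional sketch of an RMatrixEVD-style call (this uses SciPy, not ALGLIB's API; the helper name evd_general and the split into separate real/imaginary arrays are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import eig

def evd_general(a):
    """Eigenvalues of a general real matrix as separate real/imaginary
    arrays, plus the right eigenvectors (illustrative helper, not ALGLIB)."""
    w, vr = eig(a)
    return w.real, w.imag, vr

# Block-diagonal test matrix: a rotation block (eigenvalues ±i)
# plus a real eigenvalue 2
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0]])

wr, wi, vr = evd_general(A)
# Complex eigenvalues appear as a conjugate pair: i, -i, and the real 2
```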
ALGLIB® - numerical analysis library, 1999-2013.