The solution of linear systems is fundamental to the computational sciences. Linear systems of equations can be written in matrix form and expressed as relations between matrices and vectors. Of particular importance is the establishment of efficient and numerically stable algorithms for performing various linear algebraic manipulations. We briefly outline the main classes of problems involving linear systems.
Inhomogeneous problems involve solving for particular solutions to a set of equations. The most basic relation considered is
$$A\mathbf{x} = \mathbf{b},$$
where $A$ is a square matrix, and $\mathbf{x}$ and $\mathbf{b}$ are vectors. The elements of these objects are typically taken from a field, most commonly the real or complex numbers, and as long as $A$ is not singular, there exists a unique solution $\mathbf{x} = A^{-1}\mathbf{b}$.
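As a concrete sketch, a small nonsingular system of this kind can be solved directly with NumPy; the matrix and right-hand side below are arbitrary illustrative values, not taken from the text.

```python
import numpy as np

# Illustrative 2x2 nonsingular system A x = b
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Direct solution (LU factorization with partial pivoting)
x = np.linalg.solve(A, b)

# The residual A x - b should be numerically zero
assert np.allclose(A @ x, b)
print(x)  # → [2. 3.]
```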
One may also consider non-square matrices, in which case one seeks either a least-squares approximate solution to an over-determined set of equations ($A$ has more rows than columns) or particular solutions to the under-determined equations ($A$ has fewer rows than columns).
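For the over-determined case, a minimal sketch using NumPy's least-squares routine; the data below are invented for illustration (three points lying exactly on the line $y = 1 + 2t$, so the least-squares fit recovers the coefficients exactly).

```python
import numpy as np

# Over-determined system: 3 equations, 2 unknowns (fit y = c0 + c1*t)
t = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])
A = np.column_stack([np.ones_like(t), t])  # 3x2 design matrix

# Least-squares solution minimizes ||A c - y||_2
c, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(c)  # → [1. 2.]
```

For the under-determined case, `np.linalg.pinv(A) @ y` gives the minimum-norm particular solution.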
The above discussion is greatly simplified and there are many subtleties in the structure of the objects involved. From an algorithmic point of view, there is a distinction between direct methods, which compute solutions in a fixed number of operations (exact under exact arithmetic), and iterative methods, which refine approximate solutions to successively greater accuracy.
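The simplest iterative method illustrating this distinction is Jacobi iteration, sketched below for a diagonally dominant system (a sufficient condition for convergence); the matrix values are illustrative assumptions.

```python
import numpy as np

def jacobi(A, b, iters=50):
    """Jacobi iteration: each sweep updates x using only the
    previous iterate, converging for diagonally dominant A."""
    D = np.diag(A)          # diagonal entries of A
    R = A - np.diagflat(D)  # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Diagonally dominant example with exact solution (2, 1)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([9.0, 5.0])
x = jacobi(A, b)
assert np.allclose(x, np.linalg.solve(A, b))
```

In contrast to `np.linalg.solve`, which terminates after a fixed factorization cost, the accuracy here is controlled by the iteration count.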
Homogeneous linear problems are of the variety $A\mathbf{x} = \mathbf{0}$ or $A\mathbf{x} = \lambda\mathbf{x}$. In the former case, one seeks non-zero solutions in the kernel of the operator; the latter is a linear eigensystem with eigenvalues $\lambda$. In both cases, one typically expects a multitude of solution vectors $\mathbf{x}$, and, in the eigensystem case, the corresponding eigenvalues $\lambda$.
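Both homogeneous problem types can be sketched with NumPy; the matrices below are small illustrative examples chosen so the answers are known in advance.

```python
import numpy as np

# Eigensystem A v = lambda v: symmetric matrix with eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
evals, evecs = np.linalg.eig(A)

# Kernel problem B x = 0: singular matrix with nullspace spanned by (1, -1)
B = np.array([[1.0, 1.0],
              [1.0, 1.0]])
# The nullspace is spanned by right singular vectors whose
# singular value is zero
u, s, vt = np.linalg.svd(B)
null_vec = vt[-1]
assert np.allclose(B @ null_vec, 0.0)
```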
By the Abel–Ruffini theorem, there is no general algebraic solution for polynomial roots of degree five or higher, so no direct method exists for general eigensystems of dimension greater than four; consequently, all general eigenvalue solvers are iterative. Direct methods do, however, exist for computing nullspaces.
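The simplest such iterative eigensolver is power iteration, sketched below under the assumption that the matrix has a unique dominant eigenvalue; the example matrix is illustrative, with dominant eigenvalue 3.

```python
import numpy as np

def power_iteration(A, iters=100):
    """Power iteration: repeatedly apply A and normalize.
    Converges to the eigenvector of the dominant eigenvalue."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    lam = v @ A @ v  # Rayleigh quotient estimate of the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)  # lam converges to 3
```

Production eigensolvers (e.g., the QR algorithm behind `np.linalg.eig`) are far more sophisticated, but share this iterative character.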