Spectral theorem

In mathematics, particularly linear algebra and functional analysis, the spectral theorem is a collection of results about linear operators or about matrices. In broad terms, the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. See also spectral theory for a historical perspective.
Examples of operators to which the spectral theorem applies are self-adjoint operators or, more generally, normal operators on Hilbert spaces.
The spectral theorem also provides a canonical decomposition, called the spectral decomposition, of the underlying vector space on which the operator acts.
In this article we consider mainly the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.
Finite-dimensional case
We begin by considering a symmetric operator A on a finite-dimensional real or complex inner product space V with the standard Hermitian inner product; the symmetry condition means
 <math> \langle A x \mid y \rangle = \langle x \mid A y \rangle </math>
for all elements x, y of V. Recall that an eigenvector of a linear operator A is a nonzero vector x such that A x = r x for some scalar r; the value r is the corresponding eigenvalue.
Theorem. There is an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.
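As a concrete check (not part of the original argument), the theorem can be illustrated numerically; the sketch below uses NumPy, whose `eigh` routine is designed for symmetric/Hermitian matrices, on an arbitrarily chosen Hermitian matrix:

```python
import numpy as np

# An arbitrary Hermitian matrix; any such matrix illustrates the theorem.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2          # force A to be Hermitian

# eigh handles Hermitian/symmetric matrices.
eigenvalues, U = np.linalg.eigh(A)

# The eigenvalues come out real ...
assert np.allclose(eigenvalues.imag, 0)
# ... the eigenvectors (columns of U) form an orthonormal basis ...
assert np.allclose(U.conj().T @ U, np.eye(4))
# ... and each column is an eigenvector: A u_i = lambda_i u_i.
assert np.allclose(A @ U, U @ np.diag(eigenvalues))
```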
This result is of such importance in many parts of mathematics that we provide a sketch of a proof for the case in which the underlying field of scalars is the complex numbers. First, we show that all the eigenvalues are real. Indeed, if λ is an eigenvalue of A with corresponding eigenvector x, then
 <math> \overline{\lambda} \langle x \mid x \rangle = \langle A x \mid x \rangle = \langle x \mid A x \rangle = \lambda \langle x \mid x \rangle .</math>
Since x is nonzero, it follows that λ equals its own complex conjugate and is therefore real.
To prove the existence of an eigenvector basis, we use induction on the dimension of V. In fact, it suffices to show that A has at least one nonzero eigenvector e, for then we can consider the space K of vectors v orthogonal to e. This space is finite-dimensional, and A maps every vector w in K into K:
 <math> \langle A w \mid e \rangle = \langle w \mid A e \rangle = \lambda \langle w \mid e \rangle = 0. </math>
Moreover, A considered as a linear operator on K is also symmetric, so by the induction hypothesis K has an orthonormal basis of eigenvectors of A; adjoining a normalized e to it completes the proof.
It remains, however, to show that A has at least one eigenvector. Since the ground field is algebraically closed, the characteristic polynomial p(x) = det(A − x I) has a root r. This implies that the linear operator A − r I is not invertible and hence maps some nonzero vector e to 0. This vector e is then an eigenvector of A with eigenvalue r, which completes the proof.
The spectral theorem is also true for symmetric operators on finite-dimensional real inner product spaces.
The spectral decomposition of an operator A that has an orthonormal basis of eigenvectors is obtained by grouping together all eigenvectors corresponding to the same eigenvalue. Thus
 <math> V_\lambda = \{\,v \in V: A v = \lambda v\,\}.</math>
Note that these spaces are invariantly defined; that is, their definition does not require any choice of specific eigenvectors.
As an immediate consequence of the spectral theorem for symmetric operators we get the spectral decomposition theorem: V is the orthogonal direct sum of the spaces V_{λ}, where the index λ ranges over the eigenvalues of A. An equivalent formulation: letting P_{λ} denote the orthogonal projection onto V_{λ},
 <math> P_\lambda P_\mu=0 \quad \mbox{if } \lambda \neq \mu </math>
and if λ_{1},..., λ_{m} are the eigenvalues of A,
 <math>A =\lambda_1 P_{\lambda_1} +\cdots+\lambda_m P_{\lambda_m}.</math>
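The decomposition of A as a weighted sum of projections can be illustrated numerically. This sketch (using NumPy, with an arbitrarily chosen symmetric matrix having a repeated eigenvalue) builds each projection P_λ from an orthonormal eigenbasis and checks both the orthogonality relation and the reconstruction of A:

```python
import numpy as np

# An arbitrary 3x3 real symmetric matrix with a repeated eigenvalue,
# so one eigenspace is two-dimensional.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])

eigenvalues, U = np.linalg.eigh(A)

# Group eigenvectors by (numerically rounded) eigenvalue and form the
# orthogonal projection P_lambda as the sum of u u^T over that eigenspace.
projections = {}
for lam, u in zip(np.round(eigenvalues, 8), U.T):
    projections[lam] = projections.get(lam, 0) + np.outer(u, u)

# Distinct projections are mutually orthogonal: P_lambda P_mu = 0.
lams = list(projections)
for i in range(len(lams)):
    for j in range(i + 1, len(lams)):
        assert np.allclose(projections[lams[i]] @ projections[lams[j]], 0)

# A is recovered as the weighted sum of its spectral projections.
assert np.allclose(sum(lam * P for lam, P in projections.items()), A)
```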
If A is a normal operator on a finite-dimensional inner product space, A also has a spectral decomposition, and the decomposition theorem holds for A. The eigenvalues will in general be complex numbers. The proof is somewhat more complicated and is discussed in the Axler reference below.
These results translate immediately into results about matrices: for any normal matrix A, there exists a unitary matrix U such that
 <math>A=U \Sigma U^* \;</math>
where Σ is the diagonal matrix whose entries are the eigenvalues of A. Furthermore, any matrix that diagonalizes in this way must be normal.
The column vectors of U are eigenvectors of A, and they are orthonormal.
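A small numerical illustration (again a NumPy sketch, not part of the original text) of A = U Σ U* for a normal but non-symmetric matrix: a plane rotation is normal, and its eigenvalues are genuinely complex. For a normal matrix with distinct eigenvalues, `eig` returns an (approximately) orthonormal eigenbasis, so U comes out unitary:

```python
import numpy as np

theta = 0.7
# A plane rotation: real, not symmetric, but normal (A A* = A* A).
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # normality

eigenvalues, U = np.linalg.eig(A)

# The eigenvalues are complex: exp(+i theta) and exp(-i theta).
assert np.allclose(sorted(eigenvalues.imag),
                   sorted([-np.sin(theta), np.sin(theta)]))
# U is unitary, and A = U Sigma U*.
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(U @ np.diag(eigenvalues) @ U.conj().T, A)
```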
The spectral decomposition is a special case of the Schur decomposition. It is also a special case of the singular value decomposition.
If A is a real symmetric matrix, it follows from the real version of the spectral theorem for symmetric operators that there is an orthogonal matrix U such that U A U^T is diagonal, and all the eigenvalues of A are real.
The spectral theorem for compact self-adjoint operators
On a general Hilbert space, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case.
Theorem. Suppose A is a compact self-adjoint operator on a Hilbert space V. There is an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.
Again the key point is to prove the existence of at least one nonzero eigenvector. Here we cannot rely on determinants to show the existence of eigenvalues; instead, we use a maximization argument, analogous to the proof of the min-max theorem for eigenvalues.
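The maximization idea can be illustrated in finite dimensions. The sketch below (NumPy, with an arbitrary small symmetric matrix standing in for a compact self-adjoint operator) uses power iteration, which drives a unit vector toward a maximizer of |⟨Ax, x⟩| on the unit sphere, i.e. toward a dominant eigenvector:

```python
import numpy as np

# Finite-dimensional stand-in for a compact self-adjoint operator.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Power iteration: repeatedly apply A and renormalize.
x = np.ones(3)
for _ in range(200):
    x = A @ x
    x = x / np.linalg.norm(x)

# The Rayleigh quotient <Ax, x> at the (near) maximizer equals the
# largest eigenvalue, and x is the corresponding eigenvector.
rayleigh = x @ A @ x
assert np.isclose(rayleigh, np.linalg.eigvalsh(A).max(), atol=1e-8)
assert np.allclose(A @ x, rayleigh * x, atol=1e-6)
```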
Note that the above spectral theorem holds for real or complex Hilbert spaces.
Functional analysis
The next generalization we consider is that of bounded self-adjoint operators A on a Hilbert space V. Such operators may have no eigenvalues: for instance, let A be the operator of multiplication by t on L^{2}[0, 1], that is
 <math> [A \varphi](t) = t \varphi(t). \;</math>
This operator has no eigenvectors: if t φ(t) = λ φ(t) for almost every t, then φ(t) = 0 for all t ≠ λ, so φ = 0 as an element of L^{2}[0, 1].
Theorem. Let A be a bounded self-adjoint operator on a Hilbert space H. Then there is a measure space (X, M, μ), a real-valued measurable function f on X, and a unitary operator U: H → L^{2}_{μ}(X) such that
 <math> U^* T U = A \;</math>
where T is the multiplication operator:
 <math> [T \varphi](x) = f(x) \varphi(x). \;</math>
This is the beginning of the vast research area of functional analysis called operator theory.
A normal operator on a Hilbert space may have no eigenvalues; for example, the bilateral shift on the Hilbert space l^{2}(Z) has no eigenvalues. There is also a spectral theorem for normal operators on Hilbert spaces, in which the sum in the finite-dimensional spectral theorem is replaced by an integral of the coordinate function over the spectrum against a projection-valued measure.
When the normal operator in question is compact, this spectral theorem reduces to the finite-dimensional spectral theorem above, except that the operator is expressed as a linear combination of possibly infinitely many projections.
The spectral theorem for general self-adjoint operators
Many important linear operators which occur in analysis, such as differential operators, are unbounded. There is, however, a spectral theorem for self-adjoint operators which applies in many of these cases. To give an example, any constant-coefficient differential operator is unitarily equivalent to a multiplication operator; indeed, the unitary operator that implements this equivalence is the Fourier transform.
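This Fourier-transform example can be illustrated on a periodic grid, where the FFT plays the role of the Fourier transform. In this NumPy sketch (grid size and test function chosen arbitrarily), the constant-coefficient operator d²/dx² becomes multiplication by −k² in frequency space:

```python
import numpy as np

# Periodic grid on [0, 2*pi): the FFT diagonalizes constant-coefficient
# differential operators, turning d^2/dx^2 into multiplication by -k^2.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)                       # arbitrary smooth test function

k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
u_xx = np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real

# d^2/dx^2 sin(3x) = -9 sin(3x), recovered to near machine precision.
assert np.allclose(u_xx, -9 * np.sin(3 * x), atol=1e-10)
```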
See also
 Jordan decomposition, an "algebraic" analogue of spectral decomposition.
 Singular value decomposition, a generalisation of the spectral theorem to arbitrary matrices.
Reference
 Sheldon Axler, Linear Algebra Done Right, Springer Verlag, 1997