Spectral theorem

In mathematics, particularly linear algebra and functional analysis, the spectral theorem is any of a number of results about linear operators or about matrices. In broad terms, the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. See also spectral theory for a historical perspective.


Examples of operators to which the spectral theorem applies are self-adjoint operators or, more generally, normal operators on Hilbert spaces.


The spectral theorem also provides a canonical decomposition, called the spectral decomposition, of the underlying vector space on which the operator acts.


In this article we consider mainly the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.



Finite-dimensional case

Hermitian matrices

We begin by considering a symmetric operator A on a finite-dimensional real or complex inner product space V with the standard Hermitian inner product ⟨·, ·⟩; the symmetry condition means that

    ⟨Ax, y⟩ = ⟨x, Ay⟩

for all x, y in V. Recall that an eigenvector of a linear operator A is a (non-zero) vector x such that Ax = rx for some scalar r. The value r is the corresponding eigenvalue.


Theorem. There is an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.
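As a concrete illustration, here is a minimal numerical sketch in Python with numpy; the Hermitian matrix A below is an arbitrary example chosen for this sketch, not taken from the text.

    # Minimal sketch: a Hermitian matrix has real eigenvalues and an
    # orthonormal basis of eigenvectors (the matrix is an arbitrary example).
    import numpy as np

    A = np.array([[2.0, 1.0 - 1.0j],
                  [1.0 + 1.0j, 3.0]])
    assert np.allclose(A, A.conj().T)              # A equals its conjugate transpose

    w, U = np.linalg.eigh(A)                       # eigh is numpy's Hermitian eigensolver

    # w is returned as a real array (real eigenvalues), and the columns of U
    # form an orthonormal eigenbasis that diagonalizes A.
    assert np.allclose(U.conj().T @ U, np.eye(2))
    assert np.allclose(U @ np.diag(w) @ U.conj().T, A)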


This result is of such importance in many parts of mathematics that we provide a sketch of a proof for the case in which the underlying field of scalars is the complex numbers. First we show that all the eigenvalues are real. Suppose that λ is an eigenvalue of A with corresponding eigenvector x. Then

    λ⟨x, x⟩ = ⟨Ax, x⟩ = ⟨x, Ax⟩ = λ̄⟨x, x⟩.

Since x is non-zero, it follows that λ equals its own conjugate λ̄ and is therefore real.


To prove the existence of an eigenvector basis, we use induction on the dimension of V. In fact it suffices to show that A has at least one non-zero eigenvector e. For then we can consider the space K of vectors v orthogonal to e. This is finite-dimensional because it is a subspace of a finite-dimensional space, and A has the property that it maps every vector w in K into K. This is shown as follows: if w ∈ K, then, using the symmetry property of A and the fact that the eigenvalue r of e is real,

    ⟨Aw, e⟩ = ⟨w, Ae⟩ = ⟨w, re⟩ = r⟨w, e⟩ = 0.

Moreover, A considered as a linear operator on K is also symmetric, so by the induction hypothesis there is an orthonormal basis of K consisting of eigenvectors of A; together with e, these give an orthonormal basis of V.


It remains, however, to show that A has at least one eigenvector. Since the ground field is algebraically closed, the polynomial function

    p(λ) = det(λI − A),

called the characteristic polynomial of A, has a complex root r. This implies that the linear operator A − rI is not invertible and hence maps some non-zero vector e to 0. This vector e is a non-zero eigenvector of A with eigenvalue r, which by the first part of the proof must be a real number. This completes the proof.


Notice that the second part of the proof works for any square matrix: over an algebraically closed field, any square matrix has at least one eigenvector. What is crucial to the argument is therefore the following consequence of the Hermiticity of A: if A is Hermitian and e is an eigenvector of A, then not only is the linear span of e an invariant subspace of A, but so is its orthogonal complement.


The argument is also valid for symmetric operators on finite-dimensional real inner product spaces: a real symmetric matrix has real eigenvalues, and therefore eigenvectors with real entries.


The spectral decomposition of an operator A which has an orthonormal basis of eigenvectors is obtained by grouping together all vectors corresponding to the same eigenvalue. Thus

    Vλ = {v ∈ V : Av = λv}.

Note that these spaces are invariantly defined, in that the definition does not depend on any choice of specific eigenvectors.


As an immediate consequence of the spectral theorem for symmetric operators we get the spectral decomposition theorem: V is the orthogonal direct sum of the spaces Vλ, where the index ranges over the eigenvalues of A. Another equivalent formulation, letting Pλ be the orthogonal projection onto Vλ and λ1, ..., λm the eigenvalues of A, is

    A = λ1Pλ1 + λ2Pλ2 + ... + λmPλm.
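A short numerical sketch of this formula (again in Python with numpy; the symmetric matrix is an arbitrary example with a repeated eigenvalue):

    # Sketch: build the projections P_lambda from an orthonormal eigenbasis
    # and verify A = lambda_1 P_lambda_1 + ... + lambda_m P_lambda_m.
    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])           # symmetric; eigenvalues 1, 3, 3
    w, U = np.linalg.eigh(A)

    total = np.zeros_like(A)
    for lam in np.unique(np.round(w, 10)):    # one term per distinct eigenvalue
        cols = U[:, np.isclose(w, lam)]       # orthonormal basis of V_lambda
        P = cols @ cols.T                     # orthogonal projection onto V_lambda
        total += lam * P

    assert np.allclose(total, A)              # A = sum of lambda * P_lambda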

The spectral decomposition is a special case of the Schur decomposition. It is also a special case of the singular value decomposition.


If A is a real symmetric matrix, it follows by the real version of the spectral theorem for symmetric operators that there is an orthogonal matrix U such that UAUᵀ is diagonal and all the eigenvalues of A are real.


Normal matrices

The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A*A = AA*. One can show that A is normal if and only if it is unitarily diagonalizable: by the Schur decomposition, we have A = UTU*, where U is unitary and T upper-triangular. Since A is normal, TT* = T*T; comparing the diagonal entries of TT* and T*T row by row forces every off-diagonal entry of T to vanish, so T must be diagonal. The converse is also obvious.
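This argument can be checked numerically. The following sketch assumes scipy is available and uses an arbitrarily chosen normal, non-Hermitian matrix (a rotation):

    # Sketch: for a normal matrix, the upper-triangular Schur factor T comes
    # out diagonal, giving the unitary diagonalization A = U T U*.
    import numpy as np
    from scipy.linalg import schur

    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])               # normal (A A* = A* A) but not Hermitian
    assert np.allclose(A @ A.conj().T, A.conj().T @ A)

    T, U = schur(A, output='complex')         # A = U T U*, T upper-triangular
    assert np.allclose(U @ T @ U.conj().T, A)
    assert np.allclose(T, np.diag(np.diag(T)))    # T is in fact diagonal
    print(np.diag(T))                         # eigenvalues i and -i: not real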


In other words, A is normal if and only if there exists a unitary matrix U such that

    A = UΛU*,

where Λ is the diagonal matrix the entries of which are the eigenvalues of A. The column vectors of U are the eigenvectors of A and they are orthonormal. Unlike the Hermitian case, the entries of Λ need not be real.


The spectral theorem for compact self-adjoint operators

In Hilbert spaces in general, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case.


Theorem. Suppose A is a compact self-adjoint operator on a Hilbert space V. There is an orthonormal basis of V consisting of eigenvectors of A. Each eigenvalue is real.


As for Hermitian matrices, the key point is to prove the existence of at least one non-zero eigenvector. To prove this we cannot rely on determinants to show the existence of eigenvalues; instead, one can use a maximization argument analogous to the variational characterization of eigenvalues. The above spectral theorem holds for real or complex Hilbert spaces.
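In outline (a standard formulation of this maximization argument, not spelled out in the article): for a non-zero compact self-adjoint operator A one considers

    \lambda = \sup_{\|x\| = 1} |\langle Ax, x \rangle| = \|A\|,

and compactness guarantees that the supremum is attained at some unit vector e; such a maximizer turns out to be an eigenvector of A with eigenvalue λ or −λ. Restricting A to the orthogonal complement of e and iterating then produces the orthonormal eigenbasis.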


If the compactness assumption is removed, it is no longer true that every self-adjoint operator has eigenvectors.


Generalization to non-symmetric matrices

For a non-symmetric but square (N × N dimensional) matrix A, the right eigenvectors xk are defined by

    Axk = λkxk,

whereas the left eigenvectors yk are defined by

    ykᵀA = λkykᵀ,

or, equivalently,

    Aᵀyk = λkyk,

where Aᵀ represents the transpose of A. In these equations, the eigenvalues λk are the same, being the roots of the same characteristic polynomial.

If A is a symmetric matrix, the right and left eigenvectors are also the same, i.e., xk = yk.



If the eigenvalues are distinct, the left and right eigenvectors each form a complete basis and can be scaled to satisfy the orthonormality condition

    ymᵀxn = δmn,

where δmn is the Kronecker delta. Therefore, an arbitrary N-dimensional vector v can be represented by the expansion

    v = (y1ᵀv)x1 + (y2ᵀv)x2 + ... + (yNᵀv)xN.

This expansion is always possible when the eigenvalues are distinct and usually possible even when they are not, by using Gram-Schmidt orthogonalization to define right and left eigenvectors that satisfy the orthonormality condition. However, if the orthonormality condition cannot be satisfied (i.e., if the expansion is impossible), then A is said to be a defective matrix.
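The following numerical sketch (assuming scipy; the non-symmetric matrix is an arbitrary example with distinct eigenvalues) computes left and right eigenvectors, rescales them to satisfy the orthonormality condition, and expands a vector as above:

    # Sketch: biorthogonality of left and right eigenvectors, and the
    # resulting expansion of an arbitrary vector.
    import numpy as np
    from scipy.linalg import eig

    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])                # non-symmetric, eigenvalues 1 and 3
    w, VL, VR = eig(A, left=True, right=True)

    # scipy's left eigenvectors satisfy VL[:, k]^H A = w[k] VL[:, k]^H; for this
    # real matrix with real eigenvalues that is the y_k^T A = lambda_k y_k^T above.
    B = VL.conj().T @ VR                      # diagonal when eigenvalues are distinct
    assert np.allclose(B, np.diag(np.diag(B)))

    VL = VL / np.diag(B).conj()               # rescale so that y_m^T x_n = delta_mn
    assert np.allclose(VL.conj().T @ VR, np.eye(2))

    v = np.array([1.0, 1.0])
    coeffs = VL.conj().T @ v                  # the coefficients (y_n^T v)
    assert np.allclose(VR @ coeffs, v)        # v = sum_n (y_n^T v) x_n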


Functional analysis

The next generalization we consider is that of bounded self-adjoint operators A on a Hilbert space V. Such operators may have no eigenvalues: for instance, let A be the operator of multiplication by t on L2[0, 1], that is,

    [Aφ](t) = tφ(t).

Theorem. Let A be a bounded self-adjoint operator on a Hilbert space H. Then there is a measure space (X, Σ, μ), a real-valued measurable function f on X, and a unitary operator U : H → L2μ(X) such that

    U*TU = A,

where T is the multiplication operator:

    [Tφ](x) = f(x)φ(x).
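Worked out for the multiplication example above (the identifications here are the natural ones, chosen for illustration): take X = [0, 1] with Lebesgue measure μ, f(t) = t, and U the identity map. Then

    [U A U^{-1} \varphi](t) = t\,\varphi(t) = [T\varphi](t),

so A is already a multiplication operator. Its spectrum is the whole interval [0, 1], and yet it has no eigenvalues: tφ(t) = λφ(t) forces φ to vanish almost everywhere away from the single point t = λ, hence φ = 0 in L2[0, 1].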

This is the beginning of the vast research area of functional analysis called operator theory.


There is also an analogous spectral theorem for normal operators on Hilbert spaces. In this case it is more common to express the spectral theorem as an integral of the coordinate function over the spectrum against a projection-valued measure.


When the normal operator in question is compact, this spectral theorem reduces to the finite-dimensional spectral theorem above, except that the operator is expressed as a linear combination of possibly infinitely many projections.


The spectral theorem for general self-adjoint operators

Many important linear operators which occur in analysis, such as differential operators, are unbounded. There is, however, a spectral theorem for self-adjoint operators that applies in many of these cases. To give an example, any constant-coefficient differential operator is unitarily equivalent to a multiplication operator. Indeed, the unitary operator that implements this equivalence is the Fourier transform.
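For instance (a standard computation, not carried out in the text), on L2(ℝ) the Fourier transform \mathcal{F} gives

    (\mathcal{F}\,(-i\,\tfrac{d}{dx})\,\mathcal{F}^{-1}\psi)(\xi) = \xi\,\psi(\xi),

so the constant-coefficient operator −i d/dx is unitarily equivalent to multiplication by the coordinate function ξ; more generally, P(−i d/dx) for a polynomial P corresponds to multiplication by P(ξ).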


See also

  • Matrix decomposition
  • Jordan normal form
  • Singular value decomposition

