Eigenvalue, eigenvector and eigenspace
Fig. 1. In this shear mapping of the Mona Lisa, the picture was deformed in such a way that its central vertical axis (red vector) has not changed direction, but the diagonal vector (blue) has changed direction. Hence the red vector is an eigenvector of the transformation and the blue vector is not. Since the red vector was neither stretched nor compressed, its eigenvalue is 1. All vectors with the same vertical direction, i.e., parallel to this vector, are also eigenvectors, with the same eigenvalue. Together with the zero vector, they form the eigenspace for this eigenvalue.

In mathematics, a vector may be thought of as an arrow. It has a length, called its magnitude, and it points in some particular direction. A linear transformation may be considered to operate on a vector to change it, usually changing both its magnitude and its direction. An eigenvector of a given linear transformation is a non-zero vector which is multiplied by a constant called the eigenvalue as a result of that transformation. The direction of the eigenvector is either unchanged by that transformation (for positive eigenvalues) or reversed (for negative eigenvalues).


For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector is unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An eigenspace of a given transformation is the span of the eigenvectors of that transformation with the same eigenvalue, together with the zero vector (which has no direction). An eigenspace is an example of a subspace of a vector space.


In linear algebra, every linear transformation between finite-dimensional vector spaces can be given by a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard methods for finding eigenvalues, eigenvectors, and eigenspaces of a given matrix are discussed below.


These concepts play a major role in several branches of both pure and applied mathematics, appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics.


Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and frequencies, for example. In these cases, the concept of direction loses its ordinary meaning and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenstate, and eigenfrequency.


History

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.


In the 18th century, Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes. As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix.[1] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[2] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.[3]


Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[4] Sturm developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric matrices have real eigenvalues.[2] This was extended by Hermite in 1855 to what are now called Hermitian matrices.[3] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[2] and Clebsch found the corresponding result for skew-symmetric matrices.[3] Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[2]


In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm-Liouville theory.[5] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[6]


At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[7] He was the first to use the German word eigen to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Helmholtz. "Eigen" can be translated as "own", "peculiar to", "characteristic", or "individual", emphasizing how important eigenvalues are to defining the unique nature of a specific transformation. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[8]


The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by Francis and Kublanovskaya in 1961.[9]


Definitions

See also: Eigenplane

Linear transformations of a vector space, such as rotation, reflection, stretching, compression, shear or any combination of these, may be visualized by the effect they produce on vectors. In other words, they are vector functions. More formally, in a vector space L a vector function A is defined if for each vector x of L there corresponds a unique vector y = A(x) of L. For the sake of brevity, the parentheses around the vector on which the transformation is acting are often omitted. A vector function A is linear if it has the following two properties:

  • Additivity: A(x + y) = Ax + Ay
  • Homogeneity: A(αx) = αAx

where x and y are any two vectors of the vector space L and α is any scalar.[10] Such a function is variously called a linear transformation, linear operator, or linear endomorphism on the space L.

Given a linear transformation A, a non-zero vector x is defined to be an eigenvector of the transformation if it satisfies the eigenvalue equation Ax = λx for some scalar λ. In this situation, the scalar λ is called an eigenvalue of A corresponding to the eigenvector x.[11]

The key equation in this definition is the eigenvalue equation, Ax = λx. Most vectors x will not satisfy such an equation. A typical vector x changes direction when acted on by A, so that Ax is not a multiple of x. This means that only certain special vectors x are eigenvectors, and only certain special numbers λ are eigenvalues. Of course, if A is a multiple of the identity matrix, then no vector changes direction, and all non-zero vectors are eigenvectors.
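
As a concrete check of the eigenvalue equation, the short sketch below (in Python with NumPy; the matrix is chosen purely for illustration, and reappears in the worked example later in this article) verifies Ax = λx numerically for one eigenpair:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])        # an illustrative 2 x 2 matrix

    eigenvalues, eigenvectors = np.linalg.eig(A)
    lam = eigenvalues[0]              # one eigenvalue of A
    x = eigenvectors[:, 0]            # the eigenvector paired with it

    # The eigenvalue equation: A x must equal lambda * x.
    print(np.allclose(A @ x, lam * x))    # True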


The requirement that the eigenvector be non-zero is imposed because the equation A0 = λ0 holds for every A and every λ. Since the equation is always trivially true, it is not an interesting case. In contrast, an eigenvalue can be zero in a nontrivial way. Each eigenvector is associated with a specific eigenvalue. One eigenvalue can be associated with several or even with an infinite number of eigenvectors.

Fig. 2. A acts to stretch the vector x without changing its direction, so x is an eigenvector of A.

Geometrically (Fig. 2), the eigenvalue equation means that under the transformation A eigenvectors experience only changes in magnitude and sign: the direction of Ax is the same as that of x. The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by A. If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation I under which a vector x remains unchanged, Ix = x, is defined as the identity transformation. If λ = −1, the vector flips to the opposite direction (rotates by 180°); this is defined as reflection.


If x is an eigenvector of the linear transformation A with eigenvalue λ, then any scalar multiple αx is also an eigenvector of A with the same eigenvalue. Similarly, if more than one eigenvector shares the same eigenvalue λ, any linear combination of these eigenvectors will itself be an eigenvector with eigenvalue λ.[12] Together with the zero vector, the eigenvectors of A with the same eigenvalue form a linear subspace of the vector space called an eigenspace.


The eigenvectors corresponding to different eigenvalues are linearly independent,[13] meaning, in particular, that in an n-dimensional space the linear transformation A cannot have more than n eigenvectors with different eigenvalues.[14]


If a basis is defined in the vector space, all vectors can be expressed in terms of components. For finite-dimensional vector spaces with dimension n, linear transformations can be represented with n × n square matrices. Conversely, every such square matrix corresponds to a linear transformation for a given basis. Thus, in the two-dimensional vector space R² fitted with the standard basis, the eigenvector equation for a linear transformation A can be written in the following matrix representation:

\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix},

where the juxtaposition of matrices means matrix multiplication.


Characteristic equation

When a transformation is represented by a square matrix A, the eigenvalue equation can be expressed as

A\mathbf{x} - \lambda I \mathbf{x} = \mathbf{0}.

It is known from linear algebra that this equation has a non-zero solution for x if and only if the determinantal equation

det(A − λI) = 0

holds. This equation is called the characteristic equation (less often, secular equation) of A, and the left-hand side is called the characteristic polynomial. When expanded, this gives a polynomial equation for λ. The eigenvector x or its components are not present in the characteristic equation.
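
As a numerical illustration (NumPy; the same matrix is worked by hand in the example below), the characteristic polynomial can be formed directly and the eigenvalues read off as its roots:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Coefficients of det(A - lambda*I), highest power of lambda first.
    coeffs = np.poly(A)       # [1., -4., 3.], i.e. lambda^2 - 4*lambda + 3
    print(np.roots(coeffs))   # roots 3 and 1: the eigenvalues of A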


Example

The matrix

\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}

defines a linear transformation of the real plane. The eigenvalues of this transformation are given by the characteristic equation

\det\begin{bmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{bmatrix} = (2-\lambda)^2 - 1 = 0.

The roots of this equation (i.e. the values of λ for which the equation holds) are λ = 1 and λ = 3. Having found the eigenvalues, it is possible to find the eigenvectors. Considering first the eigenvalue λ = 3, we have

\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = 3\begin{bmatrix} x \\ y \end{bmatrix}.

Both rows of this matrix equation reduce to the single linear equation x = y. To find an eigenvector, we are free to choose any value for x; picking x = 1 and setting y = x, we find the eigenvector to be

\begin{bmatrix} 1 \\ 1 \end{bmatrix}.

We can check that this is an eigenvector by verifying that \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix}. For the eigenvalue λ = 1, a similar process leads to the equation x = −y, and hence the eigenvector is given by

\begin{bmatrix} 1 \\ -1 \end{bmatrix}.

The difficulty of finding the roots of the characteristic polynomial, and hence the eigenvalues, increases rapidly with the degree of the polynomial (the dimension of the vector space). There are exact algebraic solutions for dimensions below 5, but for higher dimensions there is no general closed-form solution, and one has to resort to numerical methods to find the eigenvalues approximately. For large symmetric sparse matrices, the Lanczos algorithm is used to compute eigenvalues and eigenvectors.
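
SciPy, for instance, exposes a Lanczos-type solver for symmetric problems; the sketch below (matrix size and entries are arbitrary test data) computes a few extremal eigenvalues of a large sparse matrix without ever forming it densely:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh   # Lanczos-based symmetric eigensolver

    n = 1000
    # A sparse symmetric tridiagonal matrix used as test data.
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    A = sp.diags([off, main, off], offsets=[-1, 0, 1], format='csr')

    # The 5 largest (algebraic) eigenvalues and their eigenvectors.
    vals, vecs = eigsh(A, k=5, which='LA')
    print(vals)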


Existence and multiplicity of eigenvalues

For transformations on real vector spaces, the coefficients of the characteristic polynomial are all real. However, the roots are not necessarily real; they may well be complex numbers, or a mixture of real and complex numbers. For example, a matrix representing a planar rotation of 45 degrees will not leave any non-zero vector pointing in the same direction. Over a complex vector space, the fundamental theorem of algebra guarantees that the characteristic polynomial has at least one root, and thus the linear transformation has at least one eigenvalue.


As well as distinct roots, the characteristic equation may also have repeated roots. However, having repeated roots does not imply there are multiple distinct (i.e. linearly independent) eigenvectors with that eigenvalue. The algebraic multiplicity of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial. The geometric multiplicity of an eigenvalue is defined as the dimension of the associated eigenspace, i.e. the number of linearly independent eigenvectors with that eigenvalue.


Over a complex space, the sum of the algebraic multiplicities will equal the dimension of the vector space, but the sum of the geometric multiplicities may be smaller. In that case, there are not enough eigenvectors to span the entire space. This is intimately related to the question of whether a given matrix may be diagonalized by a suitable choice of coordinates.


Shear

Horizontal shear. The shear angle φ is given by k = cot φ.

Shear in the plane is a transformation in which all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line.[15] Shearing a plane figure does not change its area. Shear can be horizontal, along the x axis, or vertical, along the y axis. In horizontal shear (see figure), a point P of the plane moves parallel to the x axis to the place P' so that its coordinate y does not change while the x coordinate increments to become x' = x + k y, where k is called the shear factor.


The matrix of a horizontal shear transformation is \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}. The characteristic equation is λ² − 2λ + 1 = (1 − λ)² = 0, which has a single, repeated root λ = 1. Therefore, the eigenvalue λ = 1 has algebraic multiplicity 2. The eigenvector(s) are found as solutions of

\begin{bmatrix} 1 - 1 & -k \\ 0 & 1 - 1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 & -k \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -ky \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

The last equation is equivalent to y = 0, a straight line along the x axis. This line represents the one-dimensional eigenspace. In the case of shear, the algebraic multiplicity of the eigenvalue (2) is greater than its geometric multiplicity (1, the dimension of the eigenspace). The eigenvector is a vector along the x axis. The case of vertical shear with transformation matrix \begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix} is dealt with in a similar way; the eigenvector in vertical shear is along the y axis. Repeatedly applying the shear transformation turns the direction of any vector in the plane closer and closer to the direction of the eigenvector.
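
A quick numerical check of the shear example (NumPy; the shear factor is arbitrary) exhibits the repeated eigenvalue and the one-dimensional eigenspace:

    import numpy as np

    k = 0.5                        # an arbitrary shear factor
    A = np.array([[1.0, k],
                  [0.0, 1.0]])     # horizontal shear

    vals, vecs = np.linalg.eig(A)
    print(vals)   # [1. 1.]: eigenvalue 1 with algebraic multiplicity 2
    print(vecs)   # both columns lie (numerically) along the x axis,
                  # so there is only one independent eigenvector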


Uniform scaling and reflection

Fig. 3. When a surface is stretched equally in all directions (a homothety), any one of the radial vectors can be the eigenvector.

As a one-dimensional vector space, consider a rubber string tied to an unmoving support at one end, such as that on a child's sling. Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ, which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and although elongated it preserves its original direction. For a two-dimensional vector space, consider a rubber sheet stretched equally in all directions, such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at the fixed point on the balloon surface (the origin) are stretched equally with the same scaling factor λ. This transformation in two dimensions is described by the 2×2 square matrix:

A\mathbf{x} = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \lambda \cdot x + 0 \cdot y \\ 0 \cdot x + \lambda \cdot y \end{bmatrix} = \lambda\begin{bmatrix} x \\ y \end{bmatrix} = \lambda\mathbf{x}.

Expressed in words, the transformation is equivalent to multiplying the length of any vector by λ while preserving its original direction. Since the vector taken was arbitrary, every non-zero vector in the vector space is an eigenvector. Whether the transformation is stretching (elongation, extension, inflation) or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching; if 0 < λ < 1, it is shrinking. Negative values of λ correspond to a reversal of direction, followed by a stretch or a shrink, depending on the absolute value of λ.


Unequal scaling

Vertical shrink (k2 < 1) and horizontal stretch (k1 > 1) of a unit square. Eigenvectors are u1 and u2 and eigenvalues are λ1 = k1 and λ2 = k2. This transformation orients all vectors towards the principal eigenvector u1.

For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction and shrunk in the other. In this case, there are two different scaling factors: k1 for the scaling in direction x, and k2 for the scaling in direction y. The transformation matrix is \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix}, and the characteristic equation is (k1 − λ)(k2 − λ) = 0. The eigenvalues, obtained as roots of this equation, are λ1 = k1 and λ2 = k2, which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging λ = k1 back into the eigenvalue equation gives one of the eigenvectors:

\begin{bmatrix} 0 & 0 \\ 0 & k_2 - k_1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, or, more simply, y = 0.

Thus, the eigenspace is the x-axis. Similarly, substituting λ = k2 shows that the corresponding eigenspace is the y-axis. In this case, both eigenvalues have algebraic and geometric multiplicities equal to 1. If a given eigenvalue is greater than 1, the vectors are stretched in the direction of the corresponding eigenvector; if it is less than 1, they are shrunk in that direction. Negative eigenvalues correspond to reflections followed by a stretch or shrink. In general, matrices that are diagonalizable over the real numbers represent scalings and reflections: the eigenvalues represent the scaling factors (and appear as the diagonal terms), and the eigenvectors are the directions of the scalings.


The figure shows the case where k1 > 1 and 1 > k2 > 0. The rubber sheet is stretched along the x axis and simultaneously shrunk along the y axis. After this transformation is applied many times, almost any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching). The exceptions are vectors along the y-axis, which will gradually shrink away to nothing.
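
This drift of almost every vector toward the direction of strongest stretching is exactly what the power method mentioned in the History section exploits; a minimal sketch (the matrix entries and starting vector are arbitrary illustrative values):

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 0.5]])   # stretch along x (k1 = 2), shrink along y (k2 = 0.5)

    v = np.array([0.3, 1.0])     # an arbitrary starting vector
    for _ in range(50):
        v = A @ v                    # apply the transformation repeatedly
        v /= np.linalg.norm(v)       # renormalize to keep the numbers tame

    print(v)   # approximately [1, 0]: the direction of the principal eigenvector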


Rotation

For more details on this topic, see Rotation matrix.

A rotation in a plane is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. Clearly, for rotations other than through 0° and 180°, every vector in the real plane will have its direction changed, and thus there cannot be any eigenvectors. But this is not necessarily the case if we consider the same matrix over a complex vector space.


A counterclockwise rotation in the horizontal plane about the origin at an angle φ is represented by the matrix

\mathbf{R} = \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix}.

The characteristic equation of R is λ² − 2λ cos φ + 1 = 0. This quadratic equation has a discriminant D = 4(cos² φ − 1) = −4 sin² φ, which is a negative number whenever φ is not equal to a multiple of 180°. A rotation of 0°, 360°, … is just the identity transformation (a uniform scaling by +1), while a rotation of 180°, 540°, …, is a reflection (uniform scaling by −1). Otherwise, as expected, there are no real eigenvalues or eigenvectors for rotation in the plane.


Rotation matrices on complex vector spaces

The characteristic equation has two complex roots λ1 and λ2. If we choose to think of the rotation matrix as a linear operator on the complex two-dimensional space, we can consider these complex eigenvalues. The roots are complex conjugates of each other: λ1,2 = cos φ ± i sin φ = e^{±iφ}, each with an algebraic multiplicity equal to 1, where i is the imaginary unit.


The first eigenvector is found by substituting the first eigenvalue, λ1, back in the eigenvalue equation:

\begin{bmatrix} \cos\varphi - \lambda_1 & -\sin\varphi \\ \sin\varphi & \cos\varphi - \lambda_1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -i\sin\varphi & -\sin\varphi \\ \sin\varphi & -i\sin\varphi \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

The last equation is equivalent to the single equation x = iy, and again we are free to set x = 1 to give the eigenvector

\begin{bmatrix} 1 \\ -i \end{bmatrix}.

Similarly, substituting in the second eigenvalue gives the single equation x = − iy and so the eigenvector is given by

\begin{bmatrix} 1 \\ i \end{bmatrix}.

Although not diagonalizable over the reals, the rotation matrix is diagonalizable over the complex numbers, and again the eigenvalues appear on the diagonal. Thus rotation matrices acting on complex spaces can be thought of as scaling matrices, with complex scaling factors.
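
Since NumPy computes over the complex numbers by default, this diagonalization can be checked directly (the rotation angle is arbitrary):

    import numpy as np

    phi = np.pi / 4                     # a 45-degree rotation, for illustration
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])

    vals, Q = np.linalg.eig(R)
    print(vals)                         # cos(phi) +/- i sin(phi), i.e. e^{+/- i phi}

    # R = Q diag(vals) Q^{-1} over the complex numbers.
    print(np.allclose(R, Q @ np.diag(vals) @ np.linalg.inv(Q)))   # True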


Infinite-dimensional spaces and spectral theory

For more details on this topic, see Spectral theorem.

If the vector space is an infinite-dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which (T − λ)⁻¹ is not defined; that is, such that T − λ has no bounded inverse.


Clearly, if λ is an eigenvalue of T, λ is in the spectrum of T. In general, the converse is not true. There are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space ℓ²(Z) (that is, the space of all sequences of scalars …, a−1, a0, a1, a2, … such that

\cdots + |a_{-1}|^2 + |a_0|^2 + |a_1|^2 + |a_2|^2 + \cdots

converges) has no eigenvalue but does have spectral values.


In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty. This is also true for an unbounded self-adjoint operator. Via its spectral measures, the spectrum of any self-adjoint operator, bounded or otherwise, can be decomposed into absolutely continuous, pure point, and singular parts. (See Decomposition of spectrum.)


The hydrogen atom is an example where both types of spectra appear. The eigenfunctions of the hydrogen atom Hamiltonian are called eigenstates and are grouped into two categories. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues which can be computed by the Rydberg formula), while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantized).


Eigenfunctions

Main article: Eigenfunction

A common example of such maps on infinite-dimensional spaces is the action of differential operators on function spaces. As an example, on the space of infinitely differentiable functions, the process of differentiation defines a linear operator, since

\frac{d}{dt}(af + bg) = a\frac{df}{dt} + b\frac{dg}{dt},

where f(t) and g(t) are differentiable functions, and a and b are constants.


The eigenvalue equation for linear differential operators is then a set of one or more differential equations. The eigenvectors are commonly called eigenfunctions. The simplest case is the eigenvalue equation for differentiation of a real-valued function by a single real variable. In this case, the eigenvalue equation becomes the linear differential equation

\frac{d}{dx} f(x) = \lambda f(x).

Here λ is the eigenvalue associated with the function, f(x). This eigenvalue equation has a solution for all values of λ. If λ is zero, the solution is

f(x) = A,

where A is any constant; if λ is non-zero, the solution is the exponential function

f(x) = Ae^{λx}.

If we expand our horizons to complex-valued functions, the value of λ can be any complex number. The spectrum of d/dx is therefore the whole complex plane. This is an example of a continuous spectrum.
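
A quick symbolic check (using SymPy, purely as an illustration) confirms that f(x) = Ae^{λx} is an eigenfunction of d/dx with eigenvalue λ:

    import sympy as sp

    x, lam, A = sp.symbols('x lam A')
    f = A * sp.exp(lam * x)

    # f'(x) - lam*f(x) should simplify to zero identically.
    print(sp.simplify(sp.diff(f, x) - lam * f))   # 0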


Waves on a string

The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation.

The displacement, h(x,t), of a stressed rope fixed at both ends, like the vibrating strings of a string instrument, satisfies the wave equation

\frac{\partial^2 h}{\partial t^2} = c^2\frac{\partial^2 h}{\partial x^2},

which is a linear partial differential equation, where c is the constant wave speed. The normal method of solving such an equation is separation of variables. If we assume that h can be written as a product of the form X(x)T(t), we can form a pair of ordinary differential equations:

X'' = -\frac{\omega^2}{c^2}X \quad\text{and}\quad T'' = -\omega^2 T.

Each of these is an eigenvalue equation (the unfamiliar form of the eigenvalue is chosen merely for convenience). For any values of the eigenvalues, the eigenfunctions are given by

X = \sin\left(\frac{\omega x}{c} + \phi\right) \quad\text{and}\quad T = \sin(\omega t + \psi).

If we impose boundary conditions (that the ends of the string are fixed with X(x) = 0 at x = 0 and x = L, for example), we can constrain the eigenvalues. For those boundary conditions, we find

sin(φ) = 0, and so the phase angle φ = 0

and

\sin\left(\frac{\omega L}{c}\right) = 0.

Thus, the constant ω is constrained to take one of the values ωn = nπc/L, where n is any integer. The clamped string therefore supports a family of standing waves of the form

h(x,t) = sin(nπx/L) sin(ωnt).

From the point of view of our musical instrument, the frequency ωn is the frequency of the nth harmonic.
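
The same eigenvalues emerge from a matrix problem if the string is discretized: replacing X'' by second differences on a grid turns the boundary-value problem into a symmetric matrix eigenproblem whose smallest eigenvalues approximate (nπ/L)². A sketch (NumPy; grid size and parameters are arbitrary):

    import numpy as np

    length, c, N = 1.0, 1.0, 200     # string length L, wave speed, interior grid points
    h = length / (N + 1)

    # Discretize -X'' = (omega/c)^2 X with X(0) = X(L) = 0 by second differences.
    K = (np.diag(np.full(N, 2.0 / h**2))
         + np.diag(np.full(N - 1, -1.0 / h**2), 1)
         + np.diag(np.full(N - 1, -1.0 / h**2), -1))

    vals = np.linalg.eigvalsh(K)       # eigenvalues (omega_n / c)^2, ascending
    print(c * np.sqrt(vals[:4]))       # approximately pi, 2 pi, 3 pi, 4 pi
    print([n * np.pi * c / length for n in (1, 2, 3, 4)])   # exact omega_n = n pi c / L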


Eigendecomposition

Main article: Eigendecomposition (matrix)

The spectral theorem for matrices can be stated as follows. Let A be a square n × n matrix. Let q1 … qk be an eigenvector basis, i.e. an indexed set of k linearly independent eigenvectors, where k is the dimension of the space spanned by the eigenvectors of A. If k = n, then A can be written

\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1},

where Q is the square n × n matrix whose i-th column is the basis eigenvector qi of A and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. Λii = λi.
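
In NumPy this factorization can be formed and verified directly (the matrix is illustrative):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    vals, Q = np.linalg.eig(A)   # columns of Q are the basis eigenvectors q_i
    Lam = np.diag(vals)          # Lambda: the eigenvalues on the diagonal

    # Check the eigendecomposition A = Q Lambda Q^{-1}.
    print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))   # True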


Applications

Schrödinger equation

Fig. 8. The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n=1,2,3,...) and angular momentum (increasing across: s, p, d,...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.

An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

H\psi_E = E\psi_E,

where H, the Hamiltonian, is a second-order differential operator and ψE, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.


However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψE within the space of square-integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψE and H can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form. (Fig. 8 presents the lowest eigenfunctions of the hydrogen atom Hamiltonian.)


The Dirac notation is often used in this context. A vector representing a state of the system, in the Hilbert space of square-integrable functions, is written |ΨE⟩. In this notation, the Schrödinger equation is:

H|\Psi_E\rangle = E|\Psi_E\rangle,

where |ΨE⟩ is an eigenstate of H. H is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, in the equation above H|ΨE⟩ is understood to be the vector obtained by application of the transformation H to |ΨE⟩.
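
To illustrate the matrix form, the sketch below discretizes a one-dimensional Hamiltonian, here the harmonic oscillator H = −(1/2) d²/dx² + (1/2) x² in units where ħ = m = ω = 1 (the grid choices are arbitrary), and recovers the familiar energies En ≈ n + 1/2:

    import numpy as np

    N, x_max = 500, 10.0
    x = np.linspace(-x_max, x_max, N)
    h = x[1] - x[0]

    # Kinetic term -(1/2) d^2/dx^2 by second differences, plus the potential.
    kinetic = (np.diag(np.full(N, 1.0 / h**2))
               + np.diag(np.full(N - 1, -0.5 / h**2), 1)
               + np.diag(np.full(N - 1, -0.5 / h**2), -1))
    H = kinetic + np.diag(0.5 * x**2)

    energies = np.linalg.eigvalsh(H)   # eigenvalues in ascending order
    print(energies[:4])                # approximately 0.5, 1.5, 2.5, 3.5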


Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree-Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree-Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.


Geology and glaciology

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used to summarize a mass of information about the orientation and dip of a clast fabric's constituents in three-dimensional space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically, for instance on a Tri-Plot (Sneed and Folk) diagram [16], [17], or as a stereonet on a Wulff net [18]. The output of the orientation tensor is given along the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 [19] are ordered E1 ≥ E2 ≥ E3, with E1 being the primary, E2 the secondary, and E3 the tertiary orientation of clast orientation/dip, in terms of strength. The clast orientation is defined by the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor, and ranges from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric: if E1 = E2 = E3, the fabric is isotropic; if E1 = E2 > E3, the fabric is planar; if E1 > E2 > E3, the fabric is linear. See 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004 [20].
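A minimal sketch of the underlying computation, assuming randomly generated unit vectors in place of field measurements: the orientation tensor is the normalized sum of outer products of the clast direction vectors, and E1 ≥ E2 ≥ E3 are its sorted eigenvalues:

```python
import numpy as np

# Unit vectors describing the orientation/dip of each measured clast
# (random directions stand in for field data).
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Orientation tensor: normalized sum of outer products v v^T.
T = v.T @ v / len(v)

eigvals, eigvecs = np.linalg.eigh(T)   # ascending order
E3, E2, E1 = eigvals                   # relabel so that E1 >= E2 >= E3
print(E1, E2, E3)                      # E1 ≈ E2 ≈ E3 here: an isotropic fabric
```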


Factor analysis

In factor analysis, the eigenvectors of a covariance matrix or correlation matrix correspond to factors, and the eigenvalues to the variance explained by these factors. Factor analysis is a statistical technique used in the social sciences and in marketing, product management, operations research, and other applied sciences that deal with large quantities of data. Its objective is to explain most of the covariability among a number of observable random variables in terms of a smaller number of unobservable latent variables called factors. The observable random variables are modeled as linear combinations of the factors, plus unique variance terms. In the analysis performed by Q-methodology software, factors with eigenvalues greater than 1.00 are considered significant, explaining an important amount of the variability in the data, while factors with eigenvalues less than 1.00 are considered too weak to explain a significant portion of that variability.
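A minimal sketch of this retention rule, assuming a small illustrative correlation matrix:

```python
import numpy as np

# Correlation matrix of five observed variables (illustrative values).
R = np.array([[1.0, 0.6, 0.5, 0.1, 0.0],
              [0.6, 1.0, 0.4, 0.0, 0.1],
              [0.5, 0.4, 1.0, 0.2, 0.1],
              [0.1, 0.0, 0.2, 1.0, 0.3],
              [0.0, 0.1, 0.1, 0.3, 1.0]])

eigvals = np.linalg.eigvalsh(R)[::-1]   # eigenvalues, descending
significant = eigvals[eigvals > 1.0]    # keep factors with eigenvalue > 1.00
print(eigvals, len(significant))
```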


Vibration analysis

Main article: Vibration

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies of vibration, and the eigenvectors determine the shapes of these vibrational modes. The orthogonality properties of the eigenvectors allow the differential equations to be decoupled, so that the system can be represented as a linear combination of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis.
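A minimal sketch, assuming a two-degree-of-freedom spring-mass chain with illustrative stiffness and mass matrices; the natural frequencies and mode shapes come from the generalized eigenproblem Kφ = ω²Mφ:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative stiffness matrix K and mass matrix M for a two-mass spring chain.
K = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])
M = np.diag([1.0, 1.0])

# Generalized eigenproblem K phi = w^2 M phi.
w_squared, modes = eigh(K, M)

natural_frequencies_hz = np.sqrt(w_squared) / (2 * np.pi)
print(natural_frequencies_hz)   # one frequency per mode shape (columns of `modes`)
```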

The mode shapes of a cantilevered I-beam: 1st lateral bending, 1st torsional, 1st vertical bending, 2nd lateral bending, 2nd torsional, and 2nd vertical bending.

Eigenfaces

Fig. 9. Eigenfaces as examples of eigenvectors
Main article: Eigenfaces

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of the pixels.[21] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also applied eigen vision systems to determining hand gestures; more on recognizing sign-language letters using such systems can be found at http://www.geigel.com/signlanguage/index.php
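A minimal sketch of the eigenface computation, assuming random pixel data in place of a real face data set:

```python
import numpy as np

# Each row is one normalized face image flattened to a pixel vector
# (random values stand in for real images).
rng = np.random.default_rng(1)
faces = rng.random((100, 32 * 32))

mean_face = faces.mean(axis=0)
X = faces - mean_face                     # center the data

# Eigenvectors of the covariance matrix are the eigenfaces (PCA).
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
eigenfaces = eigvecs[:, ::-1][:, :10]     # the ten strongest components

# Any face is then approximated as a linear combination of eigenfaces.
weights = (faces[0] - mean_face) @ eigenfaces
reconstruction = mean_face + eigenfaces @ weights
```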


A similar concept, eigenvoices, represents the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. A new voice pronunciation of the word can then be constructed as a linear combination of such eigenvoices. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.


Tensor of inertia

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.
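A minimal sketch, assuming an illustrative symmetric inertia tensor; its eigen-decomposition yields the principal moments and principal axes:

```python
import numpy as np

# Inertia tensor of a rigid body in an arbitrary frame (symmetric; illustrative values).
I = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0,  0.5],
              [ 0.0,  0.5,  2.0]])

principal_moments, axes = np.linalg.eigh(I)
# The columns of `axes` are the principal axes; in that basis I is diagonal.
print(principal_moments)
print(np.allclose(axes.T @ I @ axes, np.diag(principal_moments)))
```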


Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and the eigenvectors as a basis. Because it is diagonal in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
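A minimal sketch with an illustrative stress tensor: in the eigenvector basis the off-diagonal (shear) components vanish, leaving only the principal stresses:

```python
import numpy as np

# Symmetric Cauchy stress tensor with shear components (illustrative values, in MPa).
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, 20.0,  0.0],
                  [ 0.0,  0.0, 10.0]])

principal_stresses, directions = np.linalg.eigh(sigma)
# Rotated into the eigenvector basis, the tensor is diagonal: no shear remains.
print(np.round(directions.T @ sigma @ directions, 10))
print(principal_stresses)
```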


Eigenvalues of a graph

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix, which is either T − A or I − T^(−1/2)AT^(−1/2), where T is a diagonal matrix holding the degree of each vertex, and where, in T^(−1/2), 0 is substituted for 0^(−1/2). The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest eigenvalue of A, or the eigenvector corresponding to the kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to simply as the principal eigenvector.
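A minimal sketch of both Laplacian conventions, assuming a small made-up undirected graph that includes an isolated vertex (to exercise the 0^(−1/2) → 0 substitution):

```python
import numpy as np

# Adjacency matrix of a small undirected graph; the last vertex is isolated.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)

deg = A.sum(axis=1)
T = np.diag(deg)
L = T - A                                  # combinatorial Laplacian

# Normalized Laplacian I - T^(-1/2) A T^(-1/2), substituting 0 for 0^(-1/2).
inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.where(deg > 0, deg, 1.0)), 0.0)
L_norm = np.eye(len(A)) - np.diag(inv_sqrt) @ A @ np.diag(inv_sqrt)

print(np.linalg.eigvalsh(L))               # the graph's Laplacian spectrum
```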


The principal eigenvector is used to measure the centrality of a graph's vertices. One example is Google's PageRank algorithm: the principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; the adjacency matrix must first be modified to ensure that a stationary distribution exists. The second principal eigenvector can be used to partition the graph into clusters, via spectral clustering; other methods are also available for clustering.
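A minimal sketch of this computation via power iteration, assuming a tiny three-page web graph and the conventional damping factor 0.85 as the modification that guarantees a stationary distribution:

```python
import numpy as np

# Adjacency matrix of a tiny web graph (page i links to page j where A[i, j] = 1).
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # row-normalize: a Markov chain on pages

# Damping mixes in uniform jumps so a unique stationary distribution exists.
d, n = 0.85, len(A)
G = d * P + (1 - d) / n

# Power iteration converges to the principal (left) eigenvector of G.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = rank @ G
print(rank)   # the PageRank of each page
```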


See also

  • Nonlinear eigenproblem
  • Quadratic eigenvalue problem
  • Eigenspectrum


Notes

  1. ^ See Hawkins 1975, §2
  2. ^ a b c d See Hawkins 1975, §3
  3. ^ a b c See Kline 1972, pp. 807-808
  4. ^ See Kline 1972, p. 673
  5. ^ See Kline 1972, pp. 715-716
  6. ^ See Kline 1972, pp. 706-707
  7. ^ See Kline 1972, p. 1063
  8. ^ See Aldrich 2006
  9. ^ See Golub & van Loan 1996, §7.3; Meyer 2000, §7.3
  10. ^ See Beezer 2006, Definition LT on p. 507; Strang 2006, p. 117; Kuttler 2007, Definition 5.3.1 on p. 71; Shilov 1977, Section 4.21 on p. 77; Rowland, Todd and Weisstein, Eric W. Linear transformation From MathWorld − A Wolfram Web Resource
  11. ^ See Korn & Korn 2000, Section 14.3.5a; Friedberg, Insel & Spence 1989, p. 217
  12. ^ For a proof of this lemma, see Shilov 1977, p. 109, and Lemma for the eigenspace
  13. ^ For a proof of this lemma, see Roman 2008, Theorem 8.2 on p. 186; Shilov 1977, p. 109; Hefferon 2001, p. 364; Beezer 2006, Theorem EDELI on p. 469; and Lemma for linear independence of eigenvectors
  14. ^ See Shilov 1977, p. 109
  15. ^ Definition according to Weisstein, Eric W. Shear From MathWorld − A Wolfram Web Resource
  16. ^ Graham, D., and Midgley, N., 2000. Earth Surface Processes and Landforms (25) pp 1473-1477
  17. ^ Sneed ED, Folk RL. 1958. Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis. Journal of Geology 66(2): 114–150
  18. ^ GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system
  19. ^ Stereo32
  20. ^ Benn, D., Evans, D., 2004. A Practical Guide to the study of Glacial Sediments. London: Arnold. pp 103-107
  21. ^ Xirouhakis, A.; Votsis, G. & Delopoulus, A. (2004), Estimation of 3D motion and structure of human faces, Online paper in PDF format, National Technical University of Athens, <http://www.image.ece.ntua.gr/papers/43.pdf> 

References

  • Korn, Granino A. & Korn, Theresa M. (2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, 1152 p., Dover Publications, 2 Revised edition, ISBN 0-486-41147-8 .
  • Lipschutz, Seymour (1991), Schaum's outline of theory and problems of linear algebra (2nd ed.), Schaum's outline series, New York, NY: McGraw-Hill Companies, ISBN 0-07-038007-4 .
  • Friedberg, Stephen H.; Insel, Arnold J. & Spence, Lawrence E. (1989), Linear algebra (2nd ed.), Englewood Cliffs, NJ 07632: Prentice Hall, ISBN 0-13-537102-3 .
  • Aldrich, John (2006), “Eigenvalue, eigenfunction, eigenvector, and related terms”, in Jeff Miller (Editor), Earliest Known Uses of Some of the Words of Mathematics, <http://members.aol.com/jeff570/e.html>. Retrieved on 22 August 2006 
  • Strang, Gilbert (1993), Introduction to linear algebra, Wellesley-Cambridge Press, Wellesley, MA, ISBN 0-961-40885-5 .
  • Strang, Gilbert (2006), Linear algebra and its applications, Thomson, Brooks/Cole, Belmont, CA, ISBN 0-030-10567-6 .
  • Bowen, Ray M. & Wang, Chao-Cheng (1980), Linear and multilinear algebra, Plenum Press, New York, NY, ISBN 0-306-37508-7 .
  • Cohen-Tannoudji, Claude (1977), “Chapter II. The mathematical tools of quantum mechanics”, Quantum mechanics, John Wiley & Sons, ISBN 0-471-16432-1 .
  • Fraleigh, John B. & Beauregard, Raymond A. (1995), Linear algebra (3rd ed.), Addison-Wesley Publishing Company, ISBN 0-201-83999-7 (international edition) .
  • Golub, Gene H. & van Loan, Charles F. (1996), Matrix computations (3rd Edition), Johns Hopkins University Press, Baltimore, MD, ISBN 978-0-8018-5414-9 .
  • Hawkins, T. (1975), “Cauchy and the spectral theory of matrices”, Historia Mathematica 2: 1-29 .
  • Horn, Roger A. & Johnson, Charles F. (1985), Matrix analysis, Cambridge University Press, ISBN 0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback) .
  • Kline, Morris (1972), Mathematical thought from ancient to modern times, Oxford University Press, ISBN 0-195-01496-0 .
  • Meyer, Carl D. (2000), Matrix analysis and applied linear algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, ISBN 978-0-89871-454-8 .
  • Brown, Maureen (October 2004), Illuminating Patterns of Perception: An Overview of Q Methodology .
  • Golub, Gene F. & van der Vorst, Henk A. (2000), “Eigenvalue computation in the 20th century”, Journal of Computational and Applied Mathematics 123: 35-65 .
  • Akivis, Max A. (1969), Tensor calculus (in Russian), Science Publishers, Moscow .
  • Gelfand, I. M. (1971), Lecture notes in linear algebra (in Russian), Science Publishers, Moscow .
  • Alexandrov, Pavel S. (1968), Lecture notes in analytical geometry (in Russian), Science Publishers, Moscow .
  • Carter, Tamara A.; Tapia, Richard A. & Papaconstantinou, Anne, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, <http://ceee.rice.edu/Books/LA/index.html>. Retrieved on 19 February 2008 .
  • Roman, Steven (2008), Advanced linear algebra (3rd ed.), New York, NY: Springer Science + Business Media, LLC, ISBN 978-0-387-72828-5 .
  • Shilov, Georgi E. (1977), Linear algebra (translated and edited by Richard A. Silverman ed.), New York: Dover Publications, ISBN 0-486-63518-X .
  • Hefferon, Jim (2001), Linear Algebra, Online book, St Michael's College, Colchester, Vermont, USA, <http://joshua.smcvt.edu/linearalgebra/> .
  • Kuttler, Kenneth (2007), An introduction to linear algebra, Online e-book in PDF format, Brigham Young University, <http://www.math.byu.edu/~klkuttle/Linearalgebra.pdf> .
  • Demmel, James W. (1997), Applied numerical linear algebra, SIAM, ISBN 0-89871-389-7 .
  • Beezer, Robert A. (2006), A first course in linear algebra, Free online book under GNU licence, University of Puget Sound, <http://linear.ups.edu/> .
  • Lancaster, P. (1973), Matrix theory (in Russian), Moscow: Science Publishers .
  • Halmos, Paul R. (1987), Finite-dimensional vector spaces (8th ed.), New York, NY: Springer-Verlag, ISBN 0387900934 .
  • Pigolkina, T. S. and Shulman, V. S., Eigenvalue (in Russian), In:Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
  • Pigolkina, T. S. and Shulman, V. S., Eigenvector (in Russian), In:Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
  • Greub, Werner H. (1975), Linear Algebra (4th Edition), Springer-Verlag, New York, NY, ISBN 0-387-90110-8 .
  • Larson, Ron & Edwards, Bruce H. (2003), Elementary linear algebra (5th ed.), Houghton Mifflin Company, ISBN 0-618-33567-6 .
  • Curtis, Charles W., Linear Algebra: An Introductory Approach, 347 p., Springer; 4th ed. 1984. Corr. 7th printing edition (August 19, 1999), ISBN 0387909923.
  • Shores, Thomas S. (2007), Applied linear algebra and matrix analysis, Springer Science+Business Media, LLC, ISBN 0-387-33194-8 .
  • Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, Online e-book in various formats on arxiv.org, Bashkir State University, Ufa, arXiv:math/0405323v1, ISBN 5-7477-0099-5, <http://www.geocities.com/r-sharipov> .
  • Gohberg, Israel; Lancaster, Peter & Rodman, Leiba (2005), Indefinite linear algebra and applications, Basel-Boston-Berlin: Birkhäuser Verlag, ISBN 3-7643-7349-0 .


External links

Wikibooks Algebra has a page on the topic of Algebra/Linear Transformations
Wikibooks The Book of Mathematical Proofs also has a page on this topic
  • MIT Video Lecture on Eigenvalues and Eigenvectors at Google Video, from MIT OpenCourseWare
  • ARPACK is a collection of FORTRAN subroutines for solving large scale (sparse) eigenproblems.
  • IRBLEIGS has MATLAB code with similar capabilities to ARPACK. (See this paper for a comparison between IRBLEIGS and ARPACK.)
  • LAPACK is a collection of FORTRAN subroutines for solving dense linear algebra problems.
  • ALGLIB includes a partial port of the LAPACK to C++, C#, Delphi, etc.
  • Eigenvalue (of a matrix) on PlanetMath
  • MathWorld: Eigenvector
  • Online calculator for Eigenvalues and Eigenvectors
  • Online Matrix Calculator Calculates eigenvalues, eigenvectors and other decompositions of matrices online
  • Vanderplaats Research and Development - Provides the SMS eigenvalue solver for structural finite element analysis. The solver is included in the GENESIS program as well as other commercial programs. SMS can be easily used with MSC.Nastran or NX/Nastran via DMAPs.
  • What are Eigen Values? from PhysLink.com's "Ask the Experts"
  • Templates for the Solution of Algebraic Eigenvalue Problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst (a guide to the numerical solution of eigenvalue problems)



