Orthogonal matrix

In matrix theory, a real orthogonal matrix is a square matrix Q whose transpose is its inverse:

Q^T Q = Q Q^T = I .



Overview

An orthogonal matrix is the real specialization of a unitary matrix, and thus always a normal matrix. Although we consider only real matrices here, the definition can be used for matrices with entries from any field. However, orthogonal matrices arise naturally from inner products, and for matrices of complex numbers that leads instead to the unitary requirement.


To see the inner product connection, consider a vector v in an n-dimensional real inner product space. Written with respect to an orthonormal basis, the squared length of v is v^T v. If a linear transformation, in matrix form Qv, preserves vector lengths, then

\mathbf{v}^T \mathbf{v} = (Q\mathbf{v})^T (Q\mathbf{v}) = \mathbf{v}^T Q^T Q \mathbf{v} .

Thus finite-dimensional linear isometries (rotations, reflections, and their combinations) produce orthogonal matrices. The converse is also true: orthogonal matrices imply orthogonal transformations. However, linear algebra includes orthogonal transformations between spaces which may be neither finite-dimensional nor of the same dimension, and these have no orthogonal matrix equivalent.


Orthogonal matrices are important for a number of reasons, both theoretical and practical. The n×n orthogonal matrices form a group, the orthogonal group denoted O(n), which, together with its subgroups, is widely used in mathematics and the physical sciences. For example, the point group of a molecule is a subgroup of O(3). Because floating-point versions of orthogonal matrices have advantageous properties, they are key to many algorithms in numerical linear algebra, such as QR decomposition. As another example, with appropriate normalization the discrete cosine transform (used in MP3 compression) is represented by an orthogonal matrix.


Examples

Below are a few examples of small orthogonal matrices and possible interpretations; a numerical check of two of these examples follows the list.

  • \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} (identity transformation)
  • \begin{bmatrix} 0.96 & -0.28 \\ 0.28 & 0.96 \end{bmatrix} (rotation by 16.26°)
  • \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} (reflection across the x-axis)
  • \begin{bmatrix} 0 & -0.80 & -0.60 \\ 0.80 & -0.36 & 0.48 \\ 0.60 & 0.48 & -0.64 \end{bmatrix} (rotoinversion with axis (0, −3/5, 4/5), angle 90°)
  • \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} (permutation of coordinate axes)
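
A quick numerical check of the defining property for two of these examples; this is a sketch assuming numpy, with the matrices copied from the list above.

    import numpy as np

    # Rotation by 16.26 degrees (second example in the list)
    R = np.array([[0.96, -0.28],
                  [0.28,  0.96]])

    # Rotoinversion (fourth example in the list)
    M = np.array([[0.00, -0.80, -0.60],
                  [0.80, -0.36,  0.48],
                  [0.60,  0.48, -0.64]])

    for Q in (R, M):
        # Q is orthogonal exactly when Q^T Q is the identity
        print(np.allclose(Q.T @ Q, np.eye(Q.shape[0])))   # True, True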

Elementary constructions

Lower dimensions

The simplest orthogonal matrices are the 1×1 matrices [1] and [−1], which we can interpret as the identity and a reflection of the real line across the origin.


The 2×2 matrices have the form

\begin{bmatrix} p & t \\ q & u \end{bmatrix},

which orthogonality demands satisfy the three equations

1 = p^2 + q^2 ,
1 = t^2 + u^2 ,
0 = pt + qu .

In consideration of the first equation, without loss of generality let p = cos θ, q = sin θ; then either t = −q, u = p or t = q, u = −p. We can interpret the first case as a rotation by θ (where θ = 0 is the identity), and the second as a reflection across a line at an angle of θ/2.

\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} (rotation), \quad \begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix} (reflection)
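
As a small illustration, both families can be generated directly from the angle; a sketch assuming numpy, with the helper names rotation and reflection made up for this example.

    import numpy as np

    def rotation(theta):
        # [[cos, -sin], [sin, cos]]: rotation by theta
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def reflection(theta):
        # [[cos, sin], [sin, -cos]]: reflection across a line at angle theta/2
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, s], [s, -c]])

    theta = 0.7
    R, F = rotation(theta), reflection(theta)
    print(np.allclose(R.T @ R, np.eye(2)), round(np.linalg.det(R), 6))   # True  1.0
    print(np.allclose(F.T @ F, np.eye(2)), round(np.linalg.det(F), 6))   # True -1.0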

The reflection at 45° exchanges x and y; it is a permutation matrix, with a single 1 in each column and row (and otherwise 0):

\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.

The identity is also a permutation matrix.


A reflection is its own inverse, which implies that a reflection matrix is symmetric (equal to its transpose) as well as orthogonal. The product of two rotation matrices is a rotation matrix, and the product of two reflection matrices is also a rotation matrix.


Higher dimensions

Regardless of the dimension, it is always possible to classify orthogonal matrices as purely rotational or not, but for 3×3 matrices and larger the non-rotational matrices can be more complicated than reflections. For example,

\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}

represent an inversion through the origin and a rotoinversion about the z-axis.


Rotations also become more complicated; they can no longer be completely characterized by an angle, and may affect more than one planar subspace. While it is common to describe a 3×3 rotation matrix in terms of an axis and angle, the existence of an axis is an accidental property of this dimension that applies in no other.


However, we have elementary building blocks for permutations, reflections, and rotations that apply in general.


Primitives

The most elementary permutation is a transposition, obtained from the identity matrix by exchanging two rows. Any n×n permutation matrix can be constructed as a product of at most n−1 transpositions.


A Householder reflection is constructed from a non-null vector v as

Q = I - 2 \frac{\mathbf{v}\mathbf{v}^T}{\mathbf{v}^T\mathbf{v}} .

Here the numerator is a symmetric matrix while the denominator is a number, the squared magnitude of v. This is a reflection in the hyperplane perpendicular to v (negating any vector component parallel to v). If v is a unit vector, then Q = I − 2vv^T suffices. A Householder reflection is typically used to simultaneously zero the lower part of a column. Any orthogonal matrix of size n×n can be constructed as a product of at most n such reflections.
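
A minimal sketch of this use, assuming numpy; the function name and the sample vector are made up for illustration. The reflector maps the chosen vector onto a multiple of the first standard basis vector, zeroing every entry below the first.

    import numpy as np

    def householder_zero_below(x):
        # Q = I - 2 v v^T / (v^T v), with v chosen so that Q x = (-sign(x[0]) ||x||, 0, ..., 0)
        v = x.astype(float).copy()
        v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
        return np.eye(len(x)) - 2.0 * np.outer(v, v) / (v @ v)

    x = np.array([3.0, 1.0, 4.0, 1.0])
    Q = householder_zero_below(x)
    print(np.round(Q @ x, 10))               # [-||x||, 0, 0, 0]
    print(np.allclose(Q.T @ Q, np.eye(4)))   # True: Q is orthogonal (and symmetric)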


A Givens rotation acts on a two-dimensional (planar) subspace spanned by two coordinate axes, rotating by a chosen angle. It is typically used to zero a single subdiagonal entry. Any rotation matrix of size n×n can be constructed as a product of at most n(n−1)/2 such rotations. In the case of 3×3 matrices, three such rotations suffice; and by fixing the sequence we can thus describe all 3×3 rotation matrices (though not uniquely) in terms of the three angles used, often called Euler angles.
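
A sketch of zeroing a single subdiagonal entry this way, assuming numpy; the indices and the sample matrix are arbitrary.

    import numpy as np

    def givens(a, b):
        # Returns (c, s) with c = cos(theta), s = sin(theta) such that -s*a + c*b = 0
        r = np.hypot(a, b)
        return (1.0, 0.0) if r == 0 else (a / r, b / r)

    A = np.array([[4.0, 1.0],
                  [3.0, 2.0]])
    c, s = givens(A[0, 0], A[1, 0])
    G = np.array([[ c, s],
                  [-s, c]])              # orthogonal: G.T @ G == I
    print(np.round(G @ A, 10))           # entry (1, 0) is zeroed; entry (0, 0) becomes hypot(4, 3) = 5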


A Jacobi rotation has the same form as a Givens rotation, but is used as a similarity transformation chosen to zero both off-diagonal entries of a 2×2 symmetric submatrix.


Properties

Matrix properties

A real square matrix is orthogonal if and only if its columns form an orthonormal basis of the Euclidean space R^n with the ordinary Euclidean dot product, which is the case if and only if its rows form an orthonormal basis of R^n. It might be tempting to suppose a matrix with orthogonal (not orthonormal) columns would be called an orthogonal matrix, but such matrices have no special interest and no special name; they only satisfy M^T M = D, with D a diagonal matrix.


The determinant of any orthogonal matrix is +1 or −1. This follows from basic facts about determinants, as follows:

1 = \det(I) = \det(Q^T Q) = \det(Q^T)\det(Q) = (\det Q)^2 .

The converse is not true; having a determinant of +1 is no guarantee of orthogonality, even with orthogonal columns, as shown by the following counterexample.

\begin{bmatrix} 2 & 0 \\ 0 & \frac{1}{2} \end{bmatrix}

With permutation matrices the determinant matches the signature, being +1 or −1 as the parity of the permutation is even or odd, for the determinant is an alternating function of the rows.


Stronger than the determinant restriction is the fact that an orthogonal matrix can always be diagonalized over the complex numbers to exhibit a full set of eigenvalues, all of which must have (complex) absolute value 1.


Group properties

The inverse of every orthogonal matrix is again orthogonal, as is the matrix product of two orthogonal matrices. In fact, the set of all n×n orthogonal matrices satisfies all the axioms of a group. It is a compact Lie group of dimension n(n−1)/2, called the orthogonal group and denoted by O(n).


The orthogonal matrices whose determinant is +1 form a path-connected normal subgroup of O(n) of index 2, the special orthogonal group SO(n) of rotations. The quotient group O(n)/SO(n) is isomorphic to O(1), with the projection map choosing [+1] or [−1] according to the determinant. Orthogonal matrices with determinant −1 do not include the identity, and so do not form a subgroup but only a coset; it is also (separately) connected. Thus each orthogonal group falls into two pieces; and because the projection map splits, O(n) is a semidirect product of SO(n) by O(1). In practical terms, a comparable statement is that any orthogonal matrix can be produced by taking a rotation matrix and possibly negating one of its columns, as we saw with 2×2 matrices. If n is odd, then the semidirect product is in fact a direct product, and any orthogonal matrix can be produced by taking a rotation matrix and possibly negating all of its columns.


Now consider (n+1)×(n+1) orthogonal matrices with bottom right entry equal to 1. The remainder of the last column (and last row) must be zeros, and the product of any two such matrices has the same form. The rest of the matrix is an n×n orthogonal matrix; thus O(n) is a subgroup of O(n+1) (and of all higher groups).

\begin{bmatrix} & & & 0 \\ & O(n) & & \vdots \\ & & & 0 \\ 0 & \cdots & 0 & 1 \end{bmatrix}

Since an elementary reflection in the form of a Householder matrix can reduce any orthogonal matrix to this constrained form, a series of such reflections can bring any orthogonal matrix to the identity; thus an orthogonal group is a reflection group. The last column can be fixed to any unit vector, and each choice gives a different copy of O(n) in O(n+1); in this way O(n+1) is a bundle over the unit sphere S^n with fiber O(n).


Similarly, SO(n) is a subgroup of SO(n+1); and any special orthogonal matrix can be generated by Givens plane rotations using an analogous procedure. The bundle structure persists: SO(n) ↪ SO(n+1) → S^n. A single rotation can produce a zero in the first row of the last column, and a series of n−1 rotations will zero all but the last row of the last column of an n×n rotation matrix. Since the planes are fixed, each rotation has only one degree of freedom, its angle. By induction, SO(n) therefore has

(n-1) + (n-2) + \cdots + 1 = n(n-1)/2

degrees of freedom, and so does O(n).


Permutation matrices are simpler still; they form, not a Lie group, but only a finite group, the order n! symmetric group S_n. By the same kind of argument, S_n is a subgroup of S_{n+1}. The even permutations produce the subgroup of permutation matrices of determinant +1, the order n!/2 alternating group.


Canonical form

More broadly, the effect of any orthogonal matrix separates into independent actions on orthogonal two-dimensional subspaces. That is, if Q is special orthogonal then one can always find an orthogonal matrix P, a (rotational) change of basis, that brings Q into block diagonal form:

P^{T}QP = \begin{bmatrix} R_1 & & \\ & \ddots & \\ & & R_k \end{bmatrix} (n even), \quad P^{T}QP = \begin{bmatrix} R_1 & & & \\ & \ddots & & \\ & & R_k & \\ & & & 1 \end{bmatrix} (n odd),

where the matrices R_1, ..., R_k are 2×2 rotation matrices, with the remaining entries zero. Exceptionally, a rotation block may be diagonal, ±I. Thus, negating one column if necessary, and noting that a 2×2 reflection diagonalizes to a +1 and a −1, any orthogonal matrix can be brought to the form

P^{T}QP = \begin{bmatrix} \begin{matrix} R_1 & & \\ & \ddots & \\ & & R_k \end{matrix} & 0 \\ 0 & \begin{matrix} \pm 1 & & \\ & \ddots & \\ & & \pm 1 \end{matrix} \end{bmatrix} .

The matrices R_1, ..., R_k give conjugate pairs of eigenvalues lying on the unit circle in the complex plane; so this decomposition confirms that all eigenvalues have absolute value 1. If n is odd, there is at least one real eigenvalue, +1 or −1; for a 3×3 rotation, the eigenvector associated with +1 is the rotation axis.
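
This block-diagonal form can be observed numerically. The sketch below assumes numpy and scipy, and the particular construction is arbitrary: build a 5×5 special orthogonal matrix from two rotation blocks and a trailing 1, scramble it by a random orthogonal change of basis, and recover the block structure with the real Schur decomposition, which for an orthogonal (hence normal) input is block diagonal up to rounding.

    import numpy as np
    from scipy.linalg import block_diag, qr, schur

    def rot(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    rng = np.random.default_rng(0)
    Q = block_diag(rot(0.4), rot(1.3), [[1.0]])   # canonical form: R1, R2, and a 1
    P, _ = qr(rng.standard_normal((5, 5)))        # random orthogonal change of basis
    A = P @ Q @ P.T                               # the same transformation in a scrambled basis

    T, Z = schur(A, output='real')                # T is again block diagonal, Z orthogonal
    print(np.round(T, 6))
    print(np.allclose(Z @ T @ Z.T, A))            # True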


Lie algebra

Suppose the entries of Q are differentiable functions of t, and that t = 0 gives Q = I. Differentiating the orthogonality condition

Q^T Q = I

yields

\dot{Q}^T Q + Q^T \dot{Q} = 0 .

Evaluation at t = 0 (Q = I) then implies

\dot{Q}^T = -\dot{Q} .

In Lie group terms, this means that the Lie algebra of an orthogonal matrix group consists of skew-symmetric matrices. Going the other direction, the matrix exponential of any skew-symmetric matrix is an orthogonal matrix (in fact, special orthogonal).
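
A quick numerical check of this correspondence; a sketch assuming numpy and scipy, with an arbitrary skew-symmetric matrix.

    import numpy as np
    from scipy.linalg import expm

    # A^T = -A  (skew-symmetric)
    A = np.array([[ 0.0, -0.5,  0.2],
                  [ 0.5,  0.0, -0.3],
                  [-0.2,  0.3,  0.0]])

    Q = expm(A)                                    # matrix exponential
    print(np.allclose(Q.T @ Q, np.eye(3)))         # True: Q is orthogonal
    print(round(np.linalg.det(Q), 6))              # 1.0: in fact special orthogonal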


For example, the three-dimensional object physics calls angular velocity is a differential rotation, thus a vector in the Lie algebra \mathfrak{so}(3) tangent to SO(3). Given ω = (xθ, yθ, zθ), with v = (x, y, z) a unit vector, the correct skew-symmetric matrix form of ω is

\Omega = \begin{bmatrix} 0 & -z\theta & y\theta \\ z\theta & 0 & -x\theta \\ -y\theta & x\theta & 0 \end{bmatrix} .

The exponential of this is the orthogonal matrix for rotation around axis v by angle θ; setting c = cos θ/2, s = sin θ/2,

\exp(\Omega) = \begin{bmatrix} 1 - 2s^2 + 2x^2 s^2 & 2xy s^2 - 2z sc & 2xz s^2 + 2y sc \\ 2xy s^2 + 2z sc & 1 - 2s^2 + 2y^2 s^2 & 2yz s^2 - 2x sc \\ 2xz s^2 - 2y sc & 2yz s^2 + 2x sc & 1 - 2s^2 + 2z^2 s^2 \end{bmatrix} .

Numerical linear algebra

Benefits

Numerical analysis takes advantage of many of the properties of orthogonal matrices for numerical linear algebra, and they arise naturally. For example, it is often desirable to compute an orthonormal basis for a space, or an orthogonal change of basis; both take the form of orthogonal matrices. Having determinant ±1 and all eigenvalues of magnitude 1 is of great benefit for numerical stability. One implication is that the condition number is 1 (which is the minimum), so errors are not magnified when multiplying with an orthogonal matrix. Many algorithms use orthogonal matrices like Householder reflections and Givens rotations for this reason. It is also helpful that, not only is an orthogonal matrix invertible, but its inverse is available essentially free, by exchanging indices.
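
Both benefits (condition number 1, inverse available by transposition) are easy to observe; a sketch assuming numpy, with the orthogonal matrix taken from a QR factorization of random data.

    import numpy as np

    rng = np.random.default_rng(1)
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))    # Q is orthogonal

    print(round(np.linalg.cond(Q), 6))                  # 1.0, the minimum possible
    print(np.allclose(np.linalg.inv(Q), Q.T))           # True: the inverse is just the transpose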


Permutations are essential to the success of many algorithms, including the workhorse Gaussian elimination with partial pivoting (where permutations do the pivoting). However, they rarely appear explicitly as matrices; their special form allows more efficient representation, such as a list of n indices.


Likewise, algorithms using Householder and Givens matrices typically use specialized methods of multiplication and storage. For example, a Givens rotation affects only two rows of a matrix it multiplies, changing a full multiplication of order n³ to a much more efficient order n. When uses of these reflections and rotations introduce zeros in a matrix, the space vacated is enough to store sufficient data to reproduce the transform, and to do so robustly. (Following Stewart (1976), we do not store a rotation angle, which is both expensive and badly behaved.)
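
For instance, a Householder reflection never needs the full reflector formed explicitly: applying it to a matrix is a rank-one update. A sketch assuming numpy; the vector and matrix are arbitrary.

    import numpy as np

    rng = np.random.default_rng(2)
    v = rng.standard_normal(5)          # defines Q = I - 2 v v^T / (v^T v)
    A = rng.standard_normal((5, 3))

    # Explicit reflector, for comparison only
    Q = np.eye(5) - 2.0 * np.outer(v, v) / (v @ v)

    # Implicit application of Q @ A as a rank-one update; no 5x5 matrix is formed
    QA = A - (2.0 / (v @ v)) * np.outer(v, v @ A)

    print(np.allclose(QA, Q @ A))       # True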


Decompositions

A number of important matrix decompositions (Golub & Van Loan, 1996) involve orthogonal matrices, including especially the following (a brief numerical sketch appears after the list):

QR decomposition
M = QR, Q orthogonal, R upper triangular
Singular value decomposition
M = UΣV^T, U and V orthogonal, Σ non-negative diagonal
Spectral decomposition
S = QΛQ^T, S symmetric, Q orthogonal, Λ diagonal.
Polar decomposition
M = QS, Q orthogonal, S symmetric non-negative definite
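
The sketch below touches each of the four factorizations listed above; it assumes numpy and scipy, and the input matrices are random.

    import numpy as np
    from scipy.linalg import polar

    rng = np.random.default_rng(3)
    M = rng.standard_normal((4, 4))
    S = M + M.T                              # a symmetric matrix for the spectral case

    Q, R = np.linalg.qr(M)                   # QR: Q orthogonal, R upper triangular
    U, sv, Vt = np.linalg.svd(M)             # SVD: U, V orthogonal, sv the singular values
    w, V = np.linalg.eigh(S)                 # spectral: S = V diag(w) V^T
    Qp, P = polar(M)                         # polar: Qp orthogonal, P symmetric non-negative definite

    print(np.allclose(Q @ R, M),
          np.allclose(U @ np.diag(sv) @ Vt, M),
          np.allclose(V @ np.diag(w) @ V.T, S),
          np.allclose(Qp @ P, M))            # True True True True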


Examples

Consider an overdetermined system of linear equations, as might occur with repeated measurements of a physical phenomenon to compensate for experimental errors. Write Ax = b, where A is m×n, m > n. A QR decomposition reduces A to upper triangular R. For example, if A is 5×3 then R has the form

R = \begin{bmatrix} \star & \star & \star \\ 0 & \star & \star \\ 0 & 0 & \star \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

The linear least squares problem is to find the x that minimizes ‖Ax − b‖, which is equivalent to projecting b to the subspace spanned by the columns of A. (Think of a pigeon flying over a parking lot at noon; its shadow hits the nearest point on the ground.) Assuming the columns of A (and hence R) are independent, the projection solution is found from A^T Ax = A^T b. Now A^T A is square (n×n) and invertible, and also equal to R^T R. But the lower rows of zeros in R are superfluous in the product, which is thus already in lower-triangular upper-triangular factored form, as in Gaussian elimination (Cholesky decomposition). Here orthogonality is important not only for reducing A^T A = (R^T Q^T)QR to R^T R, but also for allowing solution without magnifying numerical problems.
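
A sketch of this route, assuming numpy and scipy and random data: the thin QR factorization reduces the least squares problem to a triangular solve with R, without ever forming A^T A.

    import numpy as np
    from scipy.linalg import solve_triangular

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 3))          # overdetermined: 5 equations, 3 unknowns
    b = rng.standard_normal(5)

    Q, R = np.linalg.qr(A)                   # thin QR: Q is 5x3 with orthonormal columns
    x = solve_triangular(R, Q.T @ b)         # solve R x = Q^T b

    print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True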


In the case of a linear system which is underdetermined, or an otherwise non-invertible matrix, singular value decomposition (SVD) is equally useful. With A factored as UΣV^T, a satisfactory solution uses the Moore-Penrose pseudoinverse, VΣ^+U^T, where Σ^+ merely replaces each non-zero diagonal entry with its reciprocal. Set x to VΣ^+U^T b.
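
A sketch of the pseudoinverse solution, assuming numpy; the rank-deficient matrix and right-hand side are made up for illustration.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0],     # rank 1: the second column is twice the first
                  [0.0, 0.0]])
    b = np.array([1.0, 2.0, 3.0])

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_plus = np.array([1.0 / sv if sv > 1e-12 else 0.0 for sv in s])   # reciprocals of the non-zero entries
    x = Vt.T @ (s_plus * (U.T @ b))

    print(np.allclose(x, np.linalg.pinv(A) @ b))    # True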


The case of a square invertible matrix also holds interest. Suppose, for example, that A is a 3×3 rotation matrix which has been computed as the composition of numerous twists and turns. Floating point does not match the mathematical ideal of real numbers, so A has gradually lost its true orthogonality. A Gram-Schmidt process could orthogonalize the columns, but it is not the most reliable, nor the most efficient, nor the most invariant method. The polar decomposition factors a matrix into a pair, one of which is the unique closest orthogonal matrix to the given matrix, or one of the closest if the given matrix is singular. (Closeness can be measured by any matrix norm invariant under an orthogonal change of basis, such as the spectral norm or the Frobenius norm.) For a near-orthogonal matrix, rapid convergence to the orthogonal factor can be achieved by a "Newton's method" approach due to Higham (1986) (1990), repeatedly averaging the matrix with its inverse transpose. Dubrulle (1994) has published an accelerated method with a convenient convergence test.


For example, consider a (very!) non-orthogonal matrix for which the simple averaging algorithm takes seven steps

\begin{bmatrix} 3 & 1 \\ 7 & 5 \end{bmatrix} \rightarrow \begin{bmatrix} 1.8125 & 0.0625 \\ 3.4375 & 2.6875 \end{bmatrix} \rightarrow \cdots \rightarrow \begin{bmatrix} 0.8 & -0.6 \\ 0.6 & 0.8 \end{bmatrix}

and which acceleration trims to two steps (with γ = 0.353553, 0.565685).

\begin{bmatrix} 3 & 1 \\ 7 & 5 \end{bmatrix} \rightarrow \begin{bmatrix} 1.41421 & -1.06066 \\ 1.06066 & 1.41421 \end{bmatrix} \rightarrow \begin{bmatrix} 0.8 & -0.6 \\ 0.6 & 0.8 \end{bmatrix}

Gram-Schmidt yields an inferior solution, shown by a Frobenius distance of 8.28659 instead of the minimum 8.12404.

\begin{bmatrix} 3 & 1 \\ 7 & 5 \end{bmatrix} \rightarrow \begin{bmatrix} 0.393919 & -0.919145 \\ 0.919145 & 0.393919 \end{bmatrix}
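
The simple averaging iteration above is only a few lines of code. The sketch below assumes numpy and reproduces the 2×2 example, converging to the orthogonal polar factor shown.

    import numpy as np

    def nearest_orthogonal(M, tol=1e-12, max_iter=100):
        # Repeatedly average the matrix with its inverse transpose (Higham-style iteration)
        Q = np.array(M, dtype=float)
        for _ in range(max_iter):
            Q_next = 0.5 * (Q + np.linalg.inv(Q).T)
            if np.linalg.norm(Q_next - Q, 'fro') < tol:
                return Q_next
            Q = Q_next
        return Q

    M = np.array([[3.0, 1.0],
                  [7.0, 5.0]])
    print(np.round(nearest_orthogonal(M), 6))   # [[ 0.8 -0.6] [ 0.6  0.8]]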

Randomization

Some numerical applications, such as Monte Carlo methods and exploration of high-dimensional data spaces, require generation of uniformly distributed random orthogonal matrices. In this context, "uniform" is defined in terms of Haar measure, which essentially requires that the distribution not change if multiplied by any freely chosen orthogonal matrix. It does not work to fill a matrix with independent uniformly distributed random entries and then orthogonalize it. It does work to fill it with independent normally distributed random entries, then use QR decomposition. Stewart (1980) replaced this with a more efficient idea that Diaconis & Shahshahani (1987) later generalized as the "subgroup algorithm" (in which form it works just as well for permutations and rotations). To generate an (n+1)×(n+1) orthogonal matrix, take an n×n one and a uniformly distributed unit vector of dimension n+1. Construct a Householder reflection from the vector, then apply it to the smaller matrix (embedded in the larger size with a 1 in the bottom corner).
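
A common way to realize the "normally distributed entries, then QR" recipe in code; a sketch assuming numpy. The sign adjustment using the diagonal of R is a standard device to keep the distribution Haar-uniform rather than dependent on the QR routine's sign conventions.

    import numpy as np

    def random_orthogonal(n, rng):
        # Fill with independent standard normal entries, then orthogonalize via QR
        Z = rng.standard_normal((n, n))
        Q, R = np.linalg.qr(Z)
        return Q * np.sign(np.diag(R))   # flip column signs to fix the convention

    rng = np.random.default_rng(5)
    Q = random_orthogonal(4, rng)
    print(np.allclose(Q.T @ Q, np.eye(4)))   # True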


Spin and Pin

A subtle technical problem afflicts some uses of orthogonal matrices. Not only are the group components with determinant +1 and −1 not connected to each other, even the +1 component, SO(n), is not simply connected (except for SO(1), which is trivial). Thus it is sometimes advantageous, or even necessary, to work with a covering group of SO(n), the spin group, Spin(n). Likewise, O(n) has covering groups, the pin groups, Pin(n). For n > 2, Spin(n) is simply connected, and thus the universal covering group for SO(n). By far the most famous example of a spin group is Spin(3), often seen in the form of unit quaternions or Pauli spin matrices.


In peculiarly Ouroboros fashion, the Pin and Spin groups are found within Clifford algebras, which themselves can be built from orthogonal matrices.


Rectangular matrices

If Q is a rectangular matrix, then the conditions Q^T Q = I and QQ^T = I are not equivalent. The condition Q^T Q = I says that the columns of Q are orthonormal. This can only happen if Q is an m×n matrix with n ≤ m. Similarly, QQ^T = I says that the rows of Q are orthonormal, which requires n ≥ m.
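
A sketch of the distinction, assuming numpy, with the tall matrix taken from a thin QR factorization of random data.

    import numpy as np

    rng = np.random.default_rng(6)
    Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # 5x3 with orthonormal columns

    print(np.allclose(Q.T @ Q, np.eye(3)))   # True:  the columns are orthonormal
    print(np.allclose(Q @ Q.T, np.eye(5)))   # False: the rows cannot be orthonormal when n < m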


There is no standard terminology for these matrices. They are sometimes called "orthonormal matrices", sometimes "orthogonal matrices", and sometimes simply "matrices with orthonormal rows/columns".


See also

  • Orthogonal group
  • Coordinate rotation
  • Unitary matrix
  • Symplectic matrix
  • Skew-symmetric matrix

References

  • Persi Diaconis, Mehrdad Shahshahani. The subgroup algorithm for generating uniform random variables. Prob. in Eng. and Info. Sci., vol. 1, 15–32, 1987. ISSN 0269-9648.
  • Augustine A. Dubrulle. Frobenius Iteration for the Matrix Polar Decomposition. HP Labs Technical Report HPL-94-117, December 16, 1994.
  • Gene H. Golub, Charles F. Van Loan. Matrix Computations, 3/e. Johns Hopkins University Press, Baltimore, 1996. ISBN 978-0-8018-5414-9.
  • Nicholas Higham. Computing the Polar Decomposition—with Applications. SIAM J. Sci. Stat. Comput., 7(4):1160–1174, 1986. ISSN 0196-5204.
  • Nicholas Higham, Robert Schreiber. Fast polar decomposition of an arbitrary matrix. SIAM J. Sci. Stat. Comput., 11(4):648–655, July 1990. ISSN 0196-5204.
  • G. W. Stewart. The Economical Storage of Plane Rotations. Numerische Mathematik, 25(2):137–138, 1976. ISSN 0029-599X.
  • G. W. Stewart. The Efficient Generation of Random Orthogonal Matrices with an Application to Condition Estimation. SIAM J. Numer. Anal., 17(3):403–409, 1980. ISSN 0036-1429.
