# Determinant

In algebra, a determinant is a function depending on n that associates a scalar, det(A), to every n×n square matrix A. The fundamental geometric meaning of a determinant is as the scale factor for volume when A is regarded as a linear transformation. Determinants are important both in calculus, where they enter the substitution rule for several variables, and in multilinear algebra.

For a fixed positive integer n, there is a unique determinant function for the n×n matrices over any commutative ring R. In particular, this function exists when R is the field of real or complex numbers.

The determinant of a matrix A is also sometimes denoted by |A|. This notation can be ambiguous since it is also used for certain matrix norms and for the absolute value. However, the matrix norm is often denoted with double vertical bars (e.g., ‖A‖) and may carry a subscript as well. Thus, the vertical bar notation for the determinant is frequently used (e.g., in Cramer's rule and for minors). For example, for the matrix

$A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix},$

the determinant det(A) might be indicated by |A| or more explicitly as

$|A| = \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}.$

That is, the square brackets around the matrix are replaced with elongated vertical bars.

## Determinants of 2-by-2 matrices

The 2×2 matrix

$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$

has determinant

$\det(A) = ad - bc.$

The interpretation when the matrix has real number entries is that this gives the oriented area of the parallelogram with vertices at (0,0), (a,b), (a + c, b + d), and (c,d). The oriented area is the same as the usual area, except that it is negative when the vertices are listed in clockwise order.
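The 2×2 formula and its oriented-area interpretation can be sketched in a few lines of Python (the function name is illustrative, not from any library):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Oriented area of the parallelogram spanned by the rows (a, b) and (c, d):
# positive for a counter-clockwise pair of vectors, negative for clockwise.
print(det2(2, 0, 0, 3))  # 6: rectangle with sides 2 and 3
print(det2(0, 3, 2, 0))  # -6: same rectangle, rows in the opposite order
```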

A formula for larger matrices will be given below.

## Determinants of 3-by-3 matrices

The 3×3 matrix

$A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}.$

Using the cofactor expansion along the first row of the matrix, we get:

$\begin{align} \det(A) &= a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} \\ &= aei - afh - bdi + cdh + bfg - ceg \\ &= (aei + bfg + cdh) - (gec + hfa + idb), \end{align}$

which can be remembered as the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when copies of the first two columns of the matrix are written beside it as below:

$\begin{matrix} \color{blue}a & \color{blue}b & \color{blue}c & a & b \\ d & \color{blue}e & \color{blue}f & \color{blue}d & e \\ g & h & \color{blue}i & \color{blue}g & \color{blue}h \end{matrix} \quad - \quad \begin{matrix} a & b & \color{red}c & \color{red}a & \color{red}b \\ d & \color{red}e & \color{red}f & \color{red}d & e \\ \color{red}g & \color{red}h & \color{red}i & g & h \end{matrix}$

Note that this mnemonic does not carry over into higher dimensions.
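The diagonal rule translates directly into code; a minimal sketch (the function name is illustrative):

```python
def det3_sarrus(m):
    """3x3 determinant via the diagonal (Sarrus) rule."""
    (a, b, c), (d, e, f), (g, h, i) = m
    # Three NW-SE diagonals minus three SW-NE diagonals.
    return (a*e*i + b*f*g + c*d*h) - (g*e*c + h*f*a + i*d*b)

print(det3_sarrus([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # 18
```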

## Applications

Determinants are used to characterize invertible matrices (namely, exactly those matrices with non-zero determinant), and to describe explicitly the solution to a system of linear equations with Cramer's rule. They can be used to find the eigenvalues of the matrix A through the characteristic polynomial

$p(x) = \det(xI - A),$

where I is the identity matrix of the same dimension as A.
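As a numeric illustration (using NumPy; the function name is an assumption for this sketch), the characteristic polynomial can be evaluated straight from the definition, and it vanishes exactly at the eigenvalues:

```python
import numpy as np

def char_poly_at(A, x):
    """Evaluate p(x) = det(x*I - A) directly from the definition."""
    n = A.shape[0]
    return np.linalg.det(x * np.eye(n) - A)

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1 and 3
for lam in (1.0, 3.0):
    print(abs(char_poly_at(A, lam)) < 1e-9)  # True: eigenvalues are roots of p
```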

One often thinks of the determinant as assigning a number to every sequence of n vectors in $\Bbb{R}^n$, by using the square matrix whose columns are the given vectors. With this understanding, the sign of the determinant of a basis can be used to define the notion of orientation in Euclidean spaces. The determinant of a set of vectors is positive if the vectors form a right-handed coordinate system, and negative if left-handed.

Determinants are used to calculate volumes in vector calculus: the absolute value of the determinant of real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if the linear map $f: \Bbb{R}^n \rightarrow \Bbb{R}^n$ is represented by the matrix A, and S is any measurable subset of $\Bbb{R}^n$, then the volume of f(S) is given by $\left| \det(A) \right| \times \operatorname{volume}(S)$. More generally, if the linear map $f: \Bbb{R}^n \rightarrow \Bbb{R}^m$ is represented by the m-by-n matrix A, and S is any measurable subset of $\Bbb{R}^n$, then the n-dimensional volume of f(S) is given by $\sqrt{\det(A^\mathrm{T} A)} \times \operatorname{volume}(S)$. By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify skew lines.

The volume of any tetrahedron, given its vertices a, b, c, and d, is (1/6)·|det(a − b, b − c, c − d)|, or any other combination of pairs of vertices that form a simply connected graph.
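A numeric sketch with NumPy, using the equivalent edge-vectors-at-one-vertex form of the same 1/6·|det| formula:

```python
import numpy as np

# Vertices of a tetrahedron (d is placed at the origin for simplicity).
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])
c = np.array([0.0, 0.0, 3.0])
d = np.array([0.0, 0.0, 0.0])

# Volume of the parallelepiped spanned by the three edge vectors at d ...
par = abs(np.linalg.det(np.column_stack([a - d, b - d, c - d])))
# ... and the tetrahedron is one sixth of it.
tet = par / 6.0
print(round(par, 9), round(tet, 9))  # 6.0 1.0
```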

## General definition and computation

The definition of the determinant comes from the following theorem.

Theorem. Let Mn(K) denote the set of all $n \times n$ matrices over the field K. There exists exactly one function

$F : M_n(K) \longrightarrow K$

with the two properties:

• F is alternating multilinear with regard to columns;
• F(I) = 1, where I is the identity matrix.

One can then define the determinant as the unique function with the above properties.

In proving the above theorem, one also obtains the Leibniz formula:

$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n A_{i,\sigma(i)}.$

Here the sum is computed over all permutations σ of the numbers {1, 2, ..., n}, and sgn(σ) denotes the signature of the permutation σ: +1 if σ is an even permutation and −1 if it is odd.

This formula contains n! (factorial) summands, and it is therefore impractical to use it to calculate determinants for large n.
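The Leibniz formula translates almost literally into code; a sketch using the standard library (practical only for small n, since it sums n! terms):

```python
from itertools import permutations

def sgn(sigma):
    """Signature of a permutation (tuple of 0-based indices): +1 even, -1 odd."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:  # count inversions by flipping the sign
                s = -s
    return s

def det_leibniz(A):
    """Determinant straight from the Leibniz formula (O(n * n!) work)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

print(det_leibniz([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # 18
```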

For small matrices, one obtains the following formulas:

• if A is a 1-by-1 matrix, then $\det(A) = A_{1,1}.$
• if A is a 2-by-2 matrix, then $\det(A) = A_{1,1}A_{2,2} - A_{2,1}A_{1,2}.$
• for a 3-by-3 matrix A, the formula is more complicated: $\det(A) = A_{1,1}A_{2,2}A_{3,3} + A_{1,3}A_{2,1}A_{3,2} + A_{1,2}A_{2,3}A_{3,1} - A_{1,3}A_{2,2}A_{3,1} - A_{1,1}A_{2,3}A_{3,2} - A_{1,2}A_{2,1}A_{3,3}.$

which takes the shape of the Sarrus' scheme.

In general, determinants can be computed by Gaussian elimination, using the following rules:

• If A is a triangular matrix, i.e. $A_{i,j} = 0$ whenever i > j or, alternatively, whenever i < j, then $\det(A) = A_{1,1} A_{2,2} \cdots A_{n,n}$ (the product of the diagonal entries of A).
• If B results from A by exchanging two rows or columns, then $\det(B) = -\det(A).$
• If B results from A by multiplying one row or column by the number c, then $\det(B) = c\,\det(A).$
• If B results from A by adding a multiple of one row to another row, or a multiple of one column to another column, then $\det(B) = \det(A).$

Explicitly, starting out with some matrix, use the last three rules to convert it into a triangular matrix, then use the first rule to compute its determinant.
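That procedure can be sketched in Python (partial pivoting is added for numerical stability; each row swap flips the sign per the second rule):

```python
def det_gauss(A):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in A]          # work on a copy
    n = len(M)
    det = 1.0
    for k in range(n):
        # Pick the largest pivot in column k to improve numerical stability.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0:
            return 0.0                 # whole column is zero: singular matrix
        if p != k:
            M[k], M[p] = M[p], M[k]
            det = -det                 # a row swap flips the sign
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j] # adding a row multiple changes nothing
        det *= M[k][k]                 # accumulate the diagonal product
    return det

print(det_gauss([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # 18.0
```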

It is also possible to expand a determinant along a row or column using Laplace's formula, which is efficient for relatively small matrices. To do this along row i, say, we write

$\det(A) = \sum_{j=1}^n A_{i,j}C_{i,j} = \sum_{j=1}^n A_{i,j} (-1)^{i+j} M_{i,j},$

where the Ci,j are the cofactors, i.e. Ci,j is $(-1)^{i+j}$ times the minor Mi,j, which is the determinant of the matrix that results from A by removing the i-th row and the j-th column.
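A recursive sketch of Laplace expansion along the first row (illustrative only; the recursion does n! work, like the Leibniz formula):

```python
def det_laplace(A):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # (-1)**j is the sign (-1)^(1+j) in 1-based indexing.
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

print(det_laplace([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # 18
```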

## Example

Suppose we want to compute the determinant of

$A = \begin{bmatrix} -2 & 2 & -3 \\ -1 & 1 & 3 \\ 2 & 0 & -1 \end{bmatrix}.$

We can go ahead and use the Leibniz formula directly:

$\begin{align} \det(A) &= (-2 \cdot 1 \cdot (-1)) + (-3 \cdot (-1) \cdot 0) + (2 \cdot 3 \cdot 2) - (-3 \cdot 1 \cdot 2) - (-2 \cdot 3 \cdot 0) - (2 \cdot (-1) \cdot (-1)) \\ &= 2 + 0 + 12 - (-6) - 0 - 2 = 18. \end{align}$

Alternatively, we can use Laplace's formula to expand the determinant along a row or column. It is best to choose a row or column with many zeros, so we will expand along the second column:

$\begin{align} \det(A) &= (-1)^{1+2} \cdot 2 \cdot \det \begin{bmatrix} -1 & 3 \\ 2 & -1 \end{bmatrix} + (-1)^{2+2} \cdot 1 \cdot \det \begin{bmatrix} -2 & -3 \\ 2 & -1 \end{bmatrix} \\ &= (-2) \cdot ((-1) \cdot (-1) - 2 \cdot 3) + 1 \cdot ((-2) \cdot (-1) - 2 \cdot (-3)) \\ &= (-2)(-5) + 8 = 18. \end{align}$

A third way (and the method of choice for larger matrices) would involve the Gauss algorithm. When doing computations by hand, one can often shorten things dramatically by cleverly adding multiples of columns or rows to other columns or rows; this does not change the value of the determinant, but may create zero entries which simplify the subsequent calculations. In this example, adding the second column to the first one is especially useful:

$\begin{bmatrix} 0 & 2 & -3 \\ 0 & 1 & 3 \\ 2 & 0 & -1 \end{bmatrix}$

and this determinant can be quickly expanded along the first column:

$\det(A) = (-1)^{3+1} \cdot 2 \cdot \det \begin{bmatrix} 2 & -3 \\ 1 & 3 \end{bmatrix} = 2 \cdot (2 \cdot 3 - 1 \cdot (-3)) = 2 \cdot 9 = 18.$
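The hand computation can be checked against a library routine (NumPy shown here as one option):

```python
import numpy as np

A = np.array([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]], dtype=float)
# np.linalg.det works in floating point, so round the result for display.
print(round(np.linalg.det(A)))  # 18
```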

## Properties

The determinant is a multiplicative map in the sense that $\det(AB) = \det(A)\det(B)$ for all n-by-n matrices A and B.

This is generalized by the Cauchy-Binet formula to products of non-square matrices.

It is easy to see that $\det(rI_n) = r^n$ and thus $\det(rA) = \det(rI_n \cdot A) = r^n \det(A)$ for all n-by-n matrices A and all scalars r.

A matrix over a commutative ring R is invertible if and only if its determinant is a unit in R. In particular, if A is a matrix over a field such as the real or complex numbers, then A is invertible if and only if det(A) is not zero. In this case we have

$\det(A^{-1}) = \det(A)^{-1}.$

Expressed differently: the vectors v1, ..., vn in Rn form a basis if and only if det(v1, ..., vn) is non-zero.

A matrix and its transpose have the same determinant:

$\det(A^\mathrm{T}) = \det(A).$

The determinants of a complex matrix and of its conjugate transpose are conjugate:

$\det(A^*) = \det(A)^*.$

(Note that for a real matrix the conjugate transpose is identical to the transpose.)

The determinant of a matrix A exhibits the following properties under elementary matrix transformations of A:

1. Exchanging rows or columns multiplies the determinant by −1.
2. Multiplying a row or column by m multiplies the determinant by m.
3. Adding a multiple of a row or column to another leaves the determinant unchanged.

This follows from the multiplicative property and the determinants of the elementary transformation matrices.

If A and B are similar, i.e., if there exists an invertible matrix X such that $A = X^{-1}BX$, then by the multiplicative property,

$\det(A) = \det(B).$

This means that the determinant is a similarity invariant. Because of this, the determinant of a linear transformation T : V → V on a finite-dimensional vector space V is independent of the basis chosen for V. The relationship is one-way, however: there exist matrices which have the same determinant but are not similar.

If A is a square n-by-n matrix with real or complex entries and if λ1, ..., λn are the (complex) eigenvalues of A listed according to their algebraic multiplicities, then

$\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n.$

This follows from the fact that A is always similar to its Jordan normal form, an upper triangular matrix with the eigenvalues on the main diagonal.
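The product-of-eigenvalues identity is easy to confirm numerically (NumPy shown here; the example matrix is illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])  # eigenvalues 5 and 2, det = 10
eigenvalues = np.linalg.eigvals(A)
# det(A) equals the product of the eigenvalues, up to floating-point error.
print(abs(np.prod(eigenvalues) - np.linalg.det(A)) < 1e-9)  # True
```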

### Relationship to Trace

From this connection between the determinant and the eigenvalues, one can derive a connection between the trace function, the exponential function, and the determinant:

$\det(\exp(A)) = \exp(\operatorname{tr}(A)).$

Performing the substitution $A \mapsto \log A$ in the above equation yields

$\det(A) = \exp(\operatorname{tr}(\log A)),$

which is closely related to the Fredholm determinant. Similarly,

$\operatorname{tr}(A) = \log(\det(\exp A)).$

For n-by-n matrices there are the relationships:

Case n = 1: $\det(A) = \operatorname{tr}(A)$
Case n = 2: $\det(A) = \frac{1}{2}\left( \operatorname{tr}(A)^2 - \operatorname{tr}(A^2) \right)$
Case n = 3: $\det(A) = \frac{1}{6}\left( \operatorname{tr}(A)^3 - 3\operatorname{tr}(A)\operatorname{tr}(A^2) + 2\operatorname{tr}(A^3) \right)$
Case n = 4: $\det(A) = \frac{1}{24}\left( \operatorname{tr}(A)^4 - 6\operatorname{tr}(A)^2\operatorname{tr}(A^2) + 3\operatorname{tr}(A^2)^2 + 8\operatorname{tr}(A)\operatorname{tr}(A^3) - 6\operatorname{tr}(A^4) \right)$
$\ldots$

which are closely related to Newton's identities -- see the formula for $a_n(t_1, \ldots, t_n)$.
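The n = 2 identity can be verified numerically (NumPy; the test matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # det = -2
lhs = np.linalg.det(A)
# det(A) = (1/2) * (tr(A)^2 - tr(A^2)) for any 2x2 matrix.
rhs = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
print(abs(lhs - rhs) < 1e-9)  # True
```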

### Derivative

The determinant of real square matrices is a polynomial function from $\Bbb{R}^{n \times n}$ to $\Bbb{R}$, and as such is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:

$d\,\det(A) = \operatorname{tr}(\operatorname{adj}(A)\,dA),$

where adj(A) denotes the adjugate of A. In particular, if A is invertible, we have

$d\,\det(A) = \det(A)\,\operatorname{tr}(A^{-1}\,dA),$

or, more colloquially,

$\det(A + X) - \det(A) \approx \det(A)\,\operatorname{tr}(A^{-1} X)$

if the entries in the matrix X are sufficiently small. The special case where A is equal to the identity matrix I yields

$\det(I + X) \approx 1 + \operatorname{tr}(X).$

In component form, this is

$\frac{\partial \det(A)}{\partial A_{ij}} = \det(A)\,(A^{-1})_{ji}.$
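The identity-matrix special case is easy to check numerically (NumPy; the perturbation size and seed are arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
X = 1e-6 * rng.standard_normal((4, 4))  # a "small" perturbation

# First-order approximation det(I + X) ~ 1 + tr(X); the error is O(|X|^2).
exact = np.linalg.det(np.eye(4) + X)
approx = 1.0 + np.trace(X)
print(abs(exact - approx) < 1e-9)  # True
```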

## Abstract formulation

An n × n square matrix A may be thought of as the coordinate representation of a linear transformation of an n-dimensional vector space V. Given any linear transformation

$A : V \to V,$

we can define the determinant of A as the determinant of any matrix representation of A. This is a well-defined notion (i.e. independent of a choice of basis) since the determinant is invariant under similarity transformations.

As one might expect, it is possible to define the determinant of a linear transformation in a coordinate-free manner. If V is an n-dimensional vector space, then one can construct its top exterior power ΛnV. This is a one-dimensional vector space whose elements are written

$v_1 \wedge v_2 \wedge \cdots \wedge v_n,$

where each vi is a vector in V and the wedge product ∧ is antisymmetric (i.e. u ∧ v = −v ∧ u). Any linear transformation A : V → V induces a linear transformation of ΛnV as follows:

$v_1 \wedge v_2 \wedge \cdots \wedge v_n \mapsto Av_1 \wedge Av_2 \wedge \cdots \wedge Av_n.$

Since ΛnV is one-dimensional, this operation is just multiplication by some scalar that depends on A. This scalar is called the determinant of A. That is, we define det(A) by the equation

$Av_1 \wedge Av_2 \wedge \cdots \wedge Av_n = (\det A)\,v_1 \wedge v_2 \wedge \cdots \wedge v_n.$

One can check that this definition agrees with the coordinate-dependent definition given above.

## Algorithmic implementation

• The naive method of implementing an algorithm to compute the determinant is to use Laplace's formula for expansion by cofactors. This approach is extremely inefficient in general, however, as it is of order n! (n factorial) for an n×n matrix M.
• An improvement to order n3 can be achieved by using LU decomposition to write M = LU for triangular matrices L and U. Now, det M = det LU = det L det U, and since L and U are triangular the determinant of each is simply the product of its diagonal elements. Alternatively one can perform the Cholesky decomposition if possible or the QR decomposition and find the determinant in a similar fashion.
• Since the definition of the determinant does not need divisions, a question arises: do fast algorithms exist that do not need divisions? This is especially interesting for matrices over rings. Indeed, algorithms with run-time proportional to n4 exist. An algorithm of Mahajan and Vinay, and of Berkowitz, is based on closed ordered walks ("clows" for short). It computes more products than the determinant definition requires, but some of these products cancel, and the sum of these products can be computed more efficiently. The final algorithm looks very much like an iterated product of triangular matrices.
• What is not often discussed is the so-called "bit complexity" of the problem, i.e. how many bits of accuracy are needed for intermediate values. For example, using Gaussian elimination, one can reduce the matrix to upper triangular form and then multiply the entries on the main diagonal to get the determinant (this is essentially a special case of the LU decomposition above), but a quick calculation shows that the bit size of intermediate values could potentially become exponential. One could discuss when it is appropriate to round intermediate values, but an elegant way of calculating the determinant uses the Bareiss algorithm, an exact-division method based on Sylvester's identity, giving a run time of order n3 and bit complexity roughly the bit size of the original entries in the matrix times n.
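A sketch of the Bareiss idea for integer matrices (illustrative code, not a reference implementation): every division below is exact, so the computation stays in integer arithmetic and intermediate entries stay polynomially bounded.

```python
def det_bareiss(A):
    """Fraction-free determinant (Bareiss) for an integer matrix."""
    M = [row[:] for row in A]
    n = len(M)
    sign = 1
    prev = 1                                  # previous pivot; starts at 1
    for k in range(n - 1):
        if M[k][k] == 0:                      # find a nonzero pivot in column k
            for p in range(k + 1, n):
                if M[p][k] != 0:
                    M[k], M[p] = M[p], M[k]
                    sign = -sign
                    break
            else:
                return 0                      # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # The division by the previous pivot is exact (Sylvester's identity).
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return sign * M[n - 1][n - 1]

print(det_bareiss([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # 18
```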


## History

Historically, determinants were considered before matrices. Originally, a determinant was defined as a property of a system of linear equations: the determinant "determines" whether the system has a unique solution (which occurs precisely when the determinant is non-zero). In this sense, determinants were first used in the 3rd century BC Chinese mathematics text The Nine Chapters on the Mathematical Art. In Europe, two-by-two determinants were considered by Cardano at the end of the 16th century and larger ones by Leibniz and Seki about 100 years later. Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrent law was first announced by Bézout (1764).

It was Vandermonde (1771) who first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors; Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order. Lagrange was the first to apply determinants to questions outside elimination theory; he proved many special cases of general identities.

Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word determinant (Laplace had used resultant), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.

The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On the same day (Nov. 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy-Binet formula.) In this he used the word determinant in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality.

The next important figure was Jacobi (from 1827). He early used the functional determinant which Sylvester later called the Jacobian, and in his memoirs in Crelle for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work.

The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the text-books on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.


