In linear algebra, the trace of an n-by-n square matrix A is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right) of A, i.e.
- tr(A) = A1,1 + A2,2 + ... + An,n,
where Ai,j denotes the entry in the i-th row and j-th column of A. The use of the term trace arises from the German term Spur (cognate with the English spoor).
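As a small numerical illustration of the definition (using NumPy, which is not part of the article itself):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Sum of the main-diagonal entries: 1 + 5 + 9 = 15
trace_by_hand = sum(A[i, i] for i in range(A.shape[0]))

assert trace_by_hand == np.trace(A) == 15.0
```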
The trace is a linear map. That is,
- tr(A + B) = tr(A) + tr(B)
- tr(rA) = r tr(A)
for all square matrices A and B, and all scalars r.
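Both linearity properties can be checked numerically; a sketch with random NumPy matrices (an illustration, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
r = 2.5

# Additivity: tr(A + B) = tr(A) + tr(B)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
# Homogeneity: tr(rA) = r tr(A)
assert np.isclose(np.trace(r * A), r * np.trace(A))
```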
Since transposition leaves the main diagonal unchanged, a matrix and its transpose have the same trace:
- tr(A) = tr(AT).
If A is an n×m matrix and B is an m×n matrix, then
- tr(AB) = tr(BA).
Using this fact, we can deduce that the trace of a product of square matrices is equal to the trace of any cyclic permutation of the product, a fact known as the cyclic property of the trace. For example, with three square matrices A, B, and C,
- tr(ABC) = tr(CAB) = tr(BCA).
More generally, the same holds if the matrices are not assumed to be square, provided they are shaped so that each of these products exists (and is therefore square).
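A quick check of both facts, with rectangular matrices for tr(AB) = tr(BA) and square matrices for the cyclic property (an illustrative NumPy sketch, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))  # 2x5
B = rng.standard_normal((5, 2))  # 5x2

# AB is 2x2 while BA is 5x5, yet their traces agree
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# Cyclic property for three square matrices
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))
t = np.trace(X @ Y @ Z)
assert np.isclose(t, np.trace(Z @ X @ Y))
assert np.isclose(t, np.trace(Y @ Z @ X))
# Note: only cyclic shifts are guaranteed; tr(XZY) need not equal tr(XYZ).
```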
The trace is similarity-invariant, which means that A and P−1AP (P invertible) have the same trace, though there exist matrices which have the same trace but are not similar. This can be verified using the cyclic property above:
- tr(P−1AP) = tr(PP−1A) = tr(A).
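Similarity invariance can likewise be verified numerically; a sketch with a randomly chosen invertible P (illustrative NumPy code, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
# Adding 4*I makes P comfortably far from singular for this seed
P = rng.standard_normal((4, 4)) + 4 * np.eye(4)

# tr(P^-1 A P) = tr(A)
assert np.isclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A))
```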
More generally, given a linear map f : V → V on a finite-dimensional vector space V, the trace of f can be defined via a matrix representation of f: choose a basis for V, write f as a matrix relative to that basis, and take the trace of this square matrix. The result does not depend on the basis chosen, since different bases give rise to similar matrices; this yields a basis-independent definition of the trace of a linear map. Using the canonical isomorphism between the space End(V) of linear maps on V and V⊗V*, the trace of v⊗φ is defined to be φ(v), where v is an element of V and φ is an element of the dual space V*.
If A is a square n-by-n matrix with real or complex entries and if λ1,...,λn are the (complex) eigenvalues of A (listed according to their algebraic multiplicities), then
- tr(A) = ∑ λi.
This follows from the fact that A is always similar to its Jordan form, an upper triangular matrix having λ1,...,λn on the main diagonal.
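The eigenvalue-sum identity can be checked with a generic real matrix, whose complex eigenvalues occur in conjugate pairs so that their imaginary parts cancel (an illustrative NumPy sketch, not from the article):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))  # generic real matrix; eigenvalues may be complex

eigenvalues = np.linalg.eigvals(A)
# Imaginary parts cancel in conjugate pairs, leaving the (real) trace
assert np.isclose(eigenvalues.sum(), np.trace(A))
```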
From the connection between the trace and the eigenvalues, one can derive a connection between the trace function, the matrix exponential function, and the determinant:
- det(exp(A)) = exp(tr(A)).
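This identity can be verified numerically. To keep the sketch self-contained (NumPy only, no SciPy), the matrix exponential below is approximated by a truncated Taylor series, which is adequate for a matrix of small norm; this is an illustration, not a production implementation:

```python
import numpy as np

def expm_taylor(A, terms=40):
    """Matrix exponential via a truncated Taylor series (adequate for small ||A||)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

rng = np.random.default_rng(4)
A = 0.3 * rng.standard_normal((3, 3))  # small norm, so the series converges quickly

# det(exp(A)) = exp(tr(A))
assert np.isclose(np.linalg.det(expm_taylor(A)), np.exp(np.trace(A)))
```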
The trace also prominently appears in Jacobi's formula for the derivative of the determinant (see under determinant).
Other ideas and applications
If one imagines that the matrix A describes a water flow, in the sense that for every x in Rn, the vector Ax represents the velocity of the water at the location x, then the trace of A can be interpreted as follows: given any region U in Rn, the net flow of water out of U is given by tr(A)· vol(U), where vol(U) is the volume of U. See divergence.
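The flow interpretation says that the divergence of the linear vector field x ↦ Ax is the constant tr(A); a finite-difference check (an illustrative NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

def velocity(x):
    return A @ x  # linear flow field v(x) = Ax

# Numerical divergence at an arbitrary point via central differences
x0, h = np.array([0.7, -1.2]), 1e-6
div = sum(
    (velocity(x0 + h * e)[i] - velocity(x0 - h * e)[i]) / (2 * h)
    for i, e in enumerate(np.eye(2))
)

# For a linear field the divergence is constant and equals tr(A) = 2 + 3 = 5
assert np.isclose(div, np.trace(A))
```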
The trace is used to define characters of group representations: the character of a representation A is the function x ↦ tr A(x). Two complex representations of a finite group are equivalent if and only if they have the same character, i.e. tr A(x) = tr B(x) for all x in the group.
A matrix whose trace is zero is said to be traceless.
For an m-by-n matrix A with complex (or real) entries, we have
- tr(A*A) ≥ 0
with equality if and only if A = 0. The assignment
- <A, B> = tr(A*B)
yields an inner product on the space of all complex (or real) m-by-n matrices.
The norm induced by the above inner product is called the Frobenius norm. Indeed, it is simply the Euclidean norm if the matrix is considered as a vector of length mn.
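A sketch of these facts for complex rectangular matrices, where A* denotes the conjugate transpose (illustrative NumPy code, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

def inner(X, Y):
    return np.trace(X.conj().T @ Y)  # <X, Y> = tr(X*Y)

# <A, A> is real and nonnegative (zero only for A = 0)
assert np.isclose(inner(A, A).imag, 0.0) and inner(A, A).real > 0
# The induced norm is the Frobenius norm: the Euclidean norm of the flattened matrix
assert np.isclose(np.sqrt(inner(A, A).real), np.linalg.norm(A.ravel()))
```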
The concept of trace of a matrix is generalised to the trace class of bounded linear operators on Hilbert spaces.