In linear algebra, the transpose of a matrix A is another matrix A^{T} (also written A^{tr}, ^{t}A, or A′) created by any one of the following equivalent actions:
 write the rows of A as the columns of A^{T}
 write the columns of A as the rows of A^{T}
 reflect A over its main diagonal (which runs from the top left to the bottom right) to obtain A^{T}
Formally, the transpose of an m × n matrix A is the n × m matrix A^{T} defined by [A^{T}]_{ij} = [A]_{ji} for 1 ≤ i ≤ n and 1 ≤ j ≤ m.
Examples
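As a concrete example, transposing a 3 × 2 matrix yields a 2 × 3 matrix. A minimal sketch using NumPy (assumed available):

```python
import numpy as np

# A 3 x 2 matrix: the rows of A become the columns of A^T.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])

At = A.T  # transpose: shape (3, 2) -> (2, 3)

print(At)
# [[1 3 5]
#  [2 4 6]]
assert At.shape == (2, 3)
assert At[0, 2] == A[2, 0]  # [A^T]_{ij} = [A]_{ji}
```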
Properties
For matrices A and B and a scalar c, the transpose has the following properties:
 (A^{T})^{T} = A: taking the transpose is an involution (self-inverse).

 (A + B)^{T} = A^{T} + B^{T} and (cA)^{T} = cA^{T}: the transpose is a linear map from the space of m × n matrices to the space of n × m matrices.

 (AB)^{T} = B^{T}A^{T}: note that the order of the factors reverses. From this one can deduce that a square matrix A is invertible if and only if A^{T} is invertible, and in this case (A^{−1})^{T} = (A^{T})^{−1}. The result extends to products of several matrices: (ABC...XYZ)^{T} = Z^{T}Y^{T}X^{T}...C^{T}B^{T}A^{T}.

 The transpose of a scalar is the same scalar.

 The determinant of a matrix is the same as that of its transpose: det(A^{T}) = det(A).
 The dot product of two column vectors a and b can be computed as a · b = a^{T} b, which is written as a_{i} b^{i} in Einstein notation.
 If A has only real entries, then A^{T}A is a positive-semidefinite matrix.
 If A is a square matrix over some field, then A is similar to A^{T}.
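The algebraic properties above can be verified numerically. A small NumPy sketch (the matrix sizes are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))
c = 2.5

assert np.allclose(A.T.T, A)              # (A^T)^T = A: involution
assert np.allclose((A + B).T, A.T + B.T)  # linearity in addition
assert np.allclose((c * A).T, c * A.T)    # linearity in scaling
assert np.allclose((A @ C).T, C.T @ A.T)  # order of factors reverses

M = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.det(M), np.linalg.det(M.T))  # det(M^T) = det(M)

# A^T A is positive semidefinite: all eigenvalues >= 0 (up to rounding).
assert np.all(np.linalg.eigvalsh(A.T @ A) >= -1e-12)
```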
Special transpose matrices
A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if A^{T} = A.
A square matrix whose transpose is also its inverse is called an orthogonal matrix; that is, G is orthogonal if G^{T}G = GG^{T} = I, where I is the identity matrix.
A square matrix whose transpose is equal to its negative is called a skew-symmetric (or antisymmetric) matrix; that is, A is skew-symmetric if A^{T} = −A, or in component form, a_{ij} = −a_{ji} for all i and j.
The conjugate transpose of a complex matrix A, written A^{*} (also called the Hermitian transpose or adjoint matrix), is obtained by taking the transpose of A and the complex conjugate of each entry: the (i, j) entry of A^{*} is the complex conjugate of the (j, i) entry of A.
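These special classes are easy to check numerically. A NumPy sketch (the example matrices are chosen for illustration):

```python
import numpy as np

S = np.array([[1, 2], [2, 3]])        # symmetric: S^T = S
assert (S.T == S).all()

K = np.array([[0, 2], [-2, 0]])       # skew-symmetric: K^T = -K
assert (K.T == -K).all()

theta = 0.3                           # plane rotations are orthogonal
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(G.T @ G, np.eye(2))  # G^T G = I

Z = np.array([[1 + 2j, 3j], [0, 4]])  # conjugate transpose: transpose, then conjugate
assert (Z.conj().T == np.array([[1 - 2j, 0], [-3j, 4]])).all()
```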
Transpose of linear maps
If f: V → W is a linear map between vector spaces V and W with nondegenerate bilinear forms, we define the transpose of f to be the linear map ^{t}f : W → V determined by B_{V}(v, ^{t}f(w)) = B_{W}(f(v), w) for all v ∈ V and w ∈ W.
Here, B_{V} and B_{W} are the bilinear forms on V and W respectively. The matrix of the transpose of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms.
Over a complex vector space, one often works with sesquilinear forms (conjugate-linear in one argument) instead of bilinear forms. The transpose of a map between such spaces is defined similarly, and the matrix of the transpose map is given by the conjugate transpose matrix if the bases are orthonormal. In this case, the transpose is also called the Hermitian adjoint.
If V and W do not carry bilinear forms, then the transpose of a linear map f: V → W is defined only as a linear map ^{t}f : W^{*} → V^{*} between the dual spaces of W and V, given by ^{t}f(φ) = φ ∘ f.
Implementation of matrix transposition on computers
On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as the BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order, avoiding the need for data movement.
Random access memory (usually known by its acronym, RAM) is a type of data storage used in computers. ...
In computer science, a library is a collection of subprograms used to develop software. ...
Linear algebra is the branch of mathematics concerned with the study of vectors, vector spaces (also called linear spaces), linear maps (also called linear transformations), and systems of linear equations. ...
Basic Linear Algebra Subprograms (BLAS) are routines which perform basic linear algebra operations such as vector and matrix multiplication. ...
However, there remain circumstances in which it is necessary or desirable to physically reorder a matrix in memory into its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality.
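When column access dominates, a physical copy can pay off. A sketch of the copy-based approach in NumPy:

```python
import numpy as np

A = np.random.default_rng(1).standard_normal((1024, 1024))

# In row-major storage the columns of A are strided; copying the
# transpose makes the original columns contiguous in memory.
At = np.ascontiguousarray(A.T)

assert At.flags["C_CONTIGUOUS"]        # the copy is row-major contiguous
assert not A.T.flags["C_CONTIGUOUS"]   # the plain view is not
# Column j of A is now row j of At, stored contiguously.
assert np.allclose(A[:, 3], At[3, :])
```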

Main article: In-place matrix transposition
Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an M × N matrix in place, with O(1) additional storage, or at least with additional storage much less than MN. For M ≠ N, this involves a complicated permutation of the data elements that is nontrivial to implement in place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
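For the square case (M = N), in-place transposition reduces to swapping each element with its mirror across the main diagonal. A minimal Python sketch for a matrix stored row-major in a flat list (the non-square case requires following permutation cycles and is considerably more involved):

```python
def transpose_square_inplace(a, n):
    """Transpose an n x n matrix stored row-major in the flat list a,
    using O(1) extra storage."""
    for i in range(n):
        for j in range(i + 1, n):  # only the strict upper triangle
            a[i * n + j], a[j * n + i] = a[j * n + i], a[i * n + j]

# 3 x 3 example: rows (1 2 3), (4 5 6), (7 8 9)
m = [1, 2, 3, 4, 5, 6, 7, 8, 9]
transpose_square_inplace(m, 3)
assert m == [1, 4, 7, 2, 5, 8, 3, 6, 9]
```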
External links
 Transpose, mathworld.wolfram.com
 Transpose, planetmath.org
