In mathematics, the **dot product** (also known as the **scalar product** or the **inner product**) is a function (·) : V × V → F, where V is a vector space and F its underlying field. In other words, it maps a pair of vectors to a scalar. When the latter term is used, the inner product of **a** and **b** is usually denoted ⟨**a**, **b**⟩; see the article inner product space for a more abstract treatment. It is defined as

- **a** · **b** = |**a**| |**b**| cos θ,
or, using italics to denote the norm of a vector (i.e., *x* ≡ |**x**|),

- **a** · **b** = *a* *b* cos θ,
where θ is the angle between the two vectors. Since cos 90° = 0, the dot product of two perpendicular vectors is always zero. If **a** and **b** are both unit vectors (i.e., of length 1), the dot product simply gives the cosine of the angle between them. Given two vectors, the angle between them can therefore be found by rearranging the above formula:

- θ = arccos((**a** · **b**) / (*a* *b*)).

Intuitively, the dot product projects one vector onto the other (the order does not matter, since the dot product is commutative); dividing by the product of the two lengths then normalizes the result, so the value of the fraction always lies between −1 and 1 and can be converted to an angle by the inverse cosine. If **b** is a unit vector, the dot product gives the length of the projection of **a** in the direction of **b**; in mechanics, this gives the component of a force in that direction. Work, for example, is the dot product of force and displacement.
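The definition and the rearranged angle formula above can be sketched in Python. The helper names (`dot`, `norm`, `angle_between`) are illustrative choices, not part of any standard library:

```python
import math

def dot(a, b):
    """Coordinate dot product: a1*b1 + a2*b2 + ..."""
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    """Length |v| of a vector, using dot(v, v) = |v|^2."""
    return math.sqrt(dot(v, v))

def angle_between(a, b):
    """Recover theta from a . b = |a| |b| cos(theta)."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

print(dot([1, 0, 0], [0, 1, 0]))                          # perpendicular vectors: 0
print(math.degrees(angle_between([1, 0, 0], [1, 1, 0])))  # ~45 degrees
```

Note that `angle_between` assumes both vectors are non-zero, since it divides by their lengths.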
## Properties
The definition has the following consequences. The dot product is commutative:

- **a** · **b** = **b** · **a**.
Two non-zero vectors **a** and **b** are perpendicular if and only if **a** · **b** = 0. The dot product is bilinear:

- **a** · (*r***b** + **c**) = *r* (**a** · **b**) + (**a** · **c**).
From these it follows directly that the dot product of two vectors **a** = [*a*_{1} *a*_{2} *a*_{3}] and **b** = [*b*_{1} *b*_{2} *b*_{3}] given in coordinates can be computed particularly easily as

- **a** · **b** = *a*_{1}*b*_{1} + *a*_{2}*b*_{2} + *a*_{3}*b*_{3},
or, using matrix multiplication and treating the vectors as 1-by-3 matrices,

- **a** · **b** = **a** **b**^{T},

where **b**^{T} denotes the transpose of the matrix **b** (a 3-by-1 matrix, so the product is 1-by-1, i.e. a scalar). The dot product satisfies all the axioms of an inner product. In an abstract vector space, the notion of angle between the elements of the space can be *defined* in terms of the inner product.
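The coordinate formula and its matrix-product form can be checked numerically; the following sketch uses NumPy with two arbitrary example vectors:

```python
import numpy as np

a = np.array([1, 3, -5])
b = np.array([4, -2, -1])

# coordinate formula: a1*b1 + a2*b2 + a3*b3 = 4 - 6 + 5 = 3
print(np.dot(a, b))  # 3

# the same value as a 1-by-3 matrix times the 3-by-1 transpose
row_a = a.reshape(1, 3)
col_b = b.reshape(3, 1)
print(row_a @ col_b)  # [[3]], a 1-by-1 matrix holding the scalar
```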
## Proof that the two forms of definition are equivalent

We have already shown that the theorem follows from the definition

- **a** · **b** = *a* *b* cos θ.
To prove that these are two equivalent ways of defining the dot product, we shall now instead use the former to derive the latter.
**Note:** This proof is shown for 3-dimensional vectors, but is readily extendable to *n*-dimensional vectors given mutually perpendicular unit vectors. Consider a vector

- **v** = *v*_{1}**i** + *v*_{2}**j** + *v*_{3}**k**.
Repeated application of the Pythagorean theorem yields

- |**v**|^{2} = *v*_{1}^{2} + *v*_{2}^{2} + *v*_{3}^{2}.
But this is the same as

- **v** · **v** = *v*_{1}^{2} + *v*_{2}^{2} + *v*_{3}^{2},
so we conclude that taking the dot product of a vector **v** with itself yields the squared length of the vector.

**Lemma 1**

- **v** · **v** = |**v**|^{2}.
Now consider two vectors **a** and **b** extending from the origin, separated by an angle θ. A third vector **c** may be defined as

- **c** ≡ **a** − **b**,
creating a triangle with sides *a*, *b*, and *c*. According to the law of cosines, we have

- *c*^{2} = *a*^{2} + *b*^{2} − 2*a**b* cos θ.
Substituting dot products for the squared lengths according to Lemma 1, we get

- **c** · **c** = **a** · **a** + **b** · **b** − 2*a**b* cos θ. *(1)*

But as **c** ≡ **a** − **b**, we also have

- **c** · **c** = (**a** − **b**) · (**a** − **b**),

which, according to the distributive law, expands to

- **c** · **c** = **a** · **a** − 2(**a** · **b**) + **b** · **b**. *(2)*

Merging the two **c** · **c** equations, *(1)* and *(2)*, we obtain

- **a** · **a** + **b** · **b** − 2*a**b* cos θ = **a** · **a** − 2(**a** · **b**) + **b** · **b**.

Subtracting **a** · **a** + **b** · **b** from both sides and dividing by −2 leaves

- **a** · **b** = *a**b* cos θ,
Q.E.D.
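The equivalence just proved can be spot-checked numerically: computing cos θ independently via the law of cosines, the geometric form *a**b* cos θ agrees with the coordinate dot product. A sketch with arbitrary example vectors:

```python
import math

def dot(a, b):
    """Coordinate form: a1*b1 + a2*b2 + a3*b3."""
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    """Length |v|, using Lemma 1: v . v = |v|^2."""
    return math.sqrt(dot(v, v))

a = [2.0, -1.0, 3.0]
b = [0.5, 4.0, 2.0]
c = [x - y for x, y in zip(a, b)]  # c = a - b, the triangle's third side

# cos(theta) from the law of cosines: c^2 = a^2 + b^2 - 2ab cos(theta)
cos_theta = (norm(a) ** 2 + norm(b) ** 2 - norm(c) ** 2) / (2 * norm(a) * norm(b))

# geometric form |a||b|cos(theta) matches the coordinate form a . b
print(dot(a, b), norm(a) * norm(b) * cos_theta)
```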
## See also