Variance

In probability theory and statistics, the variance of a random variable, probability distribution, or sample is one measure of statistical dispersion, averaging the squared distance of its possible values from the expected value (mean). Whereas the mean is a way to describe the location of a distribution, the variance is a way to capture its scale or degree of being spread out. The unit of variance is the square of the unit of the original variable. The positive square root of the variance, called the standard deviation, has the same units as the original variable and can be easier to interpret for this reason.


The variance of a real-valued random variable is its second central moment, and it also happens to be its second cumulant. Just as some distributions do not have a mean, some do not have a variance either. The mean exists whenever the variance exists, but not vice versa.


Definition

If μ = E(X) is the expected value (mean) of the random variable X, then the variance is

\operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2\right].

This definition encompasses random variables that are discrete, continuous, or neither. Of all the points about which squared deviations could have been calculated, the mean produces the minimum value for the averaged sum of squared deviations.
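
The minimizing property of the mean is easy to check numerically. The following is a minimal sketch in plain Python; the data values and the grid of candidate centres are arbitrary choices made for illustration.

    # Among all centres c, the mean minimizes the average squared deviation.
    data = [2.0, 3.0, 5.0, 10.0]
    mean = sum(data) / len(data)              # 5.0

    def avg_sq_dev(c):
        return sum((x - c) ** 2 for x in data) / len(data)

    # Scan candidate centres between 0 and 15 in steps of 0.01.
    best = min((k / 100 for k in range(0, 1501)), key=avg_sq_dev)
    print(best, avg_sq_dev(best), avg_sq_dev(mean))   # the best centre lands at the mean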


Many distributions, such as the Cauchy distribution, do not have a variance because the relevant integral diverges. In particular, if a distribution does not have an expected value, it does not have a variance either. The converse is not true: there are distributions for which the expected value exists, but the variance does not.


Discrete case

If the random variable is discrete with probability mass function x_1 ↦ p_1, ..., x_n ↦ p_n, this is equivalent to

\sum_{i=1}^n p_i (x_i - \mu)^2.

(Note: this variance should be divided by the sum of weights in the case of a discrete weighted variance.) That is, it is the expected value of the square of the deviation of X from its own mean. In plain language, it can be expressed as "the average of the square of the distance of each data point from the mean". It is thus the mean squared deviation. The variance of a random variable X is typically designated as Var(X), σ_X², or simply σ².
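
As a concrete illustration, the discrete formula can be evaluated directly in Python. This is a minimal sketch; the support points, probabilities and weights below are made-up values.

    # Variance of a discrete random variable from its probability mass function.
    xs = [1.0, 2.0, 4.0]     # support points x_i (illustrative)
    ps = [0.2, 0.5, 0.3]     # probabilities p_i, summing to 1

    mu = sum(p * x for p, x in zip(ps, xs))                # E(X)
    var = sum(p * (x - mu) ** 2 for p, x in zip(ps, xs))   # E[(X - mu)^2]
    print(mu, var)

    # With unnormalized weights, divide by the total weight, as noted above.
    ws = [2.0, 5.0, 3.0]
    w_tot = sum(ws)
    mu_w = sum(w * x for w, x in zip(ws, xs)) / w_tot
    var_w = sum(w * (x - mu_w) ** 2 for w, x in zip(ws, xs)) / w_tot
    print(mu_w, var_w)       # same as above, since these weights are proportional to ps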


Examples

Exponential distribution

The exponential distribution with parameter λ is a continuous distribution whose support is the semi-infinite interval [0, ∞). Its probability density function is given by:

f(x) = \lambda e^{-\lambda x},

and it has expected value μ = 1/λ. Therefore the variance is equal to:

\int_0^\infty f(x)\,(x - \mu)^2\,dx = \int_0^\infty \lambda e^{-\lambda x} \left(x - \lambda^{-1}\right)^2 dx = \lambda^{-2}.

So for an exponentially distributed random variable, σ² = μ².
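
A quick Monte-Carlo check of σ² = μ² = 1/λ² is sketched below, assuming NumPy is available; the rate λ = 0.5, the sample size and the seed are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 0.5                                  # rate parameter (arbitrary)
    x = rng.exponential(scale=1.0 / lam, size=1_000_000)

    print(x.mean(), 1.0 / lam)                 # sample mean vs mu = 1/lambda
    print(x.var(), 1.0 / lam ** 2)             # sample variance vs sigma^2 = 1/lambda^2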


Fair die

A six-sided fair die can be modelled with a discrete random variable with outcomes 1 through 6, each with equal probability 1/6. The expected value is 3.5. Therefore the variance can be computed to be:

\sum_{i=1}^6 \tfrac{1}{6} (i - 3.5)^2 = \tfrac{1}{6}\left((-2.5)^2 + (-1.5)^2 + (-0.5)^2 + 0.5^2 + 1.5^2 + 2.5^2\right) = \tfrac{1}{6} \cdot 17.5 \approx 2.92.
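
The same value can be checked by direct enumeration and by simulating a large number of rolls. This is a sketch using NumPy; the number of rolls and the seed are arbitrary.

    import numpy as np

    faces = np.arange(1, 7)
    exact = np.mean((faces - faces.mean()) ** 2)   # 35/12 ≈ 2.9167

    rng = np.random.default_rng(1)
    rolls = rng.integers(1, 7, size=1_000_000)     # uniform on {1, ..., 6}
    print(exact, rolls.var())                      # the simulation agrees closely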

Properties

Variance is non-negative because the squares are positive or zero. The variance of a random variable is 0 if and only if the variable is degenerate, that is, it takes on a constant value with probability 1, and the variance of a variable in a data set is 0 if and only if all entries have the same value.


Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged. If all values are scaled by a constant, the variance is scaled by the square of that constant. These two properties can be expressed in the following formula:

\operatorname{Var}(aX + b) = a^2\operatorname{Var}(X).

The variance of a finite sum of uncorrelated random variables is equal to the sum of their variances.

  1. Suppose that the observations can be partitioned into subgroups according to some second variable. Then the variance of the total group is equal to the mean of the variances of the subgroups plus the variance of the means of the subgroups. This property is known as variance decomposition or the law of total variance and plays an important role in the analysis of variance. For example, suppose that a group consists of a subgroup of men and an equally large subgroup of women. Suppose that the men have a mean body length of 180 and that the variance of their lengths is 100. Suppose that the women have a mean length of 160 and that the variance of their lengths is 50. Then the mean of the variances is (100 + 50) / 2 = 75; the variance of the means is the variance of 180, 160 which is 100. Then, for the total group of men and women combined, the variance of the body lengths will be 75 + 100 = 175. Note that this uses N for the denominator instead of N - 1.

    In a more general case, if the subgroups have unequal sizes, then they must be weighted proportionally to their size in the computations of the means and variances. The formula is also valid with more than two groups, and even if the grouping variable is continuous.[1] The well-known variance decomposition rule (the law of total variance, see property 9 below) is

    \operatorname{Var}(X) = \operatorname{E}[\operatorname{Var}(X \mid Y)] + \operatorname{Var}(\operatorname{E}[X \mid Y]).


    This formula implies that the variance of the total group cannot be smaller than the mean of the variances of the subgroups. Note, however, that the total variance is not necessarily larger than the variances of the subgroups. In the above example, when the subgroups are analyzed separately, the variance is influenced only by the differences within the group of men and within the group of women. If the two groups are combined, however, the differences between men and women also enter into the variance.

  2. Many computational formulas for the variance are based on this equality: the variance is equal to the mean of the squares minus the square of the mean. For example, if we consider the numbers 1, 2, 3, 4 then the mean of the squares is (1 × 1 + 2 × 2 + 3 × 3 + 4 × 4) / 4 = 7.5. The mean is 2.5, so the square of the mean is 6.25. Therefore the variance is 7.5 − 6.25 = 1.25, which is the same result the definition formula gives.

    Many pocket calculators use an algorithm that is based on this formula and that allows them to compute the variance while the data are entered, without storing all values in memory. The algorithm adjusts only three variables when a new data value is entered: the number of data entered so far (n), the sum of the values so far (S), and the sum of the squared values so far (SS). For example, if the data are 1, 2, 3, 4, then after entering the first value, the algorithm would have n = 1, S = 1 and SS = 1. After entering the second value (2), it would have n = 2, S = 3 and SS = 5. When all data are entered, it would have n = 4, S = 10 and SS = 30. Next, the mean is computed as M = S / n, and finally the variance is computed as SS / n − M × M. In this example the outcome is 30 / 4 − 2.5 × 2.5 = 7.5 − 6.25 = 1.25. If the unbiased sample estimate is to be computed, the outcome is multiplied by n / (n − 1), which yields 1.667 in this example. (A short code sketch of this running-total algorithm follows below.)
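
The running-total algorithm just described translates directly into a few lines of Python; this is a minimal sketch whose variable names (n, S, SS) simply mirror the description above.

    def running_variance(data):
        """Single-pass variance using the running totals n, S and SS."""
        n, S, SS = 0, 0.0, 0.0
        for x in data:
            n += 1
            S += x
            SS += x * x
        mean = S / n
        var = SS / n - mean * mean           # population form (divide by n)
        var_unbiased = var * n / (n - 1)     # sample form (divide by n - 1)
        return mean, var, var_unbiased

    print(running_variance([1, 2, 3, 4]))    # (2.5, 1.25, 1.666...)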

Properties, formal

8.a. Variance of the sum of uncorrelated variables


One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances:

\operatorname{Var}\Big(\sum_{i=1}^n X_i\Big) = \sum_{i=1}^n \operatorname{Var}(X_i).

This statement is often made with the stronger condition that the variables are independent, but uncorrelatedness suffices. So if the variables have the same variance σ², then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is

\operatorname{Var}(\overline{X}) = \operatorname{Var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\, n\sigma^2 = \frac{\sigma^2}{n}.

That is, the variance of the mean decreases with n. This fact is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.
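
A small simulation illustrating Var(X̄) = σ²/n is sketched below with NumPy; the underlying distribution, the sample sizes and the number of replications are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(2)
    sigma2 = 4.0                               # true variance of each observation
    for n in (1, 10, 100):
        # 20,000 independent sample means, each over n observations
        means = rng.normal(0.0, np.sqrt(sigma2), size=(20_000, n)).mean(axis=1)
        print(n, means.var(), sigma2 / n)      # empirical vs theoretical sigma^2 / n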


8.b. Variance of the sum of correlated variables


In general, if the variables are correlated, then the variance of their sum is the sum of their covariances:

\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \operatorname{Cov}(X_i, X_j).

Here Cov is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. This formula is used in the theory of Cronbach's alpha in classical test theory.


So if the variables have equal variance σ² and the average correlation of distinct variables is ρ, then the variance of their mean is

\operatorname{Var}(\overline{X}) = \frac{\sigma^2}{n} + \frac{n-1}{n}\,\rho\sigma^2.

This implies that the variance of the mean increases with the average of the correlations. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to

\operatorname{Var}(\overline{X}) = \frac{1}{n} + \frac{n-1}{n}\,\rho.

This formula is used in the Spearman-Brown prediction formula of classical test theory. This converges to ρ if n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have

\lim_{n \to \infty} \operatorname{Var}(\overline{X}) = \rho.

Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables generally does not converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables.
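
This limiting behaviour can be seen in a simulation with equicorrelated standardized variables. The sketch below uses NumPy; the correlation ρ = 0.3, the dimensions and the number of replications are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(3)
    rho = 0.3
    for n in (2, 10, 100):
        # n standardized normals with common pairwise correlation rho
        cov = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)
        x = rng.multivariate_normal(np.zeros(n), cov, size=20_000)
        means = x.mean(axis=1)
        theory = 1.0 / n + (n - 1) / n * rho
        print(n, means.var(), theory)          # both approach rho as n grows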


8.c. Variance of a weighted sum of variables


The scaling property Var(aX) = a²Var(X) and property 8, along with the covariance identity Cov(aX, bY) = ab Cov(X, Y), jointly imply that

\operatorname{Var}(aX + bY) = a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) + 2ab\,\operatorname{Cov}(X, Y).

This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionately large influence on the variance of the total. For example, if X and Y are uncorrelated and the weight of X is twice the weight of Y, then the variance of X is weighted four times as heavily as the variance of Y.


9. Decomposition of variance


The general formula for variance decomposition or the law of total variance is: If X and Y are two random variables and the variance of X exists, then

\operatorname{Var}(X) = \operatorname{Var}(\operatorname{E}(X \mid Y)) + \operatorname{E}(\operatorname{Var}(X \mid Y)).

Here, E(X|Y) is the conditional expectation of X given Y, and Var(X|Y) is the conditional variance of X given Y. (A more intuitive explanation is that, given a particular value of Y, X follows a distribution with mean E(X|Y) and variance Var(X|Y). The above formula tells how to find Var(X) based on the distributions of these two quantities when Y is allowed to vary.) This formula is often applied in analysis of variance, where the corresponding formula is

SSTotal = SSBetween + SSWithin.

It is also used in linear regression analysis, where the corresponding formula is

SSTotal = SSRegression + SSResidual.

This can also be derived from the additivity of variances (property 8), since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.
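
The decomposition can be verified numerically on a grouped data set. The sketch below uses NumPy and the population form of the variance (dividing by N); the two subgroups are made-up miniature versions of the men/women example above.

    import numpy as np

    groups = [np.array([178.0, 182.0, 180.0, 176.0, 184.0]),
              np.array([158.0, 162.0, 160.0, 156.0, 164.0])]
    all_values = np.concatenate(groups)

    sizes = np.array([len(g) for g in groups])
    means = np.array([g.mean() for g in groups])
    variances = np.array([g.var() for g in groups])    # population form

    within = np.average(variances, weights=sizes)                           # E[Var(X|Y)]
    between = np.average((means - all_values.mean()) ** 2, weights=sizes)   # Var(E(X|Y))
    print(all_values.var(), within + between)           # the two sides agree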


10. Computational formula for variance


The computational formula for the variance follows in a straightforward manner from the linearity of expected values and the above definition:

\operatorname{Var}(X) = \operatorname{E}\left(X^2 - 2X\operatorname{E}(X) + (\operatorname{E}(X))^2\right)
                      = \operatorname{E}(X^2) - 2(\operatorname{E}(X))^2 + (\operatorname{E}(X))^2
                      = \operatorname{E}(X^2) - (\operatorname{E}(X))^2.

This is often used to calculate the variance in practice, although it suffers from numerical approximation error if the two components of the equation are similar in magnitude.
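
The loss of precision is easy to reproduce: shift a small data set by a large constant and the one-pass formula E(X²) − (E X)² degrades, while the two-pass definition does not. This is a sketch with NumPy; the data and the shift are arbitrary.

    import numpy as np

    x = np.array([4.0, 7.0, 13.0, 16.0]) + 1.0e9     # huge mean, tiny spread
    mean = x.mean()

    two_pass = ((x - mean) ** 2).mean()              # definition: E[(X - mu)^2]
    one_pass = (x ** 2).mean() - mean ** 2           # E(X^2) - (E X)^2
    print(two_pass, one_pass)                        # 22.5 vs a badly rounded value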


Characteristic property

The second moment of a random variable attains its minimum value when taken around the mean of the random variable, i.e. E X = argmin_a E(X − a)². Conversely, if a function φ satisfies E X = argmin_a E φ(X − a), then it is necessarily of the form φ(x) = ax² + b. This also holds in the multidimensional case [1].


Approximating the variance of a function

The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables. For example, the approximate variance of a function of one variable is given by

\operatorname{Var}\left[f(X)\right] \approx \left(f'(\operatorname{E}[X])\right)^2 \operatorname{Var}[X]

provided that f is twice differentiable and that the mean and variance of X are finite.
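
For instance, with f(x) = log x the approximation can be compared against simulation. This is a sketch assuming NumPy; the distribution of X, its parameters and the seed are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma2 = 10.0, 0.25                  # mean and variance of X (arbitrary)
    x = rng.normal(mu, np.sqrt(sigma2), size=1_000_000)

    # Delta method with f(x) = log(x), so f'(x) = 1/x:
    approx = (1.0 / mu) ** 2 * sigma2        # (f'(E X))^2 Var(X)
    print(np.log(x).var(), approx)           # simulated variance vs approximation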


Population variance and sample variance

In general, the population variance of a finite population of size N is given by

\sigma^2 = \frac{1}{N} \sum_{i=1}^N \left(x_i - \overline{x}\right)^2,

or if the population is an abstract population with probability distribution Pr:

\sigma^2 = \sum_{i=1}^N \left(x_i - \overline{x}\right)^2 \Pr(x_i),

where \overline{x} is the population mean. This is merely a special case of the general definition of variance introduced above, but restricted to finite populations.


In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with infinite populations, this is generally impossible.


A common method is estimating the variance of large (finite or infinite) populations from a sample. We take a sample (y_1, \dots, y_n) of n values from the population, and estimate the variance on the basis of this sample. There are several good estimators. Two of them are well known:

s_n^2 = \frac{1}{n} \sum_{i=1}^n \left(y_i - \overline{y}\right)^2 = \left(\frac{1}{n} \sum_{i=1}^{n} y_i^2\right) - \overline{y}^2,

and

s^2 = \frac{1}{n-1} \sum_{i=1}^n \left(y_i - \overline{y}\right)^2 = \frac{1}{n-1} \sum_{i=1}^n y_i^2 - \frac{n}{n-1}\,\overline{y}^2.

Both are referred to as sample variance. Most advanced electronic calculators can calculate both s_n² and s² at the press of a button, in which case that button is usually labeled σ² or σ_n² for s_n² and σ_{n−1}² for s².


The two estimators only differ slightly as we see, and for larger values of the sample size n the difference is negligible. The second one is an unbiased estimator of the population variance, meaning that its expected value E[s²] is equal to the true variance of the sampled random variable. The first one may be seen as the variance of the sample considered as a population.


Common sense would suggest applying the population formula to the sample as well. The reason that this is biased is that the sample mean is generally somewhat closer to the observations in the sample than the population mean is to these observations. This is so because the sample mean is by definition in the middle of the sample, while the population mean may even lie outside the sample. So the deviations from the sample mean will often be smaller than the deviations from the population mean, and so, if the same formula is applied to both, then this variance estimate will on average be somewhat smaller in the sample than in the population.


One common source of confusion is that the term sample variance may refer to either the unbiased estimator s² of the population variance, or to the variance σ² of the sample viewed as a finite population. Both can be used to estimate the true population variance. Apart from theoretical considerations, it doesn't really matter which one is used, as for small sample sizes both are inaccurate and for large values of n they are practically the same. Naively computing the variance by dividing by n instead of n − 1 systematically underestimates the population variance. Moreover, in practical applications most people report the standard deviation rather than the sample variance, and the standard deviation that is obtained from the unbiased n − 1 version of the sample variance has a slight negative bias (though for normally distributed samples a theoretically interesting but rarely used slight correction exists to eliminate this bias). Nevertheless, in applied statistics it is a convention to use the n − 1 version if the variance or the standard deviation is computed from a sample.


In practice, for large n, the distinction is often a minor one. In the course of statistical measurements, sample sizes so small as to warrant the use of the unbiased variance virtually never occur. In this context Press et al.[2] commented that if the difference between n and n − 1 ever matters to you, then you are probably up to no good anyway, e.g. trying to substantiate a questionable hypothesis with marginal data.
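
The bias discussed above is easy to exhibit with repeated small samples. The sketch below uses NumPy; the population variance, the sample size n = 5 and the number of replications are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(5)
    sigma2 = 9.0
    samples = rng.normal(0.0, 3.0, size=(100_000, 5))    # many samples of n = 5

    s_n2 = samples.var(axis=1, ddof=0)       # divide by n     (biased)
    s2 = samples.var(axis=1, ddof=1)         # divide by n - 1 (unbiased)
    print(s_n2.mean(), s2.mean(), sigma2)    # roughly 7.2, 9.0 and 9.0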


Distribution of the sample variance

Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that y_i are independent observations from a normal distribution, Cochran's theorem shows that s² follows a scaled chi-square distribution:

(n-1)\frac{s^2}{\sigma^2} \sim \chi^2_{n-1}

As a direct consequence, it follows that E(s²) = σ².


However, even in the absence of the normality assumption, it is still possible to prove that s² is unbiased for σ².


Generalizations

If X is a vector-valued random variable, with values in \mathbb{R}^n, and thought of as a column vector, then the natural generalization of variance is \operatorname{E}\left((X - \mu)(X - \mu)^{\mathrm{T}}\right), where \mu = \operatorname{E}(X) and X^{\mathrm{T}} is the transpose of X, and so is a row vector. This variance is a positive semi-definite square matrix, commonly referred to as the covariance matrix.
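
For data, this matrix generalization is available directly in NumPy. The sketch below is illustrative; np.cov with bias=True gives the population form, and the injected correlation is an arbitrary choice.

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.standard_normal((3, 10_000))     # 3-dimensional vectors, one draw per column
    X[1] += 0.5 * X[0]                       # introduce some correlation

    C = np.cov(X, bias=True)                 # 3x3 covariance matrix, divides by N
    print(C)                                 # symmetric, positive semi-definite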


If X is a complex-valued random variable, with values in \mathbb{C}, then its variance is \operatorname{E}((X - \mu)(X - \mu)^*), where X^* is the complex conjugate of X. This variance is a non-negative real number.


History

The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance[3]:

The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations θ₁ and θ₂, it is found that the distribution, when both causes act together, has a standard deviation √(θ₁² + θ₂²). It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...

Moment of inertia

The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions. (The covariance matrix is analogous to the moment of inertia tensor for multivariate distributions.)


See also

Algorithms for calculating variance
Sample mean and covariance
Wishart distribution
Skewness
Kurtosis
Qualitative variation
Semivariance
Explained variance
Unexplained (residual) variance
Mean absolute error
True variance
Chebyshev's inequality

References

  1. ^ A. Kagan and L. A. Shepp, Why the variance?
  2. ^ Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. (1986) Numerical recipes: The art of scientific computing. Cambridge: Cambridge University Press. (online)
  3. ^ R. A. Fisher (1918). The Correlation Between Relatives on the Supposition of Mendelian Inheritance. Transactions of the Royal Society of Edinburgh, 52, 399–433.


