Correlation
Figure: Several sets of (x, y) points, with the correlation coefficient of x and y for each set. The correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle), nor many aspects of nonlinear relationships (bottom). N.B.: the figure in the center has a slope of 0, but in that case the correlation coefficient is undefined because the variance of Y is zero.

In probability theory and statistics, correlation (often measured as a correlation coefficient) indicates the strength and direction of a linear relationship between two random variables. In general statistical usage, correlation or co-relation refers to the departure of two variables from independence. In this broad sense there are several coefficients measuring the degree of correlation, adapted to the nature of the data.


A number of different coefficients are used for different situations. The best known is the Pearson product-moment correlation coefficient, which is obtained by dividing the covariance of the two variables by the product of their standard deviations. Despite its name, it was first introduced by Francis Galton.


Pearson's product-moment coefficient

Main article: Pearson product-moment correlation coefficient

The Pearson product-moment correlation coefficient (sometimes known as the PMCC, and denoted r) is a measure of the correlation of two variables X and Y measured on the same object or organism; that is, a measure of the tendency of the variables to increase or decrease together.

Mathematical properties

The correlation coefficient ρ_{X,Y} between two random variables X and Y with expected values μ_X and μ_Y and standard deviations σ_X and σ_Y is defined as:

\rho_{X,Y} = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y} = \frac{E\left((X-\mu_X)(Y-\mu_Y)\right)}{\sigma_X \sigma_Y},

where E is the expected value operator and cov means covariance. Since μ_X = E(X), σ_X^2 = E(X^2) − E^2(X), and likewise for Y, we may also write

\rho_{X,Y} = \frac{E(XY) - E(X)E(Y)}{\sqrt{E(X^2)-E^2(X)}\;\sqrt{E(Y^2)-E^2(Y)}}.

The correlation is defined only if both standard deviations are finite and nonzero. It is a corollary of the Cauchy-Schwarz inequality that the correlation cannot exceed 1 in absolute value.


The correlation is 1 in the case of an increasing linear relationship, −1 in the case of a decreasing linear relationship, and some value in between in all other cases, indicating the degree of linear dependence between the variables. The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables.


If the variables are independent then the correlation is 0, but the converse is not true, because the correlation coefficient detects only linear dependencies between two variables. Here is an example: suppose the random variable X is uniformly distributed on the interval from −1 to 1, and Y = X^2. Then Y is completely determined by X, so that X and Y are dependent, but their correlation is zero; they are uncorrelated. However, in the special case when X and Y are jointly normal, uncorrelatedness is equivalent to independence.
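The uniform example above is easy to check numerically. The following is a minimal Monte Carlo sketch in Python (assuming NumPy is available; the variable names are ours, not part of any standard API):

import numpy as np

# X uniform on [-1, 1] and Y = X^2: Y is completely determined by X,
# yet the sample correlation is (approximately) zero.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x ** 2

print(np.corrcoef(x, y)[0, 1])   # prints a value very close to 0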


A correlation between two variables is diluted in the presence of measurement error around estimates of one or both variables, in which case disattenuation provides a more accurate coefficient.


The sample correlation

If we have a series of n measurements of X and Y written as x_i and y_i, where i = 1, 2, ..., n, then the Pearson product-moment correlation coefficient can be used to estimate the correlation of X and Y. The Pearson coefficient is also known as the "sample correlation coefficient", and it is the best estimate of the correlation of X and Y. The Pearson correlation coefficient is written:

r_{xy} = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{(n-1)\, s_x s_y},

where \bar{x} and \bar{y} are the sample means of X and Y, s_x and s_y are the sample standard deviations of X and Y, and the sum runs from i = 1 to n. As with the population correlation, we may rewrite this as

r_{xy} = \frac{\sum x_i y_i - n\bar{x}\bar{y}}{(n-1)\, s_x s_y} = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n\sum x_i^2 - \left(\sum x_i\right)^2}\;\sqrt{n\sum y_i^2 - \left(\sum y_i\right)^2}}.

Again, as is true with the population correlation, the absolute value of the sample correlation must be less than or equal to 1. Though the above formula conveniently suggests a single-pass algorithm for calculating sample correlations, it is notorious for its numerical instability (see below for something more accurate).
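As an illustration, here is a small Python sketch (assuming NumPy; pearson_r is a hypothetical helper, not a library function) that evaluates the expanded formula above and checks it against NumPy's built-in corrcoef. It inherits the numerical fragility just mentioned, so it is for exposition only:

import numpy as np

def pearson_r(x, y):
    # Direct transcription of the expanded formula above; numerically
    # fragile when the means are large relative to the spread of the data.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    num = n * np.sum(x * y) - np.sum(x) * np.sum(y)
    den = np.sqrt(n * np.sum(x**2) - np.sum(x)**2) * np.sqrt(n * np.sum(y**2) - np.sum(y)**2)
    return num / den

x = [1, 2, 3, 5, 8]                      # data reused from the example further below
y = [0.11, 0.12, 0.13, 0.15, 0.18]       # exactly y = 0.10 + 0.01 x, so r = 1
print(pearson_r(x, y))
print(np.corrcoef(x, y)[0, 1])           # reference value from NumPy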


The square of the sample correlation coefficient, which is also known as the coefficient of determination, is the fraction of the variance in y_i that is accounted for by a linear fit of x_i to y_i. This is written

r_{xy}^2 = 1 - \frac{s_{y|x}^2}{s_y^2},

where s_{y|x}^2 is the squared error of a linear regression of y_i on x_i by the equation y = a + bx:

s_{y|x}^2 = \frac{1}{n-1}\sum_{i=1}^{n} (y_i - a - b x_i)^2,

and s_y^2 is just the variance of y:

s_y^2 = \frac{1}{n-1}\sum_{i=1}^{n} (y_i - \bar{y})^2.

Note that since the sample correlation coefficient is symmetric in x_i and y_i, we will get the same value for a fit of y_i to x_i:

r_{xy}^2 = 1 - \frac{s_{x|y}^2}{s_x^2}.

This equation also gives an intuitive idea of the correlation coefficient for higher dimensions. Just as the above described sample correlation coefficient is the fraction of variance accounted for by the fit of a 1-dimensional linear submanifold to a set of 2-dimensional vectors (x_i, y_i), so we can define a correlation coefficient for a fit of an m-dimensional linear submanifold to a set of n-dimensional vectors. For example, if we fit a plane z = a + bx + cy to a set of data (x_i, y_i, z_i), then the correlation coefficient of z to x and y is

r^2 = 1 - \frac{s_{z|xy}^2}{s_z^2}.
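A short sketch of this multivariate case, assuming NumPy and synthetic data chosen purely for illustration: fit the plane z = a + bx + cy by least squares and evaluate r^2 from the residuals as in the formula above.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = rng.normal(size=200)
z = 2.0 + 1.5 * x - 0.5 * y + rng.normal(scale=0.3, size=200)   # plane plus noise

A = np.column_stack([np.ones_like(x), x, y])    # design matrix for a + b*x + c*y
coef, *_ = np.linalg.lstsq(A, z, rcond=None)    # least-squares fit of the plane
resid = z - A @ coef

r2 = 1.0 - resid.var(ddof=1) / z.var(ddof=1)    # r^2 = 1 - s_{z|xy}^2 / s_z^2
print(r2)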

The distribution of the correlation coefficient has been examined by R. A. Fisher[1][2] and A. K. Gayen.[3]


Geometric interpretation of correlation

The correlation coefficient can also be viewed as the cosine of the angle between the two vectors of samples drawn from the two random variables.


Caution: This method only works with centered data, i.e., data which have been shifted by the sample mean so as to have an average of zero. Some practitioners prefer an uncentered (non-Pearson-compliant) correlation coefficient. See the example below for a comparison.


As an example, suppose five countries are found to have gross national products of 1, 2, 3, 5, and 8 billion dollars, respectively. Suppose these same five countries (in the same order) are found to have 11%, 12%, 13%, 15%, and 18% poverty. Then let x and y be ordered 5-element vectors containing the above data: x = (1, 2, 3, 5, 8) and y = (0.11, 0.12, 0.13, 0.15, 0.18).


By the usual procedure for finding the angle between two vectors (see dot product), the uncentered correlation coefficient is:

\cos\theta = \frac{\mathbf{x}\cdot\mathbf{y}}{\left\|\mathbf{x}\right\|\,\left\|\mathbf{y}\right\|} = \frac{2.93}{\sqrt{103}\,\sqrt{0.0983}} = 0.920814711.

Note that the above data were deliberately chosen to be perfectly correlated: y = 0.10 + 0.01 x. The Pearson correlation coefficient must therefore be exactly one. Centering the data (shifting x by E(x) = 3.8 and y by E(y) = 0.138) yields x = (−2.8, −1.8, −0.8, 1.2, 4.2) and y = (−0.028, −0.018, −0.008, 0.012, 0.042), from which

\cos\theta = \frac{\mathbf{x}\cdot\mathbf{y}}{\left\|\mathbf{x}\right\|\,\left\|\mathbf{y}\right\|} = \frac{0.308}{\sqrt{30.8}\,\sqrt{0.00308}} = 1 = \rho_{xy},

as expected.
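The worked example above can be reproduced in a few lines of Python (assuming NumPy); cosine is a hypothetical helper defined for this sketch:

import numpy as np

x = np.array([1, 2, 3, 5, 8], dtype=float)
y = np.array([0.11, 0.12, 0.13, 0.15, 0.18])

def cosine(u, v):
    # cosine of the angle between two vectors
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(x, y))                         # ~0.9208, the uncentered coefficient
print(cosine(x - x.mean(), y - y.mean()))   # 1.0, the Pearson correlation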


Motivation for the form of the coefficient of correlation

Another motivation for correlation comes from inspecting the method of simple linear regression. As above, X is the vector of independent variables, x_i, and Y of the dependent variables, y_i, and a simple linear relationship between X and Y is sought, through a least-squares method on the estimate of Y:

Y = X\beta + \varepsilon.

Then, the equation of the least-squares line can be derived to be of the form:

(Y - \bar{Y}) = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2}\,(X - \bar{X})

which can be rearranged in the form:

(Y - \bar{Y}) = \frac{r s_y}{s_x}\,(X - \bar{X})

where r has the familiar form given above:

r = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n\sum x_i^2 - \left(\sum x_i\right)^2}\;\sqrt{n\sum y_i^2 - \left(\sum y_i\right)^2}}.
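The identity between the least-squares slope and r s_y/s_x is easy to confirm numerically; the following sketch (assuming NumPy, with arbitrary synthetic data) compares the fitted slope with r s_y/s_x:

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 0.7 * x + rng.normal(scale=0.5, size=500)

slope = np.polyfit(x, y, deg=1)[0]              # least-squares slope of y on x
r = np.corrcoef(x, y)[0, 1]
print(slope)
print(r * y.std(ddof=1) / x.std(ddof=1))        # agrees with the slope above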


Interpretation of the size of a correlation

Correlation   Negative         Positive
Small         −0.3 to −0.1     0.1 to 0.3
Medium        −0.5 to −0.3     0.3 to 0.5
Large         −1.0 to −0.5     0.5 to 1.0

Several authors have offered guidelines for the interpretation of a correlation coefficient. Cohen (1988),[4] for example, has suggested the interpretations for correlations in psychological research shown in the table above.


As Cohen himself has observed, however, all such criteria are in some ways arbitrary and should not be observed too strictly. This is because the interpretation of a correlation coefficient depends on the context and purposes. A correlation of 0.9 may be very low if one is verifying a physical law using high-quality instruments, but may be regarded as very high in the social sciences where there may be a greater contribution from complicating factors.


Along this vein, it is important to remember that "large" and "small" should not be taken as synonyms for "good" and "bad" when assessing a correlation of a given size. For example, a correlation of 1.0 or −1.0 indicates that the two variables analyzed are related by an exact linear transformation. Scientifically, this more frequently indicates a trivial result than an earth-shattering one. For example, consider discovering a correlation of 1.0 between how many feet tall a group of people are and the number of inches from the bottom of their feet to the top of their heads.


Non-parametric correlation coefficients

Pearson's correlation coefficient is a parametric statistic, and when distributions are not normal it may be less useful than non-parametric correlation methods such as chi-square, point biserial correlation, Spearman's ρ and Kendall's τ. They are a little less powerful than parametric methods if the assumptions underlying the latter are met, but are less likely to give distorted results when the assumptions fail.
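For instance, a monotonic but nonlinear relationship leaves Spearman's ρ at 1 while the Pearson coefficient falls below 1. A brief sketch, assuming SciPy and NumPy are available:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, size=1000)
y = np.exp(5.0 * x)                  # monotonic in x, but strongly nonlinear

print(stats.pearsonr(x, y)[0])       # noticeably less than 1
print(stats.spearmanr(x, y)[0])      # 1.0: the ranks agree perfectly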


Other measures of dependence among random variables

To get a measure for more general dependencies in the data (also nonlinear), it is better to use the correlation ratio, which is able to detect almost any functional dependency, or the entropy-based mutual information/total correlation, which is capable of detecting even more general dependencies. The latter are sometimes referred to as multi-moment correlation measures, in comparison to those that consider only second-moment (pairwise or quadratic) dependence.
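To make this concrete, the sketch below estimates mutual information from a crude 2-D histogram (a plug-in estimate; the bin count and data are illustrative only, and NumPy is assumed). For Y = X^2 the Pearson correlation is near zero, yet the estimated mutual information is clearly positive, reflecting the nonlinear dependence.

import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, size=100_000)
y = x ** 2

counts, _, _ = np.histogram2d(x, y, bins=20)
pxy = counts / counts.sum()                   # estimated joint distribution
px = pxy.sum(axis=1, keepdims=True)           # marginal of x
py = pxy.sum(axis=0, keepdims=True)           # marginal of y
nz = pxy > 0
mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))   # mutual information, in nats

print(np.corrcoef(x, y)[0, 1])   # approximately 0
print(mi)                        # clearly greater than 0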


The polychoric correlation is another correlation applied to ordinal data that aims to estimate the correlation between theorised latent variables.


Copulas and correlation

The information given by a correlation coefficient is not enough to define the dependence structure between random variables; to capture it fully we must consider a copula between them. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the joint distribution is multivariate normal. In the case of elliptical distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence).


Correlation matrices

The correlation matrix of n random variables X_1, ..., X_n is the n × n matrix whose (i, j) entry is corr(X_i, X_j). If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables X_i/SD(X_i) for i = 1, ..., n. Consequently it is necessarily a positive-semidefinite matrix.


The correlation matrix is symmetric because the correlation between Xi and Xj is the same as the correlation between Xj and Xi.
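A quick empirical check of these properties, assuming NumPy and synthetic data chosen only for illustration:

import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(size=(1000, 4))                       # 1000 samples of 4 variables
data[:, 3] = data[:, 0] + 0.1 * rng.normal(size=1000)   # make two variables strongly correlated

R = np.corrcoef(data, rowvar=False)                     # 4 x 4 correlation matrix
print(np.allclose(R, R.T))                              # True: symmetric
print(np.allclose(np.diag(R), 1.0))                     # True: unit diagonal
print(np.linalg.eigvalsh(R).min() >= -1e-10)            # True: positive semi-definite (numerically)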


Removing correlation

It is always possible to remove the correlation between zero-mean random variables with a linear transform, even if the relationship between the variables is nonlinear. Suppose a vector of n random variables is sampled m times. Let X be a matrix where X_{i,j} is the jth variable of sample i, and let Z_{r,c} be an r × c matrix with every element equal to 1. Then D is the data transformed so that every random variable has zero mean, and T is the data transformed so that all variables have zero mean, unit variance, and zero correlation with all other variables. The transformed variables will be uncorrelated, even though they may not be independent.

D = X - \frac{1}{m} Z_{m,m} X
T = D \left(D^T D\right)^{-\frac{1}{2}}

where an exponent of −1/2 represents the matrix square root of the inverse of a matrix. The covariance matrix of T will be the identity matrix. If a new data sample x is a row vector of n elements, then the same transform can be applied to x to get the transformed vectors d and t:

d = x - \frac{1}{m} Z_{1,m} X
t = d \left(D^T D\right)^{-\frac{1}{2}}.
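A minimal sketch of this decorrelating transform, assuming NumPy; the inverse symmetric square root of D^T D is computed here via an eigendecomposition, which is one of several equivalent ways to evaluate (D^T D)^{-1/2}:

import numpy as np

rng = np.random.default_rng(6)
m, n = 1000, 3
X = rng.normal(size=(m, n))
X[:, 2] = X[:, 0] + 0.5 * X[:, 1]              # introduce correlation between columns

D = X - X.mean(axis=0)                         # same effect as X - (1/m) Z_{m,m} X

evals, evecs = np.linalg.eigh(D.T @ D)         # eigendecomposition of D^T D
inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
T = D @ inv_sqrt                               # T = D (D^T D)^{-1/2}

print(np.allclose(T.T @ T, np.eye(n)))         # True: columns of T are uncorrelated and normalized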

Common misconceptions about correlation

Correlation and causality

The conventional dictum that "correlation does not imply causation" means that correlation cannot be validly used to infer a causal relationship between the variables. This dictum should not be taken to mean that correlations cannot indicate causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown. Consequently, establishing a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction).


Here is a simple example: hot weather may cause both a reduction in purchases of warm clothing and an increase in ice-cream purchases. Therefore warm clothing purchases are correlated with ice-cream purchases. But a reduction in warm clothing purchases does not cause ice-cream purchases and ice-cream purchases do not cause a reduction in warm clothing purchases.


A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health? Or does good health lead to good mood? Or does some other factor underlie both? Or is it pure coincidence? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.


Correlation and linearity

Figure: Four sets of data with the same correlation of 0.81.

While Pearson correlation indicates the strength of a linear relationship between two variables, its value alone may not be sufficient to evaluate this relationship, especially in the case where the assumption of normality is incorrect.


The figure above shows scatterplots of Anscombe's quartet, a set of four different pairs of variables created by Francis Anscombe.[5] The four y variables have the same mean (7.5), variance (4.12), correlation with x (0.81) and regression line (y = 3 + 0.5x). However, as can be seen from the plots, the distributions of the variables are very different. The first one (top left) appears to be distributed normally, and corresponds to what one would expect when considering two correlated variables that satisfy the assumption of normality. The second one (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear, and the Pearson correlation coefficient is not relevant. In the third case (bottom left), the linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation coefficient from 1 to 0.81. Finally, the fourth example (bottom right) shows another case in which one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear.


These examples indicate that the correlation coefficient, as a summary statistic, cannot replace the individual examination of the data.
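A toy numerical illustration of the outlier effect (not Anscombe's actual data, just synthetic points, assuming NumPy): a single perturbed point pulls a perfect correlation of 1 down to roughly 0.84.

import numpy as np

x = np.arange(1, 11, dtype=float)
y = 3.0 + 0.5 * x                    # perfectly linear, so r = 1
print(np.corrcoef(x, y)[0, 1])       # 1.0

y_out = y.copy()
y_out[-1] += 6.0                     # move one point off the line
print(np.corrcoef(x, y_out)[0, 1])   # about 0.84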


Computing correlation accurately in a single pass

The following algorithm (in pseudocode) will calculate Pearson correlation with good numerical stability.

sum_sq_x = 0
sum_sq_y = 0
sum_coproduct = 0
mean_x = x[1]
mean_y = y[1]
for i in 2 to N:
    sweep = (i - 1.0) / i
    delta_x = x[i] - mean_x
    delta_y = y[i] - mean_y
    sum_sq_x += delta_x * delta_x * sweep
    sum_sq_y += delta_y * delta_y * sweep
    sum_coproduct += delta_x * delta_y * sweep
    mean_x += delta_x / i
    mean_y += delta_y / i
pop_sd_x = sqrt( sum_sq_x / N )
pop_sd_y = sqrt( sum_sq_y / N )
cov_x_y = sum_coproduct / N
correlation = cov_x_y / (pop_sd_x * pop_sd_y)
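A direct Python translation of the pseudocode, offered as a sketch (the function name single_pass_correlation is ours; equal-length, non-empty input sequences are assumed):

from math import sqrt

def single_pass_correlation(x, y):
    n = len(x)
    mean_x, mean_y = x[0], y[0]
    sum_sq_x = sum_sq_y = sum_coproduct = 0.0
    for i in range(2, n + 1):
        sweep = (i - 1.0) / i
        delta_x = x[i - 1] - mean_x          # x[i] in the 1-based pseudocode
        delta_y = y[i - 1] - mean_y
        sum_sq_x += delta_x * delta_x * sweep
        sum_sq_y += delta_y * delta_y * sweep
        sum_coproduct += delta_x * delta_y * sweep
        mean_x += delta_x / i
        mean_y += delta_y / i
    pop_sd_x = sqrt(sum_sq_x / n)
    pop_sd_y = sqrt(sum_sq_y / n)
    return (sum_coproduct / n) / (pop_sd_x * pop_sd_y)

print(single_pass_correlation([1, 2, 3, 5, 8], [0.11, 0.12, 0.13, 0.15, 0.18]))   # 1.0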

See also

  • Autocorrelation
  • Association (statistics)
  • Cross-correlation
  • Coefficient of determination
  • Fraction of variance unexplained
  • Rank correlation
  • Pearson product-moment correlation coefficient
  • Point-biserial correlation coefficient
  • Partial correlation
  • Spearman's rank correlation coefficient
  • Statistical arbitrage
  • Currency correlation

Notes and references

  1. ^ R. A. Fisher (1915). "Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population". Biometrika 10: 507–521. 
  2. ^ R. A. Fisher (1921). "On the probable error of a coefficient of correlation deduced from a small sample". Metron. 
  3. ^ A. K. Gayen (1951). "The frequency distribution of the product moment correlation coefficient in random samples of any size drawn from non-normal universes". Biometrika 38: 219–247.
  4. ^ Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.) Hillsdale, NJ: Lawrence Erlbaum Associates. ISBN 0-8058-0283-5.
  5. ^ Anscombe, Francis J. (1973) Graphs in statistical analysis. American Statistician, 27, 17–21.


Further reading

  • Cohen, J., Cohen P., West, S.G., & Aiken, L.S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. (3rd ed.) Hillsdale, NJ: Lawrence Erlbaum Associates.

