One may be faced with the problem of making a definite decision about an uncertain hypothesis that is known only through its observable consequences. A **statistical hypothesis test**, or more briefly a *hypothesis test*, is a procedure for deciding for or against the hypothesis in a way that controls certain risks of error.

This article describes the commonly used frequentist treatment of hypothesis testing. From the Bayesian point of view, it is appropriate to treat hypothesis testing as a special case of normative decision theory (specifically, a model selection problem), and it is possible to accumulate evidence in favor of (or against) a hypothesis using likelihood ratios known as Bayes factors.

Several preparations are made before the data are observed.

- The hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. For example:
  *The mean response to the treatment being tested is equal to the mean response to the placebo in the control group. Both responses have a normal distribution with this unknown mean and the same known standard deviation ... (value).*

- A test statistic must be chosen that summarizes the information in the sample that is relevant to the hypothesis. Ideally the test statistic is a sufficient statistic; a sufficient statistic of fixed dimension exists for all sample sizes if and only if the distribution belongs to an exponential family (the Pitman–Koopman–Darmois theorem). In the example given above, it might be the numerical difference between the two sample means,
*m*_{1} − *m*_{2}.

- The distribution of the test statistic is used to calculate the probabilities of sets of its possible values (usually an interval or a union of intervals). In this example, the difference between the sample means would have a normal distribution with a standard deviation equal to the common standard deviation times the factor √(1/*n*_{1} + 1/*n*_{2}), where *n*_{1} and *n*_{2} are the sample sizes.

- Among all the sets of possible values, we must choose one that we think represents the most extreme evidence
**against** the hypothesis. That set is called the **critical region** of the test statistic. The probability that the test statistic falls in the critical region when the hypothesis is correct is called the **alpha** value (or **size**) of the test.

After the data are available, the test statistic is calculated and we determine whether it falls inside the critical region. If it does, the conclusion is one of the following:

- The hypothesis is incorrect; therefore, reject the null hypothesis.
- An event of probability less than or equal to *alpha* has occurred.

The researcher must choose between these logical alternatives. In the example, we would say: the observed response to the treatment is statistically significant.

If the test statistic falls outside the critical region, the only conclusion is that *there is not enough evidence to reject the hypothesis*. This is **not** the same as evidence in favor of the hypothesis: lack of evidence against a hypothesis is not evidence for it. On this basis, statistical research progresses by eliminating error, not by *finding the truth*.
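The procedure above can be sketched as a short program. This is a minimal illustration, not a general-purpose implementation: it assumes the two-sample setting of the running example, with a known common standard deviation and a two-sided critical region defined by the p-value; the function names and sample figures are invented for the sketch.

```python
from math import sqrt, erf

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_sample_z_test(mean1, mean2, n1, n2, sigma, alpha=0.05):
    """Test the hypothesis that two population means are equal,
    given a known common standard deviation sigma."""
    # Standard deviation of the difference of the sample means:
    # sigma * sqrt(1/n1 + 1/n2)
    se = sigma * sqrt(1.0 / n1 + 1.0 / n2)
    z = (mean1 - mean2) / se              # the test statistic
    # Probability, under the hypothesis, of a statistic at least
    # this extreme in either direction (two-sided critical region).
    p_value = 2.0 * (1.0 - phi(abs(z)))
    reject = p_value <= alpha             # inside the critical region?
    return z, p_value, reject

# Hypothetical data: treatment vs. placebo group means.
z, p, reject = two_sample_z_test(mean1=10.3, mean2=9.5,
                                 n1=50, n2=50, sigma=2.0)
```

Note that `reject == False` would only mean there is not enough evidence against the hypothesis at the chosen size, in line with the discussion above.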
## See also
- falsifiability
- statistical theory
- applied statistics
- null hypothesis