For example, one may choose a significance level of, say, 5%, and calculate a critical value of a statistic (such as the mean) such that the probability of the statistic exceeding that value, given the truth of the null hypothesis, is 5%. If the actual calculated value of the statistic exceeds the critical value, the result is significant "at the 5% level".
The smaller the significance level, the less likely a value is to be more extreme than the critical value. So a result which is "significant at the 1% level" is more significant than a result which is "significant at the 5% level". However, a test at the 1% level is more likely to commit a Type II error than a test at the 5% level, and so has less statistical power. In devising a hypothesis test, the tester aims to maximize power for a given significance level, but must ultimately recognise that the best which can be achieved is a balance between significance and power, in other words between the risks of Type I and Type II errors.
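The tradeoff above can be sketched numerically. The following is a rough illustration, not a prescription: it assumes a hypothetical one-sided z-test with a known standard deviation, and all numbers (sample size, true mean under the alternative) are made up for the example.

```python
from statistics import NormalDist

# Hypothetical one-sided z-test: H0 says the mean is 0, the population
# standard deviation is known to be 1, and the sample size is 25.
alpha_5, alpha_1 = 0.05, 0.01
se = 1 / 25 ** 0.5  # standard error of the sample mean

# Critical values: under H0 the sample mean exceeds these only 5%
# (respectively 1%) of the time.
crit_5 = NormalDist(0, se).inv_cdf(1 - alpha_5)
crit_1 = NormalDist(0, se).inv_cdf(1 - alpha_1)

# Power against an assumed true mean of 0.5: the probability that the
# sample mean clears the critical value when the alternative is true.
power_5 = 1 - NormalDist(0.5, se).cdf(crit_5)
power_1 = 1 - NormalDist(0.5, se).cdf(crit_1)

# The stricter 1% level demands a higher critical value, so it has
# lower power, i.e. a larger risk of a Type II error.
print(crit_5 < crit_1)    # True
print(power_5 > power_1)  # True
```

Tightening the significance level from 5% to 1% pushes the critical value up and, against the same alternative, cuts the power, which is exactly the balance described above.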
The first part simplifies the concept of statistical significance as much as possible, so that non-technical readers can use the concept to help make decisions based on their data.
In contrast, the high significance level for type of vehicle (.001, or 99.9%) indicates that there is almost certainly a true difference in purchases of Brand X by owners of different vehicles in the population from which the sample was drawn.
While this logic passes the common-sense test, the mathematics behind statistical significance does not actually guarantee that 1 − p gives the exact probability that there is a difference in the population.
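The point can be made concrete with a small sketch. The p-value measures the probability of data at least this extreme given the null hypothesis, not the probability of the null hypothesis given the data; converting one into the other requires a prior and a specific alternative, via Bayes' rule. All numbers below (the observed statistic, the prior, the alternative mean) are hypothetical, chosen only to show that the two quantities differ.

```python
from statistics import NormalDist

z = 2.0                               # observed z statistic (hypothetical)
p = 2 * (1 - NormalDist().cdf(z))     # two-sided p-value, about 0.046

# Assume a 50/50 prior and an alternative hypothesis with mean 2.
prior_h0 = 0.5
like_h0 = NormalDist(0, 1).pdf(z)     # likelihood of z under H0
like_h1 = NormalDist(2, 1).pdf(z)     # likelihood of z under the alternative
post_h0 = (prior_h0 * like_h0) / (prior_h0 * like_h0 + (1 - prior_h0) * like_h1)

# 1 - p is not the probability that a real difference exists:
# here 1 - p is about 0.95, while the posterior probability of a
# difference under these assumptions is about 0.88.
print(round(1 - p, 3))
print(round(1 - post_h0, 3))
```

Under different priors or alternatives the posterior changes while the p-value does not, which is why 1 − p cannot in general be read as "the probability of a true difference".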
The Wikipedia article included on this page is licensed under the
Images may be subject to relevant owners' copyright.
All other elements are (c) copyright NationMaster.com 2003-5. All Rights Reserved.
Usage implies agreement with terms.