Quantitative marketing research

Quantitative marketing research is a social research method that applies statistical techniques. It typically involves constructing questionnaires and scales and contacting large numbers of people, usually through a survey. Marketers use the information so obtained to craft strategies and develop marketing plans.


Scope and requirements

If quantitative marketing research is carried out correctly, both descriptive and inferential statistical techniques can be used to analyse the data and draw conclusions. It involves a large number of respondents, a specific hypothesis to test, and random sampling techniques that enable inference from the sample to the population.
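As a sketch of why random sampling licenses inference from sample to population, the following Python snippet draws a simple random sample from a hypothetical population of satisfaction scores (the data are illustrative, not from the source) and compares the two means:

```python
import random

# Hypothetical population of 10,000 customer satisfaction scores (1-5 scale).
random.seed(42)
population = [random.randint(1, 5) for _ in range(10_000)]

# Simple random sample: every member has an equal chance of selection,
# which is what justifies inference from the sample to the population.
sample = random.sample(population, k=400)

sample_mean = sum(sample) / len(sample)
population_mean = sum(population) / len(population)
print(f"sample mean: {sample_mean:.2f}, population mean: {population_mean:.2f}")
```

With a sample of 400, the sample mean will typically sit very close to the population mean, which is the behaviour inferential techniques rely on.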

General procedure

  1. Problem audit and problem definition - What is the problem? What are the various aspects of the problem? What information is needed?
  2. Conceptualization and operationalization - How exactly do we define the concepts involved? How do we translate these concepts into observable and measurable behaviours?
  3. Hypothesis specification - What claim(s) do we want to test?
  4. Research design specification - What type of methodology to use? - examples: questionnaire, survey
  5. Question specification - What questions to ask? In what order?
  6. Scale specification - How will preferences be rated?
  7. Sampling design specification - What is the total population? What sample size is necessary for this population? What sampling method to use?- examples: cluster sampling, stratified sampling, simple random sampling, multistage sampling, systematic sampling, nonprobability sampling
  8. Data collection - Use mail, telephone, Internet, mall intercepts. May be a custom survey, or added to an omnibus survey
  9. Codification and re-specification - Make adjustments to the raw data so it is compatible with statistical techniques and with the objectives of the research - examples: assigning numbers, consistency checks, substitutions, deletions, weighting, dummy variables, scale transformations, scale standardization
  10. Statistical analysis - Perform various descriptive and inferential techniques (see below) on the raw data. Make inferences from the sample to the whole population. Test the results for statistical significance.
  11. Interpret and integrate findings - What do the results mean? What conclusions can be drawn? How do these findings relate to similar research?
  12. Write the research report - The report usually has headings such as: 1) executive summary; 2) objectives; 3) methodology; 4) main findings; 5) detailed charts and diagrams. Present the report to the client in a 10-minute presentation. Be prepared for questions.
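Step 9 (codification and re-specification) can be sketched in Python; the field names and coding scheme below are illustrative assumptions, not from the source:

```python
# Hypothetical raw survey records, as they might arrive from data collection.
raw = [
    {"gender": "F", "satisfaction": "agree", "age": "34"},
    {"gender": "M", "satisfaction": "strongly agree", "age": ""},
    {"gender": "F", "satisfaction": "disagree", "age": "51"},
]

# Assigning numbers to a Likert-type response scale (a scale transformation).
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

coded = []
for rec in raw:
    row = {
        # Dummy variable: 1 if female, 0 otherwise.
        "female": 1 if rec["gender"] == "F" else 0,
        # Scale transformation: verbal response -> number.
        "satisfaction": LIKERT[rec["satisfaction"]],
        # Substitution/deletion: a blank age becomes an explicit missing value.
        "age": int(rec["age"]) if rec["age"] else None,
    }
    coded.append(row)

print(coded)
```

The point of the step is exactly this kind of adjustment: after coding, the data are compatible with the statistical techniques used later.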


Descriptive techniques

The descriptive techniques that are commonly used include:

  • cross tabulations (cross tabs), which display the joint distribution of two or more variables
  • measures of central tendency: mean, median, mode, and interquartile mean
  • measures of dispersion: standard deviation, range, interquartile range, and absolute deviation
  • measures of the shape of the distribution: skewness and kurtosis
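Most of these descriptive measures can be computed with Python's standard library statistics module; the scores below are illustrative:

```python
import statistics as st

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative sample of ratings

mean = st.mean(scores)                  # arithmetic mean
median = st.median(scores)              # middle value
mode = st.mode(scores)                  # most frequent value
stdev = st.stdev(scores)                # sample standard deviation
data_range = max(scores) - min(scores)  # range
q1, q2, q3 = st.quantiles(scores, n=4)  # quartile cut points
iqr = q3 - q1                           # interquartile range

print(mean, median, mode, stdev, data_range, iqr)
```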

Inferential techniques

Inferential techniques involve generalizing from a sample to the whole population and testing a hypothesis. The hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples, assuming the hypothesis is correct. A test statistic is then chosen to summarize the information in the sample that is relevant to the hypothesis. A null hypothesis is a hypothesis that is presumed true until a hypothesis test indicates otherwise. Typically it is a statement about a parameter, a property of the population, often a mean or a standard deviation.

Commonly, such a hypothesis states that the parameters, or mathematical characteristics, of two or more populations are identical. For example, if we want to compare the test scores of two random samples of men and women, the null hypothesis would be that the mean score in the male population from which the first sample was drawn is the same as the mean score in the female population from which the second sample was drawn:

H0: μ1 = μ2


where
H0 = the null hypothesis,
μ1 = the mean of population 1, and
μ2 = the mean of population 2.

The equals sign makes this a two-tailed test: under the alternative hypothesis, μ1 may be either greater than or less than μ2. In a one-tailed test, the operator is an inequality, and the alternative hypothesis has directionality:

H1: μ1 > μ2 (or H1: μ1 < μ2)

This is sometimes called a hypothesis of significant difference because it tests the difference between two groups with respect to one variable.
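A two-group test of H0: μ1 = μ2 can be sketched in plain Python. The scores below are illustrative, and a z statistic is used purely for simplicity (it presumes large samples or known variances; the two-group t test listed later is the usual choice for small samples):

```python
import math
import statistics as st

# Illustrative test scores for the two groups from the example in the text.
men = [72, 85, 78, 90, 66, 81, 77, 74, 88, 79]
women = [80, 83, 91, 76, 88, 85, 79, 92, 84, 87]

# Standard error of the difference between the two sample means.
se = math.sqrt(st.variance(men) / len(men) + st.variance(women) / len(women))
z = (st.mean(men) - st.mean(women)) / se

# Two-tailed test at the 5% level: reject H0 if |z| > 1.96.
reject = abs(z) > 1.96
print(f"z = {z:.2f}, reject H0: {reject}")
```

Here the difference in means is sizeable but the test statistic falls just short of the two-tailed critical value, so the null hypothesis is not rejected.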

Alternatively, the null hypothesis can postulate that the two samples are drawn from the same population:

H0: μ1 − μ2 = 0

A hypothesis of association, by contrast, involves a single population in which two traits are measured; it tests whether the two traits are associated within that one group.

The distribution of the test statistic is used to calculate the probability of sets of possible values (usually an interval or union of intervals). Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis; this is called the critical region of the test statistic. The probability of the test statistic falling in the critical region when the hypothesis is correct is called the alpha value of the test. Once the data are available, the test statistic is calculated and we determine whether it falls inside the critical region. If it does, the conclusion is either that the hypothesis is incorrect or that an event of probability less than or equal to alpha has occurred. If the test statistic is outside the critical region, the conclusion is that there is not enough evidence to reject the hypothesis.

The significance level of a test is the maximum probability of accidentally rejecting a true null hypothesis (a decision known as a Type I error). For example, one may choose a significance level of, say, 5%, and calculate a critical value of a statistic (such as the mean) so that the probability of it exceeding that value, given the truth of the null hypothesis, is 5%. If the actual calculated statistic value exceeds the critical value, the result is significant "at the 5% level".
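As a short sketch of how a chosen significance level translates into a critical value (assuming a normally distributed test statistic), Python's standard library can invert the normal CDF directly:

```python
from statistics import NormalDist

alpha = 0.05  # chosen significance level

# One-tailed critical value: P(Z > z_crit) = alpha under the null hypothesis.
z_one_tailed = NormalDist().inv_cdf(1 - alpha)

# Two-tailed critical value: alpha is split equally between the two tails.
z_two_tailed = NormalDist().inv_cdf(1 - alpha / 2)

print(f"one-tailed: {z_one_tailed:.3f}, two-tailed: {z_two_tailed:.3f}")
```

These are the familiar 1.645 and 1.960 cut-offs for 5% tests on a standard normal statistic.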


Types of hypothesis tests

  • Parametric tests of a single sample:
  • Parametric tests of two independent samples:
    • two-group t test
    • z test
  • Parametric tests of paired samples:
    • paired t test
  • Nominal/ordinal level test of a single sample:
    • chi-square
    • Kolmogorov-Smirnov one sample test
    • runs test
    • binomial test
  • Nominal/ordinal level test of two independent samples:
    • chi-square
    • Mann-Whitney U
    • median test
    • Kolmogorov-Smirnov two sample test
  • Nominal/ordinal level test for paired samples:
    • Wilcoxon test
    • McNemar test
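As one illustration from the list above, a single-sample chi-square goodness-of-fit test can be computed in plain Python. The observed counts are hypothetical, and the critical value 7.815 is the standard table value for df = 3 at the 5% level:

```python
# H0: all four brands are equally preferred (a nominal-level hypothesis).
observed = [52, 41, 60, 47]  # hypothetical brand-choice counts
n = sum(observed)
expected = [n / len(observed)] * len(observed)

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for df = 3 at the 5% level (standard table value).
reject = chi_square > 7.815
print(f"chi-square = {chi_square:.2f}, reject H0: {reject}")
```

Here the departures from equal preference are small enough that the null hypothesis is not rejected.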


Reliability and validity

Research should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population.

Reliability is the extent to which a measure will produce consistent results. Test-retest reliability checks how similar the results are if the research is repeated under similar circumstances. Stability over repeated measures is assessed with the Pearson coefficient. Alternative forms reliability checks how similar the results are if the research is repeated using different forms. Internal consistency reliability checks how well the individual measures included in the research are converted into a composite measure. Internal consistency may be assessed by correlating performance on two halves of a test (split-half reliability). The value of the Pearson product-moment correlation coefficient is adjusted with the Spearman-Brown prediction formula to correspond to the correlation between two full-length tests. A commonly used measure is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. Reliability may be improved by increasing the sample size.
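Cronbach's α can be computed directly from its definition, α = k/(k−1) × (1 − Σ item variances / variance of totals). The item scores below are hypothetical:

```python
import statistics as st

# Hypothetical scores for 5 respondents on a 4-item scale;
# each inner list holds one item's scores across the respondents.
items = [
    [3, 4, 3, 3, 4],
    [4, 4, 3, 2, 4],
    [3, 5, 4, 2, 4],
    [4, 4, 4, 3, 5],
]

k = len(items)
totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
item_var = sum(st.variance(item) for item in items)
alpha = k / (k - 1) * (1 - item_var / st.variance(totals))
print(f"alpha = {alpha:.2f}")
```

Values of α around 0.7 or higher are conventionally read as acceptable internal consistency.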

Validity asks whether the research measured what it intended to. Content validation (also called face validity) checks how well the content of the research relates to the variables being studied: are the research questions representative of the variables being researched? It is a demonstration that the items of a test are drawn from the domain being measured. Criterion validation checks how meaningful the research criteria are relative to other possible criteria; when the criterion is collected later, the goal is to establish predictive validity. Construct validation checks what underlying construct is being measured. There are three variants of construct validity: convergent validity (how well the research relates to other measures of the same construct), discriminant validity (how poorly the research relates to measures of opposing constructs), and nomological validity (how well the research relates to other variables as required by theory).

Internal validation, used primarily in experimental research designs, checks the relation between the dependent and independent variables. Did the experimental manipulation of the independent variable actually cause the observed results? External validation checks whether the experimental results can be generalized.

Validity implies reliability: a valid measure must be reliable. But reliability does not necessarily imply validity: a reliable measure need not be valid.

Types of errors

Random sampling errors:

  • sample too small
  • sample not representative
  • inappropriate sampling method used
  • random errors

Research design errors:

  • bias introduced
  • measurement error
  • data analysis error
  • sampling frame error
  • population definition error
  • scaling error
  • question construction error

Interviewer errors:

  • recording errors
  • cheating errors
  • questioning errors
  • respondent selection error

Respondent errors:

  • non-response error
  • inability error
  • falsification error

Hypothesis errors:

  • type I error (also called alpha error)
    • the study results lead to the rejection of the null hypothesis even though it is actually true
  • type II error (also called beta error)
    • the study results lead to the acceptance (non-rejection) of the null hypothesis even though it is actually false
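The meaning of the alpha level can be checked by simulation: when the null hypothesis is actually true, a two-tailed test at the 5% level should commit a Type I error about 5% of the time. A sketch with synthetic data:

```python
import random
import statistics as st
from statistics import NormalDist

random.seed(0)
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value

# Simulate many studies where H0 is true (both groups drawn from the same
# population) and count how often the test wrongly rejects it.
false_rejections = 0
trials = 2000
for _ in range(trials):
    a = [random.gauss(100, 15) for _ in range(50)]
    b = [random.gauss(100, 15) for _ in range(50)]
    se = (st.variance(a) / 50 + st.variance(b) / 50) ** 0.5
    z = (st.mean(a) - st.mean(b)) / se
    if abs(z) > z_crit:
        false_rejections += 1

# The observed Type I error rate should sit close to alpha.
print(f"Type I error rate: {false_rejections / trials:.3f}")
```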



The Wikipedia article included on this page is licensed under the GFDL.