To reap the greatest dividends, survey research must be a well-thought-out process from beginning to end. Small missteps throughout the survey process can have large, sometimes irreversible, effects downstream. Thus, it is imperative that businesses understand not only how to analyze their data, but also how to identify and rectify potential pitfalls in their data during the pre-analysis stage. When these pitfalls go unnoticed, inaccurate conclusions may be drawn and human and/or monetary resources may be misallocated. Below are some common issues that you may face when preparing to analyze your survey data.
Adjusting for Non-Responses
When a sample is drawn from a list or a general population, basic demographic information about every individual is sometimes available prior to the actual survey. If the sample is not representative of the actual population, it may be fruitful to adjust your sample data to mirror the characteristics of the population as a whole. This can be done by weighting certain values in the sample. For example, if the population's gender ratio is 1:1 but you sampled 1 male for every 2 females, you may want to increase the weight of each male response to compensate. While weighting can be invaluable when making inferences about a larger population, the adjustments become complicated as the number of variables grows. When dealing with a large number of variables, it is wise to run your data through a statistical program to do the weighting.
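The gender example above can be sketched in a few lines of code. This is a minimal illustration of post-stratification weighting, not a full raking procedure; the counts, population proportions, and satisfaction scores are all hypothetical.

```python
# Hypothetical post-stratification weighting by gender.
# Population is assumed 50% male / 50% female; the sample has 1 male
# for every 2 females, so male responses are weighted up.

def poststrat_weights(sample_counts, population_props):
    """Weight for each group = population proportion / sample proportion."""
    n = sum(sample_counts.values())
    return {g: population_props[g] / (c / n) for g, c in sample_counts.items()}

sample_counts = {"male": 100, "female": 200}      # 1:2 in the sample
population_props = {"male": 0.5, "female": 0.5}   # 1:1 in the population

weights = poststrat_weights(sample_counts, population_props)
print(weights)  # {'male': 1.5, 'female': 0.75}

# Effect on a hypothetical 0-10 satisfaction score that differs by group:
group_means = {"male": 7.0, "female": 6.0}
weighted_mean = sum(
    weights[g] * sample_counts[g] * group_means[g] for g in sample_counts
) / sum(weights[g] * sample_counts[g] for g in sample_counts)
print(round(weighted_mean, 2))  # 6.5, versus an unweighted mean of about 6.33
```

With only one weighting variable the arithmetic is trivial; with many variables the cells multiply quickly, which is why dedicated statistical software is the safer route.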
Adjusting for Sampling Error
Estimates drawn from a simple random sample are the least susceptible to unaccounted-for sampling error, since the error is straightforward to quantify. In practice, however, deviations from simple random sampling are common. If your statistical program does not appropriately adjust for the design of your sample, the estimates of sampling error and the calculations for your statistical tests will likely be incorrect. For example, in cluster sampling, often used in market research, a population is divided into subgroups (clusters) and elements are sampled only from a selection of those clusters. Because only a limited number of clusters are used, a high proportion of the population is left un-sampled, resulting in higher levels of sampling error.
Adjusting for Missing or Difficult-to-Code Responses
Missing answers or difficult-to-code answers can prove problematic in the analytical stage. As mentioned earlier, this issue often arises earlier in the survey process when the wording in the stem of a question is confusing. A low level of non-response, less than 5%, is usually so inconsequential that there is no discernible distortion in the survey results. As non-response rises, however, your estimates begin to be affected. Simply removing the non-response cases may not be the best choice, since this can reduce statistical power (the ability to find relationships when they are there). Instead, you could develop an imputation model that predicts the most likely answer for each respondent who skipped a question.
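As a simple illustration of the imputation idea, the sketch below fills each missing answer with the most common answer among respondents in the same demographic group. This is one of the simplest possible imputation models; the field names and records are hypothetical, and a production model would typically use more predictors.

```python
# Minimal group-mode imputation: predict a missing answer (None) from the
# most common answer given by respondents in the same demographic group.
# All field names and values here are hypothetical.

from collections import Counter

def impute_by_group(records, group_key, answer_key):
    """Return copies of records with missing answers filled by group mode."""
    modes = {}
    for r in records:
        if r[answer_key] is not None:
            modes.setdefault(r[group_key], Counter())[r[answer_key]] += 1
    filled = []
    for r in records:
        r = dict(r)
        if r[answer_key] is None and r[group_key] in modes:
            r[answer_key] = modes[r[group_key]].most_common(1)[0][0]
        filled.append(r)
    return filled

respondents = [
    {"age_group": "18-34", "q1": "yes"},
    {"age_group": "18-34", "q1": "yes"},
    {"age_group": "18-34", "q1": None},   # imputed as "yes"
    {"age_group": "35-54", "q1": "no"},
    {"age_group": "35-54", "q1": None},   # imputed as "no"
]
print([r["q1"] for r in impute_by_group(respondents, "age_group", "q1")])
# ['yes', 'yes', 'yes', 'no', 'no']
```

Unlike dropping the incomplete cases, this keeps all five respondents in the analysis, preserving statistical power at the cost of some modeling assumptions.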
Do you have more questions about these pre-analysis techniques? A seasoned expert in analytics and survey design from www.strategicdb.com will use the right techniques, and more, to ensure your business questions are fully answered. Visit www.strategicdb.com, call 1-877-332-4923, or email us at firstname.lastname@example.org today — you will be glad you did!