by Vincent McDee
I should begin by reminding you of the quote “lies, damned lies and statistics”, which is of disputed origin but reflects an eternal truth: nobody with a cause to maintain ever lacked figures with which to do it.
And there’s no better tool for producing them than the famous weighting.
Normally, pollsters mistrust the answers given by respondents. Many scientific studies have shown that people don’t like to speak of their voting intentions, as most find it a very personal issue, but out of politeness they give a response nevertheless. Corrections need to be made to adjust for this, and to ensure accuracy in the interpretation of the results.
That’s why we still need formal elections: if the results of a poll could be trusted, a lot of money could be saved. And that’s why there are normally differences between polling figures and election outcomes.
Before explaining weighting, margin of error needs to be understood. Margin of error is a common measurement of sampling error, referred to regularly in the media, which quantifies uncertainty about a survey result.
Technically, the margin of error is a statistical expression of the amount of random sampling error in a survey’s results. The larger the margin of error, the less faith one should have that the poll’s reported results are close to the true figures; that is, the figures for the whole population. Margin of error occurs whenever a population is incompletely sampled.
Surveys are typically designed to provide an estimate of the true value of one or more characteristics of a population, at a given time. Data in a survey are collected from only some, but not all, members of the population. This makes data collection cheaper and faster.
But results in the sample can differ from a target population quantity (e.g. Scotland’s voters), simply due to the luck of the draw. Some groups or segments of the population can be over- or under-represented in the sample.
For the interested, three factors affect the margin of error: sample size, the type of sampling done, and the size of the population.
An important factor in determining the margin of error is the size of the sample. Larger samples are more likely to yield results close to the target population quantity and thus have smaller margins of error than more modest-sized samples.
In sampling to estimate a population proportion, such as in telephone polls, a sample of size 100 will produce a margin of error of no more than about 10 percent, a sample of size 500 will produce a margin of error of no more than about 4.5 percent, and a sample of size 1,000 will produce a margin of error of no more than about 3 percent. This illustrates that there are diminishing returns when trying to reduce the margin of error by increasing the sample size. For example, to reduce the margin of error to 1.5% would require a sample size of well over 4,000.
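These figures follow from the standard formula for a 95% confidence interval around a proportion, taken at its most conservative value (p = 0.5). A minimal sketch in Python, using the sample sizes above:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Conservative 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1000, 4269):
    print(f"n = {n:>5}: about ±{margin_of_error(n) * 100:.1f}%")
# n =   100: about ±9.8%
# n =   500: about ±4.4%
# n =  1000: about ±3.1%
# n =  4269: about ±1.5%
```

Note how quadrupling the sample only halves the margin of error: the uncertainty shrinks with the square root of n, which is the diminishing return mentioned above.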
These percentages can go both ways, plus or minus.
A misleading feature of most current media stories on political polls is that they report the margin of error associated with the proportion favouring one candidate, not the margin of error of the lead of one candidate over another.
To illustrate the problem, suppose one poll finds that Mr. Jones has 45 percent support, Ms. Smith has 41 percent support, 14 percent are undecided, and there is a 3 percent margin of error for each category. Mr. Jones might have anywhere from 42 percent to 48 percent support in the voting population and Ms. Smith might have anywhere from 38 percent to 44 percent support.
It would not be terribly surprising then for another poll to report anything from a 10-point lead for Mr. Jones (such as 48 percent to 38 percent) to a 2-point lead for Ms. Smith (such as 44 percent to 42 percent).
In more technical terms, a law of probability dictates that the difference between two uncertain proportions (e.g. the lead of one candidate over another in a political poll in which both are estimated) has more uncertainty associated with it than either proportion alone.
Accordingly, the margin of error associated with the lead of one candidate over another should be larger than the margin of error associated with a single proportion, which is what media reports typically mention – thus the need to keep your eye on what’s being estimated.
Something that helps in assessing non-sampling uncertainties, when available, is the percentage of respondents who answer “don’t know” or “undecided”.
Be wary when these quantities are not given. Almost always there are people who have not made up their mind. How these cases are handled can make a big difference. Simply splitting them in proportion to the views of those who gave an opinion can be misleading in some settings.
Overall non-response in surveys has been growing in recent years and is increasingly a consideration in the uncertainty of reported results.
Until media organizations get their reporting practices in line with actual variation in results across political polls, a rule of thumb is to multiply the currently reported margin of error by 1.7 to obtain a more accurate estimate of the margin of error for the lead of one candidate over another. Thus, a reported 3 percent margin of error becomes about 5 percent and a reported 4 percent margin of error becomes about 7 percent when the size of the lead is being considered.
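That rule of thumb is a heuristic, not an exact constant, but it is easy to apply. A quick sketch:

```python
def lead_margin(reported_moe, factor=1.7):
    """Rule-of-thumb margin of error for the lead of one candidate
    over another, given the reported single-proportion margin."""
    return reported_moe * factor

print(f"reported 3% -> about {lead_margin(3):.0f}% on the lead")
print(f"reported 4% -> about {lead_margin(4):.0f}% on the lead")
# reported 3% -> about 5% on the lead
# reported 4% -> about 7% on the lead
```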
The size of the population is, funnily enough, one factor that generally has little influence on the margin of error. That is, a sample size of 100 in a population of 10,000 will have almost the same margin of error as a sample size of 100 in a population of 10 million.
Now, let’s look at sampling. Three common types are simple random sampling, random digit dialling, and stratified sampling (the most common in telephone polls nowadays).
A simple random sampling design is one in which every sample of a given size is equally likely to be chosen. In this case, individuals might be selected into the sample by a randomizing device which gives each individual a chance of selection. Computers are often used to simulate a random stream of numbers to support this effort.
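A minimal sketch of such a design, using a hypothetical population of 10,000 numbered voters:

```python
import random

# Hypothetical population: 10,000 voters identified by number
population = range(10_000)

# random.sample draws without replacement, so every possible
# sample of 100 distinct voters is equally likely to be chosen
sample = random.sample(population, k=100)

print(len(sample), len(set(sample)))  # 100 100
```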
Telephone surveys that attempt to reach not only people with listed phone numbers, but also people with unlisted numbers, often rely on the technique of random digit dialling.
Stratified sampling designs involve defining groups, or strata, based on characteristics known for everyone in the population, and then taking independent samples within each stratum. Such a design offers flexibility, and, depending on the nature of the strata, it can also improve the precision of estimates of target quantities (or, equivalently, reduce their margins of error).
Of the three types of probability sampling, stratified samples are especially advantageous when the aim of the survey is not necessarily to estimate the proportion of an entire population with a particular viewpoint, but instead is to estimate differences in viewpoints between different groups.
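A sketch of proportional stratified sampling, with two hypothetical strata whose sizes would in practice come from the census:

```python
import random

# Hypothetical strata with sizes known from a census
strata = {
    "urban": list(range(0, 6_000)),
    "rural": list(range(6_000, 10_000)),
}
total = sum(len(members) for members in strata.values())
sample_size = 100

# Proportional allocation: each stratum receives a share of the
# sample matching its population share, then an independent
# simple random sample is drawn within it.
stratified_sample = {
    name: random.sample(members, k=round(sample_size * len(members) / total))
    for name, members in strata.items()
}

print({name: len(drawn) for name, drawn in stratified_sample.items()})
# {'urban': 60, 'rural': 40}
```

Because each stratum is sampled independently, the design also yields direct estimates for each group, which is where its advantage for comparing viewpoints comes from.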
Sometimes samples are drawn in clusters, in which only a few counties or cities are sampled, or the interviewer visits only a few blocks. This tends to increase the margin of error and should be taken into account by whoever calculates sampling error.
The two most common types of survey weights are design weights and post-stratification (or non-response) weights.
Design weights are normally used to compensate for over- (or under-) sampling of specific cases, or for disproportionate stratification, when we want the statistic to be representative of the population (whose characteristics we know through the census).
For example, if we run a survey in an area with a larger than standard share of minority groups, let’s say double, each case in that area would get a design weight of 1/2, or 0.5.
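The arithmetic behind that example, sketched with hypothetical shares (a minority group forming 10% of the population but 20% of the sample):

```python
def design_weight(population_share, sample_share):
    """Design weight = population share / sample share, so that
    over-sampled cases count for less and under-sampled for more."""
    return population_share / sample_share

# The group was sampled at double its population rate,
# so each of its cases is weighted down to a half
print(design_weight(0.10, 0.20))  # 0.5
```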
The post-stratification or non-response type is used to compensate for the fact I mentioned above: that persons with certain characteristics are not as likely to respond, or, if they do respond, are more likely to give a misleading reply. These characteristics can relate to age, education, residence (rural, urban, regional, etc.), gender, race/ethnicity or any of the other population categories found in the census.
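A sketch with hypothetical census and respondent shares by age band (the categories and numbers are invented for illustration):

```python
# Hypothetical shares by age band: what the census says the
# population looks like vs. who actually answered the survey
census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
respondent_share = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Each respondent in a cell is weighted by census share divided by
# respondent share: groups that answered less often count for more
weights = {cell: census_share[cell] / respondent_share[cell]
           for cell in census_share}

print(weights)  # young respondents get weight 2.0, over-55s about 0.7
```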
Finding good stratifications for the population characteristics is sometimes a challenge if you want an honest result, or a means to an end if you don’t.
And then there is the Unionist pet reporter’s bias, capable of ignoring that 65% of respondents want a referendum and headlining ‘More than one third oppose referendum’.