With the emergence of online data collection, market, consumer and policy research has become much quicker and cheaper. But what about the quality of online surveys? In this document we focus on three questions: What is the size and distribution of the response and non-response for the various data collection methods, and do these distort the data? How reliable and valid are the data collected through online surveys? And finally, do online surveys yield sufficiently representative data? The quality of online surveys has been directly compared to telephone (CATI) and CAPI (Computer Aided Personal Interviewing) surveys. One of the most important conclusions is that, on many points, data collected through online surveys are of a higher quality than data collected through telephone and CAPI surveys. There are doubts, however, about the representativeness of data collected through online surveys on the specific subject of Internet use.
The quality of surveys that collect data online is under discussion. The criticism concentrates on a number of points. Firstly, online surveys often use a panel that rewards respondents for their participation. Critics claim that these respondents mostly cooperate for the reward and are not sufficiently involved with the research, so that the data quality will be worse than that of, for example, telephone surveys. Secondly, critics doubt the representativeness of the data from online surveys. The use of specific panels is said to distort the data, as respondents who signed up to these panels are not representative of the general Dutch population: only people with an Internet connection can take part.
Despite these question marks regarding quality, practice shows that online data collection is used more and more, at the expense of telephone and CAPI surveys. This development seems unstoppable and is driven by the enormous advantages of online data collection. Telephone interviews cause increasing irritation among consumers, as 'cold calling' is often not appreciated and people are frequently called at an inconvenient moment at work or at home. With online data collection, the people contacted have registered for so-called access panels and have explicitly indicated that they want to participate in surveys; they show hardly any irritation. Other important advantages of online data collection over telephone surveys are the relatively short completion times, the low costs and the possibility of adding photographs, logos and videos to a questionnaire.
In this document, we will present a parallel study by Research International, which compared the quality of online surveys with telephone and CAPI (Computer Aided Personal Interviewing) surveys. Various quality aspects of the three data collection methods were determined, such as the representativeness of the data, the data integrity and the (non-)response. The study shows that the response for online surveys is much higher than for telephone and CAPI surveys. The answers given in online and CAPI surveys are more reliable and trustworthy than those given in a telephone survey; online and CAPI surveys also score better than telephone surveys on other aspects of data integrity and reliability. With regard to representativeness, it can be concluded that all three data collection methods score well: the answers all point in the same direction, with the exception of specific questions about Internet use. Additional studies will have to show which data collection method gives the most representative results on this specific subject.
To determine the differences between the three data collection methods, identical questionnaires were presented to different groups. The field work took place in the period between 27 May and 20 August 2004. For the telephone survey, people were approached by phone at random. For the CAPI survey, people were approached on location or at work. The telephone and CAPI surveys were carried out by Research International. For the online data collection, Research International called in the help of the PanelClix panel.
The numbers of respondents who cooperated in this survey were as follows:
Segmentation was applied to the group of telephone respondents. Of these 291 respondents, 149 indicated that they had an Internet connection; the other 142 indicated that they did not. These two groups will from here on be designated as 'telephone online' and 'telephone offline' respectively.
The questionnaire consisted of the following parts:
Variations were introduced in the online questionnaire in the manner in which the statement and multiple-choice questions were presented. The 323 respondents in the online survey were presented with the following variations:
To test the representativeness of the data, the composition of the various groups of respondents (Internet, telephone and CAPI) was kept as similar as possible and representative of the Dutch population, based on socio-demographic characteristics. Segmentation was applied based on gender, age and region.
The segmentation of the groups in the CAPI and telephone (without Internet connection) surveys is not representative of Dutch society. This was caused by the fact that the CAPI survey was done on location, with an over-representation of the Randstad. During the field work, it also turned out that the penetration of telephone respondents without an Internet connection was very low, so that the desired segmentation based on gender, age and region could not be realized. A later reweighting of the data, which made the composition of all groups identical, showed almost no differences. For this reason, unweighted data were used to present the results.
Non-response has always been an important point of attention in the justification of research. As non-response increases, the chance of lower data quality grows: the data can become distorted if only a small percentage of the people approached participate in a survey. For surveys in specific policy areas, researchers often have to guarantee a high response rate before the research starts.
In this survey, the level of (non-)response was determined. The segmentation of the (non-)response was also measured for a number of socio-demographic variables.
The level of non-response was measured for both the online and telephone surveys. The following figure indicates that 909 people were approached for the online survey (left bar). Of these, 36% reacted in time and completed the questionnaire (323 respondents). Around 20% of the people approached did respond, but too late: the quotas, as mentioned in paragraph 2.3, were already full, so these willing respondents could no longer take part. In fact these people, even though they were too late, can be regarded as respondents, although we will never know whether they would all have completed the questionnaire. In the end, 44% of the total group did not respond at all.
The contrast with the telephone survey is enormous. Far more people had to be approached, 6,069 in total (see the right bar in the figure). Only a small percentage of these (5%) was willing to participate in the survey (291 respondents); around 36% refused and some 60% could not be reached.
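The response figures above follow from simple arithmetic. As an illustrative sketch (the counts are taken from this study; the function name is our own):

```python
def response_rate(approached, completed):
    """Response rate as a percentage of the number of people approached."""
    return round(100 * completed / approached, 1)

# Online survey: 909 people invited, 323 completed the questionnaire in time.
online = response_rate(909, 323)    # 35.5 -> the reported ~36%

# Telephone survey: 6,069 people approached, 291 participated.
phone = response_rate(6069, 291)    # 4.8 -> the reported ~5%

print(online, phone)
```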
The large difference in the level of response between the online and telephone surveys can be explained by the fact that online surveys use a panel whose members have already indicated that they are willing to participate in surveys. The fact that online respondents are free to decide when and where to fill in the questionnaire also contributes to a higher response. In addition, complex stratifications, with various segments and quotas within the group of respondents, can be applied much more easily in online surveys. For specific surveys that require a high response rate, online surveys are therefore often preferable.
For the online survey, the distribution of the (non-)response per age group, gender and education level was analyzed. A uniform segmentation of the (non-)response is desirable. On the one hand, the composition of the group of respondents is then comparable to the composition of the total sample (the group of people invited, which can be properly selected in advance). On the other hand, a uniform segmentation of the (non-)response means that there is a good chance that the same uniform segmentation applies to any other variable; the chance of distortion or coloration of the data as a result of an unequal (non-)response is then smaller.
The next three figures show the segmentation of the (non-)response. It should be noted that the response has been subdivided into respondents who reacted too late (semi-response: directly to the right of the vertical zero line) and respondents who completed the questionnaire (response: to the far right of the vertical zero line).
The segmentation of the (non-)response per age group (see below) shows that the non-response is higher for the 18-34 years category (20% of the total) than for the two ‘older’ groups (13% and 10% of the total). The response rates are almost equal for all age groups.
The following figure shows a schematic segmentation of the (non-)response for men and women. Both men and women responded equally well (27% and 28% respectively). There is only a slightly higher non-response rate with the men.
The third figure shows a schematic segmentation of the (non-)response per education level; here, too, the response is reasonably uniformly distributed.
We can conclude that the response for the online survey carried out is much higher than that for the telephone survey. The (non-)response for the online survey is also reasonably uniformly distributed over the three variables of age, gender and education; only the younger group (18-34 years) shows a higher non-response than the older participants.
For telephone and CAPI surveys, it is impossible to measure the segmentation of the non-response for socio-demographic parameters. Based on the composition of the response, it was determined afterwards that the (non-)response is less uniform. The first indication of this is the distorted distribution of the response for gender and age in the telephone and CAPI surveys (see the table in par. 2.3). The non-uniformity is confirmed again in the following segmentation for social class (A, B+, B-, C, D) of respondents, as shown in a later analysis of the three surveys.
It can be seen that in the segmentation for the telephone survey, respondents from social class B- were overrepresented, while the segmentation in the online and CAPI surveys was more representative. The researcher benefits from a uniform distribution of the (non-)response within the original sample group. This study suggests that the online survey has a more uniformly distributed (non-)response than the telephone and CAPI surveys.
The integrity and reliability of the data obtained can be determined based on objective criteria. It should be noted that online surveys can be more thoroughly checked than telephone and CAPI surveys. The following criteria were used in this research:
In the online survey, the socio-demographic data that respondents filled in on registration with the PanelClix panel were compared to the answers given. In total, 6 respondents appeared to be someone other than the person who was approached; on a total of 323 respondents, this is 2%. Close analysis of their answers led to the conclusion that these respondents did nevertheless complete the questionnaire truthfully.
In the online survey, the postcode-region check indicated that 17 respondents gave inconsistent answers, which corresponds to the number of inconsistent answers given during the telephone survey. These answers can be explained by the fact that the Nielsen regions are not understood by all respondents, or that certain people do not know their own postcode. On totals of 291 and 323 respondents respectively, we can conclude that, based on this check, the reliability was extremely high for both types of data collection.
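A postcode-region check of this kind can be sketched as follows. The prefix-to-region table below is a hypothetical stand-in for a real postcode-to-Nielsen-region mapping, and the function name is our own:

```python
# Hypothetical mapping from the first two postcode digits to a region.
PREFIX_TO_REGION = {
    "10": "West",
    "35": "Central",
    "97": "North",
}

def region_consistent(postcode, stated_region):
    """True if the stated region matches the region implied by the postcode."""
    implied = PREFIX_TO_REGION.get(postcode[:2])
    if implied is None:
        return True  # unknown prefix: cannot check, so do not flag
    return implied == stated_region

print(region_consistent("1012", "West"))   # consistent
print(region_consistent("9712", "West"))   # inconsistent -> flag respondent
```

Respondents flagged by such a check are not necessarily dishonest; as noted above, they may simply not know their own postcode or may misunderstand the region labels.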
For all three data collection methods, the answering pattern for statement questions was analyzed further. The following table gives an overview of the results and mentions both the number and the percentage of respondents who gave identical answers to all of these statement questions. The higher the percentage of respondents giving identical answers, the lower the data quality.
With the online survey, three variations were applied to the presentation of statement questions. These are the grid presentation, drop-down menu and so-called one-by-one presentation, respectively.
The presentation variations used in the online survey influence the variation in the answers given. The one-by-one method yields the most varied answers (only 5.7% of respondents give identical answers); the drop-down presentation also shows a surprisingly low score of 6.8%, while with the grid presentation 11.8% of respondents give identical answers. The telephone survey yielded considerably more respondents with identical answers (10.4% and 13.4% for the online and offline groups respectively). A possible reason is that telephone respondents are less interested in the survey and less willing to think about the answer to a statement. In certain cases, telephone respondents forget the possible answers that have been read out and choose the answer they remember, with the result that they often give the same answer repeatedly.
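Detecting such identical-answer patterns (often called straight-lining) amounts to checking each respondent's row of statement answers. A minimal sketch, with hypothetical respondent data:

```python
def straightline_share(answer_rows):
    """Count respondents who gave exactly the same answer to every
    statement question and return (count, percentage)."""
    flagged = sum(1 for row in answer_rows if len(set(row)) == 1)
    return flagged, round(100 * flagged / len(answer_rows), 1)

# Hypothetical answers of five respondents to four 5-point statements.
answers = [
    [4, 4, 4, 4],  # identical everywhere -> flagged
    [2, 3, 4, 3],
    [5, 5, 5, 5],  # flagged
    [1, 2, 1, 3],
    [3, 4, 2, 4],
]
print(straightline_share(answers))  # (2, 40.0)
```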
In the online survey, two respondents gave nonsense answers to open questions. Because of the presence of an interviewer, nonsense answers are much less likely to occur in telephone surveys.
The quality of the answers to open questions was also determined based on the length of the answers and the willingness to give an opinion or explanation. Respondents in the online survey gave significantly longer answers to open questions and gave their opinion or an explanation much more often than respondents in the telephone survey. The following table gives an overview of the length of the answers to the two open questions (C1 and C2) in the questionnaire for both the online and the telephone survey. Of all the answers given to these open questions, only the 'real' answers were used in the analysis. The respondents in the online survey gave more extensive answers to open questions (on average 81% longer than in the telephone survey) and gave their opinion more often (on average 14% more often than telephone respondents).
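The two quality measures used here, answer length and the share of answers containing an opinion, could be computed along these lines; the coded example answers and the function name are hypothetical:

```python
def open_answer_stats(answers):
    """Average answer length in characters and the percentage of answers
    that contain an opinion, for a list of (text, has_opinion) pairs."""
    avg_len = sum(len(text) for text, _ in answers) / len(answers)
    opinion_pct = 100 * sum(1 for _, has_op in answers if has_op) / len(answers)
    return round(avg_len, 1), round(opinion_pct, 1)

# Hypothetical coded open answers: (answer text, coder judged it an opinion).
coded = [
    ("I think the concept is too expensive for what it offers.", True),
    ("No idea.", False),
    ("Good value, I would buy it again.", True),
    ("Fine.", False),
]
print(open_answer_stats(coded))
```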
Finally, the willingness of respondents to answer sensitive questions was measured. The questionnaire, for instance, contains a question about income. The following image shows the willingness of respondents to answer this specific question.
The percentage of respondents who are unwilling to disclose their income is significantly higher in the telephone survey (24% and 28% for the online and offline groups) than in the online and CAPI surveys (10% and 12% respectively). The most obvious explanation is that respondents feel that online and CAPI surveys offer more privacy. Especially on sensitive issues (such as illness, income, domestic abuse or drug use), online and CAPI surveys will score better than telephone surveys.
The respondents in both the telephone and online surveys gave a similar number of answers to multiple-choice questions. The following table shows the average number of answers per respondent for the various multiple-choice questions (A12, C6, O2 and O6). The online survey also looked at the effect of different presentations of multiple-choice questions (a yes/no option per answer versus the standard method with a list of answers). The yes/no method (right-hand column in the table) yields around 50% more answers and therefore deviates substantially from the rest. Question O6 asks which products people buy on the Internet; respondents in the online survey buy more on the Internet and therefore give more different answers.
In this research, completing the questionnaire truthfully took 13 minutes on average. Respondents who took less than 7 minutes were analyzed more closely, as they might not have filled in the questions truthfully. Of the 323 respondents in the online survey, 18 completed the web questionnaire in less than 7 minutes; among those were the two respondents mentioned earlier who gave nonsense answers to open questions. For the other 16 respondents, it remained unclear whether they just filled in something at random. These results are comparable to those of the CAPI survey, where 15 respondents completed the questionnaire in less than 7 minutes.
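The completion-time check described above amounts to a simple filter. A sketch with hypothetical timings (the function name is our own):

```python
def flag_speeders(durations, threshold=7.0):
    """Indices of respondents who finished faster than the threshold
    (in minutes) and therefore warrant closer inspection."""
    return [i for i, t in enumerate(durations) if t < threshold]

# Hypothetical completion times in minutes; the study used a 7-minute
# cut-off against an average of about 13 minutes.
times = [13.2, 6.1, 14.8, 5.5, 12.0, 9.4]
print(flag_speeders(times))  # [1, 3]
```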
We can state that the consistency and reliability of the data obtained in both the online and telephone surveys are very good. The telephone survey did, however, yield considerably more respondents who gave identical answers; these respondents also gave significantly shorter answers to open questions and gave their opinion or an explanation less often than respondents in the online survey. Online and CAPI respondents were more willing to answer sensitive questions than telephone respondents; the feeling of privacy and anonymity is a decisive factor here.
The representativeness of data obtained via these three data collection methods has been studied based on the following aspects.
The questionnaire contains 12 statements about motivation with a 5-point scale (1 = completely disagree, 2 = disagree, 3 = don't agree or disagree, 4 = agree, 5 = completely agree). There is also the option to select the answer 'Not applicable'. Some examples of statements: You like to take risks; You want to achieve the highest possible position in your career; You like to follow the latest fashion; etc.
Then the answers on the 5-point scale were measured. The figure below shows the answering pattern for the online (both the grid and one-by-one presentation method), telephone and CAPI surveys.
From the literature, we know that telephone respondents tend to select answers towards the end of the list of possible answers. The telephone groups do indeed select the later answers (4 and 5) more often, although they are most likely to select the penultimate answer (4). This results in a characteristic wave pattern, which can be seen for each group of statement questions (with a 5-point scale). The respondents in the online survey quite often select option 3 (don't agree or disagree), which creates a pattern that looks more like a normal distribution. In the diagrams, the answers to negative statements were reversed to clarify the pattern. The two telephone groups (online and offline) are practically identical and show a very obvious wave pattern (with a peak at answer 4, 'Agree'). The two groups in the online survey (grid and one-by-one) also deviate only slightly from each other and show a reasonably normal distribution (with a smaller peak at answer 4, 'Agree'). The results for the CAPI survey lie in between those of the telephone and online surveys.
The answering patterns shown are reproduced several times for other statement questions (with a 5-point scale) in the survey. The following figure shows the patterns for six statements in a concept test with a similar 5-point scale. The same wave pattern with a dominant peak at 4, ‘Agree’, can be seen for the telephone survey, while the online survey shows a flatter pattern. The CAPI results are again in between.
The answering patterns have also been analyzed with regard to statement questions with a 4-point scale (1 = completely unimportant for me, 2 = reasonably important for me, 3 = very important for me, 4 = extremely important for me).
The following figure shows the results of nine statement questions where respondents could select what was important for them. Examples of statements were: Feeling safe; giving and receiving love; having fun in life; etc.
The differences between the online and telephone surveys are not as extreme as with the 5-point scale, but the telephone groups more often select a later answer (4). The pattern of the telephone offline group is striking, as it deviates significantly from that of the online group, without a clear reason. A possible explanation is that the socio-demographic profile of this group (e.g. income and age) deviates from that of the other telephone group, which would explain the different answers.
It is often stated, although without justification, that the use of specific panels in online surveys gives a distorted view, because it is not clear whether the respondents who register with these panels are representative of Dutch society. It is therefore important to know whether people who register for an online panel have deviating beliefs or display different behaviour than telephone respondents.
In this document, we will only give a few of the questions asked. For the question ‘Which of the following beverages do you drink?’, with multiple answers, the various data collection methods yield the following results.
There are slight differences in the answers between online, telephone and CAPI surveys, but the answers to questions about various different subjects all point in the same direction. The conclusion is that respondents who participate in an online survey do not have deviating beliefs or display deviating behaviour from telephone or CAPI respondents.
An important exception is the use of the Internet: the connection type and buying behaviour on the Internet. For the question 'Where do you access the Internet or world wide web?' (multiple answers possible), most respondents answered that they use the Internet at home (see the following figure). The respondents in the CAPI survey often give answers that lie in between those of the online group and the telephone (online) group.
The question ‘Which of the following categories does your home Internet connection belong to?' (one possible answer) in the following figure clearly shows that respondents in the online survey have a faster Internet connection (cable and xDSL) than telephone respondents. The answers of the CAPI respondents lie in between those of the online and the telephone respondents.
Respondents from the online survey are online more often. The following figure also indicates this. The CAPI respondents are similar to the telephone group (with availability of an Internet connection).
The buying behaviour on the Internet of online respondents also significantly differs from that of the telephone and CAPI respondents. The buying frequency is higher (see the following figure) but the type of products that are being purchased does not differ a lot.
The wave patterns discussed in par. 3.3.1 look interesting, but what do they say about the way in which respondents answer questions? What these patterns show is that respondents in online surveys (and, to a lesser extent, in CAPI surveys) use the entire spectrum of answering options on the scale; the distribution is more or less normal. A reason could be that people feel freer in online surveys and are more willing to give an honest answer. The so-called interviewer effect could also play a role: a telephone respondent is more likely to give the answer he thinks will please the interviewer. We have already established that respondents in online surveys are more likely to give personally sensitive information and more often give their opinion about questions and statements. It is therefore relatively safe to conclude that the answers given to statement questions in online surveys are more representative than those in telephone surveys.
The use of specific panels in online surveys could, in theory, distort reality. This survey has shown, however, that there is no such distortion: the answers to questions about various subjects such as attitude, behaviour, buying intentions, usage moments, public opinion and motivational aspects all point in the same direction. Respondents who registered for the panel used in this survey do not have deviating beliefs or show deviating behaviour compared to telephone respondents. It was shown, however, that respondents who participate in online surveys use the Internet differently than telephone and CAPI respondents.
One of the most important conclusions is that data obtained via online data collection is of a higher quality with regard to certain aspects than data collected by the established telephone (CATI) and CAPI methods. We can draw the following conclusions based on this survey.
Finally, we must stress that the results and conclusions presented in this document cannot simply be applied to every arbitrary telephone or online survey. The interviewing technique used in telephone surveys can influence the results. For online surveys, the quality of the panel file used, and especially the involvement of the members with the panel they registered with, are essential.