Online data collection has become a popular method in market and policy research. But how does the degree to which a respondent enjoys filling in a questionnaire relate to future response? This white paper discusses the following question: is a respondent who finds a questionnaire attractive more likely to participate in a future survey?
The results of this study show a significant relationship between a positive assessment of the attractiveness of a questionnaire and future response. A respondent who has also completed a previous questionnaire is more likely to respond to a future invitation to participate in a new survey. The length of a questionnaire is an important factor in its attractiveness: the shorter the questionnaire, the more motivated a respondent is to complete it. Completion rates for respondents who completed a previous questionnaire are also significantly higher than for those who quit a previous questionnaire. Even when the time period between two questionnaires is long, these effects remain clearly visible.
Online panels are increasingly used to carry out market and policy research. Slowly but surely, more attention is being paid to the value of the respondent: how do we keep respondents motivated to participate in online surveys in the future? A lack of motivated respondents is fatal for researchers. Researchers' attention is usually focused on the research process. On the one hand, questionnaires are constructed and checked against the information the end user requires. On the other hand, the researcher wants a questionnaire that collects the necessary data efficiently. This can come at the expense of the user-friendliness of the questionnaire and the enjoyment the respondent experiences while answering the questions. To ensure growth and a future for online fieldwork, it is crucial that the respondent plays a pivotal role in the research process. It is in the interest of the researcher, the end user and the panel manager alike that respondents stay motivated to participate regularly in online surveys.
The importance of a high response rate for online fieldwork mainly stems from the desire to increase the accuracy of the claims made (Snijders et al., 2005). But how can this response rate be improved? A literature review of response-increasing effects for online questionnaires shows that only a few studies have looked into this subject. Aspects of online fieldwork related to maximizing response, response quality (Deutskens et al., 2004; Riva et al., 2003) and contacting and rewarding respondents (van Selm & Jankowski, 2006) have by now been extensively researched. The relationship between the experience of filling in an online questionnaire and future response, however, has not been investigated. One possible explanation is given by Kaplowitz et al. (2004), who indicate that hardly any time and attention has been devoted to developing and testing motivational aspects of online questionnaires. The focus has mainly been on making telephone and written surveys more attractive. However, motivational theories for telephone and written surveys cannot simply be copied to online questionnaires. On the one hand, online respondents have to deal with safety concerns on the Internet, which makes them harder to contact than with paper questionnaires. On the other hand, an online respondent is free to decide when to complete the questionnaire, whereas in telephone surveys the researcher determines at which moment the respondent takes part. Looking at the future, it is very important to create a framework that takes the attractiveness of online questionnaires into account. Given the increasing costs of telephone and written research, online fieldwork will continue to grow (Cook et al., 2000).
For these reasons, this document presents the results of a study carried out by the Technical University of Eindhoven. The study examined whether respondents found completing an online questionnaire a pleasurable experience and whether this experience affects the respondent's willingness to cooperate again and complete the next online questionnaire. For this survey, members of the PanelClix panel were approached. The web questionnaire was programmed by Isiz.
Based on the evaluation of six questionnaires, with variations (e.g. multimedia and games) intended to make them more attractive, the effect of the attractiveness of the questionnaires on future response was investigated. At the same time, the influence of socio-demographic factors (age and gender) on how the questionnaires were evaluated was assessed. Besides respondents who were asked to complete all six questionnaires, a sample of respondents was approached for only a few questionnaires.
Our most important conclusion is that future response partly depends on how pleasurable the experience of completing a previous questionnaire was. The more fun a respondent had filling in a questionnaire, the more likely he/she is to participate in a future survey. It pays to invest in an attractive presentation of questionnaires. Even when the time period between two questionnaires is long, the positive effect of the previous survey still has an impact on response. The effect of an attractive questionnaire partly disappears, however, once the next questionnaire has been filled in. We can also conclude that, in general, women and older people appreciate an attractive questionnaire more. A pleasurable experience lowers the threshold to participate in a future survey and to complete a questionnaire. There is also a linear relationship with the length of a questionnaire: the longer a questionnaire, the higher the drop-out rate. When we look more closely at these drop-outs, we see a strong negative relationship with future response.
For the survey, six different questionnaires were developed which differ from each other by adding or removing multimedia, such as short films, quiz questions and game techniques, while preserving anonymity. The PanelClix panel served as the sample group: these panel members were sent up to six different questionnaires on various subjects. The fieldwork took place between 7 September 2007 and 26 October 2007. Table 1 shows how many invitations were sent per questionnaire, how they were distributed over batches and how many completes each questionnaire generated.
For the first questionnaire, a random group of respondents was selected from the PanelClix population file, to guarantee a good distribution by age, gender and region. For the subsequent questionnaires, respondents who had completed a previous questionnaire were used, supplemented with randomly selected panel members. The following segmentation was applied based on gender and age.
For this survey, no reminders were sent out. There was also no indication that this was a follow-up survey. At the end of each questionnaire, respondents were asked seven questions about how attractive they found the questionnaire. For each evaluation question, the respondent gave his/her opinion on a 5-point Likert scale. The seven evaluation questions can be found in appendix A.
After completing each questionnaire, all respondents answered the same set of seven evaluation questions (appendix A), rating the questionnaire from 1 to 5 on each of the seven aspects. The average attractiveness score per questionnaire was calculated by averaging the seven evaluation questions, with all seven criteria given equal weight. If we break the evaluations down per score, the following picture emerges:
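As a hedged illustration of the scoring described above (the function and variable names are ours, not the study's), the equally weighted attractiveness score for one respondent could be computed as:

```python
def attractiveness_score(ratings):
    """Average of the seven 5-point Likert ratings for one respondent.

    Each of the seven evaluation questions carries equal weight, as in
    the scoring described in the text.
    """
    if len(ratings) != 7:
        raise ValueError("expected exactly seven evaluation ratings")
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be on the 1-5 Likert scale")
    return sum(ratings) / 7

# Example: one respondent's answers to the seven evaluation questions
print(attractiveness_score([4, 5, 3, 4, 4, 5, 4]))  # -> 4.142857142857143
```

A per-questionnaire average would then simply be the mean of this score over all respondents who evaluated that questionnaire.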
Apart from the differences between the questionnaires, there are of course also differences between the people answering the same questionnaire.
We will now look at whether a positive evaluation of a questionnaire corresponds to an increased chance that the next questionnaire is completed. This can be calculated for questionnaires 2 through 6.
In all cases, with the exception of the last questionnaire, there is a significant difference. We can conclude the following: the more fun a respondent had filling in a questionnaire, the bigger the chance that he/she will participate in a future survey and also complete that questionnaire. This conclusion still stands when demographic variables, such as age and gender, are taken into account.
We then looked at the size of the effect of the degree of attractiveness of a questionnaire. We studied the effect of increasing the attractiveness by 1 point on the 5-point Likert scale; on a 10-point grading scale this would roughly be the difference between a 6 (mediocre) and an 8 (good). To highlight how pronounced the effect of attractiveness is, the percentages of completes were compared for respondents who had rated a previous questionnaire as 'bad' versus 'good'. We looked at the percentage of completes for questionnaire t+1, given that the respondent had also completed questionnaire t. A distinction was then made between the 33% of respondents who gave questionnaire t the lowest scores and the 33% who gave questionnaire t the highest scores. The results of this analysis for questionnaires 2 through 6 are as follows:
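The tertile comparison described above can be sketched as follows. The data layout (a `score_t` rating and a `completed_t1` flag per respondent) is an assumption made for illustration, not the study's actual data structure:

```python
def tertile_completion_rates(respondents):
    """Compare completion of questionnaire t+1 between the bottom and top
    thirds of attractiveness scores given to questionnaire t.

    respondents: list of dicts with 'score_t' (average 1-5 rating for
    questionnaire t) and 'completed_t1' (True if t+1 was completed).
    Returns (rate for lowest third, rate for highest third).
    """
    ranked = sorted(respondents, key=lambda r: r["score_t"])
    third = len(ranked) // 3
    low, high = ranked[:third], ranked[-third:]
    rate = lambda group: sum(r["completed_t1"] for r in group) / len(group)
    return rate(low), rate(high)

# Toy example: low scorers for questionnaire t rarely complete t+1
panel = [
    {"score_t": 2.0, "completed_t1": False},
    {"score_t": 2.5, "completed_t1": False},
    {"score_t": 3.0, "completed_t1": True},
    {"score_t": 3.5, "completed_t1": True},
    {"score_t": 4.0, "completed_t1": True},
    {"score_t": 4.5, "completed_t1": True},
]
print(tertile_completion_rates(panel))  # -> (0.0, 1.0)
```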
The effect of the degree of attractiveness indicates that the difference between the evaluations 'mediocre' and 'good' for questionnaire 1 increases the chance that the same respondent completes questionnaire 2 by around 6 percent. The percentage gain varies strongly per questionnaire, but the positive effect is visible for all questionnaires and can exceed 20 percent. The difference in the chance that a respondent responds to a future questionnaire, given the score for the previous questionnaire, has two components: respondents who rated the previous questionnaire positively are more likely to start a future questionnaire, and once they have started, they are less likely to quit.
The positive effect of a pleasant questionnaire is striking even when the time period between consecutive questionnaires is longer. We can see this for respondents who, for instance, participated in questionnaire 1 and were only invited again for questionnaire 4; even here a positive effect is seen. The effect hardly decreases in strength over the entire run of six questionnaires. The effect of an appealing questionnaire did, however, largely diminish once the next questionnaire had been filled in. We can see this by predicting, in a regression analysis, the chance of a complete for questionnaire t based on the ratings of questionnaires t-1 and t-2. The score at t-1 showed a statistical relationship with the chance of a complete, while the score at t-2 did not. In other words: if an appealing questionnaire 1 is followed by a less successful questionnaire 2, the positive effect of questionnaire 1 on the completion of questionnaire 3 has evaporated.
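A minimal sketch of such a regression, on synthetic data in which only the t-1 score drives completion. This is plain logistic regression fitted with stochastic gradient descent; neither the model code nor the data is the study's own, it only illustrates the setup:

```python
import math

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Fit P(complete at t) = sigmoid(b + w1*score_t1 + w2*score_t2)
    by stochastic gradient descent. Returns [b, w1, w2]."""
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        for (s1, s2), yi in zip(X, y):
            z = w[0] + w[1] * s1 + w[2] * s2
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            w[1] += lr * err * s1
            w[2] += lr * err * s2
    return w

# Synthetic respondents: columns are [score at t-1, score at t-2];
# completion of questionnaire t depends on the t-1 score only
X = [[1, 4], [2, 1], [2, 5], [3, 2], [4, 5], [4, 1], [5, 3], [5, 2]]
y = [0, 0, 0, 1, 1, 1, 1, 1]
b, w_t1, w_t2 = fit_logistic(X, y)
# With this data, the t-1 coefficient should carry the predictive weight,
# mirroring the finding described in the text
print(f"w_t1={w_t1:.2f}, w_t2={w_t2:.2f}")
```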
After analyzing the effect of the attractiveness of questionnaires on future response, it is important to look at possible differences between types of respondents. First, we looked at how men and women rated the attractiveness of the questionnaires. The results, based on the scores and split for men and women, are as follows:
At first glance there hardly seem to be any differences between how men and women rate the six questionnaires. When we use a multivariate analysis, taking into account not only gender but also age and education (measured in 8 categories), we do see some differences.
Based on this multivariate analysis, we can conclude that men, in general, rate the questionnaires lower than women. A participant, man or woman, rates a questionnaire about half a point higher for every 10 years added to their age. Another conclusion is that the respondent's education level did not noticeably influence the evaluation of the questionnaire. The likely explanation is self-selection: women and older people enjoy participating in online surveys more.
We also investigated whether computer and Internet skills influence the score. Although both skills have a positive influence on the rating – more experienced respondents have more fun – they do not alter the effects of age and gender.
If the median completion time of the questionnaires is compared to the completion rates, a pronounced relationship appears.
When we relate the median time per questionnaire to the completion rate, based on the previous table, we can see that the formula
Completion rate = 106 – 0.04 * Median time
gives a good estimate (R² = 0.9), with the median time measured in seconds. This formula shows that if the duration of the questionnaire is increased by 5 minutes, the expected completion rate decreases by 60 * 5 * 0.04 = 12 percentage points.
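The linear fit can be applied directly. We assume here, consistent with the 60 * 5 * 0.04 arithmetic above, that the median time is expressed in seconds:

```python
def predicted_completion_rate(median_time_seconds):
    """Completion rate (%) per the fitted formula in the text:
    rate = 106 - 0.04 * median time (in seconds)."""
    return 106 - 0.04 * median_time_seconds

# Lengthening a questionnaire by 5 minutes (300 seconds) lowers the
# expected completion rate by 0.04 * 300 = 12 percentage points:
drop = predicted_completion_rate(0) - predicted_completion_rate(5 * 60)
print(drop)  # -> 12.0
```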
This raises the important question whether there is a significant relationship between dropping out of questionnaire t and future response for questionnaire t+1. To find out, an additional study was carried out. From three projects with large sample sizes, two groups of respondents were selected. On the one hand, respondents who dropped out: after answering at least one question, they decided not to complete the questionnaire. On the other hand, a similarly large group of respondents who generated a complete in one of the three questionnaires. Both groups were invited for the same satisfaction survey as used in the main study. The response rates for the drop-outs and the completes of questionnaire t-1 compare as follows:
This analysis shows that the chance of a complete increases by more than 24 percentage points when the respondent has completed the previous questionnaire. The results also show that a respondent who quit a previous questionnaire is less likely to respond to a future invitation to participate in a new survey, and more likely to quit a future questionnaire again, compared with respondents who completed a previous questionnaire. In other words, the chance that a respondent completes a questionnaire increases when the length of the questionnaire is reduced. This not only benefits the current project, but also acts as a positive stimulus for future response.
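A hedged sketch of how such a response-rate gap can be tested for significance. The counts below are invented purely for illustration, and the two-proportion z-test is our choice of method, not necessarily the study's:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two independent
    proportions, using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts mirroring a 24-point gap: 60% response among previous
# completes vs. 36% among previous drop-outs, 1000 invitations each
z = two_proportion_z(600, 1000, 360, 1000)
print(round(z, 2))  # well beyond the 1.96 threshold for p < 0.05
```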
Based on the study, the following conclusions can be drawn.
Deutskens, E.R., de Ruyter, K., Wetzels, M. & Oosterveld, P. (2004) Response Rate and Response Quality of Internet-Based Surveys: An Experimental Study. Marketing Letters 15 (1), 21-36.
Riva, G., Teruzzi, T. & Anolli, L. (2003) The Use of the Internet in Psychological Research: Comparison of Online and Offline Questionnaires. CyberPsychology & Behavior 6 (1), 73-80.
van Selm, M. & Jankowski, N.W. (2006) Conducting Online Surveys. Quality & Quantity 40, 435-456.
Snijders, C., Matzat, U., Pluis, B. & de Haan, W. (2005) Respons bij online onderzoek [Response in online research]. White Paper, 1-26.