The use of progress bars in online questionnaires

Chris Snijders and Uwe Matzat (Technical University Eindhoven), Bart Pluis (PanelClix) and Wiggert de Haan (Isiz)



Online data collection is increasingly used in market and policy research. This study investigates why people do not complete online questionnaires, with an emphasis on the effects of progress bars. We researched the effect of five different ways of indicating progress to respondents: (1) no progress bar; (2) a message on each page indicating ‘… pages to go’; (3) a standard progress bar; (4) a progressive progress bar; and (5) a degressive progress bar. On average, filling in our online questionnaire took more than 20 minutes, which corresponds to the time indicated in the invitation. In total, 3,266 people accepted our invitation and clicked on the link to our online questionnaire; 496 of them quit prematurely. Respondents are more likely to finish the questionnaire when they are older, when the invitation contains an extensive explanation, when they are more experienced in filling in questionnaires, and when a higher reward is promised. Our most important conclusion on indicating progress in a questionnaire is, however, that in general there is no real reason to show a progress bar. Our basic type, with no indication of progress during the entire questionnaire, showed a high – often the highest – completion percentage.


1 - Introduction


A high response in (online) research is important for various reasons. A high response improves the accuracy of the claims that can be made and increases the general representativeness of the sample, because it reduces the selectivity of the survey. The first way to achieve a high response is to send an attractive invitation. As soon as people have responded to this invitation, it is important to ensure that those who start filling in the questionnaire actually finish it. In general, a questionnaire has to look well-organized, attractive and clearly ordered, and the layout and look of the questionnaire should not give respondents a reason to quit (see Dillman, 2000). Intuitively, one would expect that adding a progress bar to an online questionnaire is a good idea. It gives respondents an indication of where they are in the questionnaire, and conveys the idea that the creator of the questionnaire has gone to the trouble of making the filling-in process more comfortable. Still, the few studies on this subject give mixed results. Both Couper et al. (2001) and Crawford et al. (2001) even found that adding a progress bar has a negative effect on the chances of respondents filling in the entire questionnaire. In the study by Boehme (2003), no effects were found, although there were some indications that non-linear progress bars (which will be discussed later) do have some effect.

In this document, we present the results of a study carried out by the Technical University of Eindhoven. For this study, members of the PanelClix panel were approached; the web questionnaire was programmed by Isiz. The subject of our study was whether there is a link between quitting an online questionnaire and the presence of a progress bar. At the same time, the study also looked at other factors possibly linked to respondents quitting prematurely. We found that completion percentages were higher for older respondents, when the invitation contained an extensive explanation, when respondents were more experienced in filling in questionnaires, and when respondents were promised a higher reward for finishing the questionnaire. We also see that respondents who have been going for 6 to 7 minutes become more likely to complete the questionnaire the longer they continue. Our most important conclusion on how to show progress in a questionnaire is, however, that in general there is no real reason to show a progress bar. Our basic type, with no indication of progress during the entire questionnaire, showed a high – often the highest – completion percentage.

This document also presents a parallel study by Research International, which compared the quality of online surveys with telephone and CAPI (Computer Aided Personal Interviewing) questionnaires. Various quality aspects of the three data collection methods were assessed, such as the representativeness of the data, data integrity and (non-)response. The study shows that the response for online surveys is much higher than for telephone and CAPI questionnaires. The answers given in online and CAPI surveys are more reliable and trustworthy than those given during a telephone questionnaire, and online and CAPI surveys also score better than telephone questionnaires on other aspects of data integrity and reliability. With regard to representativeness, all three data collection methods perform well: the answers all point in the same direction, with the exception of specific questions about Internet use. Additional studies will have to show which data collection method gives the most representative results on this specific subject.


2 - The study


The online questionnaire consisted of more than 40 pages, with a maximum of 56 questions. How many questions respondents had to answer depended partly on chance (half of the respondents got a somewhat shorter questionnaire; more about this later) and partly on the answers given previously. A respondent who, for example, answered that he/she had never bought anything on the Internet would not get the follow-up question asking whether he/she had ever been the victim of transaction fraud. The subjects in the questionnaire were extremely varied, ranging from questions on computer ownership and Internet use to questions on the current state of Dutch society, government policy, social networks and how far respondents trusted people from other countries. The invitation indicated that filling in the questionnaire would take between 15 and 20 minutes; on average, respondents needed more than 20 minutes. The contents of the questionnaire can be requested from PanelClix.

Respondents were randomly assigned to one of five different types of progress bars:

  • No progress bar
  • The text: ‘X pages to go’
  • A progress bar with a normal indication of which part of the questionnaire has already been filled in
  • A progressive progress bar (a bar which in the beginning indicates that a larger part has been filled in than is actually true)
  • A degressive progress bar (a bar which in the beginning indicates that a smaller part has been filled in than is actually true)

Where a progress bar or progress indication was shown, it was shown on each page of the questionnaire. With types 4 and 5, the respondents were actually ‘cheated’: the progress bar started out too fast and then slowed down (type 4), or the other way around (type 5). We investigated which percentage of respondents failed to complete the questionnaire, and especially whether this percentage of quitters was in any way related to the type of progress indication.
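To make the five conditions concrete, the sketch below shows one way the displayed progress could be computed. This is our own illustration, not the authors' implementation; in particular, the power parameter used for the progressive and degressive bars is a hypothetical choice.

```python
def displayed_progress(page: int, total_pages: int, bar_type: str) -> str:
    """Return the progress indication shown on a given questionnaire page."""
    actual = page / total_pages  # true fraction of pages completed, 0..1
    if bar_type == "none":
        return ""  # type 1: no indication at all
    if bar_type == "pages_to_go":
        return f"{total_pages - page} pages to go"  # type 2
    if bar_type == "standard":
        shown = actual          # type 3: honest, linear
    elif bar_type == "progressive":
        shown = actual ** 0.5   # type 4: runs ahead early, ends at 100%
    elif bar_type == "degressive":
        shown = actual ** 2.0   # type 5: lags early, catches up at the end
    else:
        raise ValueError(f"unknown bar type: {bar_type}")
    return f"[{'#' * round(shown * 20):<20}] {shown:.0%}"

# Halfway through 40 pages, the progressive bar already shows ~71%,
# the degressive bar only 25%:
for t in ("none", "pages_to_go", "standard", "progressive", "degressive"):
    print(f"{t:>12}: {displayed_progress(20, 40, t)}")
```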


3 - Results


3.1 Basic results

More than 3,000 members (3,266) of the PanelClix panel responded to the invitation and clicked on the link to the first page. Around 15 percent of respondents (496 people) did not complete the questionnaire. We will now look at where and why this happened.

The random assignment of the 3,266 respondents to the various types resulted in the division shown in Table 1.

[Table 1: Random assignment of the 3,266 respondents to the five progress bar types]

Of the 496 people who did not complete the questionnaire, 116 left on the first page. At that point, no progress indication had yet been shown, and there was no difference in the percentage of quitters between the types, as is shown in Table 2.

[Table 2: Quitters on the first page, per progress bar type]

Of the 496 − 116 = 380 people who were left, we will look at which percentage made it to which checkpoints, for each of the types.

[Table 3: Percentage of respondents reaching each checkpoint, per progress bar type]

Table 3 shows a remarkable result. The highest percentage of completed questionnaires by far was generated by the type that did not show a progress bar! The type ‘X pages to go’ gave the lowest completion rate, a difference of almost 8 percentage points compared to the type without a progress bar; this difference is statistically significant. There was no significant difference between the three types of progress bars ([PB], [PB_PRO] and [PB_DEG]); together, these types resulted in 88 percent of questionnaires being completed. This gives a completion rate of 91.7 percent when there is no progress bar, 88 percent when there is a progress bar (of any type) and 84 percent when the text ‘X pages to go’ is shown. These differences are statistically significant (p < 0.01).
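For readers who want to check a comparison of this kind, the snippet below runs a standard chi-square test on two completion rates. It is a minimal sketch, not the authors' analysis: the group sizes are our assumption (random assignment puts roughly 630 of the 3,150 post-first-page respondents in each type), not counts taken from the tables.

```python
# Assumed counts: ~630 respondents per type after the first page,
# combined with the completion rates reported above.
from scipy.stats import chi2_contingency

n_per_group = 630
rates = {"no bar": 0.917, "X pages to go": 0.84}
completed = [round(r * n_per_group) for r in rates.values()]
quit_ = [n_per_group - c for c in completed]
chi2, p, dof, expected = chi2_contingency([completed, quit_])
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p falls far below 0.01 at these rates
```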


3.2 Further analyses

The detrimental effect of progress bars is in itself an important result. Parties wanting to use a questionnaire of similar size and layout on a random selection of members of the PanelClix database are advised not to use any progress bar at all. To gain more insight into the reasons behind this effect, which makes it possible to generalize these results to other types of questionnaires, we will look at the completion rates in more detail.

To see in how far the effect of progress bars depends on the length of the questionnaire, we gave half of the respondents a shorter questionnaire.1
If we break down the results by length of the questionnaire ([SHORT] vs. [LONG]), the following becomes obvious:

1 The questionnaire contained a fairly extensive question about the respondent’s social network (a battery question consisting of 30 separate items). The respondents with a short questionnaire were not given these battery questions. Their progress bar was also based on fewer questions. In reality, there was very little difference in time between the ‘short’ and the ‘long’ questionnaires (only 3 minutes). It would be fair to say the questionnaires were ‘long’ and ‘a bit shorter’.

[Table: Completion rates per progress bar type, broken down by questionnaire length]

We can see that the absence of a progress bar pays off most in the shorter questionnaire. In the long questionnaire, the completion rate is actually highest with the progressive progress bar (although not statistically significantly higher). The [X to go] type gives the lowest results for the long questionnaire. We can also see that the manipulated progress bars (progressive and degressive) score better in the longer than in the shorter questionnaire. In general, these results show that the basic results from Table 3 are not universal truths: the effect of the type of progress bar can depend on the characteristics of the questionnaire.

To research this further, we divide the questionnaire into 24 ‘checkpoints’: 24 places in the questionnaire that, as we know, every respondent who completed the questionnaire had to pass. Figure 1 shows the number of quitters per checkpoint. The 25th checkpoint is the end of the questionnaire.

[Figure 1: Number of quitters per checkpoint]
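As an aside, counting quitters per checkpoint only requires knowing each respondent's last page. The sketch below (with hypothetical page numbers; the paper does not list the checkpoint pages) shows one way to do the bookkeeping.

```python
from bisect import bisect_right
from collections import Counter

def quitters_per_checkpoint(last_pages, checkpoint_pages):
    """Count quitters by the last checkpoint they passed.

    last_pages: for each non-completing respondent, the last page reached.
    checkpoint_pages: sorted page numbers that define the checkpoints.
    """
    counts = Counter()
    for page in last_pages:
        # index of the last checkpoint the respondent passed
        counts[bisect_right(checkpoint_pages, page)] += 1
    return counts

# Hypothetical example with checkpoints at pages 1, 6 and 12: a respondent
# whose last page was 7 passed checkpoint 2 but quit before checkpoint 3.
print(quitters_per_checkpoint([0, 7, 7, 12], [1, 6, 12]))
# -> Counter({2: 2, 0: 1, 3: 1})
```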

Apart from the 116 respondents who quit at the start, the 134 quitters at the fourth checkpoint attract attention. This is the transition from page 6 to page 7. At that moment in the questionnaire, the respondents have filled in a few introductory questions about themselves, and now get questions on the current state of Dutch society (e.g. a question about what the respondent thinks are the reasons why social security is under pressure). Of course, it is possible that respondents decide they would rather stop if the questionnaire is mostly about Dutch society. But if the subject were the main reason for quitting, we would not expect any differences between the types of progress bars. Yet there are obvious differences.

[Table: Quitters at the fourth checkpoint, per progress bar type and questionnaire length]

The largest number of quitters was found with the type indicating ‘X pages to go’, while the type without a progress bar showed the lowest number of quitters. This was true for both the short and the long questionnaire, although the differences are more pronounced for the longer questionnaire. This suggests that one of the disadvantages of a progress bar, especially for longer questionnaires, is that as soon as respondents feel the urge to quit, they are 'pushed over the edge' when they see that the end of the questionnaire is still a long way away. Seeing how many questions are still to go can be especially discouraging. Respondents filling in a long questionnaire with a progressive progress bar are, understandably, less likely to feel this urge at the beginning of the questionnaire: the respondent thinks he/she is already further along than is actually true. The phenomenon can be compared to a long car journey: if the remaining distance is announced every 5 km, however courteous and considerate this may be, travellers feel the journey takes a lot longer.


3.3 Integral analysis of factors related to whether the questionnaire is completed

These analyses indicate that, to get more insight, it is wise to break down the results for different groups (we already did this based on the length of the questionnaire, the quit location and the type). This has one large disadvantage: each time a subdivision is made, the number of observations per cell diminishes, and it quickly becomes too small to make any statements about the statistical significance of the differences between types. That is why we combine all the different breakdowns into one analysis. The variable to be explained is whether or not the respondent made it to the next checkpoint, given that the previous checkpoint was passed.2

The variables included in the analysis are:

The type of progress bar: Our most important point of attention in the analysis (see above for the conditions)

The length of the questionnaire: Long or short (see above)

The type of invitation: The way in which potential respondents are invited not only has an effect on the chance that they respond, but also on the chance that the questionnaire is completed, given that the respondent started it. We compare invitations with an extensive explanation and an explicitly visible TU/e logo with short, impersonal invitations.

The number of Clix promised: PanelClix ‘pays’ its panel members in ‘Clix’ for completing questionnaires. We varied the number of Clix promised: 50, 100 or 150.

Age: As will be shown, there are age differences with regard to who quits the questionnaire.

Time elapsed: It can be assumed that the time spent on the questionnaire by the respondent also has an effect on whether he/she will continue. After a certain period of time, respondents do not like answering any more questions.3

The place of the checkpoint in the questionnaire: Whether the next checkpoint is reached may partly depend on where in the questionnaire the respondent is. How this plays out is hard to predict. On the one hand, respondents may quit at certain moments in the questionnaire because they have to answer an unpleasant question (e.g. a very long question or an impertinent one). On the other hand, one would predict that the number of questions already answered has a positive effect on making it to the next checkpoint. The argument here is that respondents do not like giving up before they are finished, because it means that everything they have already filled in is lost.4

2 A logistic regression analysis is used here. Per respondent, we added as many observations to the data as the number of checkpoints he/she passed. In the social sciences, this way of processing data is classed under survival analysis with the aid of person-period files. Because this creates several records per person, the standard errors in the logistic regression analysis have to be adjusted; this is done using the standard method by Huber (Huber, 1967).

3 Time itself is measured in seconds per page. The time between checkpoints is measured after replacing extreme outliers in the page times (e.g. because a respondent left to make coffee) by a maximum of 2 minutes. Time is included as a quadratic effect, by including both time itself and time squared in the regression analysis.

4 For this reason, 24 dummy variables are included in the analysis, one for each checkpoint, with the first checkpoint as the reference category.


The experience of the respondent: The chance that a more experienced respondent will complete the questionnaire is probably higher, as the person involved has already proven able to do so and will, on average, be more experienced with computers. In the analysis, we use the natural logarithm of the number of PanelClix questionnaires completed in the previous year.

Finally, the interactions between a number of variables are also included. These allow us to measure, for example, whether the effect of progress bars differs between short and long questionnaires.

The type of progress bar and length of questionnaire: this measures whether the effect of progress bars differs depending on the use of short or long questionnaires.

Type of progress bar and the place of the checkpoint in the questionnaire: this measures whether the effect of progress bars on the chances of continuing with the questionnaire differs depending on the place of the checkpoint in the questionnaire.

Type of progress bar and time elapsed: this measures whether the effect of progress bars on the chances of continuing with the questionnaire differs depending on the time that has elapsed.

Type of progress bar and experience: This measures whether the effect of progress bars on the chances of continuing with the questionnaire differs depending on the level of experience of the respondent.5

The table with analysis results can be found in Appendix A. In the main text, we discuss the results, starting with the control variables. We look at which factors influence the chance of making it to the next checkpoint, given that the previous checkpoint was passed; in other words, which factors determine whether respondents continue with the questionnaire. The method used ensures that we measure ‘net’ effects: the effect of each factor, taking the effects of the other measured factors into account.
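To make the estimation approach from footnote 2 concrete, the sketch below shows how such a person-period analysis could be set up in Python. All file and variable names (person_period.csv, advanced, bar_type, and so on) are our own illustration, not the authors' code; the actual model also contains the invitation type, reward, age, experience and the full set of interactions listed above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One record per respondent per checkpoint attempted; 'advanced' is 1 if the
# next checkpoint was reached, 0 if the respondent quit before it.
pp = pd.read_csv("person_period.csv")  # hypothetical person-period file

model = smf.logit(
    "advanced ~ C(bar_type) * C(long_version)"  # progress bar x length
    " + minutes + I(minutes ** 2)"              # quadratic time effect (fn. 3)
    " + C(checkpoint)",                         # checkpoint dummies (fn. 4)
    data=pp,
)
# Huber/cluster-robust standard errors, clustered on respondents, because
# each respondent contributes several records (fn. 2).
result = model.fit(cov_type="cluster", cov_kwds={"groups": pp["respondent_id"]})
print(result.summary())
```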

Firstly, we want to show the importance of age.

[Figure 2: Effect of age on the chance of reaching the next checkpoint]

Figure 2 gives a good indication of the effect of age. The chance of reaching the next checkpoint increases up to the age of around 40, and then decreases. Roughly speaking, respondents over 35 have a higher chance of completing the questionnaire, while the chance drops quickly for respondents younger than 35. This is also reflected in a straight count of the number of completed questionnaires per age group: 84% of respondents up to 20 years old return completed questionnaires, compared to 87% for the age groups 21-25 and 26-30, 89% for 31-35, 92% for 36-40, 90% for 41-50, 92% for 51-60 and 87% for those over 60.

We also found that more questionnaires are completed if an extensive invitation with the TU/e logo is used. From similar surveys we know, however, that an extensive invitation with logo actually persuades fewer people to respond to the invitation in the first place (see Snijders, Matzat, Pluis and De Haan, 2005). This means that the negative effect of the extensive invitation is partly compensated by the fact that people who received an extensive invitation and do start the questionnaire are more likely to complete it. The effect is small, however (around half a percentage point gain per transition to the next checkpoint).

We also see that a higher reward makes respondents who start a questionnaire more likely to complete it. Our own research has shown that, for a questionnaire of this size (around 20 minutes), increasing the number of Clix to be earned from 50 to 100 still affected the response, whereas increasing it to 150 no longer made a difference. Here we see, however, that increasing the reward to 150 Clix does affect whether respondents complete the questionnaire. Increasing the reward from 50 to 100 increases the chance of making the next checkpoint by 0.7 percentage points, while increasing the reward to 150 increases it by another full percentage point. These differences may seem small, but we have to realize that we are looking at the 15% of respondents who quit the questionnaire, and relate the size of the effect to this fact. The size of the effect is best judged in comparison to the effects of the other variables.

Regarding the effect of time elapsed: we can see that, roughly, for all types of progress bars, the effect of time is positive once 6 to 7 minutes have passed. In other words: as soon as respondents have been going for 6 to 7 minutes, the longer they have been working on the questionnaire, the bigger the chance of making it to the next checkpoint, and the bigger the chance of completing the questionnaire. The positive effect of the time invested (‘I have been at this for such a long time, I might as well finish.’) evidently outweighs the feeling of having spent enough time answering questions. Put differently: those people who only want to fill in short questionnaires have already realized this one was longer, and have already given up.6
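The 6-to-7-minute turning point follows directly from the quadratic specification of time (footnote 3): with time entering the model as b1·t + b2·t², the effect starts to rise once t exceeds −b1/(2·b2). The coefficients below are hypothetical, chosen only to illustrate the arithmetic; the paper does not report them.

```python
# Turning point of a quadratic time effect b1*t + b2*t**2: the derivative
# b1 + 2*b2*t changes sign at t = -b1 / (2 * b2).
b1, b2 = -0.13, 0.01  # hypothetical coefficients (per minute)
print(-b1 / (2 * b2))  # 6.5 -> positive time effect after ~6.5 minutes
```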

Finally, we can also see that the effect of the length of the questionnaire, after controlling for the effects of the other factors, is quite big. The chance of making it to the next checkpoint is, on average, lower for the longer questionnaire (an effect size of 1.2 percentage points). This effect is borderline statistically significant. We now look at the chance of making the next checkpoint for the different types of progress bars, taking into account the effects already mentioned.

[Figures 3 and 4: Chance of reaching the next checkpoint per progress bar type, for the short and the long questionnaire]

Figures 3 and 4 clearly show the dip in responses at checkpoint 5 that we saw before. This makes it harder to see in these figures how the lines for the different types of progress bars relate to each other. That is why we repeat the figures, skipping checkpoint 5, so that the lines can be seen better.

6 We do, however, see differences in the effect of time between the progress bars. The types [X to go] and [PB] show a strictly increasing chance of progress over time. The other three show a curvilinear relation: the chance of progress first decreases, and only increases after 6 to 7 minutes.

[Figures 5 and 6: Chance of reaching the next checkpoint per progress bar type, for the short and the long questionnaire, with checkpoint 5 omitted]

We look at figure 5 (the short questionnaire) first. We can see that the progress bar [X to go] performs rather poorly in the beginning: until checkpoint 10, it has the second-lowest chances of progress. It then slowly starts to improve (all compared to the other types of progress bars). After checkpoint 14, [X to go] climbs to third place, and even to second place a bit later. It seems that the effect of [X to go] improves as the information given shows that the end of the questionnaire is coming closer. A striking result is that for short questionnaires, the absence of a progress bar generally gives the best score. Figure 5 also shows that people give up more easily if they feel that the progress bar does not advance as quickly as they would like. This is shown by the results for [PB_PRO]. This progress bar initially advances (too) quickly, and then hardly scores any better than questionnaires without a progress bar. After checkpoint 13, the progressive progress bar slows down for a while (it has to end up at 100% as well), and then we see that it drops back compared to questionnaires without a progress bar. The degressive progress bar is, relatively, the odd one out. In general, it shows the worst performance, even in the later part of the questionnaire, when it actually advances relatively quickly (here too, the progress bar has to end up at 100%, so the ground lost up to checkpoint 13 has to be made up).

Figure 6 (the long questionnaire) shows similar results. Again, it is important to observe that the absence of a progress bar does well, and is only outperformed by the progressive progress bar in the beginning. The progressive progress bar, however, loses ground after checkpoint 13, when it starts to slow down. The least successful option for long questionnaires is the progress bar [X to go]. It loses a lot of ground right at the beginning, and never manages to catch up.


4 - Conclusion and discussion


We looked at the effects of different types of progress bars: no progress bar, ‘X pages to go’, a normal progress bar (based on the number of questions to be answered), a progressive progress bar and a degressive progress bar. In the invitation, we indicated to respondents that filling in the questionnaire would take between 15 and 20 minutes. Based on the previous analyses, we draw the following conclusions:

  • Contrary to what one might expect, we achieved the best completion results with questionnaires without a progress bar. It is often thought that including a progress bar is a politeness of the maker of the questionnaire towards the respondent (‘to show them where they are’), that this politeness is appreciated, and that this appreciation results in a better response. We found little or no proof of this reasoning.
  • There are slight differences in the effects of progress bars, depending on the length of the questionnaire. However, both the short and longer questionnaires without progress bars performed well.
  • For short questionnaires the degressive progress bar (which starts off more slowly) shows the worst results. For long questionnaires, the progress bar showing ‘X pages to go’ gives the worst results.
  • Looking at the results for the progressive progress bar, it seems that a progress bar should, especially in the beginning, give a favourable impression of how far it still is to the end.
  • If a progress bar advances slowly (as the degressive bar does in the beginning, and the progressive bar in the second half of the questionnaire), the chance of respondents giving up increases. A quicker advancement of the progress bar, such as the progressive bar at the beginning of the questionnaire, has some positive effects at the start of longer questionnaires.
  • We found the following differences with regard to whether or not respondents continued with the questionnaire:
    Older respondents (over 30) keep going the longest, as do people who were sent an extensive invitation, who were offered a higher reward, and who have more experience in filling in questionnaires. Respondents who have been answering questions for 6 to 7 minutes become more likely to continue as time goes on (in other words: respondents who have not left before those 6 to 7 minutes have elapsed then feel more obliged to also complete the questionnaire).

These results give little reason to use progress bars in questionnaires, at least for questionnaires similar to the ones used here. It might be possible to try new methods to keep the percentage of respondents that completes questionnaires as high as possible. For shorter questionnaires there is little reason to do so, but for longer questionnaires a combination could be investigated: a progressive progress bar at the start, then a period without a progress bar, one or two encouraging statements (‘You’re nearly there!’) and a normal progress bar at the end. It is, however, unclear how respondents would react to this method; such a strange combination of progress indications might make respondents suspicious and lead to more people dropping out. A second option to consider is not to show a progress bar on every page, but only at a few selected points in the questionnaire, and definitely less often in the beginning. For long questionnaires, this avoids emphasizing at the start that ‘the end is still far away’, while as soon as respondents are around or past the halfway mark, the positive impression is given that they are nearly finished. Until there is clearer information on the relative success of these initiatives, it is wise not to use a progress bar for questionnaires that resemble the one used here with regard to length, type of invitation and target group. If the decision is made to use a progress bar anyway, we advise selecting a ‘normal’ progress bar, and not using the ‘X pages to go’ indication for longer questionnaires.

An important point to make here is that we only looked at the quantitative effects on the response. It is possible (although not very likely) that there are qualitative differences between the groups of respondents that complete the questionnaire. In that case, it might be wise to accept a lower response if that response is of a higher quality (more reliable and valid). We found no proof of that, however.

Finally, we have to stress that the results and conclusions presented in this document cannot simply be applied to every type of online survey. There may be differences, depending on the sample population and the subject of the questionnaire. When designing a new questionnaire, however, we advise following our recommendations with regard to progress bars instead of relying on intuition.


5 - References

Boehme, R. (2003). Fragebogeneffekte bei Online-Befragungen. Master's thesis in communication science, University of Dresden.
Couper, M. P., Traugott, M. W. & Lamias, M. J. (2001). Web survey design and administration. Public Opinion Quarterly, 65(2), 230-253.
Crawford, S. D., Couper, M. P. & Lamias, M. J. (2001). Web surveys: Perceptions of burden. Social Science Computer Review, 19(2), 146-162.
Dillman, D. A. (2000). Mail and internet surveys: The tailored design method (2nd edition). New York: Wiley.
Huber, P. J. (1967). The behavior of maximum likelihood estimates under non-standard conditions. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1, 221-233.
Snijders, C., Matzat, U., Pluis, B. & de Haan, W. (2005). Clou, December 2005, volume 4, nr. 20. This is an abstract of the white paper ‘Respons bij online onderzoek’, PanelClix/TU/e.

Appendices


Appendix A: Logistic regression analysis of the chance that checkpoint n+1 is reached, given that checkpoint n is reached (p-values adjusted for clustering at the respondent level, N = 70,215; * significant at 5% level; ** significant at 1% level)