Online surveys as an alternative to controlled socio-scientific laboratory experiments

Chris Snijders and Uwe Matzat (Technical University Eindhoven), Bart Pluis (PanelClix), and Wiggert de Haan (Isiz)



Market and policy research increasingly makes use of online panels for the collection of research data. For strictly (socio-)scientific purposes, the use of online panels is much less widely accepted. In this study, we compare the results of experimental socio-scientific surveys in the lab with those of comparable online surveys, looking at the choices people make in so-called trust games. Our conclusion is that the online results are largely the same as the lab results, although the online effects may be somewhat less pronounced than those found in the lab.


1 - Introduction


Market and policy research increasingly makes use of online panels for the collection of research data. For strictly (socio-)scientific purposes, the use of online panels is much less widely accepted. In a standard socio-scientific experiment, ‘test subjects’ are invited into the lab and exposed to controlled circumstances, while the researcher records their written and verbal reactions. An online panel has potential disadvantages compared to this standard design. First, when an online panel is used, the researcher has no control over, and no knowledge of, the other stimuli the test subject is exposed to. Second, there is no possibility of interaction during the experiment: in the lab, it is usually possible to ask the researcher for an explanation or for help if something is unclear, whereas online fieldwork offers no such room. A third issue is that many researchers find it important to study a homogeneous population of test subjects (e.g. only men between 18 and 25), which is easier to control in the lab than online.

For these reasons, the use of online panels is not something we regularly see in articles published in scientific journals such as the Journal of Personality and Social Psychology or Management Science. To what degree the reduced control over the researched group in online panels actually plays a role remains an open question. The concerns raised could lead to a distortion of the results of online fieldwork, but is that the case? To arrive at well-founded conclusions, it is necessary to compare the results of conventional experimental research with the same research carried out online.

In this document, we compare the results from a series of lab experiments on behaviour in so-called ‘trust games’ with those of a similar online survey.


2 - Trust games


Before we discuss the comparison between the online and the standard research, we take a quick look at research into trust games. The basis of this research is the so-called trust game, a game played by two people, which is explained by the following figure.

[Figure 1: the trust game]

Player A makes the first choice. He can choose between right and down. If he chooses right, the game is over immediately: both player A and player B receive 35 Euros. If player A chooses down, it is player B’s turn. Player B can then choose between right and down. If he chooses down, he and player A each receive 75 Euros. If player B chooses right, he receives 95 Euros, while player A receives only 5 Euros.
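
For readers who prefer code to a game tree, the structure of the game can also be summarised in a few lines of Python. This is a purely illustrative sketch added for clarity (the function name is arbitrary and not part of the original study); it only encodes the pay-outs from figure 1 as described above.

    # Illustrative sketch of the trust game from figure 1: each possible ending
    # of the game is mapped to the pay-outs (player A, player B) in Euros.
    def trust_game_outcome(choice_a, choice_b=None):
        if choice_a == "right":      # A does not give trust: the game ends
            return (35, 35)
        if choice_b == "down":       # A gave trust and B rewards it
            return (75, 75)
        return (5, 95)               # A gave trust and B abuses it

    # Example: player A gives trust (down) and player B rewards it (down).
    assert trust_game_outcome("down", "down") == (75, 75)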

For obvious reasons, this game is called a trust game. Player A has to decide whether he trusts that player B will choose down. In general, this game is seen as one of the most basic forms of a cooperation problem. There is an opportunity to create surplus value (if A chooses down first and B then also chooses down), but to reach this outcome, one of the two parties (in this case, player B) has to pass up the opportunity to earn more money for himself.

This game has been played by test subjects in the lab in many forms: both with and without real payments, with and without the players knowing each other, with and without the players being able to consult each other and make agreements, and with different pay-outs. In this document, when comparing the lab research with the online fieldwork, we only look at the effect that the size of the pay-outs has on the choices the players make.


3 - The study


The results presented here are based on a study carried out by the Technische Universiteit Eindhoven. The results found in previous socio-scientific lab research into behaviour in trust games (Snijders, 1996; Snijders & Keren, 2001) are used as the basis for the comparison. These lab results are compared here with the results found for people from the online panel.

The questions about behaviour in trust games are part of a larger questionnaire about various subjects (not an unusual method in socio-scientific lab research). The web questionnaire was programmed by Isiz. As trust games require some explanation of how the game works, participants in the online questionnaire were first asked whether they were willing to answer some questions that needed a more extensive explanation in advance. Those who indicated they were prepared to do so (48%) were given an explanation of the trust game shown in figure 1. The complete explanation of the trust game as given to the participants can be found in the appendix. They were then asked whether they would choose right or down if they were player A, and if they were player B. The same two questions were then put to these participants for five other trust games, which differed only in the size of the pay-outs. In this way, each participant in the online questionnaire was given six trust games in total: the game from figure 1 and five other games with different pay-outs.

The questionnaire used four randomly assigned conditions: four sets of five trust games. A trust game is fully characterized by its pay-outs; in the example in figure 1, there are four pay-outs (5, 35, 75, 95). Below are the four sets of trust games used in the online survey. Each set was offered in the order given below.

[Table: the four sets of five trust games used in the online survey]

The online fieldwork was carried out in two waves: a pilot between July 7 and 14 (1419 invitations) and a larger wave (6259 invitations) between August 4 and 11, 2005. Within each wave, all invitations were sent out at the same time at the beginning of the period to a randomly selected group from the PanelClix population file. Reminders were not sent. Members who did not respond to the invitation within the first week could no longer participate in the survey and were not included in the response. This yielded 2830 people who answered the questionnaire at least up to the trust games. Of these 2830 people, 47% (1342 people) were willing to answer the trust game questions.


4 - Results from previous research


To enable an adequate comparison between the lab research and the online fieldwork, we first summarize a number of results from the literature. First, it would seem logical to assume that whether a person chooses right or down in a trust game can be almost fully explained by personal characteristics, such as his/her character or income. For example: people who are very trusting would choose down when they are player A, or people with a high income would choose down when they are player B. Such explanations, however, hold only in a small percentage of cases. There are of course differences between people, but the differences in behaviour caused by the size of the pay-outs in the game are, in general, larger than those based on personal characteristics (Snijders & Keren, 2001). For this reason, it was decided to study the effects of the pay-outs. A second remarkable result concerns the assumption that it matters whether the money promised in the game is actually paid out or not: if nothing is at stake, it should be easier for participants to behave more cooperatively than they normally would. However, this turns out to be hardly the case, if at all.

To describe the effects of the pay-outs, it is useful to define two constructs: risk and temptation. To clarify these, we look again at the pay-outs of the game in figure 1:

[The pay-outs of the game in figure 1: 35.35, 75.75 and 5.95]

Here, the number before the point indicates what player A gets and the number after the point what player B gets. We now consider these pay-outs from the viewpoint of player A. He/she will earn at least 5 Euros from the game, while 75 Euros is the highest amount he/she can win; he/she therefore has 70 Euros (= 75 - 5) to gain. If player A does not trust player B and chooses right, he/she receives 35 Euros, which means he/she has already secured 30 (= 35 - 5) of the maximum of 70 Euros. If player A chooses down, he/she risks losing this 43% (30/70) of the possible total profit in the game. This percentage is called the risk for player A. The risk is equal to (35-5)/(75-5). In a similar manner, we can calculate the risk for player A in each trust game.

Now for player B. If it is player B’s turn, he/she can create a maximum difference of 90 (= 95 - 5) Euros between him/herself and player A. Most people find it unpleasant to create such a difference between themselves and the other party. On the other hand, if player B chooses right, he/she receives 20 (= 95 - 75) Euros more than he/she would have received by choosing down, and most people would, understandably, find that a comforting thought. The ratio between this extra pay-out for player B if he/she chooses right and the difference created between him/herself and player A is called the temptation for player B. This temptation is equal to (95-75)/(95-5). In a similar manner, we can calculate the temptation for player B in each trust game.
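
The two calculations above can be summarised in a short piece of Python. This sketch is for illustration only; the parameter names are ours and are not taken from the original study. It reproduces the values for the game in figure 1 (risk of about 0.43, temptation of about 0.22).

    # Illustrative sketch: risk and temptation of a trust game, given its pay-outs.
    #   exit_pay : what both players get if A chooses right       (35 in figure 1)
    #   reward   : what both players get if B rewards the trust   (75)
    #   sucker   : what A gets if B abuses the trust              (5)
    #   abuse    : what B gets if B abuses the trust               (95)
    def risk_for_a(exit_pay, reward, sucker):
        return (exit_pay - sucker) / (reward - sucker)

    def temptation_for_b(abuse, reward, sucker):
        return (abuse - reward) / (abuse - sucker)

    print(round(risk_for_a(35, 75, 5), 2))         # 0.43
    print(round(temptation_for_b(95, 75, 5), 2))   # 0.22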

One of the results from previous research is that the variation in the choices test subjects make can largely be explained by the variation in the values of risk and temptation (Snijders, 1996; Snijders & Keren, 2001). That research looked at the choices made by people in 36 different trust games with various risk and temptation values.


5 - The comparison of lab versus internet panel


First, some general results from the online survey. On average, player A chooses to give trust (down) in 43% of the cases and player B rewards this trust (down) in 44% of the cases. When looking at differences between people, we see differences by age: older people give and reward trust more often than younger people. Being ten years older corresponds, on average, to a net increase in the giving and rewarding of trust of around 4 to 5 percent. Something similar can be seen for education level: people with a higher education are less likely to give and reward trust than people with a lower education. The difference between a person with a lower education and someone with an academic degree is around 3 to 4 percent. Compared to the effects of the pay-outs, these effects are small, as we will see.

The pay-outs in the sets of trust games were selected to create variation in the risk and temptation values within the sets. We will consecutively look at [1] the choices made by player B depending on the temptation values, [2] the choices made by player A depending on the risk values, and [3] the choices made by player A depending on the temptation values for player B.


5.1 Rewarding trust as a function of the pay-out

We first look at the choices made by player B, depending on the temptation value in the trust game. If the online results correspond with those found in the literature, we should see a relationship between the value of the temptation for player B and the percentage of people deciding not to reward the trust given (= the percentage choosing right). Figure 2 shows these results, both for the trust games studied in the literature and for the trust games from the online survey.

[Figure 2: percentage of abuse of trust as a function of the temptation for player B, for the lab games and the online games]

We see that the degree to which people are willing to reward trust is indeed strongly dependent on the temptation value. For the online data (the bottom line in the figure) we see that the percentage of abuse of trust is lower, on average, and increases less strongly with increasing temptation values. A regression analysis of the percentage of abuse of trust shows that, on average, the online data yield 13 percent less abuse of trust (statistically significant at p < 0.01, with a 95% confidence interval of 9 to 18 percent). The effect of the temptation value also differs significantly between the online group and the lab group (p = 0.001): the effect in the online group is around half as pronounced as in the lab group. The differences found can partly be explained by differences between the populations. The lab group consisted mainly of students, i.e. younger people with a higher education. If we repeat the comparison, restricted to the people in the online panel who are under 25 and have a higher education, the difference in willingness to abuse trust between the two groups is no longer statistically significant (p = 0.32). The effect of temptation in the online group is then no longer statistically significant either (p = 0.053), but its size is still half that of the temptation effect in the lab group.
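
The type of analysis described here can be set up as a regression of the percentage of abuse of trust on the temptation value, a dummy for the online (PanelClix) data and their interaction. The sketch below, using the statsmodels library, is purely illustrative: the input file and column names are hypothetical and the exact specification used in the actual analysis may differ (see appendix 2).

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical input: one row per trust game per data source, with the
    # observed percentage of abuse of trust, the temptation value for player B
    # and a 0/1 dummy indicating the online (PanelClix) data.
    games = pd.read_csv("trust_games.csv")   # columns: pct_abuse, temptation, panelclix

    # The 'panelclix' coefficient captures the average difference in abuse of
    # trust between online and lab data; the interaction term captures the
    # difference in the effect of temptation between the two groups.
    model = smf.ols("pct_abuse ~ temptation * panelclix", data=games).fit()
    print(model.summary())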


5.2 The degree to which giving trust depends on the risk for player A

A similar analysis can be carried out for the choices made by player A. In the lab group, the choice by player A to trust player B or not is largely (but not solely) determined by the risk value for player A. Figure 3 plots the risk for player A against the percentage of trust given by player A, both in the lab group and in the PanelClix group.

We again see that the bigger picture of the online results corresponds to that of the lab results. We expect a negative relationship: the bigger the risk, the lower the percentage of player A’s who dare to give trust.1 The analysis shows that trust is given around 12 percent more often in the online group (95% confidence interval of 7 to 18 percent). As with rewarding trust, the effect of risk on the willingness to give trust is less strong in the online group than in the lab group (for every 10 percent of extra risk, the chance of giving trust falls by around 8 percent in the lab group, compared to 5 percent in the online group).

1 Figure 3 is based on a regression analysis that includes the temptation for player B and interactions with a dummy variable ‘PanelClix’ as predictors (see appendix 2). The figure shows the risk for player A on the X axis against the percentage of people who give trust, corrected for differences in the other independent variables.
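
As a further illustration of the construction described in this footnote, the sketch below shows one way in which such corrected curves could be computed: fit a model with risk, temptation and their interactions with the PanelClix dummy, and then predict the percentage of trust given over a grid of risk values while holding temptation fixed. The file, column names and exact specification are again hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical input: percentage of trust given per trust game per data
    # source, with the risk for player A, the temptation for player B and a
    # 0/1 PanelClix dummy.
    games = pd.read_csv("trust_games.csv")
    m = smf.ols("pct_trust ~ (risk + temptation) * panelclix", data=games).fit()

    # Predict over a grid of risk values, holding temptation fixed at its mean,
    # to obtain curves corrected for differences in the other predictors.
    grid = pd.DataFrame({
        "risk": list(np.linspace(0, 1, 21)) * 2,
        "temptation": games["temptation"].mean(),
        "panelclix": [0] * 21 + [1] * 21,
    })
    grid["pct_trust_hat"] = m.predict(grid)
    print(grid.head())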

[Figure 3: percentage of trust given as a function of the risk for player A, lab group versus PanelClix group]

If we restrict the online group to the highly educated young people, the picture changes only slightly. The willingness to give trust no longer differs statistically (p = 0.48) between the online group and the lab group. What remains is the effect of risk on the willingness to give trust, which is still stronger in the lab group than in the online group (p = 0.007), although the difference between the groups in this respect is smaller than when the entire online group is used in the comparison.


5.3 The degree to which giving trust depends on the temptation for player B

The last research result we look at is the degree to which giving trust depends on the temptation for player B. One would expect that, at the moment player A is deciding whether to trust player B, the temptation for player B also plays a role. A surprising result in the experiments reported by Snijders (1996), however, was that the effect of the temptation for player B is small compared to the risk effect. It may be somewhat exaggerated to state that in these experiments player A looks mostly at his/her own pay-out when deciding whether to give trust, and hardly at the temptation player B is exposed to. To give an indication of the size of the difference: in the lab group, an increase in risk of 10 percent results in an average reduction in the willingness to give trust of 8 percent, while a similar increase in the temptation for player B results in an average reduction of only 2 percent.

Figure 4 shows the relationship in the lab group and the online group.

[Figure 4: percentage of trust given by player A as a function of the temptation for player B, lab group versus online group]


6 - Conclusions


The main reason why some social scientists are reticent about using Internet panels for their research is that, in general, the researcher has less control over the participants. It is not entirely clear who is answering the questions and what this person is doing while he/she is filling in the questionnaire (one option to partly control for this is simply to ask the respondent and to take the answers into account in the analyses).

The comparison of the results from the trust game experiments in controlled lab situations with those of participants from the PanelClix Internet panel leads to the following conclusions:2

  • For all the comparisons, the results are largely the same. Substantial and statistically significant effects found in the lab remain valid when the results are compiled using the Internet panel.
  • The sizes of the three effects compared are less pronounced in the online panel. The effect of the temptation for player B on the behaviour of player A, which was already small in the lab, disappears completely in the panel data. This can partly be explained by differences in the composition of the populations.

Finally, we must stress that the results and conclusions presented in this document cannot simply be generalized to every arbitrary type of online survey. There will be differences per panel and, of course, per experiment. This study, however, yields results which support the intuitively attractive idea that carrying out experiments in the lab leads to more or less the same results as using Internet panels. The lack of control over the test subjects in online surveys seems to translate into less pronounced effects. This suggests that an Internet panel is a less suitable option for detecting effects that are very subtle. On the other hand, one could argue that for exactly this reason the use of Internet panels is a good way to test the phenomenon under investigation: if the effects are found in this environment, with less control, this argues strongly for the existence of those effects. In that sense, carrying out experiments using questionnaires and Internet panels can be seen as a logical intermediate step between a strictly controlled experimental study and purely empirical testing by means of field studies.

2 We should realize that, in general, we are mainly interested in the degree to which various factors in an experiment or questionnaire influence the behaviour of the respondent (and not, or much less, in a comparison of, for example, the absolute percentage of people giving trust). The differences of around 12 percent between the lab research and the panel are of a similar size to those found between different implementations of the same lab study in two labs.


References


Snijders, C. (1996). Trust and commitments. Amsterdam: Thela Thesis.
Snijders, C. and G. Keren (2001). Do you trust? Whom do you trust? When do you trust? In: S.R. Thye, E.J. Lawler, M.W. Macy and H.A. Walker (eds.), Advances in Group Processes 18. Amsterdam: JAI, Elsevier Science, 129-160.


Appendices


Appendix 1: Explanation of the trust games

The following questions concern your decisions in strategic situations. They require some explanation, which is why answering them will take a bit longer. Please answer the following questions only if you are prepared to read the short introduction below completely and carefully.

[Figure: the trust game from figure 1, as shown to the participants]

Have a look at this game. It is a fictitious game, but we are asking you to play along as much as possible. We will explain the game using the following figure.

This is a game for two players, player A and player B. Player A gets to start. Player A can choose between two directions: right and down. If player A chooses right, the game is over immediately; player A and B then each receive 35 Euros. If player A chooses down, it is player B’s turn. If player B chooses right, player A and player B will each receive 75 Euros. But if player B chooses down, player A receives 5 Euros and player B 95 Euros. We will ask you a few questions about what you would do if you were player A or B. Take a careful look at the situation.

Imagine: You are player A and player B is a random other member of PanelClix. What will you do? Will you choose right or down?


And: Imagine: You are player B, player A is a random member of PanelClix and player A has chosen down. It’s your turn. What will you do? Will you choose right or down?


Exactly the same two questions, ‘What would you do if you were player A?’ and ‘What would you do if you were player B?’, will be asked for five more games with the same structure. Think carefully about your choice and put yourself in the shoes of both players.


Appendix 2: The most important results from the analyses

Analysis 1: Prediction of the percentage of abuse of trust, split into online (PanelClix) data and lab data.

[Table: regression results for analysis 1]

Analysis 2: Prediction of the percentage of trust given, split into online (PanelClix) data and lab data.

[Table: regression results for analysis 2]