Prizes and Incentives for completing surveys

There are a number of things to think about when deciding whether to offer a prize for participating in a social research survey.

The three main things to consider are: is it really necessary; is it an appropriate prize; and does it comply with lottery laws? You must also comply with privacy law when collecting, storing and using personal information (name, phone, email) supplied to enter the draw.

Definition: An incentive is an item (product or voucher) offered to a research participant either at the time of participation (offered to all participants) or as a prize draw (occurring at the end of the fieldwork).

  1. Is it really necessary?
    Our community should want to give us their feedback without us having to pay for it. We don’t want to set a precedent that feedback is only worth providing if there is a reward. Offering an incentive also increases the likelihood that people will complete the survey just to enter the prize draw (resulting in less honest answers), or complete it multiple times. When offering an incentive we need to factor in methods to identify people who are ‘skimming’ (just completing the survey for the prize), as well as those who complete it multiple times for multiple chances at winning (a simple flagging approach is sketched at the end of this article).
  2. Is the prize appropriate?
    When offering an incentive, it is important to ensure that it isn’t going to skew who participates in the survey. For instance, if you offer a public transport voucher, you will get a disproportionate number of respondents who use public transport, while those who live in areas without access to public transport, and who might otherwise have participated, may not bother. Try to pick something that would be of value to all demographics and locations.
  3. Does the prize comply with lottery laws?
    The following information is specific to Victoria, Australia. If people are likely to be responding from other states or countries, you will need to review the laws in each location.
    If offering an incentive as a prize draw (that is, people provide their details and then a random entrant is drawn to win) it is recognised under Victorian lottery law as a ‘trade promotion lottery’. When running a trade promotion lottery, entering must be free and you must include the following information when the respondent enters the prize draw:

    • Closing date
    • Where and when the prize will be drawn
    • Where the winner’s name will be published (if the prize is over $1,000 you must publish the name of the winner).
    • Any other entry requirements (for example, that they must have completed at least 80% of the survey and that only one entry per person will be accepted).

Other items for consideration are:

    • You must notify the winner in writing.
    • Records must be kept for 3 years to prove random selection.
    • The winner must be selected using a randomisation algorithm, so each person has an equal chance of being selected (a minimal selection sketch follows this list).
    • The prize must be delivered to the winner within 28 days of being drawn.
    • A substitute winner may be drawn only if reasonable efforts to contact the original winner have been unsuccessful.
    • If you need to change the prize after commencement of the survey, the new prize must be of equal or greater value, and the winner needs to agree in writing or you need to make reasonable attempts to provide the alternative.
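
To satisfy the random-selection requirement, the draw itself can be scripted rather than done by hand. Below is a minimal sketch in Python; the entrants.csv file name and email column are illustrative assumptions, not part of any official requirement.

```python
import csv
import secrets

# Load the list of entrants (file name and column name are illustrative).
with open("entrants.csv", newline="") as f:
    entrants = [row["email"] for row in csv.DictReader(f)]

# De-duplicate so each person has exactly one entry, per the draw conditions.
unique_entrants = sorted(set(entrants))

# The secrets module uses a cryptographically strong source of randomness,
# giving every entrant an equal chance of being selected.
winner = secrets.choice(unique_entrants)
print(f"Winner: {winner} (drawn from {len(unique_entrants)} unique entrants)")
```

Keeping the entrant list and the recorded output of the script also helps satisfy the three-year record-keeping requirement above.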

Privacy law

In order to comply with privacy law, you must take the following steps regarding the personal information collected to enter the draw (name, email, phone number, address etc.):

  1. In the introduction to the section asking for their personal information for the prize draw, include a link to your privacy policy.
  2. When collecting this information, it must not be physically stored outside of Australia (that is, you can’t use SurveyMonkey or Google Forms; ASDF research has a locally installed online surveying tool, hosted in Australia. Please see our Online Surveying information sheet for further details).
  3. Ensure that you do not store the contact information in the same data file as the survey responses (a minimal separation sketch follows this list).
  4. The contact information provided must not be used for any other purpose unless written permission is provided by the individual. That is, if you collect their name and email address for a prize draw, you are not allowed to add them to your enewsletter list. You can include a checkbox asking if they would like to be added to the list, but it must be unchecked by default.
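
The separation in step 3, together with the skimmer checks mentioned earlier, can be handled in a few lines when processing the raw export. A minimal sketch using pandas; the file names, the column names (name, email, phone, duration_seconds) and the 120-second threshold are all illustrative assumptions about your survey tool’s export.

```python
import pandas as pd

# Hypothetical raw export: survey answers plus prize-draw contact details.
raw = pd.read_csv("raw_export.csv")

contact_cols = ["name", "email", "phone"]  # assumed column names

# Store contact details in a separate file from the survey responses.
raw[contact_cols].to_csv("prize_draw_contacts.csv", index=False)
raw.drop(columns=contact_cols).to_csv("survey_responses.csv", index=False)

# Flag likely skimmers: duplicate entries and implausibly fast completions.
# The 120-second threshold is illustrative; set it from pilot testing.
duplicates = raw["email"].duplicated(keep="first")
too_fast = raw["duration_seconds"] < 120

print(f"{(duplicates | too_fast).sum()} of {len(raw)} entries flagged for review")
```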

Online survey screening questions

Screening is where the researcher filters for certain types of respondents up front in an online survey.

Whilst this is a perfectly legitimate thing to do, bear in mind that many people are members of online survey panels in order to accumulate ‘points’ or cash from participating. These people are savvy enough to select the answers that increase their chances of being able to continue with a survey (and therefore get the points/cash).

When including a screening question, it is advisable to hide the true screening aspect in a list of other aspects. Preferably, present a list of options which are all very different, so that the respondent is forced to be honest (they cannot guess which answer qualifies), and present it in such a way that skimmers are more likely to be disqualified.

For instance:

Which of the following best describes you?
Please choose one option per row

[Example grid: each row pairs two contrasting statements, one on the left and one on the right, and the respondent selects whichever best describes them. The final row relates to online purchasing.]

In this example the real screener is the row relating to online purchasing. The order is switched so that most people will choose the right-hand option for the first two rows, but the qualifier is the left-hand option for the screener row, increasing the chances of disqualifying those who are skimming for the completion points.
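
The routing logic behind such a screener is simple to implement. A minimal sketch, assuming a hypothetical three-row grid where the third row is the real screener and its qualifying answer is the left-hand option:

```python
# Row names, and which side qualifies, are illustrative assumptions.
SCREENER_ROW = "row3_online_purchasing"
QUALIFYING_ANSWER = "left"

def route(answers: dict) -> str:
    """Return the next step for a respondent given their row choices."""
    # Rows 1 and 2 are decoys: any answer is accepted. Skimmers who
    # pattern-match on the right-hand column fail the real screener.
    if answers.get(SCREENER_ROW) != QUALIFYING_ANSWER:
        return "disqualify"
    return "continue"

print(route({"row1": "right", "row2": "right", SCREENER_ROW: "right"}))  # disqualify
print(route({"row1": "right", "row2": "left", SCREENER_ROW: "left"}))    # continue
```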

Changing questions in tracking surveys

This is a very simple one. If the question is wrong, change it. There is no point continuing to collect rubbish data just for the sake of ‘tracking’. It will not assist you in any way; in fact, it will feed misinformation.

Just make sure that when reporting changes over time, the alteration to the question is noted. Indeed, some question changes will mean you cannot compare to previous findings at all. When analysing this information, I would generally recommend providing the previous tracking information, commenting on why the question has changed, and then presenting the new data with a discussion of how the re-wording has enhanced the analysis.

If you must include the existing incorrect question, find a way to also include a revised question asked in the correct way. However, be sure to consider how the ordering and placement of the questions will impact the findings.
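
One practical way to make the change visible when reporting is to break the trend line and annotate the chart at the wave where the wording changed. A minimal matplotlib sketch with invented figures, purely to illustrate the annotation:

```python
import matplotlib.pyplot as plt

# Invented tracking figures; wave 4 is where the question wording changed.
waves = [1, 2, 3, 4, 5, 6]
scores = [62, 64, 63, 71, 72, 70]

fig, ax = plt.subplots()
# Plot the two eras separately so the series visibly breaks at the change.
ax.plot(waves[:3], scores[:3], marker="o", label="Original wording")
ax.plot(waves[3:], scores[3:], marker="o", label="Revised wording")
ax.axvline(3.5, linestyle="--", color="grey")
ax.annotate("Question re-worded", xy=(3.5, max(scores)), ha="center")
ax.set_xlabel("Survey wave")
ax.set_ylabel("% positive")
ax.legend()
plt.show()
```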

Agree Disagree Ratings Questions: Do we need to move away from this question type?

Agree Disagree scales are one of the most common types of question found in social research surveys. They are usually used to ascertain the opinions and perceptions of respondents relating to a particular issue. However, research suggests that framing questions in this way results in a notably lower level of quality (read: accuracy) in responses [1].

When referring to Agree Disagree (A/D) scales, the following is how such a question would typically be framed/presented (obviously this is an example of self-completion format; alterations to wording and structure would be expected for telephone surveys). This type of question is sometimes referred to as a Likert scale, after Rensis Likert, who developed it in 1932:

[Example grid: a series of statements, each rated on a five-point scale from Strongly disagree to Strongly agree.]

Framing a question in this way has a number of limitations that need to be considered:
  • The statements themselves are more often than not leading, such as “I never read the flyers I receive in my letterbox”.
  • Acquiescence response bias needs to be considered. This is the phenomenon whereby some people will agree with almost anything, whether because they are an agreeable person, because they assume the researcher agrees and defer to their judgement, or because agreeing takes less effort than rationalising disagreement [1].
  • Social desirability bias also needs to be considered, whereby respondents answer in a way that places them in a more favourable light. The risk of this is greater when using direct-contact surveying methodologies such as face-to-face or telephone.
  • Some people will shy away from the extreme ends of a rating scale, particularly if it is a sensitive issue, which can result in central tendency bias.
It is instead suggested that one employs an item-specific (IS) response scale structure. For instance, instead of asking for level of agreement with the statement “I never read the flyers I receive in my letterbox”, you instead ask “How often do you read the flyers you receive in your letterbox?” with a scale such as ‘Always, Sometimes, Rarely, Never’. Or you could explore the issue in much more depth, using a series of questions to draw out whether there are variations between types of flyers, and to ascertain greater detail about actions, such as read and discard, read and share with friends/family, read and pin on the fridge/pin-up board, etc.
Whilst this approach will clearly provide more useful detail, and avoids the risk of A/D scale biases, it does reduce the opportunity for comparison across multiple statements to identify underlying phenomena. It also requires greater levels of concentration from the respondent, who must think through each scale individually. This latter consideration, however, can in some cases be a good thing, as it will encourage greater engagement with the survey (that is, minimise the risk of the respondent ‘zoning out’). Adopting an IS approach can also significantly lengthen survey duration (impacting respondent satisfaction, response rates and costs).

Conclusion:

As with the design of any survey question, you need to decide on the best approach based on a wide variety of considerations. For some research, A/D scales may be appropriate, yet for others it may be wise to avoid them. The primary consideration should be how you are going to use the information and what type of information is going to be most useful to you. Cost is also a consideration (presenting a number of statements in an A/D table is cheaper than asking IS questions); however, cost should never overrule usefulness. If you are going to conduct a survey that is not useful, or is likely to provide sub-standard results, just to save money, it is better not to run the survey at all.
If you are going to use Agree Disagree Scales, things to consider are as follows:
  • Always randomise the order of the statements to avoid order effects (see the sketch after this list).
  • Keep the list of statements to fewer than 10 to minimise the risk of response fatigue.
  • The benefit of using a Likert scale is that it allows you to identify variations which might point to an underlying phenomenon relating to the topic being explored.
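
Most online survey platforms can randomise statement order natively; if yours cannot, the order can be shuffled when the questionnaire is generated. A minimal sketch with invented statements:

```python
import random

# Invented statements; in practice these come from your questionnaire.
statements = [
    "I never read the flyers I receive in my letterbox",
    "Council communicates well with residents",
    "I feel safe walking in my neighbourhood at night",
]

# Shuffle per respondent so that order effects average out across the sample.
presented = random.sample(statements, k=len(statements))
for statement in presented:
    print(statement)
```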

References

[1] Saris, W. E., et al. (2010). Comparing Questions with Agree/Disagree Response Options to Questions with Construct-Specific Response Options. Survey Research Methods, 4(1), 61–79.