Response code sets

Here are some of the best practice response code sets for question design:

Satisfaction rating
Very satisfied
Satisfied
Neither satisfied nor dissatisfied
Dissatisfied
Very dissatisfied

Importance rating
Very important
Important
Moderately important
Of little importance
Not at all important

Quality
Extremely good
Above average
Average
Below average
Extremely poor

Alternatively:
Excellent
Very good
Good
Fair
Poor

Safety
Very safe
Safe
Neither safe nor unsafe
Unsafe
Very unsafe

Frequency
Daily or most days
2-3 times a week
Weekly
Fortnightly
Monthly
Every 2-3 months
2-3 times a year
Annually
Less often
Never

Sometimes the following code frame is used, but it is very subjective (that is, different people can interpret it in different ways), so it is usually not advised:

Always
Most of the time
Occasionally
Rarely
Never

Interest
Extremely interested
Very interested
Moderately interested
Slightly interested
Not at all interested

Online survey screening questions

[dropcap]S[/dropcap]creening is where the researcher filters for certain types of respondents up front in an online survey.

Whilst this is perfectly legitimate, bear in mind that many people join online survey panels in order to accumulate ‘points’ or cash from participating. These respondents are savvy enough to select the options that increase their chances of continuing with a survey (and therefore earning the points/cash).

When including a screening question, it is advisable to hide the true screening aspect within a list of other aspects. Preferably, present a list of options that are all very different, so the respondent cannot guess which answer qualifies and is forced to answer honestly, and structure the question so that skimmers are more likely to be disqualified.

For instance:

Which of the following best describes you?
Please choose one option per row

In this example the real screener is the one relating to online purchasing. The order is switched so that most people will choose the right-hand option for the first two rows, but the qualifier is the left-hand option for the screener row, thereby increasing the chances of disqualifying those who are skimming for the completion points.
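
As a rough sketch only, the disqualification logic behind a grid like this is simple to express. The row names, option labels and qualifying rule below are hypothetical placeholders, not the actual grid from the example:

```python
# Hypothetical screener logic: decoy rows plus one real screener row,
# where only one side of the screener row qualifies the respondent.
GRID_ROWS = {
    "gardening": None,            # decoy row - either answer is fine
    "overseas_travel": None,      # decoy row - either answer is fine
    "online_purchasing": "left",  # real screener - only the left-hand option qualifies
}

def qualifies(answers):
    """answers maps each row to the option chosen ('left' or 'right')."""
    for row, required in GRID_ROWS.items():
        if required is not None and answers.get(row) != required:
            return False
    return True

# A respondent who picks the qualifying option on the screener row continues...
print(qualifies({"gardening": "right", "overseas_travel": "right",
                 "online_purchasing": "left"}))   # True
# ...while one who picks the other option is screened out.
print(qualifies({"gardening": "right", "overseas_travel": "right",
                 "online_purchasing": "right"}))  # False
```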

Survey design tips

When designing a survey, you can greatly improve the quality of the survey by checking the following things:

  • Use simple English.
  • Think about whether the question can be interpreted in multiple ways. Each question should have only one possible interpretation; otherwise respondents will become confused and your results will be less accurate.
  • Try not to include multiple concepts within a single question. For instance, rather than ask people to rate the ‘appearance and reliability’ of public transport, ask it as two separate questions (one for appearance and one for reliability), as experiences may differ across the two concepts.
  • Check to see if any of the questions are leading. It is important that you do not lead the respondent into a particular opinion by the way you have framed the question.
  • Check the question order: is there a risk that earlier questions may influence the answers given?
  • Never have a grid question with more than 10 statements/rows.
  • Try to keep surveys to less than 15 questions.
  • Do a run-through of the questions and, for each one, ask whether it is ‘useful’ or merely ‘interesting’. If it is useful (that is, you can think of an example of how the result will be used), keep it. If it is just interesting and you cannot think of any way it might be used, delete it; you don’t need it. The only exception to this rule is demographics: they may not be useful in their own right, but when you use them to cross-tabulate other data they can be highly useful (for instance, identifying differences in opinions or behaviours between younger and older people; a cross-tabulation sketch follows this list).
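
As a minimal sketch of why demographics earn their place, here is an illustrative cross-tabulation using pandas; the column names and figures are invented for the example:

```python
import pandas as pd

# Illustrative data only: each row is one respondent.
responses = pd.DataFrame({
    "age_group":         ["18-34", "18-34", "35-64", "65+", "65+", "65+"],
    "supports_proposal": ["Yes",   "No",    "Yes",   "Yes", "Yes", "No"],
})

# Cross-tabulating an opinion question by a demographic shows differences
# that a single overall figure would hide.
print(pd.crosstab(responses["age_group"], responses["supports_proposal"],
                  normalize="index"))  # row percentages within each age group
```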

If you would like us to review your survey to make sure it is going to get you the most reliable findings, whilst ensuring respondent satisfaction with the survey experience, please contact us.

How to define a target market

In the realm of social research, the target market is the group of people with whom you wish to conduct your research.

It is important to define your target market early on in the research planning process as it plays a critical role in developing the methodology and designing an appropriate research tool.

Before you can define your target market, you have to develop a clear set of objectives for the research:

  • What are you trying to achieve with the research? You may be seeking to understand the community so that you can better design communication tools, or test your performance with your customers, or find out what your community wants from your organisation so that you can then deliver it.
  • What sort of results will be useful to you? It can help to come up with some ideas of the type of information you would like to get out of the research and how this will be useful to you.
  • What will you do with the results? Will they be integrated into planning, used to design communications, help you to improve future programs, or something else?

Once you have defined the objectives for the project, you can work out who you need to talk to in order to fulfil these objectives.

Target markets vary significantly depending on the purpose of the project. For program evaluations or customer satisfaction monitoring it may be a relatively small market of people with whom your organisation has directly interacted; for service planning it may be a particular segment of the community; or sometimes, it may be the entire community.

Target markets can be based on a wide range of variables, and indeed may involve a combination of multiple types of variables:

  • Demographic (age, gender, cultural background, education, work status, income, household structure)
  • Geographic (where they live, where they work)
  • Behavioural (actions they take or don’t take, or even consumer behaviour)
  • Psychological (attitudes, values and perceptions)
  • Physical/medical

Some items to consider when defining your target market are as follows:

  • Who directly interacts with your organisation?
  • Who makes use of your products or services?
  • Are there policies or guidelines that determine who should be consulted?

And remember, your target market may be more widespread than you realise!

If you need some help with defining target markets please contact us.

 

Tips for analysis

[dropcap]W[/dropcap]hen compiling a written analysis of data, it is important to frame explanations in the correct way. Here are a few tips that may help.

  • If you are referring to an age group, make sure that you word it so that people can’t misinterpret it. For instance, if your age group is 50+ year olds, avoid saying “A high incidence of those aged over 50 years said that they like carrots”. Framed like this, it could be misinterpreted to mean the statement applies to those aged 51 years or higher (because they are over 50 years). Instead, it should say “A high incidence of those aged 50 years or over said that they like carrots”.
  • When talking about the results of a factual yes/no question, for instance “Do you have a health care card?”, be careful that you are not making assumptions about attitudes. Saying that health care cards are ‘most likely’ to be held by those aged 50 years or over sounds as though, if you asked people aged 50 years or over whether they wanted one, they would be more inclined to say yes than those under the age of 50. It sounds like a choice/attitudinal decision. Instead it should say that a health care card is held by a higher proportion of residents aged 50 years or over.
  • Another common mistake is assuming that if the respondent says they have done something within a certain period, then this is a common action. For instance, a question may ask “In the last 3 months, have you seen a doctor because you were turning orange from eating too many carrots?”. Often this will be reported as “Half of people aged 50 years or over visit the doctor because they think they are turning orange from eating too many carrots”. It can’t be assumed that just because it happened in the last 3 months it will be a regular occurrence. Instead the analysis should be “Half of people aged 50 years or over visited the doctor in the three months prior to interview because they thought they were turning orange from eating too many carrots”.
  • If you are comparing two different groups, be careful about how you frame the analysis. Let’s assume that the data has respondents from across two suburbs, A and B. In the sample there are 100 respondents in suburb A and 500 respondents in suburb B. All respondents are asked if they like carrots, and it was found that in suburb A 50 per cent said that they like carrots while in suburb B 20 per cent said that they like carrots. The tendency is to report this as “More people in suburb A like carrots”. This is correct proportionally (50 per cent versus 20 per cent) but not numerically (50 people in suburb A versus 100 people in suburb B). In order to report this correctly you would need to say “a higher proportion of those in suburb A said that they like carrots” (a worked version of this arithmetic follows this list).
  • Avoid starting a sentence with a number (unless it is a dot point list). Try to use an expression, such as “Half of respondents…” instead. If you simply must start the sentence with a number, write it as text, not numerals; so “Fifty per cent of respondents” not “50 per cent of respondents”.
  • With regard to using “%”, “percent” or “per cent” in the written analysis, this will differ across organisations. It is a good idea to get your hands on the style guide of the organisation for which the report is written, to make sure it conforms to their rules.
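
To make the suburb example above concrete, here is a minimal sketch of the arithmetic, using the hypothetical figures from that dot point:

```python
# Figures from the suburb example above.
respondents_a, respondents_b = 100, 500         # sample sizes in suburbs A and B
likes_carrots_a, likes_carrots_b = 0.50, 0.20   # proportions who said they like carrots

count_a = respondents_a * likes_carrots_a  # 50 people
count_b = respondents_b * likes_carrots_b  # 100 people

# Proportionally, suburb A is higher; numerically, suburb B is higher -
# hence the finding should be reported as a proportion, not as "more people".
print(f"Suburb A: {likes_carrots_a:.0%} = {count_a:.0f} people")
print(f"Suburb B: {likes_carrots_b:.0%} = {count_b:.0f} people")
```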

In addition to these common analysis mistakes, there are a couple of rules that I use when analysing data:

  • In a typical research analysis report (non-academic), after every sentence or paragraph ask “so what?”. That is, the analysis should provide meaningful information, not just put forth numbers (unless it is a pure technical report of course).
  • When reviewing a research report, do a search for the word ‘interesting’. If you have sentences such as “it is interesting to note…” or “Interestingly…”, delete them. Using ‘interesting’ to try to find meaning in data is a sure sign that it has no meaning.

Changing questions in tracking surveys

[dropcap]T[/dropcap]his is a very simple one. If the question is wrong, change it. There is no point continuing to collect rubbish data just for the sake of ‘tracking’. It will not assist you in any way; in fact, it will feed misinformation.

Just make sure that when reporting changes over time, the alteration to the question is noted. Indeed, some question changes will mean the results cannot be compared to previous findings at all. When analysing this information, I would generally recommend providing the previous tracking information, commenting on why the question has changed, and then presenting the new data with a discussion of how the re-wording has enhanced the analysis.

If you must include the existing incorrect question, find a way to also include a revised question asked in the correct way. However, be sure to consider how the ordering and placement of the questions will affect the findings.

The cost factor: cutting corners in design to reduce cost

[dropcap]I[/dropcap] am going to make my position on this very clear from the outset… Don’t do it!!!

This is perhaps one of the worst trends I have seen in research in recent years. As it becomes easier to do research at low cost (SurveyMonkey and the like), I see many organisations running sub-standard research.

Not only does this devalue research as a whole (that is, respondents who receive poorly designed surveys develop cynical views towards participating in research), it results in organisations making decisions based on unsound data. This has dire ramifications both for the future of social research (as a way to provide the community with an avenue to have their say on important issues concerning them) and for the functioning of businesses that make important decisions based on poor data.

Some of the most common cost-cutting mistakes I see are:

  • Conducting surveys in-house without the expertise to adequately design questionnaires/methodology or analyse findings. This is a particular challenge for the industry today as commissioning a research company to conduct research is often prohibitively expensive, while many organisations are required to undertake research to meet funding / board obligations. Furthermore, research is usually the first item to be reduced or removed to meet budgets, whilst the requirement for delivering evidence of progress remains.
  • SurveyMonkey (or similar). I cannot express enough how dangerous SurveyMonkey is to the research industry, and to organisations who use it without drawing on any expertise in research design. It has made it incredibly easy for anyone to run a survey without any knowledge of how to design questions or, indeed, how to reach a representative target market.
  • Combining two surveys together to reduce administration costs, resulting in prohibitively long surveys (some more than 100 questions!!). This affects response rates (reducing representativeness) and also the accuracy of results in the later questions within the survey (response fatigue).
  • Long barrages of statements to be rated, to reduce survey length. In a telephone survey environment, this is taxing for both the interviewer and the respondent; and in a self-completion environment (online or paper based) there is a risk of ‘skimming’ (that is, people circling the same option, or random options, for each statement just to complete the question – there are methods to identify and remove people who do this, a rough flavour of which is sketched after this list, but the detail is for another post).
  • Using poor respondent sourcing methodology. This is an item for its own post later, but the two cheapest options at present are using online research panels and random digit dialling (RDD) landlines. Online research panels are self-selected (people choose to join) and are populated with professional respondents (people who conduct lots of surveys, and therefore not necessarily typical of the general population). In Australia, recruiting survey respondents using random digit dial landline numbers, or White Pages listing (including listed mobiles) will not achieve a representative sample. Less than half of adults under the age of 40 years have a landline telephone, and less than 8% of mobile telephones are listed in the White Pages (mostly trades businesses). Unfortunately using mobile phone RDD in Australia is not feasible unless it is a national survey as mobile phone numbers are not assigned by region, and screening for the defined region would result in a very low response rate, and consequently high cost.
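
As a very rough flavour of the ‘skimming’ check mentioned above, the sketch below flags respondents who gave an identical rating to every statement in a grid. The data, respondent IDs and rule are all invented for illustration; real checks are usually more nuanced:

```python
# Illustrative only: flag respondents who straight-lined a rating grid.
grid_answers = {
    "resp_001": [3, 3, 3, 3, 3, 3, 3, 3],   # same option for every row - suspect
    "resp_002": [5, 2, 4, 1, 3, 4, 2, 5],   # varied answers
}

def is_straight_lining(ratings):
    """True if every statement received an identical rating."""
    return len(set(ratings)) == 1

flagged = [rid for rid, ratings in grid_answers.items() if is_straight_lining(ratings)]
print(flagged)  # ['resp_001']
```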

Survey sampling: Is telephone research no longer viable in Australia?

[dropcap]C[/dropcap]onducting research using random digit dial (RDD) landline numbers has for decades been the staple of the research industry. In recent years the effectiveness of this methodology has been in significant decline: first due to the withdrawal of the electronic White Pages from public access in 2002, followed by a significant decline in home landline installation (no longer seen as necessary now that most people have mobile phones).

The ACMA Communications report for 2011/2012 shows that only 22% of Australian adults who have a fixed-line telephone or a mobile mainly use the fixed line at home to communicate, meaning that even when people have a fixed line, it is usually not their primary method of phone communication. Furthermore, the incidence of having access to a fixed-line telephone is low amongst younger adults. In June 2011 it was found that only 63% of 18–24 year olds (mostly those still living in their parental home) and 64% of 25–34 year olds claimed to have a fixed-line telephone at home. These figures have been falling over the years, so are most likely much lower now. [1]

Research conducted by the Social Research Centre reveals that there are statistically significant variations in the populations reached by different telephone sampling methodologies. Specifically, those who were contacted on a mobile phone and did not have a landline showed a higher incidence of being male, being in younger age groups, living in capital cities, being born overseas, living in rental accommodation, and having lived in their neighbourhood for less than five years. [2] In addition, significant biases were observed in the sample contacted over landline, with the landline sample showing lower levels of a range of important variables, including health issues, public transport usage, smoking and alcohol consumption. [3]
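
The Social Research Centre’s own method is not reproduced here; purely as a generic illustration of how a difference between two sampling frames can be checked, the sketch below runs a standard two-proportion z-test on invented figures:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Test whether two samples differ on the share holding some characteristic."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented figures: share of respondents aged under 40 in a mobile-only sample
# versus a landline sample of the same size.
z, p = two_proportion_z_test(220, 500, 120, 500)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the frames reach different populations
```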

There are telephone number list providers out there that claim to provide mobile numbers by region. These are ‘listed’ mobile numbers. That is, when someone obtains a new mobile number, it is unlisted by default unless the owner requests that it be listed, and many mobile providers don’t actively prompt people to have their number listed. Mobile numbers that are listed are highly likely to belong to home businesses (as these are the people who go out of their way to have their numbers listed), thereby skewing the ‘mobile population’ in the survey.

Conclusion

Using random digit dial (RDD) with a mix of mobile and landline numbers would be viable for achieving representative samples. However, this will only work for national surveys, as mobile phone numbers are not assigned by region. Undertaking local area telephone surveys using RDD landlines or White Pages phone numbers (even if listed mobiles are included) will miss large, and often critical, chunks of the community.

It should be noted, however, that telephone surveys are still viable if you are sampling a population for which you have phone numbers for every member (e.g. using a client list).

References
[1] Australian Communications and Media Authority (2012), Communications report 2010–11 series, Report 2: Converging communications channels: Preferences and behaviours of Australian communications users, ACMA.
[2] Penney, D (2012), Second national dual-frame omnibus survey announced, www.srcentre.com.au, accessed 21 August 2013.
[3] Penney, D & Vickers, N (2012), Dual Frame Omnibus Survey: Technical and Methodological Summary Report, The Social Research Centre.

Agree Disagree Ratings Questions: Do we need to move away from this question type?

[dropcap]A[/dropcap]gree Disagree scales are one of the most common types of question found in social research surveys. They are usually used to ascertain the opinions and perceptions of respondents relating to a particular issue. However, research suggests that framing questions in this way results in a notably lower level of quality (read: accuracy) in responses [1].
When referring to Agree Disagree (A/D) scales, the following is how such a question would typically be framed: respondents are shown a statement such as “I never read the flyers I receive in my letterbox” and asked to rate it on a scale of Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree (obviously this describes a self-completion format; alterations to wording and structure would be expected for telephone surveys). This type of question is sometimes referred to as a Likert scale, after Rensis Likert, who developed the approach in 1932.

Framing a question in this way has a number of limitations that need to be considered:
  • The statements themselves are more often than not leading, such as “I never read the flyers I receive in my letterbox”.
  • Acquiescence response bias needs to be considered. This is the phenomenon whereby some people will agree with almost anything: because they are agreeable by nature, because they assume the researcher agrees and so defer to their judgement, and/or because agreeing takes less effort than rationalising disagreement [1].
  • Social desirability bias also needs to be considered, whereby respondents answer in a way that places them in a more favourable light. The risk of this is greater when using direct-contact surveying methodologies such as face-to-face or telephone.
  • Some people will shy away from the extreme ends of a rating scale, particularly if it is a sensitive issue, which can result in central tendency bias.
It is instead suggested that one employs an item-specific (IS) response scale structure. For instance, instead of asking for level of agreement with the statement “I never read the flyers I receive in my letterbox”, you ask “How often do you read the flyers you receive in your letterbox?” with a scale such as ‘Always, Sometimes, Rarely, Never’. Or you could explore the issue in much more depth, using a series of questions to draw out whether there are variations between types of flyers, and to ascertain greater detail about actions, such as read and discard, read and share with friends/family, read and pin on the fridge/pin-up board, etc.
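
To make the contrast concrete, here is a minimal sketch of the two framings expressed as simple data structures; the flyer wording comes from the example above, while the structure itself is purely illustrative:

```python
# Agree/Disagree framing: a (leading) statement rated on a generic agreement scale.
agree_disagree_item = {
    "statement": "I never read the flyers I receive in my letterbox",
    "scale": ["Strongly agree", "Agree", "Neither agree nor disagree",
              "Disagree", "Strongly disagree"],
}

# Item-specific framing: a neutral question with a scale that matches the concept.
item_specific_item = {
    "question": "How often do you read the flyers you receive in your letterbox?",
    "scale": ["Always", "Sometimes", "Rarely", "Never"],
}

print(agree_disagree_item["scale"])
print(item_specific_item["scale"])
```
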
Whilst this approach will clearly provide more useful detail, and avoids the risk of A/D scale biases, it does reduce the opportunity for comparison across multiple statements to identify underlying phenomena. It also requires greater levels of concentration from the respondent, who must think through each scale individually. This latter consideration, however, can in some cases be a good thing, as it encourages greater engagement with the survey (that is, it minimises the risk of the respondent ‘zoning out’). Adopting an IS approach can also significantly lengthen survey duration (impacting respondent satisfaction, response rates and costs).

Conclusion

As with the design of any survey question, you need to decide on the best approach based on a wide variety of considerations. For some research, A/D scales may be appropriate, yet for others it may be wise to avoid them. The primary consideration should be how you are going to use the information and what type of information is going to be most useful to you. Cost is also a consideration (presenting a number of statements in an A/D table is cheaper than asking IS questions); however, cost should never overrule usefulness – if you are going to conduct a survey that is not useful, or is likely to provide sub-standard results, just to save money, it is better not to run the survey at all.
If you are going to use Agree Disagree Scales, things to consider are as follows:
  • Always randomise the order of the statements to avoid order-related response bias (a simple randomisation sketch follows this list).
  • Keep the list of statements to less than 10 to minimise the risk of response fatigue.
  • The benefit of using a Likert scale is that it allows for the identification of variations across statements which might point to an underlying phenomenon relating to the topic being explored.
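
As a minimal sketch of the randomisation point above (assuming the statements are held in a simple list, with placeholder wording), per-respondent randomisation can be as simple as:

```python
import random

# Placeholder statements - substitute the actual statements from your survey.
statements = ["Statement A", "Statement B", "Statement C", "Statement D"]

def randomised_order(items):
    """Return a fresh random ordering for each respondent so that any
    position effects average out across the sample."""
    shuffled = list(items)   # copy, so the master list keeps its order
    random.shuffle(shuffled)
    return shuffled

print(randomised_order(statements))
```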

References

[1] Saris, W.E. et al. (2010), Comparing Questions with Agree/Disagree Response Options to Questions with Construct-Specific Response Options, Survey Research Methods, Vol. 4, No. 1, pp. 61–79.