27.07.2018

New technologies are shaping the way surveys are done

Published in:  Surveys and Opinion Polls

by Pisana Ferrari – cApStAn Ambassador to the Global Village

The range of methods (“modes”) of data collection in surveys has changed drastically over the past decades. Each new technological advance has required survey research to adapt. More recently, as people are increasingly taking online surveys via their smartphones, new challenges have arisen in terms of software optimisation and questionnaire design. The main “classic” modes are face-to-face interviews, phone interviews, postal/mail surveys and web surveys, and there may be advantages in combining them, as Dr. Caroline Roberts, from the Université de Lausanne, explained in her very interesting short course on “Mixed mode surveys” at WAPOR 2018 (1).

Mixed mode surveys use different modes of data collection for different parts of a survey or for different sample members. One of the reasons for mixing modes is to reduce costs: web surveys, for example, are much cheaper than face-to-face interviews. Mixed mode surveys can also help to reduce “mode effects”, i.e. errors related to the particular mode selected. For example, not all people can be contacted in all modes, leading to problems in “coverage” (e.g. not everyone has a landline). “Non-response” is related to the fact that different modes attract different types of people (e.g. old/young). These are what are known as “selection errors”. “Measurement errors” are due to the fact that people respond differently to different modes (e.g. interviews vs self-administered questionnaires).

Modes also have an influence on answers, and here many different factors come into play, including assimilation (hearing), understanding (comprehension), retrieval (searching memory) and judgement (using the information retrieved to decide on an answer or form an opinion). Errors can occur at each level: recall and estimation errors, social desirability bias (reluctance to report answers truthfully), acquiescence (the tendency to agree), “satisficing” (shortcutting, not putting in the effort to respond optimally), “no opinion” reporting, and “straightlining” (non-differentiation of responses).

There are pros and cons for all modes: phone interviews can objectively pose concentration/comprehension problems; face-to-face interviews are more motivating but carry the risk that the interviewer influences respondents; web surveys give good quality data but exclude non-internet users and “functionally illiterate” people. Mixed mode surveys can compensate for the weaknesses/errors of one mode with another.

The two main ways of mixing modes are concurrent (simultaneous), i.e. offering a choice of modes, or sequential (different modes, one after another, over a period of time). Concurrent designs can potentially reduce the selection errors due to non-coverage. Sequential designs can increase response rates and reduce selection errors. It is also, of course, possible to combine concurrent and sequential approaches, as well as to use them in cross-sectional designs (concurrent-sequential-mixed), cross-national designs (different questionnaires for different countries) or longitudinal designs (sequential, switching to another mode after recruitment). What about the total mode effect? The problem with mixed mode surveys is that errors can be compounded (the net result might not be beneficial) or confounded (it becomes difficult to separate differences in answers from differences in modes).

Survey design is paramount and is always “a balancing act, an explicit trade-off between errors and costs”, says Dr. Roberts (citing De Leeuw 2005). It is essential to minimise the mode effects and to ensure that there is data to quantify them (e.g. “auxiliary” data from registers, previous questionnaire waves, etc). Dr. Roberts gave the example of a (successful) sequential survey on well-being, conducted in Switzerland, which consisted of an advance letter with a “push to web” invitation, a postcard reminder, a paper questionnaire, a letter announcing a CATI (computer-aided telephone interview) or CAPI (computer-aided personal interview), and a non-response (NR) questionnaire. The selection effect was reduced, as the overall response rate, all modes combined, was high (70%).
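The arithmetic behind such a sequential design can be sketched as follows: each later mode is only offered to sample members who have not yet responded, so the per-mode rates combine multiplicatively. The per-mode figures below are purely illustrative assumptions, not numbers reported in Dr. Roberts's talk.

```python
def combined_response_rate(mode_rates):
    """Overall response rate when each mode is offered sequentially
    to the remaining non-respondents.

    mode_rates: list of per-mode response rates (fractions 0..1),
    each applying to those who did not respond in earlier modes.
    """
    remaining = 1.0
    for rate in mode_rates:
        remaining *= (1.0 - rate)  # share still unanswered after this mode
    return 1.0 - remaining

# Hypothetical example: web 45%, then paper at 30% of the remainder,
# then CATI/CAPI at 22% of what is left.
overall = combined_response_rate([0.45, 0.30, 0.22])
print(f"{overall:.0%}")  # → 70%
```

This also makes the cost trade-off visible: the cheap first mode (web) absorbs most respondents, while the expensive interviewer-administered modes are reserved for a shrinking pool of non-respondents.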

To conclude, should one mix modes? Mixed mode surveys can improve coverage and increase response rates and representativeness. On the other hand, they are logistically more complex and may actually have higher costs, e.g. for adapting questionnaires to different modes and for more complex analytic procedures. Dr. Roberts stressed that it is essential to be clear about goals and to evaluate impact, to establish key metrics and to set a threshold for “tolerable” mode effects, in order to maximise survey quality. A few tips for a successful survey: set a tight deadline to avoid procrastination, use multiple contact methods, and show appreciation to respondents by offering small (token) cash incentives.