The Economic Migration Barometer Series: A Survey Translation Case Study
Episode 1 – Retrieving and leveraging existing translations
by Steve Dept, cApStAn CEO
Please don’t spend too much time looking up the Economic Migration Barometer. It is a fictitious project, which we set up solely for the purpose of this informative series on good practice in survey translation. We have collated events and features from several multilingual surveys on which we have worked with reputable survey organisations. The purpose is to illustrate the complexity of survey translation and the added value of a robust linguistic quality assurance design. We plan to send out four consecutive issues in this series, and this is the first.
Our imaginary Economic Migration Barometer (EMB) collects data on attitudes towards economic migration in countries from which people are known to travel abroad in search of work opportunities. This is the second wave of the survey, and some of the questions could be recycled from the previous wave. The face-to-face interviews take place in 12 different languages.
In the first wave, the questionnaire was administered in 6 of those languages. It was agreed that the new questionnaire would be translated by professionals. The Principal Investigator of the EMB asked cApStAn what could be done to produce valid and reliable survey translations and to maximise comparability across languages and over time in this multilingual survey. In this context, we also investigated whether translated content from the first wave could be leveraged.
First, we examined content that remained the same across the two waves of the EMB survey, which we refer to as trend content. Before wave 2, the EMB questionnaire authors chose to revise questions whose wording they thought could be improved. Perhaps the wording did improve, but what seemed to be minor revisions in the English master version called for more extensive edits in the existing survey translations from the first wave. As the International Association for the Evaluation of Educational Achievement (IEA) famously put it: if you want to measure change, don’t change the measure. Fine-tuning the master version of a repeated question can lead to new meaning shifts in translated versions, and there is a risk of losing the trend.
Second, we looked at what survey translation instructions could be shared with the linguists. The initial plan was to send the new master version in the form of an Excel spreadsheet. There were tags for fills, placeholders and routing. We recommended (i) to encapsulate those tags so that the translators could concentrate on text; (ii) to prepare a style guide covering e.g. gender neutrality and forms of address; (iii) to generate a glossary with recurring terms and expressions; (iv) to draw up question-by-question translation and adaptation notes; and (v) to clearly distinguish new materials from questions retrieved from a previous wave, for which there are existing translations.
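As a rough sketch of recommendation (i), tag encapsulation can be done by replacing each fill, placeholder or routing code with an opaque numbered token before the file goes to the translators, and restoring the original tags afterwards. The tag syntax below ({FILL:...}, [GOTO ...]) is invented purely for illustration and does not reflect any particular survey platform or CAT tool:

```python
import re

# Hypothetical tag syntax for illustration only: fills such as
# {FILL:country} and routing codes such as [GOTO Q12].
TAG_PATTERN = re.compile(r"\{FILL:[^}]+\}|\[GOTO [^\]]+\]")


def encapsulate_tags(segment: str):
    """Replace each tag with an opaque token so translators see only text.

    Returns the protected segment plus a mapping used to restore the
    original tags after translation.
    """
    mapping = {}

    def protect(match):
        token = f"[[{len(mapping) + 1}]]"
        mapping[token] = match.group(0)
        return token

    return TAG_PATTERN.sub(protect, segment), mapping


def restore_tags(segment: str, mapping: dict) -> str:
    """Re-insert the original tags into the translated segment."""
    for token, tag in mapping.items():
        segment = segment.replace(token, tag)
    return segment


source = "In {FILL:country}, how likely are you to move abroad? [GOTO Q12]"
protected, tags = encapsulate_tags(source)
print(protected)  # In [[1]], how likely are you to move abroad? [[2]]
```

The translator works only on the protected text, the tokens are locked, and the tags are re-inserted into the delivered translation, so the fills and routing logic cannot be accidentally altered.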
This led to a two-pronged approach for trend content vs new content, as you can see in the figure below.
The rule we applied was this: since the existing translations had already been used to collect data, the translations of new questions should, as far as possible, reuse the same wording for response scales, the same form of address, and the same recurring expressions as the trend questions, as long as these were not outdated or blatantly incorrect. In multilingual surveys, cosmetic improvements to the wording should not give rise to systematic editing of trend questions, as this can have unintended effects on response behaviour.
In the next episode, we shall describe the translatability assessment and the process of drawing up question-by-question translation and adaptation notes in survey translation.
Meanwhile, if you’d like to learn about different translation and adaptation designs for multilingual surveys, do e‑mail us at email@example.com or fill in the form below, and we’ll get back to you as soon as we can.