Comparability of certain notions, and perception of ethnicity in particular, across EU countries

Recently, Steve Dept, cApStAn’s founding partner and director of strategic partnerships, was interviewed by Margaux LUCAS-NOWACKI, a PhD student in political science at the University of Strasbourg. The session was very informative, and we thought we would share the questions and answers with everyone in the survey localisation space.

Could you describe the whole process of translating a survey for the EU (calls for tender, average time, briefing meetings, delivery and changes on the initial work produced by cApStAn)?

For the majority of surveys commissioned by the European institutions, there is indeed a tendering procedure. In most cases, the terms of reference cover the design, implementation and analysis of the results. The translation is just one section of the tender. The organisations that respond to these calls can be research institutes, survey and polling organisations or consulting firms. These organisations need a reliable partner with experience in survey translation, as this is a very specific type of translation: a straightforward translation may elicit different respondent behaviours, and it is often necessary to adapt a question to conform to local usage and context. This is where cApStAn is often called in as a partner or subcontractor.

The tendering procedure may take 2-3 months, sometimes more. It may concern a single survey (for example the Roma Survey of the European Union Agency for Fundamental Rights) or a framework contract, as was the case for the Eurobarometer Flash surveys.

Sometimes the translation specifications in an assignment are very brief, something along the lines of “a translation and back translation design must be implemented, and the translated questions will be equivalent to the master questionnaire”. Sometimes, the description of the translation and adaptation requirements is spread out across 2-3 pages. When the requirements are stringent or when the procedures are complex, cApStAn will draft the relevant section of the proposal.

If the proposal is successful, the survey organisation and cApStAn work together: we participate in the kick-off meeting with the EU, define workflows, contribute to the timeline, and ask for input from the EU’s local experts if the domain is specific, e.g., health and safety.

After the kick-off meeting, we participate in evaluating the translatability of a mature draft of the questionnaire, write guidelines for the translation teams, and train them, inviting representatives from the contracting authority to the training sessions. If there is a pilot and a main data collection, the work on a large survey can take up to two years, but the timeline for translation is always rather tight. We usually set milestones and cut-off dates for revisions to the master questionnaire, but these end up being somewhat flexible: there is a lot of exception management in survey translation.

How is the translation of a survey organised within cApStAn?

Once an assignment is confirmed, we first reserve the capacity of a team of experienced survey translators. We schedule a project-specific training session, as the tools and specifications differ across surveys. We develop general guidelines and a set of question-by-question translation and adaptation notes, which need to be approved by the questionnaire authors. Then we usually design or contribute to designing a workflow. Whenever this is compatible with the contracting authority’s terms of reference, we base our approach on the Cross-Cultural Survey Guidelines[1] (CCSG) on translation and adaptation. I serve on the Advisory Board of the CCSG. When there is a complex design, we may have up to four linguists per language version: for example, two initial translators, who each produce one version of the survey, and a third linguist, the most senior survey translator, who reviews both versions and merges them into a third version, called the reconciled version. He or she marks controversial sections, and an adjudication meeting is organised, where the two translators and the reconciler discuss every difficult issue and try to reach a consensus version, under the supervision of a cApStAn consultant. Then there may be a request for feedback from a local expert, followed by a final proofreading pass by an editor who was not involved in the translation process. There are many other possible workflows, of course.
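The double-translation, reconciliation and adjudication steps described above can be sketched as a simple data model. This is a hypothetical illustration of the workflow logic, not cApStAn’s actual tooling; the segment texts are invented:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One questionnaire item moving through the team-translation workflow."""
    source: str
    translation_a: str   # first independent translation
    translation_b: str   # second independent translation
    reconciled: str = "" # senior reviewer's merged version
    flagged: bool = False  # marked for the adjudication meeting

def reconcile(seg: Segment) -> Segment:
    """Merge the two independent translations; flag disagreements."""
    if seg.translation_a == seg.translation_b:
        seg.reconciled = seg.translation_a
    else:
        # A real reconciler crafts a third, merged version; here we
        # simply keep version A and flag the item for adjudication.
        seg.reconciled = seg.translation_a
        seg.flagged = True
    return seg

def adjudicate(segments):
    """Return the items to be discussed at the adjudication meeting."""
    return [s for s in segments if s.flagged]

items = [
    reconcile(Segment("How satisfied are you with your job?",
                      "Quelle est votre satisfaction au travail ?",
                      "Êtes-vous satisfait(e) de votre travail ?")),
    reconcile(Segment("Never", "Jamais", "Jamais")),
]
print(len(adjudicate(items)))  # prints 1: only the first item disagrees
```

In practice the reconciler produces a genuinely new merged version and every flagged item is resolved by consensus, but the flow of versions between roles follows this shape.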

[1] Survey Research Center. (2016). Guidelines for Best Practice in Cross-Cultural Surveys. Ann Arbor, MI: Survey Research Center, Institute for Social Research, University of Michigan. Retrieved April 04, 2022, from http://ccsg.isr.umich.edu/.

What documents does cApStAn translate?

The most important document is the survey questionnaire, because it is a sensitive data collection instrument. Fieldwork materials such as invitation letters, reminders, informed consent and privacy notices, contact sheets, interviewer instructions, back-checking questionnaires or pilot feedback forms often require translation too, but this is more straightforward and will usually require a single translation and proofreading.

In some projects, we are also asked to translate reports into several languages. This is a very interesting part of the work for us, as we get to look at a synthesis of the outcomes of data collection instruments whose different language versions we helped craft.

How do you explain that the EU calls for translation agencies when it hires a large number of translators and interpreters?

(i) Demand exceeds capacity, and survey translators need to have experience in adapting data collection instruments.
(ii) Survey questionnaire development is outsourced together with survey implementation. It is now commonly accepted that translation and adaptation are part of survey design and need to be factored in at questionnaire development time. We perform, for example, translatability assessments, for which the staff of DG Translation are not trained. There is constant interaction between survey organisations and cApStAn, including when analysing differential item functioning (DIF) after a pilot.

How is it possible to “engineer equivalence”?

Translation technology has been around for 30 years. Mature translation technology such as computer-assisted translation (CAT) tools makes it possible to generate, maintain and leverage translation memories, use term bases or glossaries, apply style guides, and run QA routines. However, the gold standard in survey methodology is team translation, whereby different players participate in the production of a translated or adapted version of the questionnaire. Most translation management systems (TMS) and CAT tools are not designed for such complex workflows. One works with a mix of professional translators and experts who are not trained translators. This should not be a reason to forego the benefits of technology. So we tweak a free and open-source software (FOSS) tool and provide training and support. This allows everyone to use the same technology.
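At its core, the translation-memory leveraging mentioned above is a lookup of previously approved segment pairs, so that recurring text (response scales, instructions) is translated only once. A minimal sketch, with invented segments, of what a CAT tool does behind the scenes:

```python
# Minimal sketch of leveraging a translation memory (TM): segment pairs
# approved in earlier rounds are reused verbatim. The entries below are
# invented for the example.
tm = {
    "Strongly agree": "Tout à fait d'accord",
    "Strongly disagree": "Pas du tout d'accord",
}

def leverage(segment: str) -> tuple[str, bool]:
    """Return (translation, came_from_tm); misses go to a human translator."""
    if segment in tm:
        return tm[segment], True
    return "", False

hit, from_tm = leverage("Strongly agree")       # reused from the TM
miss, new = leverage("A newly drafted question") # routed to a translator
```

Real CAT tools add fuzzy matching, term-base checks and QA routines on top of this exact-match core, but the reuse principle is the same.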

Also, there are many different survey modes: CAPI, PAPI, CATI, CAWI[1], and if the survey is multi-modal, we engineer a single-source, multi-channel publishing approach, so that each text segment is translated only once. This increases consistency and comparability across languages and across modes. In some languages we need to produce gendered versions of the questionnaires, using dynamic text. Translating or adapting questionnaires that will be delivered through an online platform requires working with fills, placeholders and routing instructions: this is not the translators’ work, but the engineers’ work. There are numerous examples in which cooperation between a platform engineer, a translation technologist, a language expert and, of course, the questionnaire authors results in a much better master version, one that is easily manageable by translation teams.
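The single-sourcing idea can be illustrated with a toy example: a segment is stored once, and mode-specific variants are generated at publishing time by substituting fills. The segment key, fill names and wordings below are invented for the illustration, not taken from any actual platform:

```python
# Each segment is translated once; fills and placeholders generate the
# mode-specific variants at publishing time (hypothetical example).
SEGMENTS = {
    "q1": "Have you {MODE_VERB} this questionnaire before?",
}
FILLS = {
    "CAWI": {"MODE_VERB": "seen"},   # web: the respondent reads the screen
    "CATI": {"MODE_VERB": "heard"},  # phone: the interviewer reads aloud
}

def publish(segment_id: str, mode: str) -> str:
    """Resolve the fills of one segment for one survey mode."""
    text = SEGMENTS[segment_id]
    for name, value in FILLS[mode].items():
        text = text.replace("{" + name + "}", value)
    return text

print(publish("q1", "CAWI"))  # Have you seen this questionnaire before?
print(publish("q1", "CATI"))  # Have you heard this questionnaire before?
```

Gendered dynamic text works the same way: a grammatical-gender fill is resolved per respondent instead of per mode, so the translator still handles each segment only once.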

[1] CAPI: computer-assisted personal interview; PAPI: paper-and-pencil interview; CATI: computer-assisted telephone interview; CAWI: computer-assisted web interview

What would be your personal understandings of race and ethnicity?

You did say “personal”, right? My personal view is that race and ethnicity have become irrelevant concepts in modern society. There have been so many migrations, blends and cross-cultural relationships over the centuries that “race” is an illusion. In the US, there used to be the “one-drop rule”: the nation’s answer to the question “Who is black?” was long “a black is any person with any known African black ancestry”. Nowadays, the US Census, after much debate and controversy, allows citizens to identify themselves by more than one racial classification. That is an evolution that populists, tribalists and other nationalists attempt to challenge, but we are inevitably evolving toward a single human race.

Translating/adapting questions about ethnicity is extremely tricky and sensitive, and it is not rare that participating countries make these questions optional or add a “prefer not to answer” option.

What precautions can be taken to prevent meaning shifts?

Let us first differentiate meaning shifts, which are language-driven, and perception shifts, which are culture-driven. Meaning shifts can occur, for example, when there is partial semantic overlap between a term in the source language and its equivalent in the target language. This is one area where the translatability assessment goes a long way: each newly developed question is submitted to three experienced translators from three different language families. In Europe that could be the Germanic, Romance and Slavic language families (and perhaps Greek and Finnish or Hungarian, for good measure). They produce an advance translation and choose from a set of translatability categories to describe the issues they encounter. Then a senior linguist at cApStAn collates this feedback and produces a consolidated translatability report. In this report, suggestions are made for question-by-question translation or adaptation notes that address these issues or, in some cases, a proposal is made to reformulate the question, without loss of meaning. Sometimes the translation note will paraphrase, or allow one word to be translated as two words in certain languages. In surveys, the idea is to achieve the same clarity in each question, so that the data collected are as comparable as possible.
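The collation step of the translatability assessment is essentially a grouping operation: issues reported per language family are regrouped per question, so that questions flagged by several families stand out. A sketch with invented question IDs and category labels (the real CCSG-style taxonomy differs):

```python
from collections import defaultdict

# Hypothetical advance-translation feedback, one list per language family.
feedback = {
    "Germanic": [("Q12", "partial semantic overlap"), ("Q15", "idiom")],
    "Romance":  [("Q12", "partial semantic overlap")],
    "Slavic":   [("Q15", "idiom"), ("Q20", "double-barrelled item")],
}

def collate(feedback):
    """Group issues by question, as in a consolidated translatability report."""
    report = defaultdict(list)
    for family, issues in feedback.items():
        for question, category in issues:
            report[question].append((family, category))
    return dict(report)

report = collate(feedback)
print(sorted(report))       # ['Q12', 'Q15', 'Q20']
print(len(report["Q12"]))   # 2 families flagged Q12: adaptation note needed
```

A question flagged by two or three families out of three is a strong candidate for a question-by-question adaptation note, or for reformulation of the master item.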

How is comparability built on terms which carry competing meanings in different languages?

Survey translators and cApStAn consultants work to maximise comparability by constantly checking whether the question will be understood by the target group the way the questionnaire author intended it to be understood. Survey translators need information about the constructs that are being measured and about the intent of the questionnaire scales. Functional equivalence cannot be established ex ante, even by the best translation teams in the world. There will always be some questions that do not function precisely as intended; there is always a risk that a subgroup of the target population understands a question differently. Concepts such as acquiescence or social desirability cannot be measured up front. That is why it is so important to also have cognitive interviews, focus groups or online pre-tests with probing questions: the more information we gather about how the questions are understood, the better we can fine-tune translations to come closer to the intended understanding. This will often require deviating from the wording of the master questionnaire to recreate an equivalence that would be lost with a straightforward translation.
