Translatability and Cultural Suitability Workshops for Test Authors

Steve Dept, cApStAn CEO

It may take a psychologist or a subject matter expert to be a test author and develop good questions, but nothing speaks against that person being monolingual, right? It doesn’t prevent them from doing a sterling job. Anglo-Saxon item-writing practices have dominated the assessment universe for over half a century, and psychologists who master only English are by no means an extinct species. Likewise, while translators are by definition at least bilingual, they need not be ashamed of their limited knowledge of, for example, the psychometric properties of a test.

In the mid-nineties, when I was asked as a linguist to translate and adapt test questions, my office was on a university campus, so I attended psychology courses and discovered the existence of psychometrics. I learnt that test authors crafted their test questions carefully, produced distractor information deliberately, and made selective use of intentional repetitions and of literal and synonymous matches. This newly acquired knowledge helped me get my focus right when translating or adapting test or questionnaire materials.

Twenty-five years (and many international large-scale assessments) later, I try to return the favour: test authors are eager to learn about the challenges that linguists face when translating a test into their language. This awareness helps test authors get their focus right when writing test items earmarked for translation and adaptation. The first translatability workshops for test authors date back to 2007 and, judging by the reactions of item developers at the time, we knew we were onto something. We soon became aware that it would be helpful to bundle the notion of cultural suitability with translatability: meaning shifts are driven by language, while perception shifts are driven by culture.

So, what are translatability workshops?

The translatability workshops have a theoretical part and an interactive part.

As researchers acquire new insights into how and why different test adaptations affect measurement, there is a growing consensus around the need to integrate planning for translation/adaptation into test development. The ITC Guidelines for Translating and Adapting Tests have been our starting point. For questionnaires, we refer to the Cross-Cultural Survey Guidelines, in particular the Translation and Adaptation chapters. Our experience in implementing these guidelines is at the core of the knowledge we are so keen to share.

We help raise test authors’ awareness of the gap between a well-designed test written in English for English-speaking candidates and an English source version of a test earmarked for translation into, say, Mandarin, Russian and Spanish for Latin America. Source versions serve as a basis for translation and adaptation and need to be as unambiguous as possible. Dependency on subtle register issues or nuances should be kept to a minimum if the test items need to be administered in several languages. Items written for an English audience are not always fit for the purpose of translation/adaptation. Cultural suitability can be seen as a continuum on which certain item characteristics can be placed.

When psychologists who are native speakers of English design a test, they may need to refrain from relying on their writing skills and focus instead on the constructs they want to measure. It is critical to produce documentation on underlying constructs and on elements that, in the test author’s eyes, require adaptation. However, test authors do not always have the toolset that allows them to conceptualize and synthesize this documentation. An important part of our workshops focuses on generating relevant, user-friendly item-by-item translation and adaptation notes. This is covered in the interactive part of the translatability workshop, using cApStAn’s framework of 14 translatability categories.

What does a translatability workshop cover?

The theoretical part includes an overview of the linguistic, cultural and formal features that are known to drive (or influence) the difficulty level of items that assess competencies. These include the proportional length of the key and the distractors in multiple-choice questions, the grammatical match between question and response options, and the use of verbatim and synonymous matches. We also cover stereotypes that are best avoided because they may be loaded in ways that induce gender bias or cultural bias, since sensitivity to specific elements varies across demographic, religious and ethnic contexts.

The practical part of the workshop usually includes exercises based on actual tests. This could consist of a guided analysis of an existing situational judgement test. The analysis is designed to help test authors recognize:

  • the extent to which the use of jargon was intentional;
  • the importance of formulating key information in a translatable way;
  • the range of adaptation permitted in distracting materials;
  • the potential impact (on response patterns) of differences in professional environment between, for example, Chinese, Russian and Latin American work cultures.

At the end of the translatability workshop, we provide a recap of do’s and don’ts when writing tests that need to be translated/administered in several languages.

By the end of the day, test authors have a toolkit for identifying linguistic, semantic or cultural components that are prone to ambiguity. They learn to think in terms of translatability and cultural suitability, and they can apply these concepts in their test development process.

Since its inception, cApStAn has been responsible for ensuring the linguistic equivalence of multiple language versions of various large-scale international surveys and tests. If you would like to speak to one of our experts about your requirements, please get in touch.
