A Translatability Workshop for Test Authors
by Steve Dept, cApStAn CEO
An experienced psychometrician and test author who writes good questions can be monolingual. It doesn’t prevent them from doing a sterling job. As a matter of fact, Anglo-Saxon item-writing practices have dominated the assessment universe for about half a century now, and psychologists who master only English are by no means an extinct species. Likewise, translators are by definition at least bilingual, but they need not blush when it turns out that they have only limited knowledge of the psychometric properties of a test.
When my career as a linguist led me to adapt test questions in the mid-nineties, my office was on the university campus, so I attended a psychology course and discovered the existence of psychometrics. I learnt that test authors crafted their test questions carefully, selected distracting information deliberately, and made selective use of intentional repetitions and of literal and synonymous matches. This newly acquired knowledge helped me get my focus right when translating or adapting test or questionnaire materials.
Twenty-five years (and many international large-scale assessments) later, I decided to try to return the favour: why not give test authors the opportunity to learn about the challenges that linguists face when they translate a test into their language? This awareness would help test authors get their focus right when writing test items earmarked for translation into other languages (and adaptation for other markets). The first translatability workshops for test authors date back to 2007 and, judging by the reactions of item developers, we knew we were onto something.
So, what are translatability workshops?
The translatability workshops have a theoretical part and an interactive part.
As researchers acquire new insights into how and why different test adaptations affect measurement, there is a growing consensus around the need to integrate planning for translation/adaptation into test development. At cApStAn, we are familiar with the literature in that respect. The ITC Guidelines for Translating and Adapting Tests have been our starting point from the outset, and I serve on the Advisory Board of the Cross-Cultural Survey Guidelines (see here for the Translation and Adaptation chapters). Our experience in implementing these guidelines is at the core of the knowledge we are so keen to share.
We help raise test authors’ awareness of the gap between a well-designed test written in English for English-speaking candidates and an English source version of a test earmarked for translation into, say, Mandarin, Russian and Spanish for Latin America. Source versions serve as a basis for translation and adaptation and need to be as unambiguous as possible. Dependency on subtle register issues or nuances should be kept to a minimum if the test items are to be administered in several languages. Items written for an English audience may be less fit for the purpose of translation/adaptation than items in which consideration is given to adaptability, in particular by a reviewing team that represents several languages.
When psychologists who are native speakers of English design a test, they may at times need to refrain from relying on their writing skills and sharpen the focus on the constructs they want to measure. While it is of course critical to produce documentation on underlying constructs and on those concepts that may require adaptation, it is also important to synthesize this documentation: a team approach can be useful to produce relevant, user-friendly item-per-item translation and adaptation notes while designing the test. This is covered in the interactive part of the translatability workshop, using cApStAn’s framework of 14 Translatability categories.
A recent example of a translatability workshop with item developers:
The last workshop we gave to a testing company’s item developers began with a brief overview of linguistic and formal features that are known to drive (or influence) the difficulty level of items that assess competencies. These include the use of the passive voice, the relative length of the key and the distractors in multiple-choice questions, the grammatical match between the question and the response options, and the use of verbatim and synonymous matches.
All the exercises were built around actual test instruments from that particular testing company.
The second part of the workshop consisted of a guided analysis of an existing e‑tray exercise. This analysis helped the test authors recognise:
- the extent to which the use of jargon was intentional;
- the importance of formulating key information in a translatable way;
- the range of adaptation permitted in distracting materials;
- the potential impact (on response patterns) of differences in professional environment between e.g. Chinese, Russian and Latin American work culture.
At the end of the translatability workshop, we provided a recap of dos and don’ts when writing simulation exercises, e-tray exercises or situational judgement tests that need to be translated and administered in several languages.
At the end of the day, the test authors had acquired an overview of linguistic, semantic or cultural components that are prone to ambiguity. They confirmed that they had learnt to think in terms of translatability and were confident that they could apply the lessons learned in their test development process.
This capacity-building approach increases validity and comparability, simply because more language parameters are taken into account by psychometricians; likewise, more psychometric characteristics are taken into account by linguists.
Please send any questions, comments and feedback to email@example.com