Looking at Comparability from the Vantage Point of Translation and Adaptation
by Steve Dept – cApStAn partner
When either the OECD or GESIS—the Leibniz Institute for the Social Sciences—organises a methodological seminar, it is an opportunity to learn something from authoritative sources and to test one’s working assumptions with experts from the field. When the OECD and GESIS join forces to organise such a seminar, one knows in advance that the level will be high. It came as no surprise that the OECD-GESIS Seminar on Translating and Adapting Instruments in Large Scale Assessments, which took place at the OECD headquarters in Paris on June 7th and 8th, was a resounding success. We at cApStAn were delighted that so much attention was given to the translation and adaptation of data collection instruments, which has been our bread and butter for two decades now.
The breadth of coverage of the topic was impressive: some of the most prolific academics in the field (Prof. Fons Van de Vijver, Prof. Stephen Sireci, Prof. Klaus Boehnke) were there to draw our attention to the hard science behind comparability and remind us that the Holy Grail of functional equivalence is hardly within reach when comparing data from multiple cultures using instruments in multiple languages. At any rate, functional equivalence can’t be proven. We even learnt that there is a scientifically valid alternative approach to translating assessments: the emic approach, which suggests that—rather than translating and adapting instruments—we could develop a measure of our construct from within each culture, and then validate the measurement by looking at the relationship between the latent construct and a comparison variable. Thought-provoking, even if difficult to put into practice on a larger scale.
Experienced researchers (Dr Dorothée Behr, Dr Britta Upsing, Dr Maria Elena Oliveri) shared their insights on guidelines for adapting tests and on how those guidelines are perceived and implemented in the field. The use of technology in International Large-scale Assessments (ILSAs) was covered by Yuri Pettinicchi—who delved into automated evaluation of survey translations—and by cApStAn’s very own Danina Lupsa, who presented best practice in translation technology as it is (or should be) used in ILSAs today. Steve Dept, one of cApStAn’s founding partners, used his presentation as an opportunity to reflect on the history of translation quality management and on the most suitable metrics and diagnostics for test instruments adapted into multiple languages. The presentations drew many astute questions from the audience. As one of the attendees put it, the whole event was “a kind of master class in translating assessments”.
The eminent Professor Ronald Hambleton’s keynote address added a welcome touch of humour to his towering perspective on adapting achievement tests.
Our take-home message from this series of exchanges can be summarised with a key term used by Dr Diana Zavala-Rojas in one of the Q&A sessions: triangulation. There is a wealth of knowledge available, but in spite of the proliferation of multidisciplinary task forces, reliance on existing knowledge bases is still too scant. Several attendees of the OECD-GESIS seminar will be at the 11th Conference of the International Test Commission in Montreal in early July, and meetings have already been arranged: field practitioners and researchers will gather in Montreal, open new communication channels and identify more ways to benefit from each other’s expertise. cApStAn will send a team of four to Montreal.