Challenges in parallel translation verification of test items with script/audio, e.g. for pre-school children
by Pisana Ferrari – cApStAn Ambassador to the Global Village/Roberta Lizzi – Senior project manager, external human resources director @cApStAn
cApStAn has almost two decades of experience in the linguistic quality control of test items in international large-scale assessments, such as PISA, PIAAC, PIRLS and TIMSS. The translation verification process compares source and target versions, segment by segment. Each segment is checked against standardised intervention categories to detect potential ambiguities, mistranslations, grammar or syntax issues, missing or added information, inconsistencies, words left in the source language, and so on. All issues are documented, segment by segment, in a centralised monitoring tool, which also includes proposed corrective actions.
It is somewhat more challenging to work with assessment instruments that contain mostly graphic and audio material, for example in surveys for young children, who rely exclusively on visuals and audio to understand the test. Beyond the aspects listed above, which are typical of written production, other factors come into play. In these assessments, it is also important to check that:
– the audio matches the agreed script (i.e. no deviation is made during the recording session);
– the voice-overs speak with a correct neutral accent, free from regional/foreign inflection;
– the voice-overs have a correct pronunciation, free from defects (e.g. stuttering, lisps);
– the voices correspond in gender and age to the characters used in the script (male/female and young/old characters);
– the text is recorded in a way that doesn’t “give away” the answer by stressing a word (voices need to be as neutral as possible in order to avoid influencing the respondent’s answer);
– in languages that are gender-dependent (i.e. that present variations for male/female addressees), the text addressed directly to the respondents is recorded for both male and female respondents, to avoid gender bias.
To conclude, while the ultimate aim of verification is the same for both written and audio materials, namely making sure that the results are comparable across all tested language versions, the focus is inevitably different: the script might be linguistically correct and match the source content-wise, but if the voice reveals emotions that hint at the correct answer, this could compromise the data for that question. Selecting linguists with the skills to spot slight differences in intonation, pronunciation and inflection, and training them to focus on the audio rendering, is part of cApStAn's mission in this emerging domain.