cApStAn: Linguistic Quality Control

Content processed by cApStAn remains under embargo until the client places it in the public domain. The methodology we use, test and improve, however, is widely shared: we participate in conferences (CSDI, ESRA, AAPOR and WAPOR, ITC, E-ATP, ESRC); we publish articles, posts and chapters in survey methodology handbooks; and we organise linguistic quality assurance workshops for language professionals. Our translation technologists develop web-based apps that can be made available to a broader public, and partnerships with research institutes enable academics to use our empirical data in their research. Our approach is clear: the more people are aware of best practices, the better for our profession.



  • Dept, Ferrari & Wäyrynen (2010) – “[…] a test designed to measure the respondents’ competencies in sentence processing included a set of short statements, which the respondents had to label as true or false. One of the sentences was ‘All plants need light to grow.’ The Tuareg from sub-Saharan Africa claimed that, since in the desert there is a lot of light all year and nothing grows at all, it was less obvious for Tuareg respondents to identify this statement as true than for somebody living in a place where sunlight is scarce. Since the test item was intended to assess sentence processing skills rather than scientific literacy, the statement could be adapted to ‘All plants need water to grow.’”

  • Dept (2013) – “In multilingual surveys, there is a strong trend towards performing more upstream work to reduce the need for downstream corrective action. Along these lines, a new step has been designed and implemented recently, and its output is most promising: newly developed questionnaire items undergo a Translatability Assessment before they are finalised and sent to countries for translation/adaptation.”

  • Upsing, Goldhammer, Rölke & Ferrari (2011) – “Using the PIAAC Study (Programme for the International Assessment of Adult Competencies) as an example, this paper describes all stages of the localisation process for an international large-scale assessment. The process ranges from the development of source items to translation, adaptation of layout issues and meta-data adaptations. The paper concludes with a discussion of lessons learned and open questions.”

  • Dept, Ferrari & Mendelovits (2012) – “As regards the localization of assessment instruments for participating countries, it is no longer an option to administer a large scale survey without some level of sophistication in the localization design. Thus, linguistic quality assurance (LQA) and control (LQC) processes are designed and implemented at both Field Trial and Main Survey phases…”

  • Dept, Ferrari & Halleux (2017) – “Significant advances have been made in localisation designs for international surveys over the last couple of decades. The following section sets seven perspectives that should function as guiding principles to prepare a localisation process that can be considered as current best practice. Different components from existing approaches are described, starting with the now obsolete but still widely used ‘back translation’ design and ending with the highly sophisticated design used in the OECD’s Programme for International Student Assessment (PISA). This chapter points out strengths and weaknesses of each approach and singles out practices that seem to yield satisfactory outcomes. It will conclude with recommendations for a checklist of requirements that should constitute a good starting point for any localisation design.”

  • Want more? Let us send you our publications: