Translation technology
15.07.2022

Why involving language service providers (LSPs) in translation platform design is important

by Roberta Lizzi – Senior project manager, external human resources director

When multinational surveys and assessments go digital, content creators, principal investigators or test owners tend to rely on platforms not only for delivery but also for the management of the translation process. These platforms can be off-the-shelf or developed in-house. As a consequence, language service providers (LSPs) and their linguists may be asked to perform translation, review or verification in online environments designed by the organisation administering the survey or assessment. The intent is laudable: to streamline and centralise the translation process in one secure environment, while avoiding iterations of multiple files. In practice, the outcome can fall short of expectations for a variety of reasons: ad hoc platforms can have a negative impact both on output quality and on the timeline.

Some shortcomings

As a project manager, I was recently in charge of two projects, for two different clients, in which our linguists were asked to review the translation of assessment items and questionnaires in an online tool developed to host the process end-to-end: translation, review and verification, including documentation of changes and comments by the different parties involved.

One cannot expect such an online translation management suite to have the same features as a computer-aided translation (CAT) tool. Nevertheless, even the most basic functionalities, such as a spell-checker or a search-and-replace function, were missing. Our mail inbox was soon flooded with queries from the reviewers: “How can I be sure that the final translation doesn’t contain spelling mistakes?”; “Is there a way to know how many times and where the term ‘zzz’ appears?”; “Is there an auto-propagation function?”; “Is it possible to have a complete overview of the entire survey so that I can re-read it in one go?” Under those conditions, I knew the linguists would find it more challenging than usual to ensure the quality they are used to delivering.

Difficult workarounds

To mitigate this, we came up with a convoluted workaround for part of the problem: it was possible to export the translation in a file format that allowed us to run the spell check outside the platform and then re-enter the corrections inside it. Not fun. It required extra manipulation from all parties involved, including the client, who had to provide the export on request. For the other parts of the problem, our only possible reply was a hollow “we are sorry, but there is no solution for that”.
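For readers curious what that external spell check can look like, here is a minimal sketch in Python. It assumes, purely for illustration, that the platform exports XLIFF 1.2, and it uses the open-source pyspellchecker library; the file name and language are placeholders. The script only flags suspect segments: the corrections themselves still had to be re-entered in the platform by hand.

```python
# Sketch: spell-check the target segments of an XLIFF export outside the platform.
# Assumptions (not details from the projects): the export is XLIFF 1.2 and the
# pyspellchecker package is installed; file name and language are placeholders.
import xml.etree.ElementTree as ET
from spellchecker import SpellChecker

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

def report_suspect_spellings(xliff_path: str, lang: str = "fr") -> None:
    spell = SpellChecker(language=lang)
    tree = ET.parse(xliff_path)
    for unit in tree.iterfind(".//x:trans-unit", NS):
        target = unit.find("x:target", NS)
        # Ignores inline markup inside <target> for brevity.
        if target is None or not (target.text or "").strip():
            continue
        words = [w.strip(".,;:!?\"'()") for w in target.text.split()]
        unknown = spell.unknown(w for w in words if w)
        if unknown:
            # Report the segment id so the fix can be re-entered in the platform.
            print(f"segment {unit.get('id')}: check {sorted(unknown)}")

report_suspect_spellings("export.xlf")
```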

There was no way for us to solve technical glitches reported during the review process: linguists could not preview the content after editing. Sometimes they could not save their changes. These bugs had to be fixed by the client’s platform designers and, unsurprisingly, this resulted in idle time before the system was up and running again.

In another case, we discovered by chance that the initial translator was editing the target version while the review was still in progress, so that the reviewer was working on a moving target. Fortunately, this “breach” in the system was discovered early, the client was alerted, a notification was sent to the initial translator and a locked version patch was developed on the fly.
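For illustration only, a stage-based write guard of the kind such a “locked version” patch might implement can be sketched in a few lines. This is not the client’s actual code; the roles, stages and messages are invented for the example.

```python
# Illustrative sketch (not the client's actual patch): reject edits by the
# original translator once a segment has entered the review stage.
from enum import Enum

class Stage(Enum):
    TRANSLATION = 1
    REVIEW = 2
    VERIFICATION = 3

class Segment:
    def __init__(self, seg_id: str, target: str = ""):
        self.seg_id = seg_id
        self.target = target
        self.stage = Stage.TRANSLATION

    def edit(self, user_role: str, new_target: str) -> None:
        # Only the role that owns the current stage may write to the target.
        allowed = {Stage.TRANSLATION: "translator",
                   Stage.REVIEW: "reviewer",
                   Stage.VERIFICATION: "verifier"}
        if allowed[self.stage] != user_role:
            raise PermissionError(
                f"{user_role} cannot edit segment {self.seg_id} "
                f"during the {self.stage.name} stage")
        self.target = new_target

seg = Segment("q17", "Première version")
seg.stage = Stage.REVIEW
seg.edit("reviewer", "Version révisée")  # allowed: review owns this stage
try:
    seg.edit("translator", "Retouche tardive")  # rejected: review in progress
except PermissionError as exc:
    print(exc)
```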

Impact on the final product

The main advantage of these systems is that the final translations are all stored in a centralised repository and all the documentation and metadata are recorded in a single place.

The main drawback is that these systems fail to meet the linguists’ most elementary needs. Translators and reviewers alike are asked to work in an environment that comes with a learning curve yet fails to provide the standard functionalities that drive the final quality of the product. Working in an unfamiliar environment that lacks the quality assurance tools linguists are used to generates frustration and loss of efficiency.

The solution? Upstream cooperation

It is understandable that platform engineers are not aware of the requirements of translators and reviewers: they have other priorities.

In projects where the client set up a multidisciplinary task force involving test developers, platform engineers and cApStAn’s linguists, working conditions were significantly better: from the earliest stages of a project, the LSP can provide invaluable input on segmentation rules, exchange standards, and the needs and requirements of a translation environment.
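To make the segmentation point concrete: a naive splitter breaks after every full stop, including the one in “e.g.”, so translators receive half-sentences as separate segments. The sketch below, with an invented abbreviation list, shows the kind of no-break exception an SRX-style ruleset encodes; real rulesets are language-specific and far longer.

```python
# Minimal sketch of why segmentation rules matter: a break rule ([.!?] followed
# by whitespace) paired with SRX-style no-break exceptions for abbreviations.
# The abbreviation list is illustrative only.
import re

NO_BREAK_BEFORE = re.compile(r"(?:\b(?:e\.g|i\.e|etc|Dr|No)\.)$")

def segment(text: str) -> list[str]:
    sentences, start = [], 0
    for match in re.finditer(r"[.!?]\s+", text):
        candidate = text[start:match.end()].rstrip()
        # Skip the break if the final dot belongs to a known abbreviation.
        if NO_BREAK_BEFORE.search(candidate):
            continue
        sentences.append(candidate)
        start = match.end()
    if start < len(text):
        sentences.append(text[start:])
    return sentences

print(segment("Select one option, e.g. the first. Then click Next."))
# -> ['Select one option, e.g. the first.', 'Then click Next.']
#    Without the exception, the text would wrongly split after "e.g."
```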

There have been instances where an API was rapidly adopted as the most straightforward solution: linguists could work in their preferred CAT tool and get the best of both worlds, while the test owners had access to all the information they needed, in a single, central location.
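In a deliberately simplified form, such an API round trip could look like the sketch below. The base URL, endpoints, token and field names are assumptions made for illustration, not a documented interface; the point is the flow: source segments are pulled out towards the linguists’ CAT tool, and reviewed targets are pushed back to the central repository.

```python
# Hypothetical sketch of an API round trip between a survey platform and an
# LSP's workflow. Endpoints, token and field names are invented for the example.
import requests

BASE = "https://platform.example.org/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def pull_segments(project_id: str) -> list[dict]:
    # Fetch source segments so they can be handed to the CAT tool (e.g. as XLIFF).
    resp = requests.get(f"{BASE}/projects/{project_id}/segments",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. [{"id": "q1", "source": "...", "target": ""}]

def push_translations(project_id: str, segments: list[dict]) -> None:
    # Write the reviewed targets back to the central repository.
    for seg in segments:
        resp = requests.put(
            f"{BASE}/projects/{project_id}/segments/{seg['id']}",
            json={"target": seg["target"]}, headers=HEADERS, timeout=30)
        resp.raise_for_status()
```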

Involving the LSPs at the platform development stage also makes it possible to test the system with an eye on the translation functions and to see whether improvements can still be implemented before the platform goes “live”. All users ultimately benefit from this cooperation: it results in higher translation quality, more enthusiastic participation by all parties and more efficient time management.
