
cApStAn will be a Gold Sponsor at eAA 2026 in London
From June 8 to June 10, cApStAn will be in London for the 2026 e-Assessment Association International Conference. This year, we’re proud to be joining as a Gold Sponsor.
At cApStAn, we work at the intersection of linguistics and psychometrics: we help ensure that the languages used in an assessment are aligned with its measurement goals. Since the eAA is conceived as a forum where stakeholders and professionals exchange views on how technology is transforming education and educational measurement, we hope to share useful insights on the benefits and risks of technology-based assessments, in both monolingual and multilingual settings.
At this edition of the conference, we’re excited to listen and learn, and to share our own experience in navigating the growing role of generative AI in assessment.
We have joined forces with Enovate, a versatile assessment platform provider from Norway, to showcase how linguistic expertise can be integrated into test design: when platform developers and translation technologists work together, best practices can be implemented end to end, across the entire assessment cycle. This partnership will allow for hands-on demonstrations of automated item health checks as well as item localisation workflows in a technology-rich environment.
cApStAn and Enovate on the Syndicate Stage
Steve Dept from cApStAn and Øyvind Meistad from Enovate will be on the Syndicate Stage to examine “How reliable can an automated item health check become?”
The writing skills and the socioeconomic and cultural background of item writers can introduce cultural bias. The level of reading proficiency needed to fully understand assessment questions may put talented individuals at a disadvantage if the language of the test is not their heritage language. In the age of natural language processing (NLP), it is possible to use data from previous cultural review panels, translatability assessments and item health checks to generate an automated screening report of a draft test. In a case study, we set out to determine the level of human oversight required to make such automated scrutiny relevant and effective.
Visit our Booth!
Meet our colleagues Steve Dept, Øyvind Meistad, Devasmita Ghosh and Ingrid Meistad at our exhibit booth. We look forward to lively conversations with assessment professionals who work in digital environments, whether as test owners, test sponsors or test users.