25.11.2020

Multidimensional thinking by a visionary researcher

A Q&A session between Dr Alina von Davier and our CEO, Steve Dept

1. Steve: What is ACTNext and what is its main purpose?

Alina: ACTNext is an R&D-based innovation hub at ACT. ACTNext is an interdisciplinary team whose expertise ranges from test and learning tool design to computational psychometrics, AI and machine learning, and software development for data governance. The teams at ACTNext work on designing and/or redesigning ACT’s products, developing new capabilities, especially edtech capabilities, and doing foundational R&D. ACTNext is also very active in the research community and shares ideas, papers, and blogs via our website (actnext.org), and via our podcast and blogs on ACTNext Navigator, on Medium.com.

2. Steve: What does your leading role at ACTNext entail?

Alina: ACTNext is now a large team of about 75 people. Like any leadership position, my job has several components: a business component, where I collaborate with our CEO’s Management Team at ACT on strategy and decisions; a people & project management component, where I collaborate with the ACTNext leadership team on strategy and decisions for our team; and a visionary component, where I rely on my background as a researcher, my drive to learn and seek out new information, and my best thinking about the future to identify directions for the team, so that ACT can support learners holistically in the 21st century.

As a researcher, I am genuinely interested in many research topics and I am always looking for collaborators & students with whom I can explore these questions. I am fortunate to be able to rely on a leadership team stocked with brilliant minds, and together we explore and discuss exciting horizons in research, development, and innovation. My strategy at ACTNext is multifaceted: I bring together a passion for supporting learners with the technical and computational psychometrics framework in which we operate; this novel perspective needs a carefully crafted story that helps us communicate, both internally and externally. My team also includes a business analyst, who helps our researchers communicate better with ACT’s own business units and translates the excitement and importance of developing new psychometric models or AI algorithms into the language of value-added business propositions.

3. Steve: What projects are you working on currently?

Alina: Given that the team is quite large, we have many irons in the fire. I will list only a few:

1. We collaborate with our colleagues from other divisions at ACT to design & help deliver the changes announced for the ACT test in our September 2020 initiative. Some of our team members are involved in the design, others are involved in data governance, and still others are involved with the psychometric decisions.

2. The AI/ML team is working on building and refining our in-house automated content generators (items, passages, graphics, etc.). One outstanding project, Sphinx, combines cutting-edge research, development, UX design, and architecture, and involves collaboration across many divisions in the company. The same team also works on the company’s automated scoring engine, the Constructed Response Automated Scoring Engine (CRASE+), which is now being trained for application at scale within the company.

3. The Learning Solutions team leads the work on Creative Thinking for PISA 2021, but also researches the possibilities for teaching & learning creative thinking and other 21st-century skills, such as collaborative problem solving.

4. Steve: Are there robust multidisciplinary approaches in place to harness the combined power of Natural Language Processing (NLP) and Machine Learning (ML) in item development, without losing out on measurement?

Alina: I touched on this a bit earlier, but in a word, yes. Sphinx, our prototype automated content generator, seamlessly combines machine learning and human expertise to create passages and items. This unique generator is based on a combination of ML techniques and item models. We are working to predict the difficulty of the generated items much more accurately than has been possible in the past. We expect that improvements in the methodologies and the sheer size of the data will allow us to estimate the item parameters better.
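To make the idea of difficulty prediction concrete, here is a minimal sketch of how item features could be mapped to an IRT difficulty parameter with a regression model. It is an illustrative assumption only, not the actual Sphinx pipeline: the features, the simulated data, and the choice of a gradient-boosting model are all hypothetical.

```python
# Hypothetical sketch: predict item difficulty from item features.
# NOT the actual Sphinx pipeline; features and model choice are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy feature matrix: one row per field-tested item, e.g. passage length,
# mean word frequency, a syntactic-depth proxy, and an item-model (template) id.
n_items = 500
X = np.column_stack([
    rng.normal(300, 60, n_items),   # passage length (words)
    rng.normal(4.5, 0.8, n_items),  # mean log word frequency
    rng.normal(6.0, 1.5, n_items),  # parse-tree depth proxy
    rng.integers(0, 5, n_items),    # item-model / template id
])

# Toy "observed" IRT difficulty (b parameter) from past field testing.
b = 0.004 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.4, n_items)

# How well can difficulty be predicted before an item is ever administered?
model = GradientBoostingRegressor(random_state=0)
rmse = -cross_val_score(model, X, b, cv=5,
                        scoring="neg_root_mean_squared_error").mean()
print(f"cross-validated RMSE of predicted difficulty: {rmse:.2f}")

# Fit on all field-tested items, then predict for (pretend) newly generated items.
model.fit(X, b)
print("predicted b for new items:", model.predict(X[:3]).round(2))
```

The point of such a model is that a calibrated difficulty estimate is available at generation time, before field testing, which is where the large volumes of data mentioned above would matter most.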

5. Steve: Do you see the potential of a translation model that would effectively integrate psychometric characteristics in the production of multiple language versions of a test?

Alina: My first reaction is yes. However, it will depend on how many constraints are put on the model, on the specific application, and on how large a confidence interval for the estimates is considered acceptable. A combination of item models, AI, simulations & calibrations, and SME supervision may do it.
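As one hedged illustration of what “simulations & calibrations with SME supervision” might look like in practice, the sketch below simulates responses to a source-language form and a translated form, estimates item difficulties separately for each, and flags items whose difficulties drift apart for expert review. The Rasch simulation, the crude difficulty estimator, and the 0.5-logit flagging threshold are assumptions for illustration only, not a description of any ACT or cApStAn workflow.

```python
# Hypothetical cross-language calibration check (a crude DIF-style screen).
import numpy as np

rng = np.random.default_rng(1)
n_items, n_src, n_trg = 40, 2000, 2000

# True Rasch difficulties; pretend two translated items became harder.
b_true = rng.normal(0, 1, n_items)
b_trg_true = b_true.copy()
b_trg_true[[5, 17]] += 0.8

def simulate(b, n_persons):
    """Simulate Rasch (1PL) responses for persons with theta ~ N(0, 1)."""
    theta = rng.normal(0, 1, (n_persons, 1))
    p = 1 / (1 + np.exp(-(theta - b)))
    return (rng.random((n_persons, b.size)) < p).astype(int)

def estimate_b(responses):
    """Crude difficulty estimate: logit of the proportion incorrect."""
    p_correct = responses.mean(axis=0).clip(0.01, 0.99)
    return np.log((1 - p_correct) / p_correct)

b_src = estimate_b(simulate(b_true, n_src))
b_trg = estimate_b(simulate(b_trg_true, n_trg))

# Centre both scales before comparing (removes a sample-level shift),
# then flag items whose difficulty drifts by more than 0.5 logits.
drift = (b_trg - b_trg.mean()) - (b_src - b_src.mean())
flagged = np.where(np.abs(drift) > 0.5)[0]
print("items flagged for SME review:", flagged)
```

In a real workflow the flagged items would go to subject-matter experts and linguists, who decide whether the translation, the item model, or the calibration needs to change.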

6. Steve: How do you see the testing universe 5 years from now?

Alina: In 2012, the need for building and expanding expertise in big data was clear to me. We are now in 2020, and our community is still discussing the role of process data. My vision and my team’s vision may be bolder than what the reality may hold in 5 years, but insights and circumstances can change everything. I am writing this while we are all in quarantine, so I see a significant amount of tech-based teaching, learning, and assessment in our future.

One of my personal frustrations is the over-use and abuse of the idea of comparability in standardized tests in the US: many famous and great tests have been around for 30 to 60 years, and the test forms have been built to stay comparable over time. While comparability with the status quo of 10 years ago may be attractive to some, it precludes any progress in testing: for instance, the way in which students’ learning experience has changed over time cannot be captured by a test that is constrained by comparability to a test form from decades ago. We need to adopt a market-basket paradigm from economics (which has been proposed in the past in the context of NAEP, for example). We cannot let the past dictate what and how future students should be tested on.

7. Steve: What would be your tips for organisations looking to take their assessment to a global audience?

Alina: Think big. In terms of scale, of course, but more importantly, in terms of vision. A global audience is a market: one needs to get to know the stakeholders and understand the constraints. The world is getting smaller by the day and there are significant convergences in societies around the globe. Almost everyone wants to offer a “good educational opportunity” for the next generation, and for the most part “a good educational opportunity” means the same thing across cultures. The culture, politics, and policies differ more than the educational goals. What is a bold vision for a global educational platform? In my opinion, it involves the use of technology to facilitate personalized access to quality resources (including resources in the language of the learner), assessment systems that are integrated with the learning opportunities, and delivery platforms that are flexible and can accommodate multiple testing modalities with appropriate security affordances.

Based on my experience, one needs to develop a level of authority in the field (be recognized as a company with high-quality products), listen to the individual customers, countries, and ministries of education, work with them to craft solutions that address their unique needs, and ensure that validity and efficacy studies are included in the design of the collaboration or contract. All of these latter elements are necessary to support a company’s level of authority, and also, from a communication standpoint, to cement a trust-focused relationship in the marketplace.


Von Davier is an award-winning author and esteemed researcher who has received funding from the National Science Foundation, the Spencer Foundation, the MacArthur Foundation, and the Army Research Institute, among others. Previously, von Davier was a senior research director at Educational Testing Service (ETS), where she led the Computational Psychometrics Research Center and the Center for Psychometrics for International Tests.

Von Davier is currently an adjunct professor at the University of Iowa and Fordham University and president of the International Association of Computerized Adaptive Testing (IACAT). She is also a member of the Association of Test Publishers (ATP) board of directors, the Smart Sparrow board of directors, and the advisory board for Duolingo.

She earned her doctorate in mathematics from Otto von Guericke University of Magdeburg, Germany, and her master of science degree in mathematics from the University of Bucharest, Romania. 

Dr Alina von Davier on Twitter and LinkedIn


About ACTNext

ACTNext is an R&D-based innovation hub at ACT. ACTNext is an interdisciplinary team whose expertise ranges from test and learning tool design to computational psychometrics, AI and machine learning, and software development for data governance. The teams at ACTNext work on designing and/or redesigning ACT’s products, developing new capabilities, especially edtech capabilities, and doing foundational R&D. ACTNext is also very active in the research community and shares ideas, papers, and blogs via our website (actnext.org), and via our podcast and blogs on ACTNext Navigator, on Medium.com.