How to navigate AI in test development

Isabelle Gonthier, PhD, ICE-CCP


PSI attended numerous conferences in September and October this year. Our team participated in panel sessions, engaged in discussions, and made connections around the world – from Arizona and Colorado in the United States to Vienna in Austria and Vancouver in Canada. One of the hottest topics on every conference agenda was Artificial Intelligence (AI) in test development, specifically using generative AI to assist in the creation of exam content.

The debate around AI is taking place across almost every industry, and testing is no different. Regardless of the sector we work in, perhaps the most important point to acknowledge is that no one has all the answers on AI. We are all navigating how to responsibly use AI in test development – without compromising security, validity, and copyright.

Do you have a plan for AI in test development?

While we don’t have all the answers, the team at PSI is here to guide our clients in the responsible use of AI in testing. Whether you want to dip your toes in the water or dive right in, the first step is to create a plan. There’s a balance to be struck between being overly cautious and using AI without due consideration of the risks. Achieving this balance starts with your strategy and plan.

Do you understand what AI in test development means for your organization? Do you have sufficient knowledge of copyright and Intellectual Property (IP) issues? How are you communicating this to your stakeholders, including the Subject Matter Experts (SMEs) involved in test development? There are a lot of critical questions, and part of our shared learning is to work together, progress the conversation, and come up with the answers.

A human-validated approach

If you plan to use AI in item generation and development, your SMEs should always be a big part of the process. No matter what, your test content should be verified by a human SME. There are two key elements when it comes to working with AI and SMEs:

  1. Education and information. Be clear with your SMEs about what they should and shouldn’t do when it comes to using generative AI. This isn’t to instill fear, but to ensure they understand the benefits and potential pitfalls.
  2. How to use generative AI. Explain exactly how you want your SMEs to use AI, and provide instructions that show them how. In some cases, a module on how to use generative AI could be an additional element of SME training.

The exact approach is likely to be different for every testing program. Our first recommendation would be to use a form of Closed AI, keeping proprietary content private to ensure the security of your test items. However, our objective is to help clients find ways to use AI while leveraging the considerable investment they make in SMEs. For example, while every item should still be reviewed and validated by SMEs, the involvement of AI will likely free up time for SMEs to further enhance test items and take content to the next level.

This approach allows programs to make the most of the opportunities presented by AI in test development. This includes time and cost efficiencies, as well as the potential to create more test content, grow item banks, reduce test content exposure, and enhance test security – all while maintaining the quality, validity, and reliability of tests.

We know there are still a lot of questions about exactly how to bring AI and SMEs together in the test development process. This includes concerns about the quality of input data and the security of AI outputs. The PSI team was heavily involved in discussions about all of these questions during conference season, and we continue to invest in furthering our own knowledge on the subject.

A good place to start

As well as contributing to the ongoing debate, we are already working on AI solutions for item development. We have started to implement this in practice exams, which is a good place to begin for any testing organization. If you have concerns about using AI, practice exams offer a low-stakes environment in which to test, learn, and iterate on your processes.

Whether you have started using AI or not, our expert team is on hand to partner with testing programs to find the right way forward, specific to your test development needs. Our stance on any new technology is to find responsible and ethical ways to use tools that genuinely advance testing programs, rather than adopting a technology just because it is shiny and new.

Industry body opinion

In addition to conference sessions, concerns about the use of AI in testing have been raised more formally. Again, we are following the debate, getting involved where relevant, and feeding back the latest developments to our clients.

For example, in October this year the Professional Certification Coalition (PCC) shared a response to the August 30, 2023, Notice of Inquiry on Artificial Intelligence (AI) Systems. The PCC provided background about the important role of SMEs in a “rigorous, systemic process of defining the expected knowledge, skills and abilities for their particular profession,” also highlighting that this is a “time-intensive and expensive process.”

The view of the PCC is that AI and generative AI tools “hold promise for reducing the costs and time involved in developing test items, as these tools can be used to generate draft content.” This does come with a caveat that “in order to ensure that any content generated by an AI tool was accurate, appropriate, relevant, written in the desired style and format, and worded unambiguously, the process of test development would necessarily include a significant level of human review by SMEs.”

It is also significant that the White House has issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Much like the PCC view, the Order acknowledges that “responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.” At the same time, the document highlights the importance and urgency of “governing the use of AI safely and responsibly.” This includes having a policy, guiding principles, and priorities.

Where we are now

Official documents, conference presentations, and our engagement with AI experts range from practical examples of how to use AI in the test development lifecycle to discussions about the legal impacts on content. It’s clear that the use of generative AI in test development isn’t a thing of the future – it’s happening already.

AI in test development is not just a theoretical concept; it’s a practical tool that is improving how we develop test items and test forms. However, while the outlook is positive, our focus now is on how we use it safely and securely. Testing organizations that wait to see what happens run the risk of getting left behind. The important thing is to be in the game, to keep having challenging conversations, and to protect the integrity of testing programs by having an AI strategy and plan.
