Automated Test Evaluation

A business offering education and training to a disparate set of students leveraged our platform for automated assessment of their students’ submissions.

How we did it

Scoring Model

To mimic how humans would assess any student submission

NLP

To recognise submissions written in different ‘dialects’ of English

“When we were approached by the MindWave team, we couldn’t understand how assessment could be done by a computer. What they did blew our minds. We were stunned to see how their models predicted the scores for each student submission nearly as perfectly as our educators did.”

Challenge

The primary challenge put to the team was an automated way to accurately assess student submissions. To deliver meaningful results with our AI platform, our team first needed to identify the criteria on which a student’s submission should be judged, along with their relative importance. The next challenge was to build a scoring model that would mimic the way humans assess student submissions, while handling the variation, or dialect, of English used by each student.

  • Identify the judging criteria for submissions
  • Assess their relative importance
  • Build a scoring model to mimic human assessment
  • A method that would learn and improve the scoring model iteratively
  • Recognize student submissions in different English dialects
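The first two challenges above amount to combining per-criterion judgements into a single score. A minimal sketch of that idea follows; the criterion names and weights are purely illustrative and are not taken from the actual solution.

```python
# Hypothetical weighted-criteria scoring: each submission is rated per
# criterion on a 0-10 scale, and the overall score is a weighted sum.
# Criteria and weights below are illustrative assumptions only.

CRITERIA_WEIGHTS = {
    "relevance": 0.4,   # how well the submission addresses the task
    "accuracy": 0.35,   # factual correctness of the content
    "clarity": 0.25,    # readability and structure
}

def overall_score(ratings: dict) -> float:
    """Combine per-criterion ratings (0-10) into one weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

print(overall_score({"relevance": 8, "accuracy": 6, "clarity": 10}))
```

The relative importance of each criterion is captured entirely in the weights, which is what makes them a natural target for later refinement against human assessors.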

Solution

A solution that met all the requirements was created by systematically tackling each of the challenges. We designed and implemented new algorithms to build the best possible solution, and leveraged several powerful features of the MindWave platform to develop an accurate assessment and scoring model.

  • Core natural language understanding engine
  • Identification of core characteristics of each student submission by extracting relevant concepts and respective categories from within the collected data
  • Creation of a scoring model suitable for the submissions
  • Machine Learning to refine the scoring model by incorporating feedback from human test assessors
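The last step above, refining the scoring model against human assessors, can be sketched as fitting criterion weights so the automated score tracks human-assigned scores. The case study does not describe the actual ML pipeline, so everything below (the data, the plain gradient-descent fit, the function names) is an illustrative assumption.

```python
# Hedged sketch: learn per-criterion weights from human-scored examples
# via ordinary least squares, minimised with plain gradient descent.
# Data and hyperparameters are synthetic and illustrative only.

def fit_weights(features, human_scores, lr=0.004, epochs=5000):
    """Fit weights w so that dot(w, x) approximates the human score."""
    n = len(features[0])
    w = [0.0] * n
    for _ in range(epochs):
        grads = [0.0] * n
        for x, y in zip(features, human_scores):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for j in range(n):
                grads[j] += 2 * err * x[j]
        for j in range(n):
            w[j] -= lr * grads[j] / len(features)
    return w

# Synthetic training data: per-criterion ratings (0-10) and the human
# scores they received; here the "humans" secretly weight 0.5/0.3/0.2.
features = [[8, 6, 10], [5, 9, 7], [10, 10, 9], [4, 3, 5]]
human = [0.5 * r + 0.3 * a + 0.2 * c for r, a, c in features]
weights = fit_weights(features, human)
```

Each batch of human-assessed submissions can be folded back in by refitting, which is one simple way to realise the iterative improvement listed among the challenges.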

Result

The solution was delivered within the stipulated time. Everyone was stunned to discover that the software predicted the scores for each student submission almost as accurately as human judges did. Plans were put in place to enhance the solution further: incorporating additional criteria for judging submission quality beyond those covered in the original solution, and providing real-time feedback to guide test-takers towards improving their submissions.

Other Stories

AI powered Consumer Profiling for Banking

Given very limited customer data that included the name, e-mail address and location, MindWave produced a variety of individual consumer insights for more personalised targeting.

Search and Retrieval of News with Sentiment Related to Corporates

The client required a real-time API, integrated with their app, that would retrieve relevant news results for any professional’s name, with or without additional information such as their role or organization name.

Automated Student Assessment

MindWave AI made automated assessment and adaptive testing possible: assessing the difficulty of each question, then scoring student responses accordingly.
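One simple way to realise the idea in this teaser, and not necessarily what MindWave implemented, is to estimate each question's difficulty from historical responses and weight a student's score so that harder questions count for more. All names and data below are illustrative.

```python
# Illustrative sketch (not the MindWave implementation): difficulty is
# the fraction of past students who answered a question incorrectly,
# and a student's score weights correct answers by that difficulty.

def question_difficulty(past_responses):
    """past_responses: list of 1 (correct) / 0 (incorrect) outcomes."""
    return 1 - sum(past_responses) / len(past_responses)

def weighted_score(answers, difficulties):
    """Score correct answers in proportion to question difficulty."""
    total = sum(difficulties)
    earned = sum(d for ok, d in zip(answers, difficulties) if ok)
    return earned / total if total else 0.0

# An easy question most students got right, and a hard one most missed.
d_easy = question_difficulty([1, 1, 0, 1])   # 0.25
d_hard = question_difficulty([0, 0, 1, 0])   # 0.75
```

Under this scheme a student who answers only the easy question earns far less credit than one who answers only the hard question, which is the essence of difficulty-aware scoring.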