Automated Test Evaluation
A business offering education and training to a disparate set of students used our platform to automate the assessment of their students' submissions.
How we did it
- To mimic how humans would assess any student submission
- To recognise submissions in different 'dialects' of English
“When we were approached by the MindWave team, we couldn’t understand how assessment could be done by a computer. What they did blew our minds. We were stunned to see how their models predicted the scores for each student submission nearly as perfectly as our educators did.”
The primary challenge put to the team was to assess student submissions automatically and accurately. To deliver meaningful results with our AI platform, the team first needed to identify the criteria on which student submissions should be judged, along with their relative importance. The next challenge was to build a scoring model that would mimic the way humans assess submissions, whatever variation or dialect of English the students used.
- Identify the judging criteria for submissions
- Assess their relative importance
- Build a scoring model to mimic human assessment
- A method that would learn and improve the scoring model iteratively
- Recognize student submissions in different English dialects
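The first three steps above can be sketched as a weighted-criteria scoring model. This is a minimal illustration, not the production system: the criteria names, weights, and 0-to-1 score scale are all assumptions made for the example.

```python
# Hypothetical judging criteria and their relative importance.
# These names and weights are illustrative only.
CRITERIA_WEIGHTS = {
    "relevance": 0.40,
    "coherence": 0.35,
    "grammar": 0.25,
}

def score_submission(criterion_scores: dict) -> float:
    """Combine per-criterion scores (each in [0, 1]) into one
    weighted overall score, mimicking a human rubric."""
    return sum(
        weight * criterion_scores.get(criterion, 0.0)
        for criterion, weight in CRITERIA_WEIGHTS.items()
    )

# A submission strong on relevance and grammar, weaker on coherence:
overall = score_submission({"relevance": 0.9, "coherence": 0.8, "grammar": 1.0})
print(round(overall, 2))
```

In practice the per-criterion scores would come from the platform's language-understanding components rather than being supplied by hand.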
A solution that met all the requirements was created by systematically tackling each of these challenges. We designed and implemented new algorithms and leveraged several powerful features of the MindWave platform to develop an accurate assessment and scoring model.
- Core natural language understanding engine
- Identification of core characteristics of each student submission by extracting relevant concepts and respective categories from within the collected data
- Creation of a scoring model suitable for the submissions
- Machine Learning for scoring model refinement to incorporate feedback of human test assessors
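The last item, refining the scoring model with human assessor feedback, can be sketched as an iterative update loop: whenever a human supplies a score, the model's feature weights are nudged toward that judgment. The feature names, learning rate, and simple gradient step below are assumptions for illustration, not the project's actual method.

```python
class IterativeScorer:
    """Hypothetical sketch: a linear scorer over extracted submission
    features, refined incrementally from human assessors' scores."""

    def __init__(self, features, learning_rate=0.1):
        self.weights = {f: 0.0 for f in features}
        self.lr = learning_rate

    def predict(self, feature_values: dict) -> float:
        # Weighted sum of the submission's extracted features.
        return sum(w * feature_values.get(f, 0.0)
                   for f, w in self.weights.items())

    def incorporate_feedback(self, feature_values: dict, human_score: float):
        # One stochastic-gradient step on squared error vs the human score.
        error = self.predict(feature_values) - human_score
        for f in self.weights:
            self.weights[f] -= self.lr * error * feature_values.get(f, 0.0)

# Illustrative feature names (not the real ones).
scorer = IterativeScorer(["concept_coverage", "category_match"])

# Repeated human-labelled feedback pulls predictions toward the human score.
for _ in range(50):
    scorer.incorporate_feedback(
        {"concept_coverage": 1.0, "category_match": 0.5}, human_score=0.8)

print(round(scorer.predict({"concept_coverage": 1.0, "category_match": 0.5}), 2))
```

The same loop runs continuously in spirit: each batch of human-reviewed submissions further tightens the agreement between the model's scores and the assessors'.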
The solution was delivered within the stipulated time. The client was stunned to discover that the software predicted scores for each student submission almost as accurately as human judges did. Plans were put in place to enhance the solution further: to incorporate additional criteria for judging submission quality beyond those covered in the original solution, and to provide real-time feedback that guides test-takers towards improving their submissions.