Ensuring Fairness in E-Assessments: Best Practices and Strategies
In today’s educational environment, the assessment of knowledge and skills is undergoing rapid change, with e-assessments playing a central role in this transformation. However, a pressing question looms large: how do we ensure that these e-assessments are fair and equitable for all candidates, regardless of their background, circumstances, or abilities? Because factors such as diversity, socio-economic status, and disability all affect fairness, a proactive approach to designing, developing, and delivering assessments is crucial.
Designing for Fairness
In the process of assessment design, it is crucial to establish fairness right from the outset. This involves taking into account individuals with disabilities and those who may not be native speakers of the language.
Universal design principles advocate for the use of simple, clear, and unambiguous language in test items to cater to a diverse range of candidates. It is equally important to refrain from using language that could reinforce negative stereotypes. Providing comprehensive instructions, including scoring criteria and procedures, is essential to assist candidates in achieving their best possible results.
Incorporating collaborative authoring and diverse sources helps ensure a well-rounded representation of candidate perspectives. Offering assessments in multiple languages, including Sign Language, and conducting regular audits to maintain fairness and keep items and tests up-to-date are recommended practices.
To maintain consistency across multiple test forms, test blueprints can be employed. Item Response Theory (IRT) aids in selecting statistically equivalent items across the forms, achieving content balance, and ensuring comparable difficulty and measurement precision.
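As a minimal sketch of this idea, the Python snippet below assembles two parallel forms by pairing items of similar IRT difficulty within each blueprint area. The item pool, IDs, and difficulty (b) values are made up for illustration; operational form assembly would optimize over full item information functions, not just the b-parameter.

```python
import statistics

# Hypothetical item pool: each item carries a blueprint area and an IRT
# difficulty parameter (b) estimated from earlier administrations.
pool = [
    {"id": "Q1", "area": "algebra",  "b": -0.8},
    {"id": "Q2", "area": "algebra",  "b": -0.7},
    {"id": "Q3", "area": "algebra",  "b":  0.5},
    {"id": "Q4", "area": "algebra",  "b":  0.6},
    {"id": "Q5", "area": "geometry", "b": -0.2},
    {"id": "Q6", "area": "geometry", "b": -0.1},
    {"id": "Q7", "area": "geometry", "b":  1.1},
    {"id": "Q8", "area": "geometry", "b":  1.2},
]

def build_parallel_forms(items):
    """Within each blueprint area, pair items of adjacent difficulty and
    deal one item of each pair to each form, so both forms cover the
    blueprint with comparable overall difficulty."""
    form_a, form_b = [], []
    for area in sorted({it["area"] for it in items}):
        ranked = sorted((it for it in items if it["area"] == area),
                        key=lambda it: it["b"])
        for i in range(0, len(ranked) - 1, 2):
            form_a.append(ranked[i])      # the slightly easier of the pair
            form_b.append(ranked[i + 1])  # its closest-difficulty partner
    return form_a, form_b

form_a, form_b = build_parallel_forms(pool)
mean_a = statistics.mean(it["b"] for it in form_a)
mean_b = statistics.mean(it["b"] for it in form_b)
```

With this toy pool the two forms come out with mean difficulties of roughly 0.15 and 0.25, close enough to illustrate the point while both covering two algebra and two geometry items.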
Fairness in Delivery
To bridge the digital divide and ensure that no candidate is at a disadvantage due to a lack of digital infrastructure, universities employ multiple delivery modes, including pen-and-paper, centralized, distributed, and offline testing. In areas with weak internet connectivity, a hybrid model can be employed so that candidate responses are reliably captured in the event of internet failure. This approach helps ensure that assessments remain accessible to all.
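One way such a hybrid model can be realized is to persist every answer locally before attempting upload, so a dropped connection never loses a response. The sketch below is a simplified illustration under assumed names (`ResponseBuffer`, `uplink`), not a production design; a real client would write the pending queue to disk rather than keep it in memory.

```python
import json

class ResponseBuffer:
    """Sketch of a hybrid-delivery safeguard: every answer is recorded
    locally first, then synced; answers that fail to upload stay queued
    for the next retry."""

    def __init__(self, upload):
        self.upload = upload   # callable that raises ConnectionError offline
        self.pending = []      # recorded locally, not yet uploaded

    def record(self, candidate_id, item_id, answer):
        entry = {"candidate": candidate_id, "item": item_id, "answer": answer}
        self.pending.append(entry)  # local write never depends on the network
        self.flush()

    def flush(self):
        still_pending = []
        for entry in self.pending:
            try:
                self.upload(json.dumps(entry))
            except ConnectionError:
                still_pending.append(entry)  # keep for the next retry
        self.pending = still_pending

# Demo: the uplink is down while the candidate answers, then recovers.
sent = []
online = {"up": False}

def uplink(payload):
    if not online["up"]:
        raise ConnectionError("no connectivity")
    sent.append(payload)

buffer = ResponseBuffer(uplink)
buffer.record("cand-01", "item-07", "B")  # upload fails, answer stays queued
online["up"] = True
buffer.flush()                            # queued answer now syncs
```

The design choice worth noting is that the local write happens unconditionally before any upload attempt, so the network is only ever a synchronization concern, never a data-loss risk.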
Ensure that the examination platform adheres to the Web Content Accessibility Guidelines (WCAG) and is compatible with widely used screen-reading tools. Provide customization options such as screen settings adjustments, text size modifications, support for color blindness, and text-to-speech functionality. Accommodate candidates with disabilities by offering resources like voiced or Braille testing materials, examination aids, extended time, and breaks.
Fairness in Proctoring
While AI has enhanced proctoring efficiency, several challenges persist, including false positives, biases in identifying students of color, and cultural sensitivities, such as religious attire. Moreover, individuals with disabilities may be more susceptible to triggering false alarms, for example through involuntary head movements. Augmented proctoring, which combines AI flagging with human review, can be used to overcome these limitations.
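The key design point of augmented proctoring is that the AI only flags; a human decides. A hypothetical triage step might look like the following, where `triage_flags` (an invented name, as are the event fields and threshold) discards low-confidence events and routes the rest to a reviewer rather than auto-penalizing the candidate.

```python
def triage_flags(events, confidence_threshold=0.9):
    """Route high-confidence AI flags to a human proctor; the AI itself
    never issues a verdict. (Function name, event fields, and the
    threshold are illustrative assumptions.)"""
    review_queue = []
    for event in events:
        if event["confidence"] < confidence_threshold:
            continue  # too uncertain to burden the candidate or reviewer
        review_queue.append({
            "candidate": event["candidate"],
            "reason": event["reason"],
            "decision": "pending_human_review",  # never auto-penalize
        })
    return review_queue

# Demo with two synthetic events: only the confident flag is escalated.
events = [
    {"candidate": "c1", "reason": "second person in frame", "confidence": 0.97},
    {"candidate": "c2", "reason": "brief head movement",    "confidence": 0.55},
]
review_queue = triage_flags(events)
```

Dropping low-confidence events, rather than forwarding everything, also keeps the human reviewers' queue short enough that each escalated flag gets genuine attention.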
In order to maintain fairness and uphold meritocracy, robust security measures should be implemented during exams, such as utilizing a secondary camera for heightened surveillance, activating a lockdown browser, and restricting the usage of applications.
Fairness in Evaluation
To guarantee impartial evaluation, it is essential to anonymize candidates’ personal information. Additionally, it is crucial for markers to undergo comprehensive training to ensure they possess a clear grasp of the evaluation process and can consistently and fairly assess answer scripts.
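Anonymization can be as simple as replacing each candidate identifier with an opaque script number before scripts reach markers. The sketch below uses Python's standard `hmac` module; the key value and pseudonym format are illustrative, and in practice the key would be held by the exam office, never by markers, and rotated per sitting.

```python
import hashlib
import hmac

# Hypothetical per-session secret, held by the exam office only.
SECRET_KEY = b"rotate-me-every-exam-session"

def pseudonymize(candidate_id: str) -> str:
    """Map a candidate ID to a stable, opaque script number. A keyed
    HMAC is used instead of a bare hash so markers cannot recompute the
    mapping themselves from a published roll list."""
    digest = hmac.new(SECRET_KEY, candidate_id.encode(), hashlib.sha256)
    return "SCRIPT-" + digest.hexdigest()[:10].upper()
```

Because the mapping is deterministic for a given key, the exam office can re-identify scripts after marking is complete, while markers see only the opaque numbers.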
One effective approach is a multiple-marker system, in which markers independently evaluate answers and then collaborate to establish the final score, thereby mitigating potential biases. Alternatively, external moderators can review scored scripts to validate the fairness of marking, preventing excessively harsh or lenient evaluations.
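A minimal double-marking policy can be expressed in a few lines: average the two independent marks when they agree within a tolerance, otherwise escalate the script instead of silently splitting the difference. The function name and the tolerance value below are illustrative assumptions, not a standard.

```python
def reconcile_scores(marks, tolerance=5):
    """If two independent marks agree within `tolerance` points, average
    them; otherwise flag the script for a third marker or a moderation
    meeting. (The tolerance of 5 points is an illustrative assumption.)"""
    first, second = marks
    if abs(first - second) <= tolerance:
        return {"final": (first + second) / 2, "moderation": False}
    return {"final": None, "moderation": True}
```

For example, marks of 70 and 74 reconcile to 72, while marks of 60 and 75 are flagged for moderation, since a 15-point gap suggests the markers interpreted the rubric differently.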
When evaluating group projects, rubrics that combine team-level and individual scoring criteria can be used, so that the project as a whole is assessed fairly and each member’s contribution is duly recognized.
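Such a rubric might blend the shared team score with each member's individual score; the 60/40 weighting below is purely illustrative, and the actual split would come from the published rubric for the course.

```python
def project_score(team_score, individual_score, team_weight=0.6):
    """Blend a shared team score with a member's individual score.
    (The default 60/40 split is an illustrative assumption.)"""
    if not 0.0 <= team_weight <= 1.0:
        raise ValueError("team_weight must be between 0 and 1")
    return round(team_weight * team_score
                 + (1 - team_weight) * individual_score, 1)
```

Under this weighting, two members of the same team with individual scores of 90 and 60 receive different final grades even though the team component is identical, which is exactly the fairness property the rubric is meant to deliver.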
Analytics for Fairness
In the age of data and AI, analytics can contribute to the continuous improvement of the assessment process using data-driven insights. By meticulously scrutinizing data, we can uncover biases in various facets of assessments, including question design, grading criteria, and even the conduct of online proctors.
By scrutinizing the performance of individual assessment items, it becomes possible to identify whether certain questions are disproportionately challenging or exhibit favoritism toward specific student groups. This insight allows for adjustments to be made, fostering the development of a more balanced assessment.
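A simple screen along these lines is classical item analysis: compute each item's proportion-correct (p-value) per demographic group and flag large gaps for expert review. The sketch below is deliberately simplified; a proper differential item functioning (DIF) analysis would condition on overall ability before comparing groups, since a raw gap may reflect genuine ability differences rather than item bias.

```python
from statistics import mean

def item_difficulty_by_group(responses):
    """Proportion-correct (classical p-value) for each item, broken down
    by demographic group. `responses` is an iterable of
    (item_id, group, correct) triples with correct in {0, 1}."""
    stats = {}
    for item, group, correct in responses:
        stats.setdefault(item, {}).setdefault(group, []).append(correct)
    return {item: {g: round(mean(v), 2) for g, v in groups.items()}
            for item, groups in stats.items()}

def flag_items(stats, max_gap=0.2):
    """Flag items whose proportion-correct differs between groups by
    more than `max_gap` (an illustrative threshold)."""
    return [item for item, by_group in stats.items()
            if max(by_group.values()) - min(by_group.values()) > max_gap]

# Synthetic data: Q1 is answered very differently by the two groups,
# while Q2 performs comparably for both.
responses = [
    ("Q1", "A", 1), ("Q1", "A", 1), ("Q1", "B", 0), ("Q1", "B", 0),
    ("Q2", "A", 1), ("Q2", "A", 0), ("Q2", "B", 1), ("Q2", "B", 0),
]
stats = item_difficulty_by_group(responses)
flagged = flag_items(stats)
```

On the synthetic data, only Q1 is flagged; the output of such a screen is a review list for subject-matter experts, not an automatic verdict that the item is biased.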
In conclusion, ensuring fairness in e-assessments is a multifaceted endeavor, encompassing the design, delivery, proctoring, evaluation, and analysis of assessments. By adopting strategies like Universal Design and Item Response Theory, embracing multiple delivery modes, and employing augmented proctoring, institutions can create a level playing field for all candidates, regardless of their backgrounds, circumstances, or abilities. It’s not just about testing knowledge; it’s about testing it fairly.