The Evolution and Role of Adaptive Tests

Introduction

The concept of tailored testing, initially introduced by William Turnbull, has long been a part of oral exams. In these exams, an examiner would adjust the difficulty of questions based on the test taker’s responses, continuing until a satisfactory level of understanding and confidence in the test taker’s score was achieved. Over time, this approach has been referred to by various names, including adaptive testing, branched testing, individualized testing, programmed testing, and sequential item testing.

Computers have played a role in testing since the 1970s, initially used for scoring and processing test reports. However, it wasn’t until the 1980s that they began administering tests, and the computing power needed to run Item Response Theory (IRT) based algorithms for computer adaptive tests (CAT) became sufficient in the 1990s.

Introduction of Concepts Paving the Way for Adaptive Testing

The first adaptive test, the Binet-Simon test, was age-based (ages 3-13) and compared a child’s performance to that of an average child of the same age. L.L. Thurstone was the first to introduce the concept of item difficulty, and Benjamin’s tailor-made test utilized item difficulties to determine which items to administer based on responses. The advent of IRT for modelling item responses and estimating a test taker’s proficiency (ability) has led to the sophisticated Computer Adaptive Testing (CAT) systems we use today.

Advantages of CAT

  1. Flexible Scheduling: Tests can be taken at any convenient time within a specified window.
  2. Test Shortening: Tests can be 30-50% shorter without compromising accuracy.
  3. Relevance: Irrelevant questions are minimized.
  4. Improved Security: Each user receives a unique set of items, reducing the risk of cheating.

How IRT-based Computer Adaptive Testing (CAT) Works

The main components of a CAT system include:

  • Item Pool: A database of potential test items.
  • Initial Ability Estimation Algorithm: Used to estimate the test taker’s proficiency during the early part of the test. Techniques such as Maximum A Posteriori (MAP) and Expected A Posteriori (EAP) estimators, Maximum Likelihood Estimation with Fences (MLEF), and Maximum Likelihood Estimation with Truncation (MLET) are used.
  • Intermediate Ability Estimation Algorithm: Typically, Maximum Likelihood Estimation.
  • Final Ability Estimation Algorithm: Estimation of proficiency at the test end to report to the test taker.
  • Item Selection Criteria: The criteria based on which the next test item is selected. Methods such as Fisher information-based selection or nearest b-value selection are used.
  • Content Constraint Management: Maintaining the required proportion of items from various content areas using methods like scripting.
  • Rules for Ending the Test: How to stop the test. It could be based on test length, Standard Error of Measurement (SEM), etc.

At the start of a CAT, the test taker’s proficiency is unknown, so the test begins with an item of average difficulty. CAT adapts to the test taker, presenting more challenging items after correct responses and easier items after incorrect ones. This process continues until a predefined stopping criterion is met.

The CAT algorithm operates iteratively through these steps:

  1. Evaluate all un-administered items to determine the best one to present next, based on the test taker’s current proficiency (ability) estimate.
  2. Administer the selected item and record the test taker’s response.
  3. Update the proficiency estimate using the information gained from the latest response.
  4. Repeat steps 1-3 until the stopping criterion is met.
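The steps above can be sketched in a few dozen lines. The sketch below is illustrative only, assuming a two-parameter logistic (2PL) IRT model, maximum-information item selection, a simple gradient-based maximum likelihood update, and an SEM-based stopping rule; all function names are hypothetical, and production CAT engines use more robust estimators and item-exposure controls.

```python
import math

def p_correct(theta, a, b):
    """2PL probability that a test taker at ability theta answers correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information contributed by an item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def update_theta(theta, responses, lr=0.5, steps=50):
    """Crude maximum likelihood update via gradient ascent on the log-likelihood."""
    for _ in range(steps):
        grad = sum(a * (u - p_correct(theta, a, b)) for (a, b, u) in responses)
        theta += lr * grad / len(responses)
        theta = max(-4.0, min(4.0, theta))  # keep the estimate bounded
    return theta

def run_cat(pool, answer_fn, max_items=10, sem_target=0.4):
    """pool: list of (a, b) item parameters; answer_fn(i) -> 1 (correct) or 0."""
    theta, responses, administered = 0.0, [], set()
    for _ in range(max_items):
        # Step 1: pick the un-administered item most informative at current theta.
        item = max((i for i in range(len(pool)) if i not in administered),
                   key=lambda i: fisher_info(theta, *pool[i]))
        administered.add(item)
        a, b = pool[item]
        # Step 2: administer the item and record the response.
        u = answer_fn(item)
        responses.append((a, b, u))
        # Step 3: update the proficiency estimate with the new response.
        theta = update_theta(theta, responses)
        # Step 4: stop once the standard error of measurement is small enough.
        info = sum(fisher_info(theta, ai, bi) for (ai, bi, _) in responses)
        if 1.0 / math.sqrt(info) < sem_target:
            break
    return theta
```

Note how the loop naturally presents harder items after correct answers: a correct response pushes the estimate up, and the next maximum-information item is the one whose difficulty sits closest to the new estimate.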

Multistage Testing (MST)

Another adaptive testing design is Multistage Testing (MST), which addresses some limitations of CAT. MST offers advantages such as item review, item skipping, better control over test content, adherence to target content distributions, and consistent item order. While MST sacrifices some adaptivity compared to CAT, it remains more accurate than linear tests.

MST adapts at the sub-test (module) level rather than the item level. Each test stage has multiple modules (easy, medium, difficult). Based on performance in an initial routing module, test takers are directed to subsequent modules, where their performance determines further routing. This adaptivity at each stage continues until the final proficiency or ability estimate is reached.
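For illustration, module-level routing can be sketched as below. The panel layout, percentage cutoffs, and function names are hypothetical assumptions; operational MST programs derive routing points from IRT analysis rather than fixed proportion-correct cutoffs.

```python
def route(score_pct, cutoffs=(0.4, 0.7)):
    """Map a module score (proportion correct) to the next module's difficulty."""
    if score_pct < cutoffs[0]:
        return "easy"
    if score_pct < cutoffs[1]:
        return "medium"
    return "difficult"

def run_mst(panel, answer_fn):
    """panel: {stage: {level: [item_ids]}}; stage 1 holds the routing module."""
    level, taken = "medium", []          # the routing module is mid-difficulty
    for stage in sorted(panel):
        items = panel[stage][level]
        correct = sum(answer_fn(i) for i in items)
        taken.extend(items)
        level = route(correct / len(items))  # route to the next stage's module
    return taken, level
```

A test taker who aces the routing module is sent to the difficult module of the next stage, while a weak performance routes them to the easy one.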

Conclusion

Adaptive testing has revolutionized the way assessments are conducted, making them more personalized, efficient, and secure. From the early concepts of tailored testing to the sophisticated CAT and MST systems available today, the evolution of adaptive testing reflects significant advancements in educational and psychological measurement. With tools like Excelsoft’s SarasTM, educators and institutions can leverage cutting-edge technology to deliver accurate and engaging assessments. As adaptive testing continues to evolve, it holds great promise for enhancing learning and evaluation processes across diverse fields.

Excelsoft’s Adaptive Testing Solutions

Excelsoft provides both CAT and MST test drivers. Our CAT solution, SarasTM, offers a mix of algorithms to achieve optimal results and includes a simulator to fine-tune test configurations and algorithm choices. The solution facilitates configuration in terms of the number of test panels, stages, and module assemblies, delivering comprehensive reports on both tests and candidate performances.

Exploring Security Measures in E-Assessments

On September 11, early in the morning, my friend called me in panic, “BlackMamba attacked more than 10K students in our University.” I was shocked and surprised, “That’s not possible. Do we get these in Europe?” He clarified, “I am talking about the AI-based malware that we were discussing the other day. Can you help?” 

 

I was a bit perplexed but could understand the gravity of the situation. By the way, I used this simulated conversation to set the context for the potential threats we will face in the near future. 

 

Incidentally, “BlackMamba” is, in fact, causing significant damage to various businesses, including financial losses and reputational harm. “BlackMamba” can dynamically alter its own code each time it executes, bypassing endpoint detection software and remaining hidden. It then infiltrates targeted systems via phishing campaigns or software vulnerabilities. Once installed, the malware steals sensitive information, leading to data theft or even impairing IT infrastructure. 

 

The recent cyberattack on the University of Duisburg-Essen that shut down the entire IT Infrastructure, including the internet, was one such attack. Another incident was the ransomware attack on Munster Technological University. These are just a few incidents that rocked the cybersecurity space in Education technology.

 

Let’s roll back for a moment. Historically, industries have adopted technology at different paces; this time, however, the situation was different. Emerging technologies like AI and cloud computing were embraced almost instantly, as businesses were eager to compete from the very beginning. While technology makes our lives easier, easy access to shared information also raises many legal issues for businesses.

 

The flip side of this fairy tale is that “Fast-paced digitalization makes businesses vulnerable to cyber-attacks.”

The internet has connected our world, and cybercriminals are exploiting the information in a connected world for fraudulent purposes. In addition, AI also enhances the efficiency of hackers, making it easier to automate these crimes, lower the entry barrier, and scale up the attacks beyond the capacity of our current cyber defense systems. Many businesses are developing advanced security systems that can identify and prevent threats in real time to counter this challenge and protect themselves.

 

It’s time we evolve our security systems from submissive observers to practical responders.

 

So, what about Industries thriving in the L&D and Assessment Domain?

 

This time, we folks from L&D and Performance Assessment started early, and we found these newer technologies to be the game changers. The tech and operational nuances were discussed and strategized at all levels. Every stakeholder agreed that maintaining the security of online learning, online tests, online exams, and online assessments is vital for operations. Security plays a big role in ensuring accurate measurement, but it also protects the organization’s intellectual property and brand integrity. 

 

Implementing new-age security systems ensures that the outcomes produced by Learning and Assessment systems are fair, reliable, and valid. It also attests that any credential, certification, license, or qualification has been achieved honestly. 

 

Stakes are really high, and L&D teams are seriously looking for partners who can help them with their queries. Let me share a gist of common queries from the industry. 

Almost all of them wanted to know about new developments to ensure test security, for example, detecting any test misconduct. Around 80 percent of the people were worried about how to validate a test taker and ensure uniformity across remote and in-person testing. There were some queries on operational aspects like, “How to prevent a test taker from sharing test items or forms online?” and “What type of secure online remote tests, exams, and assessments are used in education?” 

 

In addition to these, there were some strong queries on policy implementation, such as “Preparation and communication of a strong test security policy,” “Need of a standard guideline document on security measures on how to prevent misconduct on test day,” and “A list of secure testing measures that I can use during test design and development.” 

 

But the question that is echoing everywhere is, “How can AI help students and teachers with assessment?”

 

In the end, testing and assessment come down to “conduct” and “misconduct,” and the threat of misconduct is real. It takes various forms, such as obtaining an illegal copy of content before a test, copying answers from another test taker, or having someone else take the test, among many others. Imagine a scenario where a hacker uses generative AI tools to build apps that search the internet and create fake profiles of their targets. They can build fake websites that trick people into giving up their credentials, or spin up many websites with small differences from one another, increasing their chances of bypassing network security tools. Think about the impact of this on a university where digital learning and assessment are at risk: all the data and processes are vulnerable.

 

That’s scary! You need to stay safe out there!

 

Technology providers should address these challenges by preventing and outpacing fraudulent practices, and by addressing security risks and threats to ensure the validity of the services they administer. 

So the question is, “How are we exploring Security Measures in E-Assessment?”

My Journey with Security

 

It has been almost 25 years since I was first exposed to security. That story might look outdated, but revisiting it in the present-day context might help connect the dots. 

 

My first exposure to the term security dates back to 1996, when I was advised, “Never permit the undeserving to acquire unexpected advantage. This will reduce vulnerability to hostile acts and enhance freedom of action for the deserving ones.” Getting exposure to the Reveal-Secret-Security philosophy was an eye-opener, and it helped me immensely in carrying out operations that looked vulnerable. 

 

Almost ten years later, sometime around 2006, I had a chance to share these inputs with my Principal architect, who was designing a performance assessment tool for a government project. The outcome of our discussion was to build a robust solution that: 

 

  1. Doesn’t Reveal – Plug Exposure, Leak, and Giveaway
  2. Keeps Secrets – Assign seals for Classified, Restricted, Confidential
  3. Maintains Security – Enforce and Confirm Certainty, Safety, Reliability, Dependability

 

In a week’s time, we were ready with our wireframes for a client demo. The presentation went as planned, and the prospect looked impressed. He said, “Friends, the stakes are really high, and we are not in a business of trust. We are looking for transparency. You guys do attract my attention, but here are my two cents. I want you guys to consider assessments as an open practice that endorses legitimate inference. Assessments must promulgate an Information Assurance model and employ multiple measures of performance. Lastly, assessments should measure what is worth learning, not just what is easy to measure.” He finished by asking us for a timeframe to develop these into concrete manifestations, which could be evaluated at their end for requirement alignment. We asked for a week to prepare our response. 

 

For us, the challenge was to analyze these pointers and break them into consumable pieces. By the end of day one, we came up with a list of parameters that map to the requirement. The rest were sleepless nights, but we were able to define and build the components that addressed the security challenges residing within the client’s requirements.

 

Let me detail it for you. This might be long, but you will find it interesting.

To implement the first one, “Assessment must be an open practice,” we need to address Physical Security, Human Security, Application Security, Code and Container Security, Assessment and Test Security, Critical Infrastructure Security for Network, Database, and any Third-party Device.

 

For the next one, “Assessment must endorse legitimate inference,” we need to work on Data Security and Data Access, Data minimization, Data Handling, Data Protection, Data Classification, Data Discovery, and Data Governance.

 

To implement the third one, “Assessment must promulgate Information Assurance model,” we need to define and implement Informed consent, Assurance of Availability, Protection of Confidentiality, Protection of Test Integrity, Protection of Authenticity, Non-repudiation of User Data, and Transparency Assurance.

 

The fourth one, “Assessment should employ multiple measures of performance,” had a lot of subjective components such as assessment validity with respect to learning objective, Authenticity of learner’s performance and work relevance, Sufficiency to judge the coverage of learning outcome, Reliability to track performance over a time-span, and life cycle of Terminal and Enabling Objectives. All of these are crucial to help businesses define measurement indicators, evaluation metrics, decision sequences, and classification of Responses.

 

The last one, “Assessment should measure what is worth learning, not just what is easy to measure,” has a lot of dependency on the business objectives. It covers Benchmark Learning with Performance, Benchmark Learning Content Mapped to Cognitive Load, Benchmark Learning Actions Mapped to Performance Skills, Map Test Item with Objective Domain, Map Test Bank to Performance Skill, Benchmark User Experience of Publishing Platform, Validation of the Decision tree with Respect to the Objective Domain, and Restrict Bias for Cognitive and Performance Skills.

 

These pointers are still relevant for various assessment interventions. It is essential to adopt principles like these to make sure that emerging technologies achieve their economic potential and don’t go rogue, undermining accountability, affecting the vulnerable, and reinforcing unethical biases. I use these principles to design my core security strategy. I also use emerging technologies along with my core strategy to create learning and assessment interventions that are strong and performance-ready.

Some key technologies have gained a lot of popularity in the last two years, for example:

  • Using Generative AI to eliminate algorithmic and human bias
  • Using AI-powered augmented proctoring for online exam supervision
  • Using Generative AI models to create new assessment formats
  • Using AI-driven machine learning algorithms to produce complex essay responses
  • Using AI-powered adaptive secured testing methods to improve the fairness of assessments
  • Using AI-driven algorithms to provide personalized feedback systems that detect learning gaps and offer targeted interventions

 

In addition, collaborative research on using Geo-fencing and Blockchain technology will revolutionize the assessment business. The power to make secured Question Bank Vaults with Geo-fencing, Time Stamps, Bio maps, Captcha loggers, Wearable bands, Proximity scans, and Sentiment analyses is a reality and will be realized by 2025.

 

My mentor once said, “The baseline for assessment is accuracy, and the accuracy of assessment results depends on assessment security.” 

 

For more information on our pursuit, reach out to us at connect@excelsoftcorp.com

Explore this and many such interesting articles at the eLearning Industry

Read these blogs to discover the latest insights on online assessments

  • How to Ensure the Security of your Test Content

  • Latest Trends and Developments in Online Assessments

  • Exploring Potential of Learning Assessments in Metaverse

  • A Tactical Guide For Transitioning from Paper-based to Online Testing

Ensuring Fairness in E-Assessments: Best Practices and Strategies

In today’s educational environment, the assessment of knowledge and skills is undergoing rapid changes, with e-assessments playing a central role in this transformation. However, a pressing question looms large: how do we ensure that these e-assessments are fair and equitable for all, regardless of their background, circumstances, or abilities? To account for factors like diversity, socio-economic status, and disabilities that impact fairness in assessments, a proactive approach to designing, developing, and delivering assessments is crucial.

Designing for Fairness

In the process of assessment design, it is crucial to establish fairness right from the outset. This involves taking into account individuals with disabilities and those who may not be native speakers of the language.

Universal design principles advocate for the use of simple, clear, and unambiguous language in test items to cater to a diverse range of candidates. It is equally important to refrain from using language that could reinforce negative stereotypes. Providing comprehensive instructions, including scoring criteria and procedures, is essential to assist candidates in achieving their best possible results.

Incorporating collaborative authoring and diverse sources helps ensure a well-rounded representation of candidate perspectives. Offering assessments in multiple languages, including Sign Language, and conducting regular audits to maintain fairness and keep items and tests up-to-date are recommended practices.

To maintain consistency across various test papers, test blueprints can be employed. Item Response Theory (IRT) aids in selecting statistically equivalent items across multiple forms, achieving content balance, and ensuring comparable difficulty and accuracy.

Fairness in Delivery

To bridge the digital divide and ensure that no candidate is at a disadvantage due to a lack of digital infrastructure, universities employ multiple delivery modes, including pen-and-paper, centralized, distributed, and offline testing. In areas with weak internet connectivity, a hybrid model can be employed to ensure candidate responses are reliably captured in the event of internet failure. This approach guarantees that assessments remain accessible to all.

Ensure that the examination platform adheres to the Web Content Accessibility Guidelines (WCAG) and is compatible with widely used screen-reading tools. Provide customization options such as screen settings adjustments, text size modifications, support for color blindness, and text-to-speech functionality. Accommodate candidates with disabilities by offering resources like voiced or Braille testing materials, examination aids, extended time, and breaks.

Fairness in Proctoring

While AI has enhanced proctoring efficiency, several challenges persist, including issues like false positives, biases in identifying students of color, and cultural sensitivities, such as religious attire. Moreover, individuals with disabilities may be more susceptible to triggering false alarms, such as involuntary head movements. Augmented proctoring, which combines AI with human intervention, can be used to overcome these limitations.

In order to maintain fairness and uphold meritocracy, robust security measures should be implemented during exams, such as utilizing a secondary camera for heightened surveillance, activating a lockdown browser, and restricting the usage of applications.

Fairness in Evaluation

To guarantee impartial evaluation, it is essential to anonymize candidates’ personal information. Additionally, it is crucial for markers to undergo comprehensive training to ensure they possess a clear grasp of the evaluation process and can consistently and fairly assess answer scripts.
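One simple way to anonymize scripts for blind marking is to replace each candidate identifier with a keyed pseudonym. The sketch below is illustrative (the function name and token format are made up); it uses an HMAC so that the mapping is stable, yet reversible only by whoever holds the key.

```python
import hashlib
import hmac

def pseudonymize(candidate_id, secret_key):
    """Replace a candidate ID with a stable pseudonym for blind marking.

    The same ID always maps to the same token, so scores can be re-linked
    to candidates after marking, while markers never see real identities.
    """
    digest = hmac.new(secret_key, candidate_id.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "CAND-" + digest[:12].upper()
```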

One effective approach is utilizing a multiple-marker system, in which markers independently evaluate answers and then collaborate to establish the final score, thereby mitigating potential biases. Alternatively, external markers can be employed to assess exam data to validate the fairness of scoring, preventing excessively harsh or lenient evaluations.
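A multiple-marker workflow can be reduced to a simple rule: average the independent scores when they agree, and escalate when they diverge. The sketch below is a hypothetical illustration; an appropriate tolerance depends on the rubric's scale.

```python
def reconcile(scores, tolerance=2):
    """Combine independent marker scores for one answer script.

    Returns (final_score, needs_adjudication): if the markers' scores
    diverge by more than `tolerance`, the script is routed to a senior
    marker for adjudication instead of being averaged.
    """
    spread = max(scores) - min(scores)
    if spread > tolerance:
        return None, True
    return sum(scores) / len(scores), False
```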

When evaluating group projects, various rubrics that encompass both team and individual scoring criteria can be utilized, ensuring a fair assessment of group projects and duly recognizing the contributions of deserving candidates.

Analytics for Fairness

In the age of data and AI, analytics can contribute to the continuous improvement of the assessment process using data-driven insights. By meticulously scrutinizing data, we can uncover biases in various facets of assessments, including question design, grading criteria, and even the conduct of online proctors.

By scrutinizing the performance of individual assessment items, it becomes possible to identify whether certain questions are disproportionately challenging or exhibit favoritism toward specific student groups. This insight allows for adjustments to be made, fostering the development of a more balanced assessment.
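As a first pass, such disparities can be screened by comparing each item's proportion correct across candidate groups. The sketch below is a crude illustrative screen with hypothetical names; operational programs use proper differential item functioning (DIF) methods, such as Mantel-Haenszel, which condition on overall ability before flagging an item.

```python
from collections import defaultdict

def flag_items(responses, threshold=0.2):
    """Flag items whose proportion correct differs across groups by > threshold.

    responses: iterable of (item_id, group, correct) with correct in {0, 1}.
    Returns the set of flagged item ids.
    """
    totals = defaultdict(lambda: [0, 0])  # (item, group) -> [num_correct, count]
    for item, group, correct in responses:
        totals[(item, group)][0] += correct
        totals[(item, group)][1] += 1
    rates_by_item = {}
    for (item, group), (c, n) in totals.items():
        rates_by_item.setdefault(item, {})[group] = c / n
    return {item for item, rates in rates_by_item.items()
            if len(rates) > 1 and max(rates.values()) - min(rates.values()) > threshold}
```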

Conclusion

In conclusion, ensuring fairness in e-assessments is a multifaceted endeavor, encompassing the design, delivery, proctoring, evaluation, and analysis of assessments. By adopting strategies like Universal Design and Item Response Theory, embracing multiple delivery modes, and employing augmented proctoring, institutions can create a level playing field for all candidates, regardless of their backgrounds, circumstances, or abilities. It’s not just about testing knowledge; it’s about testing it fairly.

Exploring the Challenges and Opportunities of Online Assessments

Online assessments have become increasingly prevalent in educational and professional settings, revolutionizing the way we evaluate and measure knowledge and skills. With advancements in technology and the widespread availability of internet access, conducting assessments online offers several advantages. However, like any emerging practice, online assessments also come with their own set of challenges. In this article, we will explore both the challenges and opportunities presented by online assessments.

Opportunities Offered by Online Assessments

Online assessments present numerous opportunities for both students and educators to enhance their learning and teaching experiences.

Some of the specific opportunities that online assessments offer are:

  • Flexibility: Online assessments offer flexibility and convenience as test-takers have the freedom to schedule and complete assessments according to their preferences, eliminating the requirement of being physically present at a specific venue.

  • Data-driven Instruction: Online assessments can provide educators with data about student performance, which can be used to improve instruction. This data can be used to identify areas where students are struggling and to make adjustments to the curriculum.
  • Interactive Test Format: Online exams utilize interactive question formats with multimedia elements like videos, photos, and simulations. These formats engage test-takers, simulate real-world scenarios, and assess complex skills that traditional pen-and-paper tests may struggle to measure.
  • Immediate Feedback: Online assessments often offer automated grading and immediate feedback, enabling test-takers to receive prompt results, which can help them identify areas where they need to improve.
  • Scalability: Online assessments can be administered to large groups of students at once, which can save time and resources. This can be especially beneficial for schools with large student populations.

Challenges Associated with Online Assessments

Given the abundance of opportunities that online assessments offer, the widespread popularity of online assessments is not surprising. However, with this transition come unique challenges that need to be addressed to ensure the integrity, authenticity, and fairness of online assessments.

Some of the key challenges associated with online assessments that require proactive solutions are:

  • Technical: Technical problems can hinder the smooth conduct of online assessments. Issues such as internet connectivity, software compatibility, and device malfunction can disrupt the assessment process and affect the test-taker’s experience.
  • Academic Integrity: Ensuring the security and integrity of online assessments is crucial. The ease of accessing unauthorized materials and online resources, or of collaborating with others, can increase the risk of cheating and plagiarism among students.
  • Access and Equity: Online assessments require access to reliable internet connections, appropriate devices, and digital literacy skills. However, not all individuals or regions have equal access to these resources, leading to potential disparities and limited opportunities.
  • Digital Literacy: Students and even teachers who lack digital literacy may struggle with navigating the features and functionalities of online assessment platforms. They may find it challenging to locate assessments, submit responses, or access feedback. This can result in frustration and hinder their ability to participate in the assessment process effectively.
  • Measuring Learning Outcomes: Measuring learning outcomes in online assessments can be more challenging than in traditional face-to-face assessments because of the absence of direct observation and limited student interaction.
  • Data Privacy: Online assessment involves collecting and storing sensitive student data, including personal information, academic performance, and assessment results. Insufficient security measures in online assessment platforms can make them vulnerable to cyberattacks.

Mitigating Challenges and Maximizing Opportunities of Online Assessments

In order to effectively address the challenges and maximize the advantages of online assessments, institutions can implement several strategies.

Some of the measures that can be taken to tackle these challenges and optimize the opportunities of online assessments are:

  • Offer Technical Support: Institutions should provide comprehensive technical support to participants, including clear instructions for accessing the assessment platform, troubleshooting guides, and a dedicated support team. Conducting system compatibility checks beforehand and sharing recommendations for browsers and devices can minimize technical issues.
  • Ensure Academic Integrity: Instructors can employ proctoring software to monitor students’ activity during assessments, detecting suspicious behaviors like online searches or collaboration. Plagiarism detection software can identify unauthorized content usage, while secure browser settings prevent access to external websites and applications during the assessment. Mandating assessments in controlled environments like labs or test centers reduces the chances of cheating with unauthorized materials.
  • Ensure Accessibility and Equity: Instructors should strive for equity by providing all students with access to the technology and resources they will need to complete the assessment. This can involve offering alternative assessment formats for individuals with disabilities, ensuring compatibility with assistive technologies, and providing accommodations for those with specific needs.
  • Improve Digital Literacy: Offer training programs or workshops to educate students and teachers on digital literacy skills specific to online assessments, focusing on navigating assessment platforms, understanding technical requirements, and using assessment tools effectively. Incorporate hands-on practice and provide resources like video tutorials or step-by-step guides.
  • Incorporate Various Question Types: Include question formats that go beyond simple recall and encourage higher-order thinking, like essay questions, case studies, simulations, or scenario-based assessments that require students to apply knowledge to solve problems or analyze complex situations.
  • Provide Detailed Rubrics: Develop clear and specific scoring rubrics that align with the learning outcomes. Rubrics help ensure consistent and fair assessment while providing students with transparent expectations and criteria for success.
  • Ensure Data Security: Implement robust data protection measures, including encryption, secure servers, and adherence to relevant data privacy regulations. Obtain consent from students or their guardians for data collection and usage. Educate teachers and staff about data privacy best practices to minimize risks.

Conclusion

To sum up, online assessments provide valuable opportunities for evaluating knowledge and skills, but they also come with challenges that require attention for successful assessment experiences. Mitigating these challenges involves offering comprehensive technical support, implementing strong security measures, and promoting academic integrity and equity. By carefully planning, continuously improving, and utilizing suitable technologies and resources, online assessments can effectively evaluate and measure knowledge and skills, ensuring a fair and meaningful assessment experience in the digital era.

Explore SarasTM to Meet Your Assessment Needs and Overcome Challenges

SarasTM offers a comprehensive solution tailored to tackle the obstacles encountered in online assessments. Its user-friendly features and intuitive interface empower educators and students to effortlessly navigate the digital assessment landscape. By offering extensive technical support and prioritizing data privacy and security, SarasTM enhances the assessment experience and promotes student engagement and academic integrity.

Don’t let the challenges hold you back; embrace SarasTM and unlock the full potential of online assessments in the educational journey.

Learn more about SarasTM features and how it can be the right platform for you.

Exploring Potential of Learning Assessments in Metaverse

Assessments have long been a critical component of the learning process, providing students and educators with valuable insights into what has been learned and where there is room for improvement. However, as technology continues to evolve, so do the methods for conducting assessments. In particular, the rise of the metaverse has opened up exciting new possibilities for assessing learning.

Metaverse – The Next Generation of the Internet

The metaverse is a virtual space where people can interact with each other and digital objects in a fully immersive and interactive way. A device-agnostic Metaverse accessible via PCs, game consoles, and smartphones could result in a huge learning ecosystem. 

The Metaverse Infrastructure Building

The Metaverse proponents envisage a fully immersive content-streaming environment where users can move from one experience to another seamlessly. This would involve a continuous stream of interconnected data, requiring a computational efficiency improvement of over 1,000x versus today.

Making the vision of the Metaverse a reality would require significant investment in a confluence of technologies, such as:

  • Compute (Central Processing Unit (CPU), Graphics Processing Unit (GPU)) 
  • Storage (Data Centers, Cloud), Edge Computing 
  • Network Infrastructure (Low Latency, High Bandwidth) 
  • Consumer Hardware (Headsets, Real World Modelling) 
  • Game Development Platforms

This investment would need to be orders of magnitude higher than today’s levels to facilitate the enabling infrastructure for virtual worlds to be enhanced by VR and AR.


The Metaverse Connectivity Challenge

Meta Platforms highlighted the connectivity challenges and significant advancements needed in network latency, symmetrical bandwidth, and overall speed of networks for an open and interoperable Metaverse.


Network Latency Constraint

Today’s latency-sensitive apps, like video calling and cloud gaming, have a round-trip latency of 75 ms to 150 ms, while complex multiplayer games can go below 30 ms. However, for the Metaverse to be truly immersive, graphics must update much faster, i.e., within single to low double-digit milliseconds. Local real-time rendering could enable this but would necessitate large downloads for complex scenes, which may be unfeasible. Instead, remote rendering over the edge cloud, or a hybrid of local and remote rendering, will likely play a greater role in the future.

 

Immersive Video-Streaming Gaps

The Metaverse will likely be accessed via a head-mounted display, centimeters away from the eye, requiring large-resolution videos, potentially well beyond 4K. This would require substantial improvements in network throughput and innovations across the hardware and software stack.

 

Ways in which assessments can be conducted in the metaverse

Assessments in the metaverse can potentially transform how we evaluate learning. They can be more engaging and interactive than traditional assessments and can provide educators with detailed information about a student’s understanding of a subject.

Interactive Simulations

One of the most promising applications of the metaverse for learning is in the area of simulation-based assessments. Simulations are powerful tools for assessing learners’ knowledge and skills, as they provide a safe and controlled environment where learners can practice and receive feedback on their performance.

For example, in a science class, students could be asked to conduct experiments in a virtual lab, which could then be used to evaluate their understanding of the scientific concepts being studied. In the metaverse, simulations can be even more realistic and engaging, as learners can interact with virtual objects and environments in a way that feels real.

Game-based Assessments

Games have long been used as a way to engage students and assess their learning. The metaverse allows for the creation of game-based assessments that can be used to evaluate a student’s understanding of a particular subject.

For instance, a teacher could design a challenge that requires students to apply their arithmetic skills to solve puzzles and unlock new game levels.

Collaborative Assessments

The metaverse allows for real-time collaboration between students, which can be used to assess teamwork and collaboration skills. The students could be asked to work together to solve a problem in a virtual environment, with their performance evaluated based on how well they work together.

For example, in business education, learners could collaborate in the metaverse to develop and pitch a business idea to a panel of virtual investors. The learners would be assessed not only on the quality of their idea but also on their ability to work effectively as a team and communicate their ideas effectively.

Similarly, in language education, learners could collaborate in the metaverse to practice their language skills with native speakers from around the world.

Performance-based Assessments

Students can demonstrate their skills and knowledge through performance-based assessments in the metaverse.

For instance, in an art class, students could be asked to create a digital artwork, which could then be evaluated based on the quality of the work and the creative thinking involved.

Personalized Assessments

The metaverse allows for the creation of personalized assessments that can be tailored to a student’s individual needs and abilities.

For example, a student who struggles with reading could be given a virtual environment that is designed to help improve his/her reading skills.

Formative Assessments

The metaverse also provides new opportunities for formative assessment, where learners receive ongoing feedback on their progress and performance. In the metaverse, this feedback can be provided in real-time through the use of virtual mentors and personalized learning paths.

For example, in K-12 education, learners could work through a series of virtual activities and assessments that are tailored to their individual learning needs. As they progress through these activities, they receive feedback and guidance from virtual mentors, who help them identify areas where they need to improve and provide suggestions for how to do so.

Conclusion

To summarize, constructing the Metaverse infrastructure is a major challenge that necessitates significant investment in compute, storage, network infrastructure, consumer hardware, and game development platforms. The connectivity challenge is a significant obstacle that requires improved network latency, symmetrical bandwidth, and overall network speed for an open and interoperable Metaverse.

However, the Metaverse has potential beyond gaming and entertainment. Assessments in the Metaverse can revolutionize how we evaluate learning through interactive simulations, game-based assessments, collaborative assessments, performance-based assessments, personalized assessments, and formative assessments. The Metaverse offers new opportunities for engaging and interactive assessments that can provide educators with comprehensive insights into a student’s understanding of a subject.

How to Select the Best Assessment Platform for Higher Ed?

Assessments are central to learning for any higher-ed program, offering educators the data to enhance learning, define goals, and promote institutional change. Without the right software, gathering this data can be laborious and time-consuming, so you must ensure that your software can meet your requirements.

The global pandemic gave rise to a shift towards online and blended learning, leading to a surge in digital assessment platforms. Understanding how these solutions can add value to the learning process at your institution is imperative.

Let’s first understand what digital assessments are before delving into any details.

Digital assessment is the application of digital technologies to create, deliver, grade, report, and manage examinations that assess students’ learning. Digital assessments can be delivered as remote, online assessments or as in-person deployments on campus. Whether digital or analog, the goal of assessment remains evaluation for effective learning and credentialing.

Why higher-ed institutions are increasingly adopting digital assessments

  • Save time and effort: Automated grading simplifies evaluation and takes less faculty time, allowing more time for instruction and learning. Depending on the class size, commercial off-the-shelf software has been shown to reduce grading time by 30% to 50% on average.
  • Advanced capabilities: A good assessment platform offers educators complex workflows for collaborative authoring, the capability of tagging questions, question-level analysis, management, and more.
  • Formative Feedback Opportunities: A frequently cited advantage of online assessment is the ease with which detailed feedback can be provided to students. Feedback can be provided on time and frequently in various formats, including written, audio-recorded, and video-recorded.
  • Accessibility and Flexibility: Both students and teachers value how accessible online tests are. Instead of being forced to work within the confines of a classroom, students have more freedom in approaching their assignments because they can decide when and where to complete them. Students with jobs, family obligations, or other reasons that may limit their capacity to be present on campus may feel greatly relieved.

It should come as no surprise that an effective assessment platform can help your institution tremendously. Selecting the ideal software, however, is a complex decision. For easy integration and quick results, you should consider a few crucial criteria when selecting an assessment platform.

What features should be considered for choosing an e-Assessment platform?

  • User-friendly interface: The interface of the assessment platform needs to be user-friendly. An intuitive and simple-to-use platform is favorably perceived by both educators and students. The assessments must be simple for the candidates to interact with and complete. For educators, it should be easy to create and evaluate the tests and generate results instantly.
  • Rich media support: The platform should enable creating questions with rich media (e.g., audio and video-based assessment, Geometry, Mathematics, Scientific Equations, and more) to make assessments more engaging and effective.
  • Enhanced Security: The platform must offer a high level of security to prevent candidate malpractices. The integrity of an online test can be ensured by using screen, audio, and video proctoring. The platform should also be secured so that the data is not exposed to any vulnerability.
  • Accessibility Standards: The platform should adhere to mandated requirements for accessibility, such as 508 accessibility guidelines and WCAG 2.0 Level AA requirements.
  • Multi-language compatibility: The platform should offer compatibility with regional languages, regardless of the standardization of the language, to make it accessible for both educators and students.
  • Reporting and Analysis: The platform should offer comprehensive reporting and analytics to provide educators with rich diagnostics and support for student remediation.
  • Availability of technical support: In addition to assisting students in self-regulating their learning, timely support enables instructors to feel confident and competent using e-learning tools. Technical support can be offered on-campus or through active platform support.

Why is SarasTM the best Online Assessment Platform for your institution?

Saras™ Test and Assessment is a comprehensive online testing solution that enables educators to author and deliver highly personalized assessments. Adhering to globally recognized QTI standards, Saras™ offers instructors a user-friendly interface to create tests and deliver them to learners as formative assessments, summative assessments, or in-classroom quizzes. With over 650 configuration settings, Saras™ adapts to your way of working, making it the right fit for your exam requirements.

Learn more about SarasTM features and how it can be the right platform for your institute.

Latest Trends and Developments in Online Assessments

The pandemic in 2020 changed the way we teach and learn. Schools continued to close and reopen in many parts of the world, demanding alternatives for instructional continuity. This drew considerable investment in modernizing and digitizing educational systems. A wide range of digital tools is emerging to support learning and assessment and to enhance learning outcomes.

What does it mean for online assessments in the near future?

The online exam software market size was valued at USD 5.3 billion in 2020 and is projected to reach USD 11 billion by 2028, growing at a CAGR of 9.01% from 2021 to 2028. Beyond what LMSs have traditionally offered, external digital assessments are increasingly being adopted. Significant developments in computer-based testing are redefining the assessment potential. It would be helpful for practitioners, decision-makers, researchers, and system developers to identify trends in the characteristics of online assessment systems to select or create them effectively.

Emerging trends in online assessments

  • Formative assessments are rising and becoming more interactive
    With the necessity of early feedback and student remediation, digital tools are increasingly used over analog methods to create and deploy assessments faster. Educators prefer to create exams with various types of questions such as multiple-choice questions, fill-in-the-blanks, match the columns, essay type, Likert scale, image labeling, etc. Multimedia, like audio, images, video, etc., is included to make assessments more engaging.
  • Assessments are becoming more candidate-centric
    There is a trend toward more actionable assessment data from which insights can be drawn. Rather than imposing rigid standards to evaluate students’ competencies, educators are tailoring assessments to students’ responses to personalize learning. Using technology tools, teachers can simplify and shorten the feedback loop to drive their instruction. Platforms are incorporating Item Response Theory (IRT) based question and test calibration. This allows educators to appraise student performance using metrics such as depth of knowledge level, skills, guessing factor, and more.
  • There is a greater emphasis on inclusivity
    With a growing concern for accessible education, there has been an increased focus on making assessments more diverse, inclusive, and fair. By utilizing assistive technologies, test platforms are becoming more accessible to users with various needs, challenges, and disabilities. Some important ways to make the test interface perceptible to users are to provide text alternatives for non-text content, to make the content easier to see and hear, to provide different input modalities beyond the keyboard, and to provide multilingual support.
  • Traditional assessment delivery is being phased out
    Institutes are no longer dependent only on pen-and-paper-based delivery. Although there has been a slow transition back to physical spaces, educators are well aware of the advantages of online proctoring. Many have adopted a blended or online assessment delivery model, allowing educators to monitor candidates’ test progress remotely. Proctors can see students’ live webcam feeds and on-screen activities, e.g., navigation, click activity, etc. The latest remote proctoring platforms offer features like an AI anti-cheat engine, a lockdown browser, etc., to ensure the sanctity of the exam.
  • AI for auto-marking is becoming popular
    Educators are increasingly using the auto-marking function of online assessment platforms, which frees up significant time for instruction. In addition to scoring multiple-choice or short-form questions, these platforms use AI to grade longer answers and written work, making auto-marking an indispensable platform function.
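The Item Response Theory (IRT) calibration mentioned above rests on item response models such as the three-parameter logistic (3PL) model. As a rough sketch (the parameter values below are illustrative, not drawn from any real item bank), the probability that a candidate answers an item correctly can be computed from the candidate's ability and the item's discrimination, difficulty, and guessing parameters:

```python
import math

def p_correct(theta, a, b, c):
    """Probability of a correct response under the three-parameter
    logistic (3PL) IRT model:
      theta = test taker's ability
      a = item discrimination, b = item difficulty, c = guessing floor."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# An average-ability candidate (theta = 0) on a medium-difficulty item:
print(round(p_correct(theta=0.0, a=1.2, b=0.0, c=0.2), 3))  # → 0.6
```

Platforms that calibrate items this way can report the guessing factor and difficulty metrics mentioned above directly from the fitted parameters.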

How SarasTM can address your current and future assessment needs

The adoption of online assessments will continue to rise, making it necessary to use an advanced, efficient online assessment platform to meet the requirements. Whether you are an educational publisher, a school, a university, or a certification body, Saras™ offers a secure end-to-end online testing solution, combining test authoring, delivery, proctoring and evaluation.

Learn more about SarasTM features and how it can be the right platform for you.

A Tactical Guide For Transitioning from Paper-based to Online Testing

So, you’ve recently purchased an Online Testing Platform that you hope will enable you to transition your organization from paper-based testing to online testing. What next? There are many questions to address, and while no two testing organizations are exactly alike, many issues are common to all. With this blog, we will try to provide a tactical guide for planning and executing your transition from traditional paper-based testing to delivering tests online.

The challenges of moving away from Paper-based assessments are numerous. Some are related to logistics and some are related to psychometrics. It is important to identify these challenges and plan the steps necessary to overcome them in order to achieve success. Here we look at a stepped approach where you can continue to use your organization’s proven item bank of questions while making this transition. This approach will ensure test integrity while switching to online and also provides time for working out the logistics first. We also recommend short pilots at various stages in order to make the overall transition process smooth.

Configuring your Testing System

Below are some of the key topics that you should consider with your team as you build and configure your Online Testing System –

  • Item Authoring and Item Banking
    1. Question types and considerations for changes.
    2. Randomization options and impacts
    3. Workflow/s for building and approving questions/tests
    4. Use New/Existing Metadata for Items
    5. IRT impact during the transition
    6. Build Blueprints
    7. Create templates for Test Construction
    8. Create templates for Test Delivery
  • General
    1. Manage Roles & Privileges
    2. Ensure the security of Items & Data
    3. Ensure the application branding is in line with your organization’s branding guidelines
    4. Confirm how candidates will access the tests (through a portal, link, etc.)
  • Test delivery
    1. Test from any location or specific computerized locations
    2. Identification of testing sites
    3. Secure delivery options
    4. Logistics for connectivity
    5. Distribution of information related to tests (i.e. list of test takers for proctors, any supplemental forms, etc.)
    6. Randomization of questions/responses
    7. Proctor training–In-person proctoring or remote proctoring
  • Report/Analytics
    1. Question types and impact
    2. Gathering new data
    3. Checking for cheating
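Several items above call for randomizing questions and responses. One common pattern is a seeded, per-candidate shuffle: each candidate sees a different order, yet proctors and scorers can reproduce the exact sequence later for auditing. A minimal Python sketch (the exam and candidate identifiers are hypothetical):

```python
import random

def shuffled_questions(question_ids, candidate_id, seed_base="exam-form-A"):
    """Return a per-candidate question order that is random but reproducible.
    Seeding the RNG with the candidate ID means the same candidate always
    gets the same order, which supports rescoring and dispute resolution."""
    rng = random.Random(f"{seed_base}:{candidate_id}")
    order = list(question_ids)
    rng.shuffle(order)
    return order

print(shuffled_questions(["Q1", "Q2", "Q3", "Q4"], candidate_id="C-1001"))
```

The same idea applies to shuffling answer options within an item, provided the answer key is remapped using the same seed.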

Planning for Rollout of Live Testing

As you work through your logistics consider an approach to rollout for your live online testing program.

Develop a video tutorial for candidates to understand navigation and screen functions.

Internal Pilot test using all materials (except actual test) that would be used by your real test takers.

  • Create a “real” test for staff in your organization not directly involved in creating tests. Or you can use a focus group if you have access to one.
  • If possible, use one of the sites you plan on using for the actual test, or one as close as possible in setup.
  • Set up a few common situations (a workstation with a flaky internet connection, a bad mouse, a left-handed test taker in the room). It is always a good idea to have a couple of workstations set up for left-handed users.
  • Have the proctor give the same instructions as would be used on test day.
  • Have test takers complete the test as would be done on test day.
  • Give each person a note sheet for them to write down questions. Only answer questions in person that you would in the test environment.
  • Proctor the test using the software that will be used during the actual test. Have the proctor try some actual managing of the room situations such as pausing the test for a person to go to the restroom; pausing the test for an entire room; adding time for a person/group/all.
  • For the test content consider randomizing questions or answers to see how the scoring works.
  • Score the exam and review the results.
  • Survey the test takers and hold a feedback session.
  • Make adjustments

Small Pilots with Real Test Takers

Schedule and hold pilots with real candidates that:

  • Require candidates to log in ahead of the test to alleviate any login issues at the site.
  • Require candidates to review the video tutorial.

Characteristics of a good 1st pilot exam for a scheduled, in-person, and proctored exam:

  • A small set of known candidates (e.g. Promotional exam for Civil Service organizations; single class in a school)
  • All test takers located near a test location
  • Test site with all the same computers
  • Tech-savvy proctors
  • Standard multiple choice questions with proven reliability
    1. All stems and questions are somewhat short to avoid any screen arrangement issues
    2. Questions can be randomized (or not).
  • Survey Candidates a week after the exam
  • Get test results out as soon as possible
  • Review and identify any adjustments that need to be made

Characteristics of a good 2nd pilot exam for a scheduled, in-person, and proctored exam:

  • A larger, but still small set of known candidates (e.g. Promotional exam for Civil Service organizations; 2-3 classes in a school)
  • Test takers at multiple test locations
  • Test sites with different computers
  • Tech-savvy proctors
  • Standard multiple choice questions with proven reliability
    1. All stems and questions are somewhat short to avoid any screen arrangement issues
    2. Questions can be randomized (or not).
  • Survey Candidates a week after the exam
  • Get test results out as soon as possible
  • Review and identify any adjustments that need to be made.

And then,

The team’s learnings from administering the pilot exams will give you a sampling of the myriad logistical changes required for migrating to online testing, and will also position you for success.
From here, you can move forward with building your full program including:

  • Online Scheduling
  • Testing from anywhere using remote proctoring
  • New question types to take advantage of your new technology and more.

SaaS-based Online Proctoring, Here to Stay!

The pandemic in 2020 brought upon the realization and the critical need for organizations to equip for a ready transition to digital assessments in such unforeseen circumstances. These organizations include educational institutions, universities, certification and testing organizations, government bodies, and many more. With digital assessments came the need to adopt remote or online proctoring to ensure the integrity of exams delivered, especially in high-stakes exams. Smaller businesses also looked to online proctoring solutions for administering and monitoring exams on a small scale with economical budgets and resources.

Enter, a SaaS-based proctoring solution! A SaaS model of software has multiple benefits for any organization or individual using such an offering, not limited to but including – low setup and infrastructure costs, pay for what you use, scalability, accessibility from anywhere, easy customization, and more. In keeping with other cloud services, SaaS offers small businesses an opportunity to disrupt existing markets while taking advantage of fair SaaS pricing models.

We are now in the post-pandemic era with businesses and organizations slowly transitioning back to physical spaces. However, the world saw the opportunity and power of online learning and assessments, including the advantages of online proctoring, and we think it is here to stay.

Why a SaaS solution for Online Proctoring?

In the context of online proctoring solutions, a SaaS-based solution minimizes the barrier to entry for small organizations looking to embrace digital assessments and proctoring. A single-hosted SaaS instance can support several organizations with multi-tenancy, reducing costs for businesses that use the proctoring platform with a pay-as-you-use pricing model.

During the pandemic, an increasing number of organizations, small and large, adopted online proctoring solutions to monitor and maintain the integrity of digitally administered exams. This surge in adoption has helped solution providers further understand the various proctoring needs of organizations worldwide based on exam needs, budget, resource availability, and more. It has further enabled multiple layers of proctoring services to emerge providing the flexibility of choice to businesses wanting to use online proctoring. With the SaaS-based model, businesses can pick and pay only for the proctoring services they choose, making it more economical and convenient. Consider an example of an institution wanting to deliver online exams in a remote location with poor internet access. They may choose only to avail of the image-based record/review proctoring service and may additionally opt for AI-based screening of recorded images. In such a case, they will need to pay only for what they pick.

For larger businesses, it allows them to explore the services that best suit their requirement. They can either choose to scale on the SaaS platform or probably opt for an enterprise version of the solution.

Conclusion

With a SaaS-based proctoring solution, businesses or organizations can onboard quickly and start using the proctoring services immediately. A business can simply register on the web-based proctoring platform, select the type of proctoring services required, and pay for the chosen bundle of services to obtain a usage license, after which it can begin conducting and monitoring online exams without delay. This breaks the conventional method of purchasing software, as the platform can offer a usage-based subscription, for example, priced by the number of tests. It can even integrate easily with any standards-compliant test delivery platform with just a few simple configurations within the SaaS application.

SaaS-based proctoring solutions have been around for quite some time now, but such advantages imply that they will continue to be the preferred choice for the foreseeable future.

Excelsoft’s easyProctor offers a comprehensive SaaS-based online proctoring solution alongside a suite of digital assessment solutions for any organization looking to transition to online testing. Contact us to learn more about how our products can help you transform.

How to ensure the Security of your Test Content

An article on How to ensure the Security of your Test Content by Excelsoft

We live in a world where access to information is at our fingertips! In this “Online First” world, it is becoming increasingly difficult, yet increasingly important for Test Publishers and Test Delivery Platforms to ensure the security of Test Content.

Items are the core IP of Test Publishers. It is very important to protect these items and limit their exposure as much as possible during the test workflow. This not only includes secure storage and transfer of items at the platform level but also includes ensuring the security of items during the test authoring process, while performing admin activities, during the test delivery process, and also post-test delivery.

This article lists a few guidelines that you need to follow to ensure the security of your test content during the assessment lifecycle.

Security During Authoring and Admin Activities


The number of admins and authors may be limited (when compared to the number of test takers). However, most of these users typically have direct access to your item and test banks. You need to have the right policies in place to ensure your items have minimum exposure:

  • Always use separate instances for Item Authoring/Banking and Test Delivery.
  • Keep your Item Bank and Test Assembly servers behind a firewall
  • Ensure strict role-based access to the Item Bank
  • Enforce a workflow-based authoring of items to ensure authors and reviewers can only access and modify items that are assigned to them
  • Enforce the use of a Lockdown Browser (yes, a lockdown browser!) while authoring items to prevent authors and reviewers from copying or printing item content.
  • Disable preview of items and tests for non-power users.

Security During Test Delivery


The test content has the maximum exposure during test delivery. Here are a few guidelines to limit this exposure:

  • Use multiple forms for each of your tests to limit the exposure of items in your bank.
  • As much as possible, deliver your tests using a Lockdown Browser, so that it is not easy for test-takers to take screenshots, copy or print the Test Content.
  • Use templatized items with placeholders wherever possible. For example, consider the item stem: “John has 2 dozens of bananas. He gives 5 bananas to Mary. How many bananas does John have now?” Here the names and quantities, namely [John], [Mary], [bananas], [2 dozens], [5], and [He], can all be made placeholders and filled in dynamically based on certain rules. So the item might as well be “Tracy has 4 dozens of apples. She gives 14 apples to Chris. How many apples does Tracy have now?”
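The templating idea above can be sketched in a few lines of Python. This illustrative generator (the names, value ranges, and template string are invented for the example) fills the placeholders and derives the answer key from the same values, so every rendered variant scores itself consistently:

```python
import random

# Hypothetical item template; [name_a], [qty_a], etc. are the placeholders.
TEMPLATE = ("{name_a} has {qty_a} dozens of {fruit}. {pronoun} gives {qty_b} "
            "{fruit} to {name_b}. How many {fruit} does {name_a} have now?")

def render_item(rng):
    """Fill the placeholders with randomly chosen values and compute
    the answer key from those same values."""
    name_a, pronoun = rng.choice([("John", "He"), ("Tracy", "She")])
    name_b = rng.choice(["Mary", "Chris"])
    fruit = rng.choice(["bananas", "apples"])
    qty_a = rng.randint(2, 4)    # dozens held initially
    qty_b = rng.randint(3, 15)   # items given away
    stem = TEMPLATE.format(name_a=name_a, qty_a=qty_a, fruit=fruit,
                           pronoun=pronoun, qty_b=qty_b, name_b=name_b)
    answer = qty_a * 12 - qty_b  # answer key derived from the same values
    return stem, answer

stem, answer = render_item(random.Random(7))
print(stem)
print(answer)
```

Because each candidate sees a different surface form of the same underlying item, a leaked copy of one variant is of little use to the next test taker.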

Security Post Test Delivery


In spite of all these efforts, candidates can still manage to take snapshots of their screens from another device, write down the test content on paper, or simply remember the questions and dump them on common brain dump sites on the web. Your platform needs to monitor the web, especially the brain dump websites, and flag any items that may have been leaked.

  • Use web crawlers that can crawl the web, especially the brain dump websites to identify any exposed content.
  • Flag items that have been exposed and notify key stakeholders
  • Replace flagged items regularly and weed them out of your item bank.
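The flagging step above can be approximated with simple text overlap once pages have been scraped. This is only a minimal sketch (a real system would pair a crawler with fuzzier matching to survive paraphrasing); it scores how much of an item's stem appears verbatim on a page:

```python
import re

def _ngrams(text, n=5):
    """Lowercased word 5-grams of a text, for verbatim-overlap matching."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def exposure_score(item_stem, scraped_page, n=5):
    """Fraction of the item's word n-grams found verbatim in a scraped page.
    A high score suggests the item has leaked and should be flagged."""
    grams = _ngrams(item_stem, n)
    if not grams:
        return 0.0
    return len(grams & _ngrams(scraped_page, n)) / len(grams)

item = "John has 2 dozens of bananas. He gives 5 bananas to Mary."
page = "dump: john has 2 dozens of bananas he gives 5 bananas to mary lol"
print(exposure_score(item, page) > 0.9)  # True
```

Items whose score crosses a chosen threshold can then be flagged for stakeholder review and rotated out of the bank, as described above.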


At Excelsoft, we constantly strive to improve our products and platforms to meet the highest security standards and adopt the best security practices.


Please reach out to adarsh@excelsoftcorp.com for more information on our products and services.

Lockdown Browser Allow-List vs Block-List

A blog on Lockdown Browser Allow-List vs Block-List by Excelsoft

When implementing a lockdown browser for Test Security, there are two main approaches to application control: Application Whitelisting (Allow-list) and Application Blacklisting (Block-list).


With no defined guidelines on which is better, test administrators often face situations where they have to choose between the two. In this article, we’ll look at the pros and cons of both approaches.

Blacklisting (Block-list)

Blacklisting is one of the oldest approaches in computer security. This approach is used by most antivirus software to block unwanted programs and applications. Blacklisting applications involves creating a list of all the programs, applications, or executables that might pose a threat to test security, either by capturing test content or by assisting the candidate in taking the test. The default behavior in blacklisting is to allow access and block only the applications in the block-list.

Blacklisting takes a Threat-centric approach and looks for apps that can be a threat.

Pros and cons of blacklisting

Pros

  1. Blacklisting is best suited in BYOD scenarios where you do not have control over the Hardware and OS configurations of the candidate machines.
  2. The biggest benefit of blacklisting is its simplicity. You need to block only known non-essential software and run everything else.
  3. All other essential programs and applications can continue to run by default, reducing the volume of support tickets raised for essential applications being blocked

Cons

  1. While blocking every application that is distrusted is simple and efficient, it may not always be the best approach as new applications are created every day, making it impossible for you to keep a comprehensive and updated list of applications to be blocked
  2. There is always a possibility of an unknown/unregistered/rogue application running in the background without getting blocked by the Lockdown Browser.

Whitelisting (Allow-list)

Just as the name suggests, whitelisting is the opposite of blacklisting: only a list of trusted programs and applications is allowed to run. This method of application control can be based on policies like file name, product, or application, or it can be applied at the executable level, where the digital certificate or cryptographic hash of an executable is verified. The default behavior in whitelisting is to block access and allow only the applications in the allow-list.

Whitelisting takes a trust-centric approach and looks for trusted apps.
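The two default behaviors can be summarized in a few lines. In this illustrative Python sketch (the process names are made up for the example), block-listing allows anything not explicitly blocked, while allow-listing blocks anything not explicitly allowed:

```python
# Hypothetical lists for illustration only.
BLOCK_LIST = {"teamviewer.exe", "obs64.exe", "anydesk.exe"}   # threat-centric
ALLOW_LIST = {"lockdownbrowser.exe", "explorer.exe"}          # trust-centric

def may_run(process, mode):
    """Default-allow under block-listing; default-deny under allow-listing."""
    if mode == "blocklist":
        return process.lower() not in BLOCK_LIST
    if mode == "allowlist":
        return process.lower() in ALLOW_LIST
    raise ValueError(f"unknown mode: {mode}")

print(may_run("notepad.exe", "blocklist"))   # True  (unknown app runs)
print(may_run("notepad.exe", "allowlist"))   # False (unknown app blocked)
```

The single line that differs between the two branches is exactly the trade-off discussed below: an unknown application slips through a block-list but is stopped by an allow-list.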

Pros and cons of whitelisting

Pros

  1. Whitelisting only allows a limited number of applications to run, effectively minimizing the security threat
  2. Best suited in a controlled environment (Testing Centers, University Labs), where it is easy to control the programs and applications to be allowed on each of the machines.

Cons

  1. This approach is not suitable in a BYOD scenario where there is no control on the Hardware and OS configurations of the machines used.
  2. Building a whitelist may seem easy, but one inadvertent move can result in help desk queries piling up. Inability to access essential applications would bring various critical tasks to a halt.
  3. Determining the list of programs and applications that should be allowed to execute across Hardware and OS combinations is an intensive process, and keeping this list updated is even harder.

Conclusion

Whitelisting is clearly the more secure option, but it is best suited in a controlled Test Environment. Blacklisting is less secure, but it’s a more practical option. It is simple, reasonably secure and is best suited when your candidates are taking the test on their own devices.