Adaptive Quiz: CAT (Computer-Adaptive Testing) implementation for Moodle

Activities ::: mod_adaptivequiz
Maintained by Adam Franco, Vitaly Potenko
Create tests that efficiently measure users' abilities by adapting the question difficulty to the estimate of each user's ability.

The Adaptive Quiz activity enables a teacher to create tests that efficiently measure the takers' abilities. Adaptive tests are composed of questions selected from the question bank that are tagged with a score of their difficulty. The questions are chosen to match the estimated ability level of the current test-taker. If the test-taker succeeds on a question, a more challenging question is presented next. If the test-taker answers a question incorrectly, a less challenging question is presented next. This technique produces a sequence of questions converging on the test-taker's effective ability level. The test stops when the test-taker's ability is determined to the required accuracy.

The Adaptive Quiz activity uses the "Practical Adaptive Testing CAT Algorithm" by B.D. Wright published in Rasch Measurement Transactions, 1988, 2:2 p.24 and discussed in John Linacre's "Computer-Adaptive Testing: A Methodology Whose Time Has Come." MESA Memorandum No. 69 (2000).

This Moodle activity module was created as a collaborative effort between Middlebury College and Remote Learner. It was later adopted by Vitaly Potenko to keep it compatible with new Moodle versions and enhance it with new features.

Below you'll find short documentation on the plugin to explain its essential concepts and flows.

The Question Bank

To begin with, questions to be used with this activity are added or imported into Moodle's question bank. Only questions that can be graded automatically may be used. Additionally, questions should not award partial credit. The questions can be placed in one or more categories.

This activity is best suited to determining an ability measure along a unidimensional scale. While the scale can be very broad, the questions must all provide a measure of ability or aptitude on the same scale. In a placement test, for example, questions low on the scale that novices are able to answer correctly should also be answerable by experts, while questions higher on the scale should only be answerable by experts or a lucky guess. Questions that do not discriminate between takers of different abilities will make the test ineffective and may provide inconclusive results.

Take for example a language placement test. Low-difficulty vocabulary and reading-comprehension questions would likely be answerable by all but the most novice test-takers. Likewise, high-difficulty questions involving advanced grammatical constructs and nuanced reading comprehension would likely be answered correctly only by advanced, high-level test-takers. Such questions would all be good candidates for use in an adaptive test. In contrast, a question like "Is 25¥ a good price for a sandwich?" would not measure language ability but rather local knowledge, and would be as likely to be answered correctly by a novice speaker who has recently been to China as it would be answered incorrectly by an advanced speaker from Taiwan, where a different currency is used. Such questions should not be included in the question pool.

Questions must be tagged with a 'difficulty score' using the format 'adpq_n', where n is a positive integer, e.g. 'adpq_1' or 'adpq_57'. The range of the scale is arbitrary (e.g. 1-10, 0-99, 1-1000), but it should have enough levels to distinguish between question difficulties.
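As a sketch, the 'adpq_n' tag format can be validated and parsed with a short regular expression. The function name here is illustrative, not part of the plugin's actual code:

```python
import re

def difficulty_from_tag(tag):
    """Parse an 'adpq_n' difficulty tag; return n as an integer,
    or None if the tag does not match the expected format."""
    match = re.fullmatch(r"adpq_([1-9]\d*)", tag)
    return int(match.group(1)) if match else None

print(difficulty_from_tag("adpq_57"))        # 57
print(difficulty_from_tag("difficulty_5"))   # None
```

Tags that do not follow the format (or use a zero/negative number) are simply ignored by this sketch.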

The Testing Process

The Adaptive Test activity is configured with a fixed starting level. The test begins by presenting the test-taker with a random question from that starting level. As described in Linacre (2000), it often makes sense for the starting level to be in the lower part of the difficulty range, so that most test-takers answer at least one of the first few questions correctly, helping their morale.

After the test-taker submits their answer, the system calculates the target difficulty of the question it will select next. If the last question was answered correctly, the next question will be harder; if it was answered incorrectly, the next question will be easier. The system also calculates a measure of the test-taker's ability and the standard error of that measure. A random question at or near the target difficulty is then selected and presented to the user.

This process of alternating harder questions following correct answers and easier questions following wrong answers continues until one of the stopping conditions is met. The possible stopping conditions are as follows:

  • there are no remaining easier questions to ask after a wrong answer
  • there are no remaining harder questions to ask after a correct answer
  • the standard error in the measure has become precise enough to stop
  • the maximum number of questions has been exceeded
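The four stopping conditions above can be sketched in Python. All function and parameter names here are illustrative assumptions, not the plugin's actual API; the standard error formula is Wright's, discussed further below:

```python
import math

def should_stop(answered, right, wrong, max_questions, se_to_stop_pct,
                easier_available, harder_available, last_correct):
    """Return True when any of the four stopping conditions is met."""
    # No suitable question remains in the direction the algorithm must move.
    if last_correct and not harder_available:
        return True
    if not last_correct and not easier_available:
        return True
    # The measure has become precise enough (needs at least one right
    # and one wrong answer for the standard error to be defined).
    if right > 0 and wrong > 0:
        se_logit = math.sqrt((right + wrong) / (right * wrong))
        se_pct = (1 / (1 + math.exp(-se_logit)) - 0.5) * 100
        if se_pct <= se_to_stop_pct:
            return True
    # The question limit has been reached.
    return answered >= max_questions
```

Here `se_to_stop_pct` mirrors the "standard error to stop" setting described in the parameters section below.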


Attempt graph


Test Parameters and Operation

The primary parameters for tuning the operation of the test are:

  • the starting level
  • the minimum number of questions
  • the maximum number of questions
  • the standard error to stop

Relationship between Maximum Number of Questions and Standard Error

As discussed in Wright (1988), the formula for calculating the standard error is given by:

Standard Error (± logits) = sqrt((R+W)/(R*W))

where R is the number of right answers and W is the number of wrong answers. This value is on a logit scale, so we can apply the inverse-logit function to convert it to a percentage scale:

Standard Error (± %) = ((1 / ( 1 + e^( -1 * sqrt((R+W)/(R*W)) ) ) ) - 0.5) * 100
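Both formulas are easy to check numerically. A minimal Python sketch (the function names are ours, not the plugin's):

```python
import math

def standard_error_logits(right, wrong):
    """Wright's standard error of the ability measure, in logits."""
    return math.sqrt((right + wrong) / (right * wrong))

def standard_error_pct(right, wrong):
    """The same value passed through the inverse-logit function,
    expressed as a +/- percentage."""
    se = standard_error_logits(right, wrong)
    return (1 / (1 + math.exp(-se)) - 0.5) * 100

# Best case for 10 questions: 5 right, 5 wrong.
print(round(standard_error_pct(5, 5), 2))    # 15.3
print(round(standard_error_pct(50, 50), 2))  # 4.98
```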

Looking at the Standard Error function, it is important to note that it depends only on the number of right answers and the number of wrong answers, not on any other features such as which particular questions were answered right or wrong. For a given number of questions asked, the Standard Error will be smallest when half the answers are right and half are wrong. From this, we can deduce the minimum standard error achievable for any number of questions asked:

  • 10 questions (5 right, 5 wrong) → Minimum Standard Error = ± 15.30%
  • 20 questions (10 right, 10 wrong) → Minimum Standard Error = ± 11.00%
  • 30 questions (15 right, 15 wrong) →  Minimum Standard Error = ± 9.03%
  • 40 questions (20 right, 20 wrong) →  Minimum Standard Error = ± 7.84%
  • 50 questions (25 right, 25 wrong) →  Minimum Standard Error = ± 7.02%
  • 60 questions (30 right, 30 wrong) →  Minimum Standard Error = ± 6.42%
  • 70 questions (35 right, 35 wrong) →  Minimum Standard Error = ± 5.95%
  • 80 questions (40 right, 40 wrong) →  Minimum Standard Error = ± 5.57%
  • 90 questions (45 right, 45 wrong) →  Minimum Standard Error = ± 5.25%
  • 100 questions (50 right, 50 wrong) →  Minimum Standard Error = ± 4.98%
  • 110 questions (55 right, 55 wrong) →  Minimum Standard Error = ± 4.75%
  • 120 questions (60 right, 60 wrong) →  Minimum Standard Error = ± 4.55%
  • 130 questions (65 right, 65 wrong) →  Minimum Standard Error = ± 4.37%
  • 140 questions (70 right, 70 wrong) →  Minimum Standard Error = ± 4.22%
  • 150 questions (75 right, 75 wrong) →  Minimum Standard Error = ± 4.07%
  • 160 questions (80 right, 80 wrong) →  Minimum Standard Error = ± 3.94%
  • 170 questions (85 right, 85 wrong) →  Minimum Standard Error = ± 3.83%
  • 180 questions (90 right, 90 wrong) →  Minimum Standard Error = ± 3.72%
  • 190 questions (95 right, 95 wrong) →  Minimum Standard Error = ± 3.62%
  • 200 questions (100 right, 100 wrong) →  Minimum Standard Error = ± 3.53%

What this listing indicates is that for a test configured with a maximum of 50 questions and a "standard error to stop" of 7%, the maximum number of questions will always be encountered first and stop the test. Conversely, if you are looking for a standard error of 5% or better, the test must ask at least 100 questions.

Note that these are best-case scenarios for the number of questions asked. If a test-taker answers a lopsided run of questions right or wrong, the test will require more questions to reach the target standard error.
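Under the same best-case assumption (answers split as evenly as possible), the minimum number of questions needed to reach a given target standard error can be found by direct search. A sketch, using Wright's formulas from above:

```python
import math

def min_questions_for_target(se_target_pct, limit=1000):
    """Smallest number of answered questions that can reach the target
    standard error (in percent) in the best case."""
    for n in range(2, limit + 1):
        right, wrong = n // 2, n - n // 2   # as even a split as possible
        se_logit = math.sqrt(n / (right * wrong))
        se_pct = (1 / (1 + math.exp(-se_logit)) - 0.5) * 100
        if se_pct <= se_target_pct:
            return n
    return None

print(min_questions_for_target(5.0))   # 100, matching the listing above
```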

Minimum Number of Questions

For most purposes this value can be set to 1, since the standard error to stop will generally set a baseline for the number of questions required. It can be configured to be greater than the number of questions needed to achieve the standard error to stop if you wish to ensure that all test-takers answer additional questions.

Starting Level

As mentioned above, this will usually be set in the lower part of the difficulty range (about 1/3 of the way up from the bottom) so that most test-takers will be able to answer one of the first two questions correctly and get a morale boost from their correct answers. If the starting level is too high, low-ability users will be asked several questions they can't answer before the test begins asking them questions at a level they can answer.

Scoring

As discussed in Wright (1988), the formula for calculating the ability measure is given by:

Ability Measure = H/L + ln(R/W)

where H is the sum of all question difficulties answered, L is the number of questions answered, R is the number of right answers, and W is the number of wrong answers.

Note that this measure is not affected by the order of answers, just the total difficulty and the numbers of right and wrong answers. The measure depends on the test algorithm presenting alternating easier/harder questions as the user answers wrong/right, and may not be applicable to other algorithms. In practice, this means that the ability measure should not be greatly affected by a small number of spurious right or wrong answers.

As discussed in Linacre (2000), the ability measure of the test taker aligns with the question-difficulty at which the test-taker has a 50% probability of answering a question correctly.

For example, given a test with levels 1-10 and a test-taker who answered every question at level 5 and below correctly and every question at level 6 and up incorrectly, the test-taker's ability measure would fall close to 5.5. Remember that the ability measure has error associated with it; be sure to take the standard error into account when acting on the score.
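This worked example can be checked directly against the scoring formula, assuming one question per level so that H = 1 + 2 + ... + 10 = 55 and L = 10:

```python
import math

def ability_measure(total_difficulty, answered, right, wrong):
    """Wright's ability estimate: mean question difficulty plus ln(R/W)."""
    return total_difficulty / answered + math.log(right / wrong)

# One question per level 1-10; levels 1-5 answered right, levels 6-10 wrong:
h = sum(range(1, 11))                  # total difficulty H = 55
print(ability_measure(h, 10, 5, 5))    # 5.5, since ln(5/5) = 0
```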

Contributors

Adam Franco: Former lead maintainer


Comments

  • baraa abd el-hady
    Tue, 27 Dec 2022, 10:13 PM
    Hello all,
    this is great work.
    I need to make one modification:
    can I stop the quiz from ending when the student gives wrong answers many times?

    I need the student to answer wrong 3 times, then go to the next level of difficulty, and so on.
    How can this happen?
  • Vitaly Potenko
    Fri, 30 Dec 2022, 5:25 AM
    Hi Baraa, what you're enquiring about is currently not possible. Moreover, this is not how the CAT algorithm utilized by the plugin is intended to work; see the link in the plugin's description above for more info on that.
    However, the good news is that some big changes to the plugin are coming in the new year. More specifically, the plugin will provide an interface to inject different implementations of the CAT algorithm. This will allow developers to implement other algorithms to be used by the plugin. Your suggested modification may be such an implementation, but you'll have to provide the PHP code either yourself or with the help of a PHP developer.
  • synnac w
    Fri, 30 Dec 2022, 1:02 PM
    hi Vitaly, about the possibility of implementing different algorithms: this is really great news. By other algorithms, do you mean the new version will come with some built-in alternatives, or will we have to do the tweak ourselves? BTW, when will the new version be released? Thank you!
  • Khalid KABCHI
    Mon, 9 Jan 2023, 6:55 PM
    Hello Vitaly,
    Let me first introduce myself: Khalid KABCHI, doctoral student at Mohamed First University, Morocco.
    I am doing scientific research on the pedagogical contributions of adaptive testing to the assessment of learner achievements. To do this I opted to use the Adaptive Quiz plugin.
    But unfortunately, I have a problem related to the display of question images for the pupils (in editing mode, on the other hand, the images are displayed without any problem).
    I want to inform you that I use:
    Moodle 3.10.9 (Build: 20220117)
    mod_adaptivequiz 2.1.2 / mod_adaptivequiz: 2022040100

    Please help me solve this problem which is blocking all my research.
  • Francisco Javier Córdoba Gómez
    Fri, 13 Jan 2023, 11:18 PM
    Hello Vitaly, Adam and all. I would like you to help me with how to create an adaptive quiz for one of my courses in Moodle.
    I also have a question: to create an adaptive quiz, do I have to create different categories of questions with different levels of difficulty, or do I have to create only one bank with all the questions combining different levels of difficulty?
    Thank you very much.
    BW
    FC
  • Vitaly Potenko
    Mon, 23 Jan 2023, 5:51 PM
    Hey folks! Sorry for being silent for some period, the new version has just been released for Moodle 3.8 - 4.0, see the release notes.

    to synnac w - by default the plugin will contain the algorithm it has now; nothing will change from the user's perspective. It's more like an SDK for third-party developers to inject their desired behaviour into how the adaptive algorithm works.

    to Khalid KABCHI - nice to meet you here! The version which has just been released contains the fix. Please, let me know if you still have any problems with it.

    to Francisco Javier Córdoba Gómez - you may start with having one category of questions in your bank where you put the questions you want to use in the adaptive quiz, with all the difficulty levels you have, and then select this category in the quiz settings. This is the most straightforward way. Let me know if you need extra guidance on that. Thank you for the question though; you also reminded me that there are some wiki pages in Adam's legacy repository which I haven't transferred to any new place. There is some good info there on how to use the plugin. Transferring that documentation to the plugin's current repository should definitely be added to my agenda.
  • Khalid KABCHI
    Tue, 24 Jan 2023, 1:31 AM
    Hi Vitaly,
    Nice to meet you too.
    I will come back to you as soon as I try to install the recent plugin.
    Thanks to you for agreeing to help me to unblock this situation which hinders the progress of my thesis.
  • Khalid KABCHI
    Tue, 24 Jan 2023, 7:40 PM
    Hello again Vitaly,
    I have indeed upgraded my platform to install Moodle 4.0.6 (Build: 20230116).
    Then I created my question bank and my categories according to the level of difficulty. BUT when I try to add a resource the following error message appears:
    An error has occurred
    Programming error detected. This needs to be fixed by a programmer: Invalid component used in plugin/component_callback():ltisource_message_handler
  • Vitaly Potenko
    Tue, 24 Jan 2023, 9:27 PM
    Hello Khalid, I very much doubt this error has anything to do with the adaptive quiz plugin. Just looking at this - plugin/component_callback():ltisource_message_handler - I can say it's somehow related to some LTI tool; I cannot recall anything similar in the plugin I support. I'd recommend checking how other activities work, and checking what third-party plugins are installed in your Moodle.
  • Khalid KABCHI
    Wed, 15 Feb 2023, 7:11 PM
    hey Vitaly,
    After migrating to Moodle 4.0.6 (Build: 20230116), when I try to install the Adaptive Quiz plugin I get an error message:
    The extension must be installed and activated.
    Unicode (UTF-8) data storage is required. Any new installation of Moodle must be done in a database with a default Unicode (UTF-8) character set. If you are upgrading Moodle, please migrate your database to Unicode (see the Notifications page).

    How can I solve this problem? Is there anyone who can help me because all my research is blocked because of this problem.
  • Vitaly Potenko
    Mon, 20 Feb 2023, 1:00 AM
    Hi Khalid,

    Again, I very much doubt the error message is related specifically to the plugin. I would first make sure your Moodle instance can run smoothly without the plugin, then try adding some other third-party plugins to see if any other plugin causes this error to be thrown.
  • Paulo Paclibar
    Fri, 18 Aug 2023, 10:07 AM
    Hi!
    Can I integrate this plugin on my TalentLMS course?
    Thank you!
  • Vitaly Potenko
    Fri, 18 Aug 2023, 5:59 PM
    Hi Paulo,

    I'm not familiar with TalentLMS at all, so I cannot answer that, sorry.
  • Paulo Paclibar
    Mon, 21 Aug 2023, 10:40 AM
    Hi Vitaly,
    Thank you for the response. If we use Moodle LMS, can we then integrate this?
  • Vitaly Potenko
    Tue, 22 Aug 2023, 3:15 AM
    @Paulo,
    you don't need to take any specific steps to use this plugin with Moodle; like any other plugin in this repository, it's ready to be installed in Moodle and you're set! This is basically the point of this repository and one of the strongest points of Moodle itself: extending the possibilities of your LMS as easily and quickly as possible.