Activities: CAPQuiz

mod_capquiz
Maintained by Sebastian Gundersen, George Schaathun, André Storhaug
CAP is short for Computer Adaptive Practice. In CAPQuiz, proficiency is measured by a rating. Good answers increase the rating, and bad answers decrease it. To increase the rating, students need to give good answers more often than bad ones over time. Estimating question difficulty is known to be difficult. CAPQuiz automates this process to some extent. The question author must provide an initial estimate, but CAPQuiz improves the estimates by comparing how the same student answers different questions. Hence the rated question sets will improve over time.
33 sites · 120 downloads · 7 fans

What is it?

CAP is short for Computer Adaptive Practice, a term coined by Klinkenberg, Straatemeier, and van der Maas (2011). Where most LMS quiz systems give students a fixed sequence of questions regardless of how well they answer, a CAP system estimates student ability from their answers and tries to find questions at the right level of difficulty.

In CAPQuiz, proficiency is measured by a rating. Good answers increase the rating, and bad answers decrease it. To increase the rating, students need to give good answers more often than bad ones over time. We have used CAPQuiz as a mandatory assignment, where students have to reach a certain rating in order to be allowed to sit the exam.

Estimating question difficulty is known to be difficult. CAPQuiz automates this process to some extent. The question author must provide an initial estimate, but CAPQuiz improves the estimates by comparing how the same student answers different questions. Hence the rated question sets will improve over time.
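
To make the rating mechanism concrete, here is a minimal sketch of an Elo-style update of the kind described above. It is written in Python purely for illustration (CAPQuiz itself is a PHP Moodle module), and the K-factors are hypothetical example values, not the plugin's actual parameters.

# Illustrative Elo-style rating update (a sketch, not CAPQuiz's actual code).
# Both the student and the question carry a rating; each answer nudges the two
# ratings in opposite directions, so difficulty estimates also improve over time.

def expected_score(student_rating, question_rating):
    """Modelled probability that the student answers the question correctly."""
    return 1.0 / (1.0 + 10.0 ** ((question_rating - student_rating) / 400.0))

def update_ratings(student_rating, question_rating, correct,
                   k_student=32.0, k_question=8.0):
    """Return updated (student, question) ratings after one answer.
    The K-factors here are hypothetical illustration values."""
    expected = expected_score(student_rating, question_rating)
    actual = 1.0 if correct else 0.0
    # The more surprising the result, the larger the adjustment.
    return (student_rating + k_student * (actual - expected),
            question_rating - k_question * (actual - expected))

# Example: a 1200-rated student answers a 1300-rated question correctly.
# update_ratings(1200, 1300, True) gives roughly (1220.5, 1294.9).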

Documentation

Documentation is available here, including installation instructions.

History

The idea of an adaptive learning system at NTNU in Ålesund (then Ålesund University College) was first conceived by Siebe van Albada. His efforts led to a prototype, known as MathGen, written as a standalone server in Python.

The first prototype was tested by several lecturers, and was well received by students. There were, however, many problems which we lacked the resources to handle. Most of these problems had already been solved by Moodle and the STACK question type, and it made sense to reimplement the adaptive quiz functionality in Moodle to take advantage of this.

Credits

Project lead: Hans Georg Schaathun (hasc@ntnu.no)

Developers:

Original idea: Siebe Bruno Van Albada (siebe.b.v.albada@ntnu.no)

The first prototype was funded in part by Norgesuniversitetet.

The development of CAPQuiz has been funded in part by internal grants from Ålesund University College and NTNU Toppundervisning at NTNU - Norwegian University of Science and Technology.

Contributors

Sebastian Gundersen (Lead maintainer)
George Schaathun

Comments

  • Plugins bot
    Sat, Sep 29, 2018, 5:20 AM
    Approval issue created: CONTRIB-7465
  • Dan Marsden
    Fri, Feb 1, 2019, 3:26 PM
    Thanks - please see the final comments related to the review of this plugin in CONTRIB-7465.
  • Kees Koopman
    Fri, Mar 15, 2019, 10:42 PM
    Hi,

    I came across the CAPQuiz activity in the plug-in collection, a plug-in that allows adaptive testing.

    A student has a certain 'rating', is offered questions of various levels and, depending on the answers, goes up or down in 'rating' (or stays the same).

    That requires a considerable number of different questions, each with its own difficulty rating.

    Suppose the candidate receives a basic rating of 1200 at the start of the quiz.

    QUESTION: Is it the intention that he/she will first be offered questions with a rating of 1200? And that questions with a higher or lower rating follow, depending on the answers given? The manual is unclear about this.

    Has anyone else dug into this who can or wants to provide some clarity?

    Thanks in advance.

    Kees Koopman
  • George Schaathun
    Fri, Mar 15, 2019, 11:39 PM
    I'll try to give some clarity.
    First of all, the plugin is brand new; only just ready for field testing, and I am preparing to do so in my module in the Autumn this year. I did, however, use a similar system (an in-house, standalone implementation) in 2017, so I know the idea is sound.
    In terms of how it is supposed to work,
    1. The intention is to use parameterised questions, which is simple to do in mathematics and physics, using question types such as STACK or (I think) numeric. In this case, it makes sense to give the students the same question over and over again, just with different numbers. You are absolutely right that if each question is unique, as would be common in many non-mathematical subjects, an enormous number of questions is needed.
    2. Question delivery is usually randomised and it can be tuned somewhat according to pedagogic requirements. If a student at 1200 gets a question at 1200, it means that the success probability is 50%. Klinkenberg et al (2011)¹ argue that the student should get odds better than 50%, maybe 75%, which means that he should ideally get a question with a lower rating. This is configurable.
    3. Optimising the question selection algorithm, I take it, is an open, and very interesting, research problem. We are working on this, and I know of other teams, too, who have similar ideas and challenges.
    4. The rating model used is the same one used in chess. This means that you increase in rating when you answer better than expected, and the more surprising the result, the greater the change. Similarly, bad answers make you drop. At every point in time, you get random questions chosen to give you approximately the ideal success probability.
    Klinkenberg's paper explains the underlying mechanism very well, although there are several points open to debate and further research.

    So essentially, you are right, the intention is what you suggest. I shall review the manual when I prepare my own module, and make sure that it is better explained. I hope this helps.

    What subjects do you teach? Do you think the idea could work in your subject?



    ¹ https://www.sciencedirect.com/science/article/pii/S0360131511000418
  • Kees Koopman
    Sat, Mar 16, 2019, 12:48 AM
    Hi George,

    Thank you for your quick and comprehensive response.

    1. I hadn't thought of that. Thank you for the tip (numeric with its own data set).

    2. Student level = 1200 | Question level = 1200 ==> success 50%
    I get that. :-)
    Klinkenberg and others say: prefer a higher chance, for example 75%.
    I also understand that. :-)

    You write: "This is configurable."
    Do you mean the setting at "Configure N-closest":
    Desired user win probability 0.75

    I am a physics teacher in secondary education. This is perfect for practicing and drilling formulas, calculations, etc.
    I am also a trainer of teachers and organizations / schools in the field of Education & ICT.

    Many thanks for this digital elaboration of a grand idea.

    Sincerely,
    Kees Koopman.
  • George Schaathun
    Sat, Mar 16, 2019, 1:03 AM
    Yes, that's right. When you set the desired win probability, the system calculates the corresponding (ideal) question rating, and picks a question uniformly at random from the set of N questions with ratings closest to the ideal. The underlying model is the same one used in chess, that of Elo. Except for linear scaling, this is equivalent to the Rasch model used in psychometrics. (A rough sketch of this selection step is given after the comments below.)
  • Kees Koopman
    Mon, Mar 18, 2019, 2:52 AM
    Hi George,

    Thank you for your quick reply.
    I now understand how to make questions and also how to assign a rating.

    I have a follow-up question.

    I would like to perform a conditional action based on the rating or number of stars achieved. How do I do this?

    Thank you in advance for your response.
    Sincerely,
    Kees Koopman
  • George Schaathun
    Fri, Mar 22, 2019, 9:33 PM
    I am not sure that is possible; it goes beyond anything I have ever used Moodle for. Do you have a quick example of another plugin which allows you to perform conditional actions?

    I know we had some challenges with this, as we wanted to implement the stars as badges, rather than as a plugin internal score. It may or may not be related.

    I'll try to talk to my colleagues. With a good example from another plugin, it might be an easy thing to fix.
  • Kees Koopman
    Sat, Mar 23, 2019, 5:35 PM
    Hi George,

    Thank you for your response.
    My question was: "I would like to perform a conditional action based on the rating or number of stars achieved. How do I do this?"

    I think I mean something simpler than you may think.

    For example: a student receives a grade in the gradebook after taking a test. I can then make a conditional action depend on that grade. If 'grade for test < 6.0', then 'do the review material', etc.

    This 'review material' has a 'Restrict access' condition. For example, a certain grade for a test, a certain Stash object or a certain level (Level up!).

    What I want is that there is a 'Restrict Access filter' that can filter on 'student level' or 'number of stars'.

    See also: https://moodle.org/plugins/availability_xp and https://moodle.org/plugins/availability_stash.

    I hope my explanation is clear; if not, just ask.

    Thanks in advance!
    Regards, Kees Koopman.
  • George Schaathun
    Mon, Mar 25, 2019, 3:56 PM
    Right, I think what you are asking for is for CAPQuiz to issue a grade, which can be recognised as such by other Moodle modules/the core.
    It is still beyond my use of Moodle; I use Moodle only for activities and do not organise taught modules. But obviously you are right. Issuing a grade makes sense, and I have a couple of students who will be working on CAPQuiz and JazzQuiz over the Summer, and this sounds like something we can fix in that timeframe.

    I was just wondering what use case you have in mind. My own use case was to allow the students to continue work on the activity indefinitely. There is a deadline for reaching three stars, and I simply record outside Moodle who has achieved it soon after the deadline, without disrupting the flow for those heading for five stars.
    How would you organise it (in an ideal world)? Close the activity at the deadline and use the number of stars as a grade? Issue a grade at the deadline and keep the activity open? Or?
    :-- George
  • Kees Koopman
    Mon, Mar 25, 2019, 11:19 PM

    Hi George,

    Thank you for your response.

    I didn't mean so much that a student gets a grade. That also does not do justice to the intention of CAPQuiz, I think.

    Somewhere in the Moodle database is the student's obtained rating. Joe Smith, for example, has a rating of 1565.

    I have a document, certificate, test, web link or something available for students who have a rating of 1550 and higher. Joe Smith is eligible for this (after all: 1565 > 1550).

    In that follow-up document, under Restrict access there is something like: if the student's CAPQuiz rating > 1550, then he can see the document.

    Writing this, I think of a wish.
    At "Activity completion" >> "Completion tracking" >> "Show activity as complete when conditions are met" >> "Require a rating of ....."
    On the dots you can then enter the minimum rating at which Moodle automatically ticks the CAPQuiz as complete. This can then be used as input for the Progress Bar.



    You write that you have a deadline for reaching three stars and that you register that outside of Moodle. I would like to see the latter happen automatically and within Moodle. :-)
    The Moodle progress bar that I use is a fantastic instrument for that.
    The student may (or must) continue to improve himself; in my opinion, CAPQuiz should simply remain open.

    Regards, Kees Koopman.
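
As a footnote to the exchange above: the selection step described in the comments (a desired win probability is translated into an ideal question rating, and a question is then drawn uniformly at random from the N closest) can be sketched roughly as follows. This is an illustration under the standard Elo/Rasch model, written in Python; the function and parameter names are hypothetical and this is not the plugin's actual code.

import math
import random

def ideal_question_rating(student_rating, desired_win_prob):
    """Invert the Elo expected-score formula: the question rating at which the
    student's modelled success probability equals desired_win_prob."""
    return student_rating - 400.0 * math.log10(desired_win_prob / (1.0 - desired_win_prob))

def pick_question(question_ratings, student_rating,
                  desired_win_prob=0.75, n_closest=10):
    """Draw uniformly at random from the n_closest questions whose ratings lie
    closest to the ideal rating for the desired win probability."""
    ideal = ideal_question_rating(student_rating, desired_win_prob)
    candidates = sorted(question_ratings,
                        key=lambda q: abs(question_ratings[q] - ideal))[:n_closest]
    return random.choice(candidates)

# Example: with desired_win_prob = 0.75, a 1200-rated student is aimed at
# questions rated around 1200 - 400*log10(3), i.e. about 1009, below his own
# rating, as discussed above.
# pick_question({'q1': 980, 'q2': 1020, 'q3': 1250}, 1200, n_closest=2)
# returns 'q1' or 'q2' with equal probability.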