Testing CBM

Re: CBM Tests

by Tony Gardner-Medwin -
Number of replies: 0

You say:  "I must say I found the feedback with LAPT too difficult to show to the student in the VLE I'm working on now."

The immediate f/b with CBM (mark = 1, 2 or 3 for a correct answer at C=1, 2 or 3; 0, -2 or -6 if incorrect) is simple, so I don't think that is what you're referring to here. I guess you are talking about the overall scores on a test - which, incidentally, in a formative self-test are I think of relatively minor importance, because (unlike the immediate question f/b) they don't contribute much to learning - just to the student feeling good or bad.
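For concreteness, the immediate-feedback mark scheme described above can be sketched as a small function (a hypothetical helper for illustration, not the actual Moodle or LAPT code):

```python
def cbm_mark(correct, certainty):
    """Immediate CBM mark for a single question.

    certainty: the student's stated certainty, C = 1, 2 or 3.
    A correct answer scores 1, 2 or 3 (equal to C);
    an incorrect answer scores 0, -2 or -6 at C = 1, 2, 3.
    """
    if certainty not in (1, 2, 3):
        raise ValueError("certainty must be 1, 2 or 3")
    if correct:
        return certainty
    return {1: 0, 2: -2, 3: -6}[certainty]
```

The asymmetric penalties are what make honest reporting of certainty the best strategy: at C=3 a wrong answer costs 6 marks, so it only pays to claim high certainty when you really are sure.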

I agonise a lot about how best to present overall scores, and I think if you use CBM it is important to understand the issues. In a sense the problem lies with conventional scores, since guesses can give conventional marks that are perceived as quite high. In a typical exercise with MCQs and T/F Qs (like your "Basis 2") chance might give on average 38% correct (3.8 / 10). Since CBM rewards correct guesses less than knowledge, it reduces this effect: guesses (acknowledged with C=1) would give on average only 13% of the maximum possible score (3.8 / 30). There is an immediate problem here: if you present CBM scores in this way (as Tim does in the code for Moodle 2.1), students may feel that CBM is bad because it tends always to score them lower. A typical student on a typical exercise gets (from LAPT data) about 70% correct and an average CBM mark of about 1.2 (40% of the maximum possible).
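The arithmetic behind those percentages, using the figures in the paragraph above (10 questions, 3.8 correct by chance on average, and a typical student with 70% correct and a mean CBM mark of 1.2), works out like this:

```python
questions = 10
chance_correct = 3.8   # expected correct answers from pure guessing

# Conventional marking: 1 mark per correct answer, maximum 10.
conventional_guess_pct = 100 * chance_correct / questions        # 38%

# CBM: guesses acknowledged at C=1 earn 1 if correct, 0 if wrong;
# the maximum possible is 3 per question (all correct at C=3).
cbm_guess_pct = 100 * (chance_correct * 1) / (questions * 3)     # ~13%

# Typical student: mean CBM mark 1.2 out of a possible 3.
typical_cbm_pct = 100 * 1.2 / 3                                  # 40%
```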

One simple way of bringing the scores more into line, and making the comparison more psychologically positive (as currently in the code for Moodle 2.0), is to calculate the CBM percentage relative to "all correct at C=2". A typical CBM score then becomes around 60% rather than 40%, and the maximum possible score (all correct at C=3) is 150%. The idea of 150% jars with some people, but can be seen simply as a bonus for not only getting everything right, but also knowing you could justify being sure about everything.
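Expressed as code, this rescaling just changes the per-question denominator from 3 to 2 (a sketch of the idea, not the Moodle 2.0 implementation):

```python
def cbm_percent(mean_mark, benchmark=2):
    """Express a mean CBM mark as a percentage of a benchmark.

    benchmark=3 gives the percentage of the absolute maximum
    (all correct at C=3); benchmark=2 gives the friendlier scale
    relative to "all correct at C=2", on which all correct at
    C=3 scores 150%.
    """
    return 100 * mean_mark / benchmark

typical = cbm_percent(1.2)        # typical student: ~60%
top = cbm_percent(3)              # all correct at C=3: 150%
raw = cbm_percent(1.2, benchmark=3)  # raw scale: ~40%
```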

An alternative strategy to bring the scores more into line (used in LAPT) is first to convert conventional scores to a more sensible scale (denoted "percent knowledge" or "percent above chance"), on which guesses yield 0% on average and all correct gives 100%. Typical scores (and typical pass marks) then convert to around "50% knowledge". CBM scores treated in the same way (0% for guesses at C=1, 100% for all correct at C=3) are more in line, and can be brought still closer for average students across the whole range of ability by using an equation that boosts the lower scores a bit while leaving the maximum at 100%. The near equivalence of these two types of score on average is shown in the graph at http://www.ucl.ac.uk/lapt/laptlite/sys/lpscoring.htm. This is both important for anyone involved in standard setting and constructive in its psychological impact on students: they can see whether their insight into the reliability of their answers is better or worse than that of other typical students getting the same percentage of questions correct.
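The basic "percent above chance" conversion can be sketched as below. The LAPT equation that further boosts the lower CBM scores is not given in this post, so only the linear rescale is shown, assuming the chance levels from the earlier example (38% for the conventional score, ~13% for raw CBM):

```python
def percent_above_chance(score_pct, chance_pct):
    """Rescale a percentage score so that chance-level performance
    maps to 0% and a perfect score maps to 100%."""
    return 100 * (score_pct - chance_pct) / (100 - chance_pct)

# Typical student, conventional score: 70% correct, chance level 38%.
conventional = percent_above_chance(70, 38)   # ~52% "knowledge"

# The same student's raw CBM percentage (40%), chance level ~13%.
# LAPT would then apply its boosting equation (not shown here)
# to bring this closer to the conventional figure.
cbm = percent_above_chance(40, 13)            # ~31%
```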