Identifying non-functional (useless) distractors (choices) in Multiple-choice questions

by Cris Fuhrman

I read up on how to make good MCQs, and one very important point is to use as many functional distractors as possible. This is harder than it sounds.

A good first step would be to identify likely non-functional distractors: they are simply the answers that few students choose. The papers I read suggest that eliminating them from MCQs is an effective way to improve question quality.

I'm suggesting adding statistics to questions that would keep track of this data. Something simple would work: each multiple-choice question in the bank gets metadata tracking the number of times a choice appears (numberOfAppearances) and the number of times it gets chosen (numberOfTimesChosen). Here's a domain model of the problem (not a database model):

[Figure: domain model of statistics for Choices in multiple-choice questions]

With this data, it would be very easy to identify questions that have non-functional distractors. I find this statistic far more useful for improving my questions than the current set of statistics, which exist only at the question level.
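To make that concrete, here is a minimal sketch in Python of what the per-choice metadata and the flagging test could look like. The class, the methods, and the 5% cutoff are my own assumptions (the cutoff is a common rule of thumb in the MCQ literature, not a Moodle feature):

```python
from dataclasses import dataclass

@dataclass
class Choice:
    """One answer choice in an MCQ, with lifetime usage counters
    (hypothetical model; field names follow the diagram above)."""
    text: str
    is_correct: bool
    number_of_appearances: int = 0    # numberOfAppearances
    number_of_times_chosen: int = 0   # numberOfTimesChosen

    def selection_rate(self) -> float:
        """Fraction of appearances in which a student picked this choice."""
        if self.number_of_appearances == 0:
            return 0.0
        return self.number_of_times_chosen / self.number_of_appearances

    def is_likely_nonfunctional(self, threshold: float = 0.05) -> bool:
        """Flag incorrect answers that students rarely pick. The 5%
        cutoff is a rule of thumb, not a fixed standard."""
        return not self.is_correct and self.selection_rate() < threshold
```

Flagging questions with non-functional distractors would then be a simple filter over the question bank.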

Of course, editing the text of a Choice should effectively reset its stats to zero. There might be other subtleties in how to handle the data, depending on the use cases. Actually, modeling distractors as a separate entity (independent of questions) is interesting, since I often re-use the text: a good (functional) distractor is useful in several questions. I can see the question editor having a feature to "suggest" functional distractors from other questions in the same category, for example (sketched below). But these are all features that require more work. The basic feature of calculating the efficacy of a distractor is a simple start.
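Both ideas could look something like this, re-using the Choice class sketched above. The reset behaviour, the bank.questions_in(category) call, and the 15% floor are all assumptions of mine, not existing Moodle APIs:

```python
def edit_choice_text(choice: Choice, new_text: str) -> None:
    """Re-wording a choice invalidates its history, so reset
    the counters to zero, as proposed above."""
    choice.text = new_text
    choice.number_of_appearances = 0
    choice.number_of_times_chosen = 0

def suggest_distractors(bank, category: str, min_rate: float = 0.15,
                        limit: int = 5) -> list[Choice]:
    """Hypothetical 'suggest' feature: propose proven distractors from
    other questions in the same category, best-performing first.
    bank.questions_in(category) is an invented API for illustration;
    the 15% floor is an arbitrary guess at 'functional enough'."""
    candidates = [
        choice
        for question in bank.questions_in(category)
        for choice in question.choices
        if not choice.is_correct and choice.selection_rate() >= min_rate
    ]
    candidates.sort(key=lambda c: c.selection_rate(), reverse=True)
    return candidates[:limit]
```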


Re: Identifying non-functional (useless) distractors (choices) in Multiple-choice questions

by Tim Hunt

Have you looked at the information that is already available by going to Quiz -> Results -> Statistics?

Re: Identifying non-functional (useless) distractors (choices) in Multiple-choice questions

by Mark Hardwick

I didn't realize the data in "Statistics" was available. I've got to figure out what some of the terms are actually calculating.

Cris, thanks for the question.

Tim, thanks for the answer.

Mark

Re: Identifying non-functional (useless) distractors (choices) in Multiple-choice questions

by Tim Hunt

The documentation is at http://docs.moodle.org/26/en/Quiz_statistics_report, and http://docs.moodle.org/dev/Quiz_statistics_calculations if you want to see the equations.

This two-page summary from the OU, http://labspace.open.ac.uk/file.php/3484/Brief_Guide_to_iCMA_reports.pdf, linked there, is also helpful.

Re: Identifying non-functional (useless) distractors (choices) in Multiple-choice questions

by Cris Fuhrman

Hi Tim,

Yes, I looked at the statistics that are already there. In fact, it is possible to get the statistics I'm asking for, for a given question. However, it takes a lot of clicks, and those statistics are calculated only for a question as it was used in a single quiz. There is no persistent data attached to the distractor.

Here's an example from the docs.moodle.org page:

[Figure: analysis of responses to an MCQ, from the docs.moodle.org page]

I think what I'm asking for is a different use case. The data I'm suggesting would be accumulated over the lifetime of a question. A question might appear in several quizzes, or in the same quiz over several semesters. Some questions are chosen at random, so their frequency in any given quiz in a given semester might be low.

We use (or are trying to, if we can get the permissions right) the same question bank in multiple courses and over multiple semesters. The idea is that we don't want to copy our questions from one semester to the next, because we are constantly improving them. We also want a common database of questions shared among several courses, because some content is common between them.

Finally, the current statistics report doesn't get to the heart of the matter: it doesn't clearly show you which choices are non-functional distractors. In other words, I want to see the wrong answers that rarely get chosen labeled as such, because that is the important signal.

The non-functional distractors in this question are the incorrect answers that rarely get chosen. In the example above, "freezing winters in mid Northwest America" was chosen only twice, making it the least functional distractor.

Again, the set of statistics shown is calculated for one question in a single quiz only. I'm suggesting a feature that keeps track of this data persistently for a question, since it may be used in many quizzes.
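Roughly, the accumulation I have in mind looks like this; the flat response-record format is invented purely for illustration (Moodle stores attempt data quite differently):

```python
from collections import defaultdict

def accumulate_choice_stats(responses):
    """responses: iterable of (question_id, choice_id, was_chosen)
    records, one per time a choice was shown in any quiz attempt,
    across courses and semesters. Returns lifetime counts as
    {(question_id, choice_id): [appearances, times_chosen]}."""
    stats = defaultdict(lambda: [0, 0])
    for question_id, choice_id, was_chosen in responses:
        stats[(question_id, choice_id)][0] += 1      # numberOfAppearances
        if was_chosen:
            stats[(question_id, choice_id)][1] += 1  # numberOfTimesChosen
    return stats
```

With lifetime counts like these, a distractor such as the one above, chosen only twice, would stand out immediately, no matter which quiz or semester the attempts came from.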

Re: Identifying non-functional (useless) distractors (choices) in Multiple-choice questions

by Joseph Rézeau

Hi Cris,

That's an interesting discussion. Thanks for the link to the "Multiple-Choice Test Items: Guidelines for University Faculty" booklet.

Here is what I used to tell my students, future language teachers at my university, when they had to create their own language questions and quizzes.

In an MCQ, the correct choice is the easiest to provide, simply because it is the correct answer. Usually, a first "functional" distractor is fairly easy to find: I define it as an answer which might be correct (in other circumstances or contexts), but which (just) fails to be correct in the current context. After that, the teacher has the difficult task of finding one or more additional functional distractors, which must be close enough to "correct" to beguile the weaker students, yet not so far off as to be evidently incorrect.

An important point which has not been mentioned in this discussion so far is the type of testing you are conducting. All your examples relate to "facts". In my experience as a language teacher, I was mostly testing not facts but correctness of language. In that context, I found it easy to create functional distractors in the guise of frequent student mistakes!

Finally, and to come back to your original posting: I understand how frustrating it can be to have in the Moodle database a huge amount of information which might be useful for research, but is not readily available, simply because Moodle is mostly oriented towards teaching and learning, not towards research.

Joseph

Re: Identifying non-functional (useless) distractors (choices) in Multiple-choice questions

by Jean-Michel Védrine

I have to agree with Cris here.

I think that, in addition to quiz reports, we should have question-bank reports, because there are quite a lot of interesting statistics that could be computed there.


Re: Identifying non-functional (useless) distractors (choices) in Multiple-choice questions

by Cris Fuhrman

Joseph wrote: 

In that context, my experience led me to easily find functional distractors in the guise of frequent student mistakes!

Yes, that is my experience as well. Misconceptions and biases are very useful for an instructor to know (you learn them over time, unless you were lucky enough to have a former instructor explain them to you in the context of a course). They're really the difference between a good MCQ and a mediocre one.