My team and I are having problems with the marking of our multiple-choice questions that have more than one correct answer.
The problems are as follows:
If the students tick all of the answers, they automatically get the question right. This can only be corrected by assigning a negative percentage to the incorrect answers, in which case ticking everything scores 0 for the question. For example, if there are 6 answer options and 3 correct answers, each correct answer is worth +33% and each incorrect answer is given a value of -33%. This also seems to work only for an even number of options (?).
The example above also breaks down when a student ticks two correct answers and one incorrect one: because of the -33% penalty, they get only 1 mark instead of 2.
If we go back and leave out the negative percentages, then students can once again tick every answer and get the question 100% right. It seems to be a Catch-22 situation.
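To make the arithmetic concrete, here is a minimal sketch of the grading scheme described above, assuming each ticked option simply adds its weight and the total is clamped between 0 and 100%. The grade function and the six-option question are hypothetical illustrations, not Moodle's actual code:

```python
from fractions import Fraction

def grade(ticked, weights):
    """Sum the weights of the ticked options, clamped to the range [0, 1]."""
    score = sum((weights[opt] for opt in ticked), Fraction(0))
    return max(Fraction(0), min(Fraction(1), score))

third = Fraction(1, 3)

# Six options, three correct (A, B, C) worth +1/3 each.
no_penalty = {"A": third, "B": third, "C": third,
              "D": Fraction(0), "E": Fraction(0), "F": Fraction(0)}
penalty    = {"A": third, "B": third, "C": third,
              "D": -third, "E": -third, "F": -third}

# Without penalties, ticking everything earns full marks.
print(grade("ABCDEF", no_penalty))  # 1

# With penalties, ticking everything scores 0...
print(grade("ABCDEF", penalty))     # 0

# ...but two right plus one wrong collapses to 1/3 instead of 2/3.
print(grade("ABD", penalty))        # 1/3
```

This shows both horns of the dilemma: the penalty weights are the only way to stop the tick-everything strategy, but they also underpay any partially correct answer.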
We also don't want the students working in adaptive mode, because this is a final assessment, not a general quiz where they could check an answer and then change it.
Does anyone know a way around this, other than someone going through every single quiz afterwards and verifying the result of each and every question? That would be extremely cumbersome: there are three units with three quizzes each, and each quiz averages 10 to 15 questions. Help!
It becomes confusing when you try to translate a written multi-select multiple-choice question, previously marked by hand with one mark allocated per correct answer, into an automated one.