Understanding grading evaluation algorithm in peer review Workshop module


by Zabelle Motte -
Number of replies: 2

Hello,
In my institution, several professors are enthusiastic about using the peer review Workshop module. We find the possibilities of this module very impressive, especially the possibility to grade the grading process.

We searched for information about the grading algorithms because we want to explain them to students. We want to use the accumulative evaluation that is described by the following formula:

http://docs.moodle.org/dev/Workshop_2.0_specification#Accumulative

In this formula, the grade is computed from a ratio composed of
- the grade given by the student for one criterion as the numerator;
- the maximum grade given by all students for that criterion as the denominator.

We think such a ratio will encourage students to give high scores (since giving 100% to all peers guarantees them 100% as evaluators).
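To make our concern concrete, here is a small sketch in Python under our (possibly mistaken) reading of the formula, where the denominator is the maximum grade any assessor gave for the criterion:

```python
# Illustrative only: this follows our reading of the formula
# (denominator = maximum grade given by any assessor for the criterion),
# not necessarily what Moodle actually implements.

def ratio_under_max_reading(my_grade, all_grades_for_criterion):
    """Ratio of my grade to the highest grade any assessor gave."""
    return my_grade / max(all_grades_for_criterion)

# If every assessor gives the maximum (say 10/10), every ratio is 1.0,
# so every assessor would look like a "perfect" evaluator.
print(ratio_under_max_reading(10, [10, 10, 10]))  # 1.0
print(ratio_under_max_reading(6, [6, 9, 10]))     # 0.6
```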

Intuitively, we would prefer the denominator to be the mean or median of all the grades given by students.

What do you think about it?

Is there a scientific justification for the choice of the maximum value?

Are there other options for this formula based on scientific argumentation?

Finally, if we want to tune the formula, in which file would we have to make the change?

Thanks in advance for your answer,

Zabelle

In reply to Zabelle Motte

Re: Understanding grading evaluation algorithm in peer review Workshop module

by David Mudrák -

Hi Zabelle

I am afraid there are several different concepts mixed up in your post. Firstly, the Accumulative grading strategy (with up-to-date docs available at http://docs.moodle.org/25/en/Workshop_grading_strategies#Accumulative_grading_strategy) determines how the grade for the submission is calculated. It has nothing to do with the grade for assessment. The formula describes how the grade for the submission is calculated from the filled assessment form. It is a standard formula for a weighted mean. The denominator is the sum of the weights of all aspects in the form (or, if they all have the same weight equal to 1, their count), not the maximum grade given by all students, as you say.
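To make the calculation concrete, here is a rough sketch of that weighted mean in Python. The function and field names are only illustrative (the real implementation is PHP code in the Workshop grading strategy subplugin under mod/workshop/form/accumulative), and I assume here that each aspect's grade is first normalised by that aspect's own maximum.

```python
def submission_grade(aspects):
    """Weighted mean of per-aspect grades, each normalised by its own maximum.

    `aspects` is a list of dicts with keys 'grade', 'max' and 'weight'.
    Returns a percentage (0-100).
    """
    total_weight = sum(a['weight'] for a in aspects)
    weighted = sum(a['weight'] * a['grade'] / a['max'] for a in aspects)
    return 100 * weighted / total_weight

# Three aspects of equal weight: the denominator is the sum of the weights (3),
# not the maximum grade given by any student.
print(submission_grade([
    {'grade': 8, 'max': 10, 'weight': 1},
    {'grade': 5, 'max': 10, 'weight': 1},
    {'grade': 9, 'max': 10, 'weight': 1},
]))  # ~73.3
```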

Also, I can't agree with your conclusion that students are encouraged to give high scores. The grading evaluation method "Comparison with the best assessment" (documented at http://docs.moodle.org/25/en/Using_Workshop#Grade_for_assessment) gives a high grade to those assessments that are closest to a "common sense" assessment, which itself is determined as the mean of all received assessments.
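The idea, in a deliberately simplified sketch (the real evaluation subplugin uses a more elaborate distance measure and a configurable strictness setting, so treat the numbers below as illustrative only):

```python
def grading_grades(assessments):
    """assessments: dict mapping assessor -> list of per-aspect grades.

    Build a reference assessment as the per-aspect mean of all assessments,
    then grade each assessor by how close they are to that reference.
    """
    assessors = list(assessments)
    n_aspects = len(next(iter(assessments.values())))
    reference = [
        sum(assessments[a][i] for a in assessors) / len(assessors)
        for i in range(n_aspects)
    ]
    grades = {}
    for a in assessors:
        # Mean absolute distance from the reference, mapped onto 0-100.
        dist = sum(abs(g - r) for g, r in zip(assessments[a], reference)) / n_aspects
        grades[a] = max(0.0, 100.0 - 10.0 * dist)
    return grades

# An assessor who gives 100% everywhere is not automatically rewarded;
# they score well only if the other assessments are also close to 100%.
print(grading_grades({'alice': [10, 10], 'bob': [6, 7], 'carol': [5, 8]}))
```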

In any case, you are free to implement your own grading evaluation method as a subplugin and use it in your courses. The Moodle plugins directory already contains one such alternative.

For reference, let me point you to another thread in this forum where the calculation of grades for assessments was discussed.

In reply to David Mudrák

Re: Understanding grading evaluation algorithm in peer review Workshop module

by Annette Knobloch -

In Workshop, I have several students who correctly gave low scores to other students with poor work. Mostly, I had each student grade three other students, based on information gathered over hours, days, weeks and months of searching in Moodle, eventually finding that this is what Moodle demands in order to calculate an assessment grade. So, within each correct student's set of co-graders, the other two graders graded higher, but less accurately. But the incorrect graders' scores clustered together, so that whatever Moodle is doing with these grades, the correct grader was outdistanced and got a harsh 'grading grade'.

Due to the extreme difficulty of finding answers, and the trial and error nature of implementing Workshop, I am now resorting to additional trial and error: adding my own assessment and giving it a substantial weight, such as 8 or 12, making me equivalent to 8 or 12 persons! And/or overriding the assessment grade, so that the resulting grading grade is incorrect for the criteria, but the correct student grader's assessment grade ends up near an appropriate numeric value. I have NOT yet tried to override the poor graders' assessment grades downward, because there are more of them, and I would be incoherent in a court of law trying to explain how these grades were determined by Moodle. Actually, I would be able to explain what I did, but I doubt that irate students and/or their parents and lawyers would acquiesce.

At this point in time, I have 2 of 19 students with this issue. One of those had an additional issue: correct grading of one Aspect of the assignment, but incorrectly harsh grading of the other Aspect. So I went through lots of trial and error, added my own assessment, and overweighted it, until the harshly graded student's grade came up to a better numerical value.
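For anyone else wrestling with this, here is a rough illustration of why overweighting my own assessment pulls the result my way. I am assuming, perhaps wrongly, that the reference assessment behaves roughly like a weighted mean of the received grades; the real calculation in Moodle is more involved.

```python
def weighted_reference(assessments):
    """assessments: list of (grade, weight) pairs for one submission."""
    total_weight = sum(w for _, w in assessments)
    return sum(g * w for g, w in assessments) / total_weight

peers = [(90, 1), (85, 1), (55, 1)]      # two inflated grades, one accurate low grade
print(weighted_reference(peers))          # ~76.7: the accurate grader looks like the outlier

with_teacher = peers + [(55, 8)]          # my own assessment agrees with the low grade, weight 8
print(weighted_reference(with_teacher))   # ~60.9: the reference moves toward the accurate grade
```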

Hmm, have there been any lawsuits involving faculty, universities, and Moodle due to Moodle's computation of Workshop assessment grades? Is there any specification, algorithm, or, in my words, formula or set of formulas, newer than the 2009 one, for Moodle 2.4.4?