My observation refers to the Moodle doc Using Workshop, which states:
> **Grade for assessment**
>
> Grades for assessment are displayed in parentheses () in the Workshop grades report. The final grade for assessment is calculated as the average of the particular grading grades.
>
> There is no single formula describing the calculation; however, the process is deterministic. Workshop picks one of the assessments as the best one - the one closest to the mean of all assessments - and gives it a 100% grade. It then measures the 'distance' of all other assessments from this best one and grades them lower the more they differ from the best (given that the best one represents a consensus of the majority of assessors). The parameter of the calculation is how strict we should be, that is, how quickly the grades fall as they differ from the best one.
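For intuition, the process described in that passage can be sketched in code. This is a minimal one-dimensional sketch, not Moodle's actual implementation: the real evaluation plugin compares assessments dimension by dimension with weights, and the strictness factor used here (`factor=2.0`) is an invented placeholder, not a real Moodle setting value.

```python
# Simplified sketch of the "best assessment" evaluation described in the docs.
# Assumptions: a single grading dimension, grades on a 0-100 scale, and an
# invented linear strictness factor. Moodle's real plugin works per dimension.

def grading_grades(assessments, factor=2.0, max_grade=20.0):
    """assessments: submission grades given by the different assessors.
    Returns a grade for assessment for each assessor, out of max_grade."""
    mean = sum(assessments) / len(assessments)
    # The assessment closest to the mean is taken as the "best" one.
    best = min(assessments, key=lambda a: abs(a - mean))
    grades = []
    for a in assessments:
        # Distance from the best assessment, normalised to 0..1.
        distance = abs(a - best) / 100.0
        # The stricter the comparison, the faster grades fall with distance.
        penalty = min(1.0, factor * distance)
        grades.append(round(max_grade * (1.0 - penalty), 2))
    return grades

# Hypothetical example: three assessors grade the same submission 60, 55, 90.
# The mean is about 68.33, so 60 is "best" and gets the full 20; the others
# are penalised in proportion to their distance from 60.
print(grading_grades([60.0, 55.0, 90.0]))  # → [20.0, 18.0, 8.0]
```

The point of the sketch is only that the pivot is the assessment closest to the mean, not the mean itself, which is why a grade can win the full mark even when another grade looks "closer" at first glance.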
I tried to understand the "grades for assessment" with the following data (based on 80% uncategorised for the grade for submission, 20% uncategorised for the grade for assessment, and "very strict" comparison of assessments):
- The 1st row correctly receives 20, because 53.33 is closest to the mean.
- The 2nd row correctly receives 20, because 48.00 is closest to the mean.
- The 3rd row correctly receives 20, because 26.66 is closest to the mean.
- The 4th row shows the problem: why the heck does the circled 58.66 receive the 20, even though 56 is closer to the mean?
Could someone please reveal the formula behind this calculation? Or is it perhaps even a bug?
Thank you very much for your help.