Comparison with the weightiest assessment
Workshop evaluation plugins ::: workshopeval_weightiest
Maintained by
Albert Gasset,
David Pinyol Gras
Evaluation method for Moodle Workshop that uses the assessments with the highest weight as a reference for evaluating the other assessments.
Latest release:
92 sites
54 downloads
7 fans
Current versions available: 4
This method allows teachers to decide which assessments are "good" by giving them the highest weight. The rest of the assessments are evaluated by comparison with the "good" assessments, similarly to "Comparison with the best assessment".
Contributors
Albert Gasset (Lead maintainer)
David Pinyol Gras (Creator)
My experience says that it is crucial for users (especially students) to understand how the grade calculation works. I think it is necessary to provide a good explanation, e.g. in the plugin description field here (as mentioned above in the plugin check results).
With the hope that the reported plugin check results will be addressed soon, you are cleared to land. Welcome to the plugins directory.
Thanks.
Because I could not find any documentation of the plugin behaviour, I will try to formulate something here. Please correct me if I'm wrong!
By trial and error, I conclude that the weights that I assign to the peer assessments are used as the weights of a weighted average that forms the grade of the original submission. For instance: one submission receives the peer grades 64, 80, 77 and 80. The teacher gives the first assessment weight 5 without touching the other weights (which stay at 1); this results in the grade (5*64 + 80 + 77 + 80)/8 = 557/8 = 69.625, or approximately 70, for the submission.
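To illustrate the arithmetic, here is a minimal sketch of that weighted average; it is not code from the plugin, and the variable names are my own:

<?php
// Hypothetical illustration of the weighted average described above;
// not taken from the plugin's source code.
$peerGrades = [64, 80, 77, 80];
$weights    = [5, 1, 1, 1]; // the teacher raises the first weight to 5

$weightedSum = 0;
$totalWeight = 0;
foreach ($peerGrades as $i => $grade) {
    $weightedSum += $weights[$i] * $grade;
    $totalWeight += $weights[$i];
}

echo $weightedSum / $totalWeight; // 69.625, displayed as approximately 70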
By inspecting the source code of the plugin, mainly the file lib.php, I conclude that the weights I assign affect the grading of the peer assessments as follows: each individual peer assessment of a given submission is compared to all of the weightiest assessments of that submission, and the full score minus the shortest distance becomes the score of that peer assessment. The percentage tolerance under "Comparison of assessments" determines how much two assessments must differ in a dimension in order to be considered "different"; differences equal to the tolerance are still treated as equal. The per-dimension contribution to the distance looks roughly like this:
$distance += abs($agrade - $rgrade) * $weight;
Or even this, with the tolerance taken into account:
$distance += $weight * 100 * (abs($agrade - $rgrade) - $settings->comparison * 10) / (100 - $settings->comparison * 10);
In this sense, the score would decrease in proportion to the distance from the nearest "good" assessment, similarly to "Comparison with the best assessment".
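To summarise my reading, here is a hedged sketch of how a single peer assessment might get its score under this interpretation. All function and variable names are invented for illustration; only the shortest-distance idea and the tolerance handling reflect what I found in lib.php. Please correct me if this does not match the actual plugin behaviour.

<?php
// Hedged sketch of the scoring idea described above; the names are invented and
// only the shortest-distance and tolerance handling reflect my reading of lib.php.
function weightedDistance(array $assessed, array $reference, array $dimWeights, float $tolerance): float {
    $distance = 0.0;
    foreach ($assessed as $dim => $agrade) {
        $diff = abs($agrade - $reference[$dim]);
        if ($diff > $tolerance) {
            // Only the part of the difference above the tolerance is penalised.
            $distance += $dimWeights[$dim] * 100 * ($diff - $tolerance) / (100 - $tolerance);
        }
    }
    return $distance;
}

function assessmentScore(array $assessed, array $weightiest, array $dimWeights, float $tolerance): float {
    // Compare against every weightiest assessment and keep the shortest distance.
    $shortest = INF;
    foreach ($weightiest as $reference) {
        $shortest = min($shortest, weightedDistance($assessed, $reference, $dimWeights, $tolerance));
    }
    // Full score minus the shortest distance, floored at zero.
    return max(0.0, 100.0 - $shortest);
}

// Example with one grading dimension and two "good" (weightiest) assessments:
echo assessmentScore(['crit1' => 64], [['crit1' => 80], ['crit1' => 77]], ['crit1' => 1], 10); // about 96.7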