Thanks Gurman for opening this topic. Let me try to explain the reasoning behind the current behaviour (I am the author of the Workshop 2.x code).
As a teacher, you have several options to affect the grades given to students in the Workshop (in their roles as both authors and peer-reviewers).
- You can provide your own assessment, optionally with a higher weight than the peers have, so it has a bigger impact on the calculations. Your assessment is considered just another peer assessment, so it affects both the assessed submission's grade and the assessment grades of everyone else who assessed that submission.
- You can override the final grade for each submission, e.g. because you are the only one who knows about some author's disability that led to them receiving a lower grade. So even if you accept all the peer assessments, you still want to set a different grade. This way, you amend just the submission's grade without affecting the reviewers' assessment grades.
- You can override the individual grade for assessment. That is the grade calculated by the grading evaluation plugin in use. The typical use case is when reviewers get lower grades from the "comparison with the best assessment" evaluation method just because their review is not "in line" with the other assessments of the same submission. Yet you realize that even though the reviewer was alone in their assessment, they were actually right. Maybe they were the only one who spotted a mistake, for example. So you want to fix this particular miscalculation.
- As a last resort, you can still override both the final grade for submission and the grade for assessment for each student in the gradebook, once the grades are pushed there (which currently happens only when you close the workshop; this will be improved in the future).
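To illustrate how the first two options interact, here is a rough Python sketch, not the actual Moodle code and with names of my own invention: the submission grade is a weighted mean of all assessments (the teacher's assessment being just another weighted entry), while a teacher override simply replaces the calculated result without touching the individual assessments.

```python
def submission_grade(assessments, override=None):
    """Aggregate assessments into a submission grade (illustrative only).

    assessments: list of (grade, weight) pairs; the teacher's own
    assessment is just another entry, typically with a higher weight.
    override: if set, the teacher-overridden final grade wins and the
    reviewers' assessment grades are left untouched.
    """
    if override is not None:
        return override
    total_weight = sum(weight for _, weight in assessments)
    if total_weight == 0:
        return None
    return sum(grade * weight for grade, weight in assessments) / total_weight


# Two peers (weight 1) and a teacher (weight 2) assess a submission:
print(submission_grade([(70, 1), (80, 1), (90, 2)]))  # → 82.5
# The teacher overrides the final grade regardless of the assessments:
print(submission_grade([(70, 1), (80, 1)], override=85))  # → 85
```

The weighted teacher assessment pulls the calculated grade toward 90, whereas the override bypasses the calculation entirely, which mirrors the distinction between the two options above.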
My reasoning was that when teachers override a grade in the Workshop, they should have a particular reason for it. So they re-grade a particular submission (because this and that in the submission is good/wrong), or a particular assessment (because this and that in the assessment is good/wrong). There should be clear evidence that a particular grade has been overridden, and for what reason.
What you would like to have is the ability to override the submission grade coming from a single reviewer. But that grade is calculated by the selected grading strategy and is based on how the assessment form (e.g. a rubric) has been filled. Imagine there are three reviewers of a submission and they all fill the assessment form (say a rubric) the same way. I believe it is reasonable to expect that the same filled rubric should lead to the same grade. Why would you want to override one of them? In my mind, it goes against the whole idea of using multi-criteria assessment forms to achieve more objective assessment.
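The point about identical rubric fills can be shown with a minimal sketch (hypothetical names, not Moodle's actual grading-strategy code): because the grade is a pure function of the filled form, two reviewers who select the same levels necessarily receive the same grade, so overriding one of them would break that correspondence.

```python
def rubric_grade(chosen_points, criteria_max):
    """Percentage grade from a filled rubric (illustrative only).

    chosen_points: points the reviewer selected per criterion.
    criteria_max: maximum points available per criterion.
    The grade depends only on the filled form, so identical fills
    always yield identical grades.
    """
    return 100.0 * sum(chosen_points) / sum(criteria_max)


criteria_max = [3, 3, 4]        # three criteria, 10 points in total
reviewer_a = [2, 3, 4]
reviewer_b = [2, 3, 4]          # same fill as reviewer A

print(rubric_grade(reviewer_a, criteria_max))  # → 90.0
# Same filled rubric, same grade:
print(rubric_grade(reviewer_a, criteria_max) == rubric_grade(reviewer_b, criteria_max))  # → True
```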
That is what led me to implement the current behaviour. I generally love the feedback from teachers working with the Workshop, and I'll be happy to hear your use cases and the reasoning behind your feature request. Thanks in advance!