grading algorithm for workshop - control on reveal feedback

by Eric Lin - Wednesday, May 20, 2020, 11:14 PM
Number of replies: 3

I have used Workshop in the past, but it has been a while. Given that, I wanted to revisit some questions and see whether anything has changed before I use it again.

In the past, I wanted each student submission to be sent to three reviewers. I would add myself as a reviewer and give myself a very large weight. This would do two things: first, the submission grade would be largely determined by me (and I could be explicit about that weight). Second, the assessment grade for those providing feedback would be heavily influenced by my grade, since that grade is a function of deviation from the aggregate grade: the further off a reviewer is, the lower the score they receive.
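To make the weighting point concrete, here is a tiny sketch of the kind of weighted mean I have in mind (my own illustration, not the actual Workshop code - the function name and the numbers are made up):

<?php
// Illustration only - not the Workshop implementation. It just shows why a
// reviewer carrying a large weight dominates the aggregated submission grade.

/**
 * Weighted mean of reviewer grades (grades on a 0..100 scale).
 *
 * @param array $assessments list of ['grade' => float, 'weight' => int]
 * @return float aggregated submission grade
 */
function weighted_submission_grade(array $assessments): float {
    $sum = 0.0;
    $weightsum = 0;
    foreach ($assessments as $a) {
        $sum += $a['grade'] * $a['weight'];
        $weightsum += $a['weight'];
    }
    return $weightsum > 0 ? $sum / $weightsum : 0.0;
}

// Two student reviewers (weight 1 each) and the teacher (weight 8).
$assessments = [
    ['grade' => 60, 'weight' => 1],  // student reviewer A
    ['grade' => 80, 'weight' => 1],  // student reviewer B
    ['grade' => 90, 'weight' => 8],  // teacher, with a very large weight
];
echo weighted_submission_grade($assessments);  // 86 - close to the teacher's 90

With the teacher's weight at 8, the two student marks barely move the aggregate, which is exactly the effect I am after.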

In the past, this seemed to be true, but it was not at all transparent how this reviewer grade came out. Is there any clear explanation of how those grades are computed as a function of the aggregate score? It is hard to explain the credibility of that mark to a student if the method by which it is computed is not clear. 

Finally, while the final grade is clear to the reviewer and the submitter of the work, is there any way to reveal the comments that I as a teacher have for the submitted work, so those who have done reviews (and not scored highly) can see where their evaluations may have deviated from mine?

I want feedback to be anonymous, but I also want reviewers to learn how to do good reviews. I think this is a great way to wring more learning out of assignments, but I want to make sure the activity works the way I'd like it to.


In reply to Eric Lin

Re: grading algorithm for workshop - control on reveal feedback

by David Mudrák -

For the default method "Comparison with the best assessment", there is not a single formula, even though the process is deterministic. I am not aware of a better description than the one at https://docs.moodle.org/en/Using_Workshop#Grade_for_assessment

Please note you may consider using alternative grading evaluation methods from https://moodle.org/plugins/?q=type:workshopeval

In reply to David Mudrák

Re: grading algorithm for workshop - control on reveal feedback

by Eric Lin -
I've read this before and experimented with the grading outcome. If the process is deterministic, then it can be specified with parameters and, if not with a formula, then with code. Is it possible to see the code on this? The determination of the top score is clear enough. What is missing is how the distance from the ideal, and therefore the degradation of grades, is mapped to a declining score. I remember there being a "strictness" scale governing how closely one must converge to the ideal score. Can there be some more explanation of how that function works? Is it linear? Or does it work more like OLS, where big deviations are "penalized" more?
In reply to Eric Lin

Re: grading algorithm for workshop - control on reveal feedback

by David Mudrák -

Is it possible to see the code on this?

Of course it is - see the file mod/workshop/eval/best/lib.php around line 150 - the method process_assessments() and the other methods it uses, such as average_assessment(), weighted_variance() or assessments_distance().
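If it helps to see the overall shape without reading the real code, a deliberately simplified sketch of the idea could look like the one below. This is my illustration only - the reviewer and criterion names are made up, and the real implementation in eval/best/lib.php handles dimension weights, variance normalization and the strictness setting in more detail - but note that the sketch uses squared differences for the distance, so (like OLS) big deviations cost more than small ones.

<?php
// Deliberately simplified sketch - not the code from mod/workshop/eval/best/lib.php.
// Each assessment is an array of per-criterion grades on a 0..100 scale.

/** Squared distance between two assessments over all criteria. */
function assessment_distance(array $a, array $b): float {
    $d = 0.0;
    foreach ($a as $criterion => $grade) {
        $d += ($grade - $b[$criterion]) ** 2;  // squared => big deviations cost more
    }
    return $d;
}

/** Grade the assessments themselves: the one closest to the average gets 100%. */
function grades_for_assessment(array $assessments, float $strictness = 1.0): array {
    // Hypothetical "best" assessment = per-criterion average of all assessments.
    $criteria = array_keys(reset($assessments));
    $avg = [];
    foreach ($criteria as $criterion) {
        $avg[$criterion] = array_sum(array_column($assessments, $criterion)) / count($assessments);
    }
    // Distance of every assessment from that average; the smallest one is the "best".
    $distances = array_map(fn($a) => assessment_distance($a, $avg), $assessments);
    $best = min($distances);
    // Grades decline from 100% as the distance grows; $strictness controls how fast.
    $grades = [];
    foreach ($distances as $reviewer => $d) {
        $grades[$reviewer] = max(0.0, 100.0 - $strictness * ($d - $best));
    }
    return $grades;
}

$assessments = [
    'reviewer1' => ['criterion1' => 80, 'criterion2' => 70],
    'reviewer2' => ['criterion1' => 85, 'criterion2' => 75],
    'reviewer3' => ['criterion1' => 40, 'criterion2' => 95],
];
print_r(grades_for_assessment($assessments, 0.05));

Whether the real code maps the distance to the grade in exactly this linear way is best checked in the file itself - the sketch is only meant to show where the "distance to the ideal" enters the picture.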

There are also unit tests for these, and it should not be hard even to produce some charts and tables illustrating the behavior of the algorithm.

One of my long-term nice-to-have features for Workshop is adding support for radar / spider charts that would display how one submission was assessed by multiple reviewers (one reviewer = one colored line around the web), each of them assessing multiple criteria (each direction in the chart representing one criterion). It would then make it possible to show the hypothetical "best" (average) assessment and how the student's own assessment differs from it.
