That is a very valid question. As far as I know, the original Workshop code, including the grading grade calculation, was written by Ray Kingdon, who has unfortunately left us. I don't know whether he was a statistician, although the method he decided to use would suggest so.
The good news for you is that the grading grade calculation in Workshop 2.x has been rewritten in a pluggable way. If, for any reason, the default "Comparison with the best assessment" method does not fit your needs or legal requirements, you are free to develop your own subplugin for Workshop to calculate the grade. In that case, I would certainly encourage you to share your work with the community.
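To make the "pluggable" idea concrete, here is a minimal sketch in Python. This is not Moodle's actual API (which is PHP), and the comparison logic is a deliberately simplified illustration I made up, not the real "Comparison with the best assessment" algorithm: the point is just that each evaluation method is an interchangeable strategy that turns a set of assessments into grading grades.

```python
# Hypothetical sketch, not Moodle's actual PHP API: a pluggable grading
# evaluation strategy that turns assessments of one submission into
# per-reviewer "grading grades".

from abc import ABC, abstractmethod
from typing import Dict, List


class GradingEvaluator(ABC):
    """A pluggable strategy for calculating grading grades."""

    @abstractmethod
    def grading_grades(self, assessments: Dict[str, List[float]]) -> Dict[str, float]:
        """Map reviewer id -> grading grade (0-100) for one submission.

        `assessments` maps reviewer id -> list of per-criterion scores.
        """


class ComparisonWithBestAssessment(GradingEvaluator):
    """Simplified illustration of a 'compare with the best assessment' idea:
    the assessment closest to the per-criterion mean is taken as the
    reference, and each reviewer is graded by how little they deviate
    from it. (The real Moodle algorithm is more involved.)"""

    def grading_grades(self, assessments):
        reviewers = list(assessments)
        n_criteria = len(next(iter(assessments.values())))
        # Mean score per criterion across all reviewers.
        means = [sum(a[i] for a in assessments.values()) / len(assessments)
                 for i in range(n_criteria)]
        # Treat the assessment closest to the mean as the "best" one.
        best = min(reviewers,
                   key=lambda r: sum((s - m) ** 2
                                     for s, m in zip(assessments[r], means)))
        grades = {}
        for r in reviewers:
            distance = sum((s - b) ** 2
                           for s, b in zip(assessments[r], assessments[best])) ** 0.5
            # Larger deviation from the reference -> lower grading grade.
            grades[r] = max(0.0, 100.0 - distance)
        return grades


if __name__ == "__main__":
    demo = {"alice": [80, 70, 90], "bob": [75, 72, 88], "carol": [40, 95, 60]}
    print(ComparisonWithBestAssessment().grading_grades(demo))
```

A custom subplugin would simply provide another implementation of the same interface, so the rest of the activity does not need to know how the grading grade is actually computed.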
From my point of view, the grading grade should be interpreted really carefully. Rather than a grade, I would personally take it as an indicator for both the teacher and the reviewer. Note that you can exclude it from aggregation in the course gradebook settings, so it can still give reviewers some feedback (provided the meaning of this grade has been communicated to them) while not influencing their course total score.
Thanks for your interest in this area. Not many people dig that deep into the code. It would be useful to hear any suggestions you may have for improvements, or even a completely new design for a grading evaluation method.
At the recent Moodle Research conference, a research suggestion was mentioned: some variant of a Turing test. The basic research question in such a test reads: "would human teachers, when evaluating workshop submissions, give the same grading grades as the Moodle Workshop code gives?" I am pretty sure this field deserves a lot of research that could lead to a new, improved grading evaluation method in Moodle.