I have set up a Workshop activity in which 100 students produce questions of the multichoice (single/multiple answer) type, after being randomly assigned a set of topics selected from a pool of n topics.
Each student has to write 8 questions and will review the work of 3 peers following a grid of criteria and scores (a rubric, really, but provided separately, not as a Workshop rubric).
Questions are filled into a pre-set Excel file and submitted as an attachment, so I assume (I have never done this before...) each reviewer will receive 3 attachments to download for assessment.
In the assessment form I initially set this up so that each "aspect" corresponds to one question, so reviewers are expected to grade every single question separately, with each grade weighing equally in the final grade.
What I don't understand is how Workshop determines the "best" assessment to compare against when calculating the grade for assessment. If the comparison is made on aspect 1 across all 100 students, this is obviously not fair, since the initial topics are randomly assigned and the submissions are not comparable. However, since each submission has three assessors, if the comparison is made between aspect 1 of those three assessments only, then the comparison is legitimate.
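To make concrete what I am hoping happens, here is a toy sketch (this is NOT Moodle's actual algorithm, just an illustration of the per-submission comparison I have in mind): the three assessors of one submission are compared aspect by aspect, and the "most typical" one, i.e. the one closest to the per-aspect mean, would serve as the reference. The assessor names and scores are entirely made up.

```python
# Toy illustration only -- not Moodle's implementation.
# Three hypothetical assessors grading the SAME submission,
# 8 aspects (one per question), each scored 0-10.
from statistics import mean

assessments = {
    "assessor_A": [8, 7, 9, 6, 8, 7, 9, 8],
    "assessor_B": [7, 7, 8, 6, 9, 7, 8, 8],
    "assessor_C": [4, 9, 3, 9, 2, 9, 4, 9],
}

# Per-aspect mean across the three assessors of this one submission.
aspect_means = [mean(scores[i] for scores in assessments.values())
                for i in range(8)]

def distance(scores):
    """Mean absolute deviation from the per-aspect means."""
    return mean(abs(s - m) for s, m in zip(scores, aspect_means))

# The "most typical" assessor is the one with the smallest deviation.
best = min(assessments, key=lambda a: distance(assessments[a]))
for name, scores in assessments.items():
    print(name, round(distance(scores), 3))
print("most typical assessor:", best)
```

In this made-up example, assessor C diverges strongly from the other two on most aspects, so A and B come out as "typical"; a comparison done this way, within the three assessments of one submission, would be fair even though topics differ between students.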
I would be grateful if anyone could clarify this.