I think the module is most useful when you want to get your students assessing other students' work. In one of my courses students are asked to create a PowerPoint presentation. With over 300 students there are too many for live presentations, so their work is assessed as stand-alone (multimedia) pieces of work. The workshop first asks students to assess a number of example presentations; this introduces them to the scope of the presentations and to how they are assessed. They prepare their own presentation and, once they've assessed the examples, submit it online. They are then given a number of the new presentations to assess. In this course the students are asked to assess four examples and five of the new presentations. (In another of my courses the numbers were three and four respectively, and the work was a report produced in Word.)
When the dust has settled from the assessing and submissions, the assessments are "analysed". Before that, however, I grade the example presentations (in my case there are 14). These represent the benchmark assessments against which the students' assessments are judged. Remember that the students were asked to assess (four of) these. The analysis looks at how well each student judged the examples, with a view to discarding a proportion of the student assessments (the proportion is set by the teacher). It compares each set of (student) assessments of each piece of work (in my case that's up to five assessments). If a student performed poorly when assessing the examples, their assessments of the "real" work are treated with suspicion and are much more likely to be dropped than those from students who did well with the examples. At this stage I have only assessed the examples, so in my case that's 14 assessments.
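The idea of judging each assessor against the teacher's benchmark grades can be sketched roughly as follows. This is not Moodle's actual algorithm, just an illustration of the principle; all the names, grades and the 25% drop fraction are hypothetical.

```python
# Hypothetical sketch: rank student assessors by how far their grades for
# the example pieces sit from the teacher's benchmark grades, and flag the
# worst fraction as "suspect" (their assessments of real work are the
# candidates to be dropped). Not the Workshop module's real algorithm.

def assessor_error(student_grades, teacher_grades):
    """Mean absolute difference between a student's grades for the example
    pieces and the teacher's benchmark grades for the same pieces."""
    diffs = [abs(grade - teacher_grades[piece])
             for piece, grade in student_grades.items()]
    return sum(diffs) / len(diffs)

def flag_suspect_assessors(all_student_grades, teacher_grades, drop_fraction=0.25):
    """Flag the worst `drop_fraction` of assessors (teacher-set proportion)."""
    errors = {student: assessor_error(grades, teacher_grades)
              for student, grades in all_student_grades.items()}
    ranked = sorted(errors, key=errors.get, reverse=True)  # worst first
    n_flagged = int(len(ranked) * drop_fraction)
    return set(ranked[:n_flagged])

# Illustrative data: four example pieces, four student assessors.
teacher = {"ex1": 80, "ex2": 55, "ex3": 70, "ex4": 40}
students = {
    "alice": {"ex1": 78, "ex2": 57, "ex3": 69, "ex4": 42},  # close to benchmarks
    "bob":   {"ex1": 95, "ex2": 30, "ex3": 90, "ex4": 75},  # far from benchmarks
    "carol": {"ex1": 82, "ex2": 50, "ex3": 72, "ex4": 38},
    "dave":  {"ex1": 60, "ex2": 70, "ex3": 50, "ex4": 60},
}
print(flag_suspect_assessors(students, teacher))  # → {'bob'}
```

With four assessors and a 25% drop fraction, only the assessor furthest from the benchmarks is flagged.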
The analysis might show that some of the new pieces of work have either no assessments (they've all been dropped) or, more likely, assessments with markedly different grades. In these exceptional cases I go in and assess the work myself. I would/should also assess a sample of the pieces of work: usually some at the top, some in the middle and some at the bottom of the range of grades. With those done, the analysis is repeated; this time, with more "teacher" assessments, the judgement of the student assessments will be sharper. In fact, the analysis can weight the teacher's assessments, and if the teacher is not the "best" assessor, extra weighting on the teacher's assessments can make the teacher the top assessor.
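Weighting the teacher's assessment when computing a piece's grade amounts to a weighted average. A minimal sketch, assuming a simple scheme where the teacher's grade counts some multiple of a peer grade (the factor of 2 here is illustrative, not a Workshop default):

```python
def weighted_grade(assessments, teacher_weight=2.0):
    """Weighted average of the surviving assessments of one piece of work.
    `assessments` is a list of (grade, is_teacher) pairs; a teacher grade
    counts `teacher_weight` times as much as a peer grade. Hypothetical
    scheme for illustration only."""
    total = sum(g * (teacher_weight if is_t else 1.0) for g, is_t in assessments)
    weight = sum(teacher_weight if is_t else 1.0 for _, is_t in assessments)
    return total / weight

# Two peer assessments and one teacher assessment of the same piece:
print(weighted_grade([(60, False), (70, False), (80, True)]))  # → 72.5
```

With equal weights the average would be 70; doubling the teacher's weight pulls the grade toward the teacher's 80.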
It sounds complicated, but in practice it means the students are doing a lot of the assessment: a real advantage in my case, where I've got 300+ students. To show that the teacher takes these assessments seriously, the grade for the assignment can include an assessment element in addition to the (average) grade given to the actual piece of work. Thus we're asking the student not only to produce something but also to be aware of what's good and bad in that work. The Workshop module can also ask students to self-assess their own work; in my case I've not included that, as I feel there's enough assessment going on already. Finally, the students see the (peer and teacher, if there is one) assessments of their work and (optionally) a list of the best submissions, and so the discussions can go on...
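Combining the grade for the piece of work with a grade for the student's assessing is just a weighted sum. A tiny sketch, where the 25% share given to the assessing element is my illustrative choice, not a Workshop setting:

```python
def final_grade(work_grade, assessing_grade, assessing_share=0.25):
    """Combine the (average) grade for the submitted work with a grade for
    the quality of the student's own assessing. The 25% share is a
    hypothetical example, not a Workshop default."""
    return (1 - assessing_share) * work_grade + assessing_share * assessing_grade

# A student whose work averaged 72.5 but who assessed well (80):
print(final_grade(72.5, 80))  # → 74.375
```

A student who submits good work but assesses carelessly is nudged down, and vice versa, which is exactly the incentive the assessment element is there to create.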
Hope that gives some flavour of the module,