I have used Workshop for real three times now, with some 300 participants each time, and I agree that a small extrinsic motivation for peers to write good feedback is an interesting feature to have. Although some, maybe even most, students really take the time to read and comment on their colleagues' work, we think it's important to have some way to judge, and maybe even rate, the quality of the feedback (both the peer grade and the comments).
Now, of course Workshop calculates the grading grade through a rather opaque algorithm that takes into account how far the peer grade is from some "ideal" grade (the mean of the peer grades, or the teacher's grade if she gave an assessment with a high weight). That's all fine, but with the number of peers we use (usually 4) I have never felt comfortable accepting the calculated grading grade, so I usually fall back on the "proportional" algorithm (a plugin, I think, that simply gives a grading grade proportional to the number of reviews the student completed).
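To make the contrast concrete, here is a minimal sketch of the two ideas as I understand them: a grading grade that decays with distance from some reference grade, versus the proportional rule I fall back on. This is Python purely for illustration (Moodle itself is PHP), and all names and formulas are my assumptions, not Moodle's actual implementation.

```python
# Illustrative sketch only: NOT Moodle's code. Two ways to compute a
# grading grade (the grade a student gets for the quality of their
# assessments), based on the description above.

def distance_based_grading_grade(peer_grade: float,
                                 reference_grade: float,
                                 max_grade: float = 100.0,
                                 max_grading_grade: float = 100.0) -> float:
    """Grading grade that shrinks the further the peer's grade is from
    a reference grade (e.g. the mean of peer grades, or a teacher's
    assessment with a high weight)."""
    distance = abs(peer_grade - reference_grade) / max_grade
    return max(0.0, 1.0 - distance) * max_grading_grade

def proportional_grading_grade(reviews_done: int,
                               reviews_expected: int,
                               max_grading_grade: float = 100.0) -> float:
    """Grading grade simply proportional to how many of the expected
    reviews the student actually submitted."""
    if reviews_expected <= 0:
        return 0.0
    ratio = min(reviews_done, reviews_expected) / reviews_expected
    return ratio * max_grading_grade

# Example with 4 peers: a peer grade of 70 against a reference of 80,
# and a student who completed 3 of 4 assigned reviews.
print(distance_based_grading_grade(70, 80))  # 90.0
print(proportional_grading_grade(3, 4))      # 75.0
```

The proportional rule is crude, but at least it is transparent to students, which is why I feel more comfortable with it at our class sizes.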
What we do now is emphasize the importance of the comments, have the tutors read them, and let them override the grading grade where needed. I must say the interface for this can be confusing, but it is possible.
But what Gus proposes is very interesting: if students had a way to flag or rate the grades and comments they receive from their peers, the process described above would become much simpler, because teachers could much more easily find and act on peer grades and comments that are perceived as unfair. There is a problem with the phases, though: students can see their grades and feedback only in the final phase. So besides an interface that allows them to flag or rate reviews in the final phase, it would also help to let teachers override grades and grading grades in the final phase (I realize this may be difficult to implement).
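Just to make the proposal more tangible, here is a rough sketch of what such a flag could look like as data, and how a teacher's view might filter on it. Again Python for illustration only; every name here is hypothetical, nothing like this exists in Workshop today.

```python
# Hypothetical data model for Gus's proposal: the assessed student can
# flag and optionally rate each review they received, and the teacher
# filters for the reviews most likely to be unfair.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewFlag:
    assessment_id: int        # the peer assessment being flagged
    flagged_by: int           # user id of the assessed student
    rating: Optional[int]     # e.g. perceived fairness 1-5, or None if only flagged
    comment: str = ""         # why the student considers it unfair

def reviews_to_check(flags: list[ReviewFlag], threshold: int = 2) -> list[ReviewFlag]:
    """Reviews a teacher should look at first: flagged without a rating,
    or rated at or below the fairness threshold."""
    return [f for f in flags if f.rating is None or f.rating <= threshold]

# Example: two flags, one unrated and one rated 4/5; only the first
# would surface in the teacher's "check these" list.
flags = [ReviewFlag(101, 7, None, "grade seems way off"),
         ReviewFlag(102, 9, 4)]
print(reviews_to_check(flags))
```

Even something this simple would already let tutors skip straight to the contested reviews instead of reading all of them.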