Yes, that's a rudimentary example (necessarily so in the context, I suppose). The initial question, "How good is it?", sets up the teacher's expectations of the standard of submitted work: the learners are being asked to "second-guess" their teacher. The usual strategy here is to show examples of graded work, annotated to explain why each piece got the mark it did. Here's a similar example from Trinity College London's GESE exams: http://www.trinitycollege.co.uk/site/?id=1803 (in the left column, under basic information, you'll find descriptors, look-up tables, etc.). They give some examples of past candidates and their grades here: http://www.trinitycollege.co.uk/site/?id=2046 The fact that this belongs to summative assessment (tests) rather than formative assessment (constructive feedback) hints at the kind of context prescribed rubrics are typically used in.
Something else I think is relevant: in many peer-review writing programmes, there's usually as much emphasis on developing reviewing skills (metacognitive skills) as there is on the submitted work itself. Another strategy of significant research interest is real-time co-construction of feedback by two or more reviewers, i.e. they sit down together in pairs to look over a classmate's work, overseen in turn by a "review reviewer" — that is, the reviewers get feedback on their reviews.
Assessing other learners' work effectively and productively is a difficult skill to master, and assessing one's own work is just as difficult. However, it appears to be a worthwhile learning strategy: at least one research paper has found that reviewers make greater gains than reviewees in A-B group comparisons. See: Kristi Lundstrom and Wendy Baker, "To give is better than to receive: The benefits of peer review to the reviewer's own writing", Journal of Second Language Writing 18 (2009), 30–43.
I think it's important to keep in mind why we want learners to self- and peer-assess in the first place. Then whatever strategies we develop can be evaluated against how effective they are at promoting the kinds of learning we're aiming for, rather than validated by how closely they match a definition.