I had a look at Calibrated Peer Review (thanks for the link, Erlyn). The "calibrated" refers to the fact that students must assess a number of teacher-supplied pieces of work before they start assessing work by their peers. This, of course, is also an option in the workshop module. However, in the workshop module this calibration is done before the student submits their own work, whereas CPR always does it after the student has submitted their own work. Both models have their advantages and disadvantages. I can think of circumstances where it would be helpful for students to see example work before submitting their own, but I can also think of circumstances where this would be inappropriate. So should it perhaps be an option for the teacher to choose whether the calibration takes place before or after submission?
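Just to make the idea concrete, here is a tiny, purely illustrative sketch in Python (not actual Moodle code; the names are invented) of the single new setting the teacher would choose:

from enum import Enum

class CalibrationTiming(Enum):
    BEFORE_SUBMISSION = "before"   # current workshop module behaviour
    AFTER_SUBMISSION = "after"     # CPR-style behaviour

# A teacher opting for CPR-style calibration would simply pick:
chosen_timing = CalibrationTiming.AFTER_SUBMISSION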
Re: Calibration after submission
Hi Gustav,
I'm glad you liked the link!
In the ESL composition classes I teach, it would work best for students NOT to see "calibrated" examples provided by the teacher before writing their own essays, especially if the examples were drawn from the same text the students were to write about. (The result would be a series of nearly identical essays, none showing much creativity or deviation from the model.)
In an ESL composition class, I think the first workshop activity would work best in about the second or third week of classes at the earliest. If the activity were a Giving Instructions essay of 325-375 words, students would already have been introduced to (1) topic sentences, including main idea and supporting details, (2) time transitions and chronological order, (3) simple, compound, and complex sentences, and, most importantly, (4) models of former students' Giving Instructions essays. With that, I believe students would have enough information to begin writing their own five-paragraph Giving Instructions essays.
Then (and again, this is how I would envision it in an ESL composition class, because students need to know how to objectively evaluate their own writing strengths and weaknesses), students could read and respond to the teacher-submitted "calibrated" essays and exercises, which would serve as still another model: a model of evaluation. Then students could respond to their peers' essays, and then to their own.
In answer to your question, therefore, I'd prefer the option for the teacher to provide the calibration after students submit their work. Students could then try to apply those calibration standards to their peers' essays, then their own essays.
Re: Calibration after submission
I also checked out that site, Erlyn; it's interesting and very nice.
Is what you are both referring to similar to the issue I brought up here:
http://moodle.org/mod/forum/discuss.php?d=15541&parent=75788
The example I used was only an example. What I am really suggesting is the ability/flexibility to set up a workshop with
- the number of student-teacher interactions before a final submission, as determined by the teacher
- the ability to mark each student-teacher interaction as "completed"/"uncompleted", which can then be translated into a grade [whether using Rubrics (which I prefer) or not].
In other words, when planning a workshop a teacher could determine the following (a rough sketch of the kind of settings I mean follows this list):
- how many student submissions are allowed, each draft building on the teacher's suggestions for the previous one.
- a way for a teacher's response to each submission to be noted.
- how he/she wants to grade the submissions prior to the final submission.
- what the time window will be for all of this to happen
- an automatic opening time and closing time for the workshop.
- email notifications to the teacher and the student whenever either party submits something to the other.
- ability to accept late submissions and to grade them.
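To make that concrete, here is a rough, purely hypothetical sketch in Python (these field names are my own invention, not the workshop module's actual options):

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DraftWorkshopSettings:
    max_drafts_before_final: int = 3               # resubmissions allowed before the final one
    record_teacher_responses: bool = True          # each teacher response to a draft is noted
    draft_grading: str = "completed/uncompleted"   # or translated into a rubric-based grade
    opens_at: Optional[datetime] = None            # automatic opening time
    closes_at: Optional[datetime] = None           # automatic closing time
    email_on_submission: bool = True               # notify teacher/student when something is submitted
    accept_late_submissions: bool = True           # late work can still be submitted and graded

# Example: a one-month workshop allowing two drafts before the final essay
settings = DraftWorkshopSettings(
    max_drafts_before_final=2,
    opens_at=datetime(2005, 9, 1),
    closes_at=datetime(2005, 9, 30),
)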
Does the workshop module have this type of fluidity coded in, or does it only allow a certain number of student-teacher submissions, with limits on how the submissions and the final grade are determined?
Thanks in advance for your response.
WP1