The document called Development:Workshop 2.0 specification is taking shape in our wiki. The spec defines how the Workshop is going to be revitalized in the next Moodle version, how the UI will look, how grades will be calculated, etc.
It would be very helpful for me right now if you could find a minute to go through the spec and post any questions, comments, ideas or counter-proposals here in this thread (please do not use the wiki talk page).
Also, I would like to gather examples of real workshops you have tried to run. There is a change proposed to merge the current Criterion and Rubric grading strategies. Examples of the various evaluation criteria and scales you have used would help me prove the concept of the change.
Thanks in advance!
It's awesome that you're revitalising this module!
I just have one comment - at some point I worked on a project using this where the client wanted to use something very similar to the 80/20 submission/assessment grade method, except that the 80% submission grade should come only from the teacher, ignoring the peer assessment.
In fact, the peer assessment was formative only, no grade and the 20% came from the quality of the formative assessment. The teacher was 100% responsible for grading the submission itself.
I don't think that is possible in the current system, and I tried to look through the code and recoiled in horror and ended up using a combination of the assessment module and rated q/a forums instead. Which actually worked very well!
I have already been thinking about such a feature since reading an old post here in the forum. Some thoughts:
- There is a "No grades" strategy where peers just comment on the submission. We should always let a teacher (that is, a user with the required capability, to be precise) grade submissions.
- It is planned that a teacher can always override the grade calculated from the peers' suggestions before the final grade is pushed into the Gradebook.
Yes, I think that would cover the case.
How will the UI work for the teacher overriding the grade?
I guess I still want to make sure that the grade is made up of 20% of the grading grade and 80% of the submission grade. That is, when the teacher overrides the grade before it goes into the gradebook, it's not just the final grade, but both parts of the composite grade, if that makes sense.
I think from the way you phrased that above ('override the grade calculated from the peers' suggestions') that that is the case. In which case, yay!
The UI for this has not been proposed yet. However, I can imagine it as a general subform displayed at the bottom of the page with the report of the submission assessments.
Yes, the teacher can override the grade for submission independently of the grade for assessment. So even if the aggregation would lead to 67/80, the teacher can say no no no, this will be 80/80. The same applies to the grade for assessment. Even if the module would calculate the grade for assessment as 18/20, the teacher can say no no no, this will be 10/20. Then, 90/100 is pushed into the Gradebook. Note that the teacher can override this final grade in the Gradebook too, if she wants to.
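To make the override arithmetic concrete, here is a small Python sketch (purely illustrative, not Moodle code; the function name is made up) of how teacher overrides would replace the calculated components before the sum is pushed to the Gradebook:

```python
def final_grade(calculated_submission, calculated_assessment,
                override_submission=None, override_assessment=None):
    """Combine the two grade components, applying teacher overrides.

    Grades are real points, e.g. out of a maximum of 80 (submission)
    and 20 (assessment), so the final grade is simply their sum.
    """
    submission = calculated_submission if override_submission is None else override_submission
    assessment = calculated_assessment if override_assessment is None else override_assessment
    return submission + assessment

# The example from the post: aggregation yields 67/80 and 18/20, but the
# teacher overrides them to 80/80 and 10/20, so 90/100 goes to the Gradebook.
grade = final_grade(67, 18, override_submission=80, override_assessment=10)  # 90
```

Each component is overridden independently, which matches the behaviour described above: overriding one part never touches the other.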
Excellent! We actually switched to using forums or the glossary for doing peer review and grading, but one feature the workshop module used to handle was the random assignment of peer work for review to students, which has not been replicated by any other tool yet.
It would be great if the user interface for setting up the workshop were a little friendlier, too, because the old interface (which we last used in version 1.6) was a little overwhelming to figure out without assistance (group huddle).
I haven't looked at the specs yet - but will try to. Just wanted to add my encouragement that you're reviving this feature.
The option for the teacher to manually end one stage and start the next, instead of working to set time frames, would be very useful. When working with different classes they can't all use the same time settings, and changing the settings between classes and then changing them back again was a big problem.
I have moved to using forums for peer feedback, but pupils tend to ignore instructions on who to assess and assess their "friends" instead. This way I spend more time getting pupils to follow the instructions and stay on track than checking the peer assessments and constructive feedback.
Thanks for your comments. In the new version, I propose to have the submission phase and the assessment phase non-overlapping. That means no student can start assessing while some of their peers could still be submitting their work.
Now, I am not sure if I understand your needs correctly. Do you want to allow late submissions for students who did not manage to submit before the deadline? Or do you just want to let them assess others' work while excluding them from the submission duty?
Manual phase (stage) control is supported. The teacher can always change the phase manually if she wants/needs to. The automatic handling based on the deadlines is an optional feature.
Thank you, David, for taking on this crucial work. The Workshop Module represents the epitome of Moodle possibilities, in my opinion. Refining our work via the group is a powerful thing at any level.
Editing writing has always been very tedious, labor-intensive work. The Workshop module applies the power of the microprocessor in a way that has not been done in education yet. It can be useful in an elementary classroom and in postgraduate or professional arenas, which is also part of the problem with the current version, I think. The variety of uses for the module makes it complicated. (A great tutorial for the new version will go a long way towards fixing lots of things - inexperience and limited understanding have many levels.)
Rather than continue tweaking and adding to the already extensive feature list of the module, I suggest making more than one module, or maybe a sub set of modules each with a different focus.
There is a method of writing instruction that employs the technique of looking at one or two things in each draft or version of a piece of writing. I realize that it is not necessary to use all of the options in the Workshop module, but just going through the check list to set up an activity is a daunting task, especially as a new or inexperienced user of the module.
Regarding the posts above, I need to be able to have different submission deadlines for the same assignment for different students or groups of students. Some students may never get done and I can't keep the whole class waiting. I would also like to be able to choose how many reviews of other students' work each student does. Some students should only do one, while others, and their peers, will benefit from doing six or seven.
Thanks for pushing ahead with this,
> I need to be able to have different submission deadlines of the
> same assignment for different students or groups of students
The Workshop module will not support different deadlines for different groups. No Moodle module does, actually. There is a way to achieve this using groupings, though. However, the Workshop will allow (if the teacher says so) students to access others' work even if they did not manage to submit their own.
> Some students should only do one while others, and their
> peers, will benefit from doing six or seven.
The module itself will prepare the allocation as balanced as possible. The teacher will be able to manually re-allocate assessments to other peers or simply delete the allocated ones. So you will be able to manually assign more or fewer assessments to some students if you want to. However, the calculation of the grades is more reliable with more data available.
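Just to illustrate one generic way such a balanced allocation can be produced (this is only a sketch of a circular-shift scheme, not the module's actual algorithm): shuffle the submitters, then have each student review the k students that follow them in the circle, so everybody gives and receives exactly k assessments and nobody reviews their own work.

```python
import random

def allocate_reviews(students, reviews_per_student, seed=None):
    """Balanced peer allocation by circular shift.

    Each student reviews the next `reviews_per_student` students in a
    shuffled circle, so every student both gives and receives the same
    number of assessments. Requires reviews_per_student < len(students).
    """
    order = list(students)
    random.Random(seed).shuffle(order)
    n = len(order)
    allocation = {s: [] for s in order}
    for i, reviewer in enumerate(order):
        for k in range(1, reviews_per_student + 1):
            allocation[reviewer].append(order[(i + k) % n])
    return allocation

# With 4 students and 2 reviews each, everyone reviews exactly 2 peers
# and is reviewed by exactly 2 peers.
alloc = allocate_reviews(["ann", "bob", "cat", "dan"], 2, seed=1)
```

Manual re-allocation, as described above, would then just edit this mapping, trading perfect balance for the teacher's judgement.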
I have tried Groupings but the process of making copies of the assignments for each grouping then resetting the dates is very time consuming and with more complicated workshops this is only going to get worse. Using backup and restore was complicated but worked for a course that only had the assignments but no other content. A copy feature would be useful, we can then just make as many copies as we need and change the dates unless there is an easy way to do this that I have not spotted.
You are right that this is not specific to the Workshop; it affects the quiz, assignment, etc. as well.
We should come up with a proper solution to it one day, but any solution will be complicated to implement.
Pupils would benefit from the chance to give feedback to others as part of their learning even if it does not form part of the cycle for the other pupils.
It also means that if they are in the next session where we are reviewing work they have work to review. In the past because they have not submitted work they are not allocated work to review. Maybe they could still gain marks for reviewing even without any submission.
You might want to have an extra option so that if, for example, you want 4 peer reviews:
- Included in set number of reviewers total 4
- Extra to set number of reviewers total 5 for some pupils (but maybe not counted in marking)
Okay, there will be an "Allow late submission" setting for those students who have not submitted their work within the submission period. However, such submissions do not get allocated to peers for assessment, and the teacher must either assess them or allocate them manually. This is because the Workshop will automatically allocate submissions for review at the end of the submission phase, so students who have not submitted before the allocation is done can't have their work allocated for review. Automatic re-allocation is possible only before the assessment starts.
I have also tried the Calibrated Peer Review web application (http://cpr.molsci.ucla.edu/) and was surprised when it simply did not work for me. Even though the Moodle workshop doesn't calculate grades correctly and has other flaws it is clear to me that it was created with more built in flexibility than CPR.
I hope that the new Workshop module will allow images, audio files, and multimedia files to be submitted. The workshop tool, although usually used for writing projects, should be able to accommodate multimedia projects, submitted in common file formats. This would be of great benefit to me and my students.
I look forward to finding a way to contribute to this project.
Yes, students are able to submit files in the Workshop module. The submission form consists of a title (single-line input field), text (wysiwyg editor) and optionally a number of file attachments (the teacher defines the maximum number of allowed attachments).
Can you please describe how you expected the Workshop grading to work? Let us see if your needs fit the current functional specification. Together with the module, new documentation and a teacher manual/tutorial will have to be released to avoid similar disillusions.
Interesting you mention Calibrated Peer Review. A colleague of mine, Janet Russell, used CPR several years ago with her class at Earlham and found that though the theory sounded good, in practice it was inflexible and the students disliked it. On the strength of that experience I used Workshop in Moodle 1.6 for two successive classes. I found that both CPR and Workshop suffered from making unwarranted assumptions about workflow. The first principle is that there will always be one or more students in a class who will not submit the work on time, if at all. The second is that teachers always need to tweak processes, whether submission dates, grading parameters, or group composition. I ended up overriding the students' peer assessment grades because a significant minority were totally out of whack.
I was reading
and thought: hey, maybe it's only because the UI shows rounded numbers but uses the real quantities for calculations. Could that be the problem? (Maybe it's just related with the fog around the grades calculation formula.)
Anyway, is it going to be possible to set how many decimals (of grades) will be shown? Maybe as "advanced options"?
In the new version (2.0), peers will use whole-number grades and/or scales only. However, calculations of the grade for submission and the grade for assessment result in a decimal number - internally it is NUMBER (10, 5), if you understand SQL data types.
The number of decimals to be displayed should be configurable, indeed.
Thanks for the comment!
No, I knew about the rounding issues. I have tried to find my spreadsheet with the workshop scores and my own calculations so I could report back on the details. I haven't been able to find it, and so I will have to rely on my (possibly unreliable) memory.
The calculation problems that I remember seemed to be based on the way the Workshop tool selected a single evaluation as the standard that all the others were compared to rather than averaging the results of the evaluations and using that as the comparison data. I remember that many of the calculations were exactly what I expected and other grade calculations were more than 20% off of what I calculated to be the correct grade.
...students call, so ta-ta for now.
I was using the workshop back in Moodle version 1.5, or 1.6. The bug report you linked to looks like it could be an explanation of what was going on but I am unsure. I wish I could be more help here.
I appreciate very much the discussions, planning and (presumably) work that you (and others) are putting into this rewrite. I only used the workshop tool a couple of times but am sure that the proposed rewritten version would be something I would use on a more regular basis.
How many times has there been an announcement of a Workshop rewrite? Two or three at least in the past 3-4 years. Looking through this forum, it seems that each time someone new starts this project there's an outpouring of support and ideas, but then after a while the project seems to dissipate into a fog.
So, David, I would encourage you not to overreach with the initial version that you release. For what it's worth, here's a list of suggestions which you might want to consider:
- don't bother with backwards compatibility. You'll never get out of the starting block with a new version if this is a requirement.
- Simplicity. Aim for a simple conceptual framework, User Interface, workflow.
- Single rubric -- in keeping with the Simplicity principle above.
- Direct control of all processes by the Teacher (ie someone with appropriate role):
- manual switching between submission, assessment and marking phases
- manual override to distribution of work for peer assessment. For students it's anonymous, for teachers the process need not be anonymous.
- clear options and consequences for exceptions
- Transparent formula for calculating grades -- implement John Isner's grading matrix?
- Self assessment -- no grade. (Add teacher grade later)
- Peer assessment -- peer grading
- Peer assessment -- teacher grading
Thanks for your wise advice. Yes, I know about the previous attempts to rewrite the module - one of them was mine, actually... I do not pretend the project is easy. After having written several smaller Moodle modules I know what it is to design, write and, mainly, maintain such a piece of code.
Regarding backwards compatibility - well, I am going to use the same module name, table/function name prefixes etc., so at least the new module will present itself as the current module's successor. However, I am not bothering with BC too much at the moment. To be honest - how many running Workshops are there in the world? Maybe it would be faster for me to manually convert their data into the new version than to write a general upgrade script...
I like John Isner's matrix. I am a little bit afraid of its scalability, however. What if there are 100+ students in the course? Then the matrix is very sparse and it occupies a lot of screen space - mainly in width. Of course, the teacher can always display a single-student view, but still... I'll have to think about it more.
Transparent formulas - IMO everything is quite clear and simple except the grade for assessment. Please see the spec and check whether the proposed calculation is OK for you.
However, I will use the idea and let the teacher download such a matrix in a spreadsheet format, as long as the matrix fits into the 256-column limit.
By the way, in my mental model I would sum grades for submission per row, so the last column contains the final grade for submission. The final grade for assessment would be the last row of the matrix. IMO this is more intuitive, as the grade for submission is expected to be the major one (making up e.g. 80% of the final grade).
We could have more report plugins, similar to what the Gradebook has at the moment. This one would be the Matrix report (suitable for small classes) while the current one (any name suggestions?) would stay similar to what it is now - well, a little bit improved.
- Student name (there will be a checkbox "hide names" so the teacher can show this table to students in case of anonymous mode)
- Submission - title, date, link to see details/download
- Received peer assessments - the list of grades given by peers. The format is the same as in the Isner's matrix:
80 (20) means "Peer gave grade 80 and got 20 for such assessment"
- Grade for submission - average of the received grades
- Given assessments - the list of grades that this student gave to his/her peers
- Grade for assessments
- Final grade - the sum of grade for submission and grade for assessment
Again - I really love the simplicity and clarity of the Isner's matrix. But it does not scale well, it stops being usable with more than 20-30 students.
Please let me know what you think about this UI mockup. TIA
To aid clarity, tooltip info for the final grade might also be handy e.g.
((submission (75) x70%) + (assessment (65) x 30%)) / maxpossiblegrade = whatever
Also, it's not immediately clear from the column headings what the numbers represent. Is it percentage (real)? Seems not, but this is what the gradebook does, so it may cause confusion without further guidance.
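The tooltip formula suggested above can be checked with a couple of lines of Python (an illustrative sketch only; the 70/30 weights, the max grade of 100 and the function name are assumptions for this example, not anything fixed in the spec):

```python
def weighted_final(submission, assessment, submission_weight=0.70,
                   max_possible=100):
    """Weighted final grade as in the tooltip example:
    ((submission x 70%) + (assessment x 30%)) / max possible grade.
    """
    assessment_weight = 1 - submission_weight
    return (submission * submission_weight
            + assessment * assessment_weight) / max_possible

# 75 for submission and 65 for assessment, out of 100:
# (75 * 0.7 + 65 * 0.3) / 100 = (52.5 + 19.5) / 100 = 0.72
fraction = weighted_final(75, 65)
```

Showing exactly this kind of breakdown in the tooltip would let students verify the final grade themselves.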
The tooltip is a good idea! Maybe, if the teacher checks "Display reviewers" or something similar, the names could appear directly in the table.
Headings will need more clarification, indeed. I am going to make the UI similar to the Gradebook. In the mockup above, the figures are real values, max 80 for submission, max 20 for assessment.
Right now, it would be very helpful for me if somebody experienced with the module could check the proposed way of grading. Please see the spec in the wiki and, if you can, let me know your opinion. Thanks in advance.
Thanks for this idea! The current behaviour has its reasons in the background theory. If students are given a chance to see how their work will be assessed, it helps them to focus on the relevant aspects of the work while they work on it. Have you got some strong use case for why students should be given the assessment form only after their submission?
I can see this being implemented later, for Moodle 2.1 or so (as it is in fact a new feature and I am focused on migrating the current behaviour now).
Can you please share some of your workshop assignments and/or evaluation forms? It would help me a lot right now. Thanks in advance!
thanks for the comment. I'll keep it in mind during the work.
There is no easy way to share your assignments and evaluation forms yet (the module in version 1.9 does not support backup without student data, although I have never understood why). Would you be so kind as to just describe or copy/paste the assignment and the structure of the form here in the forum? Or a screenshot would be fine as well. Thanks.
just a little note to say how much I appreciate all your work on workshop 2.0! I don't understand everything on the specification page; everything I do understand though seems absolutely fine for my needs and I cannot wait to test the module with my students.
I wish you strength, patience and fun while coding the module.
David, here is a screenshot of the unmodified specimen assessment form used in a workshop in law school.
Here is a screenshot of a specimen form that has been modified to include detailed information. This information can't be given ahead of time because it essentially gives the student the answer to the problem. On the other hand, they need the details in order to assess submissions accurately. I would like some solution that allows me to hide the specimen assessment form if I choose; I can provide the general assessment form as a link in the body of the course. Right now the workaround is time consuming, because I have to go back and modify every specimen assessment form after submissions but before assessments.
thanks for these examples. In the proposed future behaviour you will be able to change the specimen assessment form during the workshop. So you will basically prepare two versions of the form. The first (light) one can be used for the assessment of examples. Then, after submission phase, you can change the assessment form to the second (full) one.
The time needed for the evaluation form preparation will be the same. But you should be able to prepare these forms in advance and then just re-use them in your workshops.
I plan to implement a feature similar to the presets in the Database module and/or a feature allowing you to export the form definition into an XML file and import it later or into another workshop instance.
I hope these improvements will help you.
Good to see that this module is being revised. I'm new to Moodle, but have spent a lot of time recently shifting one of my courses onto it, and thus far I've only got one request - I hope a simple one!
I teach AQA A-level Sociology, and whilst the marking scheme for essays is complex for students, it has been broken down into something that should be easier for students to both understand and grade. They end up with a different total of marks from the AQA scheme, but it does get them to assess on the criteria of knowledge, analysis, evaluation etc. Each of these is unpacked into several sub-sections and a mark given to each - this means that I would need 25 Dimensions/Criteria etc., rather than the 20 at present.
Is this possible?
Further, would it be possible to alter the current workshop module in-house (meaning our college - given sufficient knowledge of PHP and SQL) to accommodate this change?
In the new version of the module, the number of evaluation criteria (now known as "assessment dimensions") is unlimited. See the attached picture of what the form for editing these dimensions looks like.
As any other Moodle code, the Workshop 2.0 will be open for your local customizations.
Thanks for the comment!
"assessment dimensions" - perhaps others could comment but I would favour "assessment criterion" or just "criterion" instead. I've not encountered "assessment dimensions" as a term in widespread use.
As ever, it's difficult to find terms that are universally acceptable so to make localisation simpler may I propose that "Dimension description" and "Dimension weight" are reduced to "Description" and "Weight" respectively. I think their locations make their purpose clear and the longer version redundant.
"Assessment dimension" is mostly a technical term used across the source code and the database structure. In general, it is one of the aspects that can be evaluated by a grading method. Multi-dimensional assessment is one of the key Workshop features, as opposed to the one-dimensional assessment used in the Assignment module.
I have no problem calling dimensions "criteria", "elements" (as in pre-2.0), "aspects" or whatever else comes to our creative minds. There will be some default translation for each of the grading form subplugins. So a "dimension" can be called a "criterion" in the Accumulative grading strategy and an "aspect" in the Error banded strategy, for example. As usual, these default terms can be locally customized by the server administrator.
Thanks again for the point.
Agreed; however, it is better to settle on a term that can be understood by teachers without the admin's intervention.
Do I understand from your post that the default name of the criteria, elements or whatever can be renamed by the teacher for each workshop instance, in a similar way to role renaming?
>understood by teachers without the admin's intervention
Sure. As I tried to explain in the previous post, the default English pack will come with "criterion", "aspect" or something like that.
And no, I do not plan to have this term customizable per-instance. Not at this stage of the project. Anyway, thanks for the idea - feel free to put such improvement into the tracker if you consider this an important issue.
I've been reading the specs over the last week and here are some thoughts, numbered for easier reference:
E1 - BIG Thanks for your effort! Workshop is one of those great tools that MUST be there.
E2 - Also, great specs! They really help "peers" to review, comment and discuss the whole thing.
E3 - In the specs I've seen that the submission/assessment grades are weighted 80%/20% to get the final grade. Are those weights immutable? Is there any magic in that number? If not, couldn't it make sense to allow configuration of those weights in each Workshop? I think it's possible to plan activities where a teacher wants to give more weight to the assessment, or the opposite.
E4 - All the (automatic) assessment grading is performed based on one entity called the "best" assessment (1 per criterion, I guess), and the divergence from that "best" is used to perform grading - clever and cool! And that best (or bests, if multiple) assessment is calculated as the assessment nearest to the mean. Is that rule for calculating the best assessment there because it's the way the old workshop used to work? Or is it also some sort of "magic" best calculation? Similarly to my previous point, I think it could be interesting to have some alternative ways of calculating those best assessments. For example, one of the more obvious ones, for me, is that in activities where the teacher is also performing assessments, it could be possible to define the best one (for ulterior calculations) as exactly the one introduced by the teacher. Just an example, but that feature could make sense. Or perhaps the "best" could be the "worst", applying inverse distance measures, surely producing different - but also correct - grades. In any case, think of the "best" detection as something that could be calculated in different ways (I have just exposed 2).
E5 - In any case, that "best" assessment should be marked in the corresponding assessment/grades tables IMO, just for reference, to be shown in reports or wherever necessary. Also, that "mark" could help speed up recalculations and so on.
E6- I guess that grade calculations only happen at the end of the activity, when all the submissions and own/peer/teacher assessments have finished, correct? Then, one process performs all the calculations and generate submission/assessment grades and the "official" total to go to the gradebook. And, in that phase, is when teacher can override the final submission grade or particular assessment grades, correct?
E7 - If prev exposition was correct, then I also guess that the "best" assessment won't be influenced by any override, and will be calculated only using "original" assessment grades. Correct?
E8 - Another thing I think we must be really careful about is settings and their changes. The beginning of each phase implies that some settings must not be modified any more, and we must be hard (rigid) here. I remember the old workshop exhibiting problems with the change of settings/configuration once the assessment phase (examples or real) had started. Just a call to be careful with that.
E9 - I've found some settings in the workshop table that, perhaps, should be considered permissions instead of settings; things like "anonymity" or "hidegrades" sound to me like capabilities, not settings.
E10 - Where in the DB structure is the final grade stored? In the submissions table there is a place for the grade (submission grade) and the gradinggrade (assessment grade), but I don't see the weighted final there. Not 100% necessary, as you can push it straight to the gradebook... but personally I love having those grades in each module.
E11 - In the assessments table, I've seen a bunch of fields (grade, gradinggrade...) defined as number 10... are they missing their decimal part? Or are they really integers? Apparently the same in rubric_levels.
E12 - I think we should "name things in an official way" from day 0 (and everywhere: here when discussing, in the docs, in the PHP code and in the DB). I must confess that the "dimensions" concept caused me REAL trouble when trying to understand all the grading strategies, only to discover, laaaaater, that they are, simply, criteria, or grading elements. I must say that I thought you had gone crazy, talking about dimensions and spaces and galaxies and all sorts of math things in the specs! Only to discover they are simple criteria, grrr.
E13 - Also about "naming", there is another (internal) thing that always makes me think twice: the "grade" word (used for submission grades) and the "gradinggrade" word (used for assessment grades). Couldn't we name them that way (submission/assessment grade)? Especially the gradinggrade one makes my mind go round and round and round!
E14 - Finally, one "silly" thing... is there any reason to be so "upload-based" in the new workshop? Couldn't we add support for "online-based" or "url-based" workshops? I.e. it only means that the upload won't be necessary, but filling in the corresponding text field will be. I think it isn't technically complex and could open up the cool peer-grading to different types of assignments. Just a quick thought; surely teachers will know whether that is a useful feature or not.
And basically that's all I can comment about the specs. I think they sound HIGHLY PROMISING and cannot say anything but: go, go, go!!!
Great work, David, sincere kudos4you, thanks! (muscle)
PS: Sorry for the long post. I hope it helps and doesn't cause too many headaches for you all, dear moodle-peers.
And thanks a lot for your post (not so long compared to the spec itself).
My comments on your feedback elements (or shall I call them dimensions?) follow.
- re E3: 80/20 are not weights, but the maximum grades for the whole workshop. 80/20 are just defaults. They are not percentages, but real values (points). Teachers can set them to 10/1, 3/10, 100/100 or whatever they want. See the discussion with Penny above on how this can be used.
- re E4: thanks for the point about "pluggable" methods of finding the "best" assessment. I am going to use an appropriate design pattern to be able to change that method. Yes, the main reason for this approach was to keep the current behaviour. However, I found it quite clever, so I decided to stay on this track. Yes, we should provide a feature to manually "tag" an assessment as the best - or maybe better, as the "referential" one. Although I do not like the idea of the "worst" assessment approach, I will keep it in mind so that the whole procedure can be easily replaced in the future.
- re E5: Yes again, highlighting the referential assessment within reports is important.
- re E6: Yes, you are correct. Please note this is new in Workshop 2.0, as the current version allows the workshop phases to overlap, which in turn leads to some problems (the over-allocation issue).
- re E7: Right, the referential ("best") assessment is always computed from the original values, not from overrides.
- re E8: thanks for pointing this out.
- re E9: I always prefer capabilities over instance settings, as the concept is more powerful and flexible. You are right, the "anonymity mode" setting can be replaced by a combination of the capabilities "View names of assessed submissions' authors" and "View names of reviewers of own work". "hidegrades" can be replaced by a "View grades before agreement" capability, too.
- re E10: The final grade for the given student/submission is always "grade" + "gradinggrade" in the workshop_submissions table. Please note, the "gradinggrade" is not a natural property (attribute) of the submission itself but of its author. However, authors are 1:1 with submissions, so I have decided to store it here.
- re E11: This was a typo in the spec. Thanks! Calculated grades are always "NUMBER (10, 5) SIGNED DEFAULT -1". In workshop_forms_rubric_levels the grade is correctly INTEGER (10), as the assessors are to use whole-number points or a scale.
- re E12: I will try to advocate for the term "dimension" here. I understand it as a general umbrella term covering assessment elements, criteria, aspects or whatever. In various grading strategies, the dimension can have various meanings. Imagine we come up with a fantastic new grading strategy where the term "criterion" is not appropriate. For example, a hypothetical "Point of view" grading strategy, where one submission can be evaluated as seen by several people with different vocations: what would a programmer, an artist, a musician and a dustman say about this work? I found the word "dimension" used in a paper dealing with a rubric assessment tool (the Wikipedia page referenced in the spec) and I like it. The word "dimension" should not appear in the user interface at all; dimensions will be referenced as "criteria", "aspects" or "POVs" according to the selected grading strategy. I hope this explanation helps you to safely land from your cyberspace math trip
- re E13: I used to have problems with "grading grade", too. Especially when I was translating the very early versions of the Workshop. "Grade for submission" and "Grade for assessment" are the forms that will be presented to the users. However (hey - there are a lot of however's in my reply!), at the DB and source-code level, "grade" and "gradinggrade" are a bit better than "gradeforsubmission" and "gradeforassessment".
- re E14: Firstly, please note there is no need to upload anything. Teacher can just set the number of required attachments to zero and voilà - we have "Online text" instead of "Upload a file" workshop. I expect that the new repository API should deal with this automagically so students can submit any form of their work - video, Mahara view, URL, file etc. But maybe I do not understand the repository API well yet...
Actually, this is a proposal we conceived thinking specifically about the Moodle workshop module. We think the method per se is not very difficult to implement. The difficult part was doing it with the Workshop module in its previous state. Now that we are so lucky that an enthusiastic and competent developer like David has decided to revamp this module, maybe it is a good opportunity to make use of some rather simple statistical methods to make the assessment workflow more efficient and more reliable.
I have really enjoyed reading your paper! Thank you very much for it. Your ideas are very interesting. Let me ask some questions or put some comments.
When comparing the assessment given by the teacher (Tg) and the grade given by a peer (g) to calculate the initial measure of quality, you seem to use the aggregated rubric grade. Let there be two criteria A and B. Let the teacher give 100% for A and 20% for B. Let the peer give 20% for A and 100% for B. Then they both result in a 60% average grade, but the difference is huge. So we have to compare the distance of the "sub-grades", not of the aggregated grade. In my proposal, the method of least squares is used.
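As a sketch of this per-criterion comparison (a least-squares distance; the function name is illustrative, not from the spec or the paper):

```python
def assessment_distance(teacher_grades, peer_grades):
    """Sum of squared per-criterion differences (method of least squares),
    instead of comparing only the aggregated averages."""
    assert len(teacher_grades) == len(peer_grades)
    return sum((t - p) ** 2 for t, p in zip(teacher_grades, peer_grades))

# Teacher: 100% on A, 20% on B; peer: 20% on A, 100% on B.
# Both average to 60%, so the aggregated difference is 0 ...
aggregated_diff = abs(sum([100, 20]) / 2 - sum([20, 100]) / 2)   # 0
# ... but the per-criterion distance exposes the huge disagreement:
d = assessment_distance([100, 20], [20, 100])                    # 6400 + 6400 = 12800
```

The aggregated comparison reports perfect agreement here, while the least-squares distance does not.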
The idea I really like is how the quality of previous assessments influences the weight of a grade given by a peer. The question is, should we take all workshops in the course into account, all workshops across the site, only the 5 most recent workshops, etc.?
In your method of assignment distribution (allocation), a couple of students could quite easily guess who their reviewers are. This may be an issue if the intention is to have anonymous reviews. Also, have you considered the separate/connected group modes being involved?
In any case, I am going to prepare the workshop internals so that all these calculation methods are pluggable. So it should be possible, if not easy, to write a new quality-measurement plugin based on your paper. Thanks again for this input!
We had one student who implemented the methodology, and these doubts occurred to us as well.
I think that only workshops within a course should be considered (let "bad" students start over). Teachers' criteria and students' skills may change too much between courses; also, not all students will be enrolled in the same courses or share the same history.
The idea of "forgetting" old workshops is built into the formulas.
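One way such "forgetting" could look (purely illustrative; the paper's actual formulas may differ): an exponentially decaying weight over the qualities of a student's past assessments.

```python
def current_reliability(past_qualities, decay=0.7):
    """Exponentially weighted mean of past assessment qualities.

    past_qualities is ordered oldest -> newest, values in [0, 1].
    Recent workshops weigh more; old ones are gradually "forgotten".
    A student with no history gets the neutral weight 1.0.
    """
    weight, total, norm = 1.0, 0.0, 0.0
    for quality in reversed(past_qualities):  # newest first
        total += weight * quality
        norm += weight
        weight *= decay                       # older -> smaller weight
    return total / norm if norm else 1.0

current_reliability([0.2, 0.9, 0.95])  # dominated by the recent good work
```

With decay below 1, a poor performance several workshops ago barely affects the weight a student's grades carry today.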
Yes, students can recognize their peers, but if the disagreement is high then they are penalized, so ...
I agree that the global grade may be too coarse, but that was to simplify things... It is fair to think that the finer sub-grades could work better.
Finally, I think that the workshop should not exist as it is... Let me explain. There are other activities where P2P evaluation is performed (forums and glossary), so maybe it could be interesting to split the workshop into two different modules.
The first one allows everything except the computation of the grades. So there is an interface of the workshop that gives the sub-grades given by the teacher and those given by the students (maybe the p2p grading may be disabled, and the teacher only uses the very useful guideline for grading the different aspects).
There is a different module that takes all the mess of grades and weights and computes the final grade.
This module could then also be applied to Forum and Glossary.
About the change proposed to merge the current Criterion and Rubrics grading strategies (GS): why do we need GS anyway? What about merging all GS (i.e., eliminating the concept of GS)?
IMHO, GS are a limitation; they add more unnecessary concepts to be defined in the documentation, and their role can be replaced by adding scales.
For example, without GS we can define a single formula in order to calculate grades:

    grade = f * (w1*g1 + w2*g2 + ... + wn*gn) / (w1 + w2 + ... + wn)

For the formula above:
Let wi be the weight of the i-th dimension, and gi the evaluation given to the i-th dimension (in terms of the respective scale; for example, if the first scale is Yes/No then g1 is 0 or 1, if the second scale is Score out of 100 then g2 is the score divided by 100, etc.). Let f represent a factor used by the assessor to adjust the suggested grades by up to 20% either way (i.e. f between 0.8 and 1.2).
Also, it will be possible to have both subjective and objective dimensions in the same assessment form (i.e., some dimensions having quantitative grades and some other dimensions only with qualitative grades).
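A sketch of how such a single-formula calculation could be computed (the normalisation of every scale to the [0, 1] range and the names are my assumptions):

```python
def workshop_grade(weights, grades, factor=1.0):
    """Weighted mean of per-dimension grades, scaled by the assessor's
    adjustment factor (allowed to move the result up to 20% either way).

    Each grade must already be normalised to [0, 1] in terms of its own
    scale: Yes/No -> 0 or 1; Score out of 100 -> score / 100; etc.
    """
    assert 0.8 <= factor <= 1.2, "factor may adjust by at most 20%"
    weighted = sum(w * g for w, g in zip(weights, grades))
    return factor * weighted / sum(weights)

# Dimension 1: Yes/No scale, answered Yes -> 1.0, weight 1
# Dimension 2: Score out of 100, got 75   -> 0.75, weight 2
workshop_grade([1, 2], [1.0, 0.75])   # (1*1.0 + 2*0.75) / 3 = 0.8333...
```

The normalisation step is exactly where the different scales are reconciled, which is the part the proposal relies on.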
I do not agree. Grading strategies represent the pluggable method of how the assessment of the work is done. They are not limitations at all.
What you have described is just a case of the Accumulative grading strategy. Note that, for example, the "Number of errors" strategy (also known as "Error banded") can't be described by such a formula.
IMO Moodle teachers/course creators do not have a problem understanding the concept of modularity, as every Moodle course is a set of modules. A grading strategy is like an extension of this concept. Plugins are everywhere in Moodle - gradebook, admin reports, etc.
Also note that, from a developer's point of view, there are even more pluggable components of the Workshop, like the allocator (assigns submissions to reviewers), the auto-assessor (computes the grades for assessment), etc. So sorry, no, the concept of grading strategies is going to remain.
I have used the module many times but have had some problems in the past. If I were composing a wish list I would want the following:
1. Flexible and changeable due dates / peer editing dates. I have made the mistake more than once of using the wrong due date and tried to change it only to have the Workshop become tangled up. Setting dates in stone is a recipe for disaster. It would be great to change a due date as you see how long students actually need or to accommodate a student that has been absent.
2. Open editing: I would like students to have the chance to edit work up until I mark it. Half an hour of editing time is just not enough. Often students realize a mistake the next day and want to change it. If I have not marked the assignment I do not see why students cannot go in and edit it. It would be great if they could edit without having me delete their work manually for them. This is a bottleneck. I like to have students start the assignment at school, and then work on it at home. A button that indicates they are ready to have it marked would be great.
3. I would like to choose from rubrics that I have already made. Re-typing and/or tweaking rubrics each time is time-consuming.
4. I would like to be able to save comments that I use often with the rubric so that I do not have to retype them for each class.
5. A major issue is where I can comment on a piece of writing. I really like the 'Marginalia' module (is it even a module?) which allows comments to be made at any given point in a piece of writing. Anyway, it is really important to be able to comment right where a mistake is made rather than copying and pasting.
6. The formatting (CSS) of a teacher created rubric needs to be redone in the new workshop module. Some better thought out CSS would go a long way. I would love to see a tabbed rubric which would help avoid laborious scrolling. Basically the rubric needs to be more compact.
7. The comment boxes need to be bigger or at least the last 'general comment' box needs to be bigger.
8. Bold, font, italics, and color need to be made available when making comments. The option to enlarge editing boxes like you can do elsewhere in Moodle would be great.
My apologies if any of these has already been covered. I do not have time right now to read through all the comments.
Best of luck David.
thanks a lot for your feedback. Your experiences are very valuable for me.
re 1 - yes. By default, the teacher switches phases manually. She can use date-based deadlines as an optional feature.
re 2 - yes. That is how I and my students work with Moodle, too. They start working on a submission at the school, upload it at the end of the lesson, download again at home and upload the final version when they finish it. I like the idea of marking the submission as the final version.
re 3 - I am going to write an Export/Import feature for all assessment forms. It should solve the retyping issue
re 4 - yes, they are called stocked comments in the recent version
re 5 - thanks for pointing at Marginalia - I did not know about such great tool! I will definitely consider including it in some future release!
re 7 & 8 - this should be per-user option as others (including me) may prefer plaintext-only smaller boxes. IIRC, Moodle 2.0 ships new embedded HTML editor that allows switching from WYSIWYG to plaintext form and back and allows resizing as well.
Thanks again for your participation!
I don't know if my following question is already answered elsewhere (I can't find it, sorry):
I want to leave a large group of students working alone in a workshop, and I want to participate (as teacher) only when peer assessments present a large dispersion, more specifically, when some student is having very different evaluations on his/her work.
Supposing that the situation is clear... Is there going to be any automatic mechanism to inform about this lack of consensus (so the teacher can go and check on students)?
yes, you have exactly described the proposed behaviour. I plan to have some sort of status message with colour highlighting to inform the teacher about the reliability of the grade for a submission.
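A minimal sketch of such a consensus check (the threshold value, the 0-100 scale and the function name are my assumptions, not part of the spec):

```python
from statistics import pstdev

def needs_teacher_attention(peer_grades, threshold=15.0):
    """Flag a submission whose peer grades disagree too much.

    Grades are assumed to be on a 0-100 scale; a submission is flagged
    when the standard deviation exceeds the chosen threshold, or when
    there are too few assessments to measure consensus at all.
    """
    if len(peer_grades) < 2:
        return True
    return pstdev(peer_grades) > threshold

needs_teacher_attention([70, 72, 68])  # False: peers agree
needs_teacher_attention([30, 90, 55])  # True: large dispersion
```

A report could then colour-highlight exactly the flagged submissions, so the teacher only steps in where consensus is missing.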
Sorry to ask such a puny question, but I was curious as to whether peer-review will have the option to be anonymous or to be known. I saw anonymous in the spec, but didn't easily see a way for the students to know who is grading their paper (this may be atypical, but I thought I'd ask).
Thanks and we're all looking forward to it!
thanks for asking this. Yes - there will be a way to control the anonymity of reviewers and/or authors. In the first versions of the spec, there was a module setting controlling this. After some discussions with Eloy and Sam, I modified it a bit according to Eloy's comment E9. Anonymity will be controlled by capabilities.
Basically, there will be three capabilities:
- view the name of the author of the reviewed submission (allowed for role Student by default)
- view the names of peers reviewing one's submission (not set for Student by default)
- view all participants' names (allowed for Teacher and Editing teacher by default)
The teacher can easily change the anonymity mode by overriding permissions locally for the given Workshop.
In my institution we're working with the 1.94 version (I think...). Can I suppose that with your new Workshop this kind of problem will be gone?
Thanks, and sorry for my bad English!
- Peer-assessment (optional)
- the module randomly selects a given number of submissions to be reviewed/commented on/evaluated/assessed by each student.
Again, my apologies if I have misunderstood the documentation - it's just that the Workshop comes so tantalizingly close to providing the functionality I need - but it is potentially not nearly so useful if it "randomly selects" work for peer review.
IMO the functionality you need is closer to the Forum than to the Workshop. Why not have a Forum where all your students submit their work (as online text or as an attachment) and then all other peers can see and comment on it (by posting a reply)? You may want to use the "Everyone can start one thread" or "Question/Answers" forum type. The latter will allow students to see others' posts only after they have submitted their own work.
Given that, I do not plan to implement this "fire at will" feature in Workshop 2.0. On the other hand, such a thing can quite easily be plugged in as an additional allocator in the future.
Good luck with Moodle!
Thanks very much for your response. I will certainly experiment further with the Forum functionality to see if it will meet my needs.

My only concern here is that there doesn't seem to be any way to limit students to rating and commenting on only the first post in a particular thread (i.e. the post in which the student uploads the exercise to be commented on). As I understand it, Forum will always allow students to comment on earlier comments (as you would expect of a Forum module) - so rather than being confined to commenting on the submitted work, students can comment on the comments of their classmates (obviously not ideal for my purposes). Also, if I allow students to rate the posts in a Forum, they are authorised to rate all posts - i.e. not only the first post containing the exercise intended to be commented on, but all the subsequent posts commenting on that first post - again obviously not so good for my purposes.

That's why I was hoping it might be possible to have the functionality I outlined in my earlier post within the new version of Workshop (i.e. a functionality where every member of a workshop group can comment on and rate work by all the other members of that group - but not comment on the comments/ratings of the other members of the group). I'd assumed it would be easier in Workshop, as opposed to Forum, to make a single piece of submitted work the focus for comments and rating, rather than treating the commentary on that piece of work as an unfolding conversation/discussion. I guess I will have to look at other options, as you suggest.
Again many thanks for your reply
I have a few feature suggestions:
(1) I would like -- from the student's perspective -- for the instructor-provided examples to be indistinguishable from the student examples. My concern is that students will put more effort into spoof assessments than peer assessments. But if they cannot tell the difference, this problem diminishes.
(2) I currently use Forum for peer assessment. It's set up as the "Lakeview Journal of Sociology". Students not only gain from reviewing their peer's research reports, but they are also being socialized into professional practices. I'm looking forward to the added rigor and automation that Workshop 2.0 promises, but...I don't want to lose the public aspect of student-published work.
Will it be possible, in Workshop 2.0, for a selection of student work to be publicly visible to the entire class? Ideally, Workshop would select the top five scoring submissions to that particular assignment (or whatever number the instructor specifies) to publish as exemplars of first-rate work.
This gives students a social motivation to excel, which I find more productive than a points-based motivation. And my Moodle journal will function more as a real scientific journal, in which researchers compete to get published.
re (1) - I can see your point. However, example submissions in Workshop are intended for reviewers to practise the rating procedure, to play with the assessment form (e.g. a rubric) and to reach a common understanding of what the scale concepts mean. As research results show, training raters on such examples increases the reliability of the assessment. Also note there are actually no "student examples", so the chances are I misunderstood your comment (?)
re (2) Yes, in the last Workshop phase, the teacher will be able to publish selected submissions. Not only the best ones; whatever submissions she/he selects will be available. IIRC, in the current version, this feature is called the League table.
re (1) My concern is that students will apply themselves to the spoof assessments, because they know they are being graded on this performance. They may then slack off during the peer assessments if they believe that they are not graded on the quality of peer assessment but only on completion of peer assessment. However, the training/calibration feature is so crucial that I would not want to compromise it with excessive tinkering.
re (2), capacity for publishing student work -- great news, thanks.
I am so looking forward to seeing the final results of your efforts. Thanks for taking this on. It will prove a major contribution to the overall Moodle project. I think it has the potential to be Moodle's signature feature.
IMO this is a matter of the Workshop assignment and of explaining to students in detail what the whole activity is about. Also, the teacher can set the ratio of the grade for submission to the grade for assessment.
This seems like a reasonable place to plug in a comment, so here goes.
1) The value of workshop exercises for 'training' cross-institutional raters has so far taken second place to training the students. I think there is tremendous scope in Moodle for strengthening shared understanding of assessment, particularly here.
2) A small plea for a version of the new module that allows me to require students to rate a number of my exemplars (preferably a 'random selection' or 'random selection from categories') and tells me how they did, but WITHOUT them having to submit anything at all.
The objective is to prepare students for their own submissions by looking at top middle and bottom of previous work and forming a judgement.
you will be able to use Workshop 2.0 for this, I am sure. You can either use the functionality of example submissions, where students may/must assess given examples before they actually start working on their submissions or before they start the real assessments.
Alternatively (and this might fit your needs better), you can have several fake accounts and submit examples using these, then allocate the submissions and let your real students participate just in the assessment phase of the workshop. Note that students do not need to have their own work submitted to become reviewers. This is a new feature in Moodle 2.0.
Sounds good - very pleased with first attempts on moodle 1.9.7, so looking forwards to getting the keys for the new version.
Thanks for all your work on this - I really appreciate the contribution made by you folks with the better tech skills.
Would it be possible to include customised grade schemes in the workshop, i.e. not 1-100 but maybe A - E or Pass, Merit, Distinction?
Thanks in advance
of course. You can define such scales in your course and then just use them in your assessment form (which is actually a standard feature of all graded modules in Moodle).
Thanks David, but it doesn't seem possible in the current Workshop. I have custom scales defined but it doesn't let me use them. Is that what you would expect?