Submission grades inaccurate

by Rebecca Critz -
Number of replies: 5

This is a follow-up to a previous discussion in which this exact issue was already addressed. Regardless of the "yes/no" answers given by reviewers, the submission scores are always calculated at the maximum amount.

The "Number of errors" grading type is selected, and I manually set the grade mapping table, as described by David Mudrák. I then re-calculated grades, also as described. The submission grades are still at the maximum amounts.

What are my next steps?

In reply to Rebecca Critz

Re: Submission grades inaccurate

by David Mudrák -

Hmm, that's weird. Can you please paste a screenshot of the mapping table?

In reply to David Mudrák

Re: Submission grades inaccurate

by Rebecca Critz -
In reply to Rebecca Critz

Re: Submission grades inaccurate

by David Mudrák -

Right, so you have the mapping table defined. Now, as per the original post, you should switch the Workshop to the assessment evaluation phase, where you should see the button "Re-calculate grades". Once it is pressed, the Workshop should calculate the grades using this mapping table.
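For anyone following along, here is a minimal sketch of how the "Number of errors" strategy is *expected* to combine yes/no answers with the mapping table. This is an illustrative approximation, not Moodle's actual code: the function name, the weight handling, and the dict-based table are assumptions made for clarity.

```python
def grade_from_errors(answers, weights, mapping):
    """Illustrative sketch (not Moodle source) of "Number of errors" grading.

    answers: list of booleans per assertion (True = "yes"/pass, False = "no"/error)
    weights: per-assertion weights (an error on a weight-2 assertion counts twice)
    mapping: the grade mapping table as {max_errors: grade_percent},
             e.g. {0: 100, 1: 75, 2: 50, 3: 0}
    """
    # Weighted count of "no" answers.
    errors = sum(w for ok, w in zip(answers, weights) if not ok)

    # Look up the grade for the smallest error threshold that covers the count.
    for threshold in sorted(mapping):
        if errors <= threshold:
            return mapping[threshold]

    # More errors than the table defines: fall back to the lowest grade.
    return min(mapping.values())


table = {0: 100, 1: 75, 2: 50, 3: 0}
print(grade_from_errors([True, True, False], [1, 1, 1], table))   # one error -> 75
print(grade_from_errors([False, False, False], [1, 1, 1], table)) # three errors -> 0
```

The symptom described above, where every submission gets the maximum grade, would correspond to the lookup always landing on the zero-errors row regardless of the reviewers' answers.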

In reply to David Mudrák

Re: Submission grades inaccurate

by Rebecca Critz -

Yes, I had already tried that step, as the original post described. It did not recalculate the submission grades. When I did my own assessment of each workshop, THOSE grades were factored in, but the assessments that had already been saved did not change.

I did see (in another thread, I think?) a recommendation to use the "Clear assessments" tool from the evaluation phase. However, I did not try it, because I was not confident it wouldn't erase the assessment forms the students had filled out, and I didn't want to ask them to go back in and re-save. They were already frustrated, and I just didn't want to add any more stress.

I ended up just using my own assessments to determine the submission scores. The assessment scores seemed to calculate as expected, so I did factor them into the students' final workshop scores.

I see that this has been an issue for a while. Do you think this activity will ever be "friendly" enough to actually use? I had planned to present it to our faculty tech team as a possible alternative to traditional discussion forums in online classes, but as it stands now, I don't think the advantages are worth the complex setup and the potential problems.