We’ve been using the Feedback module for some time. Up to now, our Feedbacks have been set for single submission, using:
- a ‘Required’ ‘Multiple choice (rated)’ question, single answer allowed (dropdown list); and
- an optional ‘Longer text answer’.
This setup works reliably, and the exported Excel workbook provides calculated averages for the numerically rated questions, which is exactly what we need.
The calculation of averages relies on the ‘requirement’ that each respondent give a numerically rated value before the Feedback will be accepted when ‘Submit’ is selected. If a ‘required’ answer has been omitted, the individual’s responses are not accepted, and the user is prompted to supply the missing numerically rated answer.
For every numerically rated question, the average is calculated as the sum of (each numerical value × the frequency of that value), divided by the total number of people submitting responses to the Feedback.
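To make the calculation concrete, here is a minimal sketch of that average, using hypothetical data (the function name and figures are mine, not the module’s):

```python
# Illustrative sketch only: how the Feedback average for a rated question
# appears to be computed. The data below are hypothetical.

def feedback_average(value_counts, total_submitters):
    """Sum of (value x frequency), divided by all Feedback submitters."""
    weighted_sum = sum(value * count for value, count in value_counts.items())
    return weighted_sum / total_submitters

# Example: a question rated 1-5, where all 10 submitters answered it.
counts = {5: 4, 4: 3, 3: 2, 2: 1}   # rating -> number of respondents
print(feedback_average(counts, 10))  # -> 4.0
```

While every question is ‘Required’, the divisor (total submitters) always equals the number of people who answered the question, so the result is correct.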
I’ve just introduced a different style of Feedback, set to allow ‘Multiple submit’.
This means that students can complete one Feedback progressively through an 8-week residential course. When they select ‘Submit’, their answers are recorded and saved. They can then re-enter the Feedback to add new responses and, if they wish, amend earlier responses in the light of course developments.
Again, the Feedback employs a mixture of ‘Multiple choice (rated)’ questions (single answer allowed, dropdown list) and ‘Longer text answers’.
- However, it is not practical to set the ‘Multiple choice (rated)’ questions to ‘Required’.
- If that were done then in, say, week one, all the students would have to give ratings for every week covered by the Feedback in order to ‘Submit’, even though the course had not yet dealt with the topics to which the questions related.
Unfortunately, the Feedback calculation of averages does not cope with unreliable human beings!
For each numerically rated question, the average is again the sum of (each numerical value × the frequency of that value), divided by the total number of people submitting responses to the Feedback. The algorithm does not take account of the number of people who actually answered that particular question. The two numbers are not necessarily the same, and therefore the automatically calculated averages are not correct.
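The discrepancy is easy to demonstrate with a hypothetical example (function names and figures are mine, for illustration only):

```python
# Illustrative sketch only, with hypothetical data: when questions are not
# 'Required', some submitters skip a question, and dividing by all
# submitters understates the average for that question.

def average_over_submitters(ratings, total_submitters):
    # Current behaviour: divide by everyone who submitted the Feedback.
    return sum(ratings) / total_submitters

def average_over_respondents(ratings):
    # Desired behaviour: divide only by those who answered this question.
    return sum(ratings) / len(ratings)

# 10 students submitted the Feedback, but only 4 rated this week's topic.
ratings = [5, 4, 4, 3]
print(average_over_submitters(ratings, 10))  # -> 1.6
print(average_over_respondents(ratings))     # -> 4.0
```

The four students who answered rated the topic 4.0 on average, yet dividing by all ten submitters reports 1.6, which is the error described above.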
This new style of using the Feedback module will only grow in importance for us.
Is it possible to modify the Feedback module to calculate averages based on the number of respondents to each question, rather than the maximum number of respondents to the Feedback as a whole?