Fourth Friday: Moodle Research Review for January 2020

Re: Fourth Friday: Moodle Research Review for January 2020

by Tim Hunt

First, can I say thank you very much for sharing this. Please keep it up.

There has been some interesting work done recently by Sally Jordan and collaborators at the OU, trying to unpick the difference in performance we see for male and female students on our physics degrees, which is a worrying problem. See some of the recent papers here: http://oro.open.ac.uk/view/person/sej3.html. Some of that is using data about what goes on in our Moodle.


Obviously, given my background, I am going to comment on the "Effects of the perceptions of online quizzes and electronic devices on student performance" paper you linked to.

I am afraid I am not very impressed with their results. "Participants who [engaged most with our ed tech] also obtained significantly better examination scores" is what everyone always finds in these sorts of studies where students self-select their use of the technology. It does not mean "the use of online quizzes helps to enhance students’ examination scores". It means some students were always going to do better than their peers. The students who are going to get an A for the course are probably going to knock off all the during-the-course activities without breaking a sweat too.

To actually show that your educational technology helps students, you are probably going to need a proper experimental design for your study. Often that is not feasible, in which case you may get somewhere if you can control for the student intake in some way (e.g. do your study in a second-year course, and control your findings using the first-year exam results). I have seen some convincing data along those lines at internal OU seminars, but I don't think most of it has ever been published. Also, even if you find that students who spend time with your technology genuinely do learn more, well, the main contributor to any learning is time on task, so if spending time with your tech helps students, that is good, but it is not necessarily better than whatever else the students could be doing with their time. Anyway, for the 'strongest' result of the paper, let's all say it together: "Correlation does not imply causation."
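To make that concrete, here is a minimal sketch in Python of what "controlling for the intake" looks like, using synthetic data. All the column names (prior_score, quiz_attempts, exam_score) are made up for illustration; this is not the paper's data or analysis.

```python
# A minimal sketch of the self-selection problem, with synthetic data.
# Names (prior_score, quiz_attempts, exam_score) are invented for
# illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

# Stronger students (higher prior attainment) also choose to attempt
# more quizzes, so quiz_attempts is confounded with ability.
prior_score = rng.normal(60, 10, n)
quiz_attempts = np.clip((prior_score - 40) / 5 + rng.normal(0, 2, n), 0, None)

# In this synthetic world the exam score depends on prior ability only:
# the quizzes contribute nothing.
exam_score = 0.9 * prior_score + rng.normal(0, 5, n)

df = pd.DataFrame({"prior_score": prior_score,
                   "quiz_attempts": quiz_attempts,
                   "exam_score": exam_score})

naive = smf.ols("exam_score ~ quiz_attempts", data=df).fit()
controlled = smf.ols("exam_score ~ quiz_attempts + prior_score", data=df).fit()

print(f"naive quiz 'effect':      {naive.params['quiz_attempts']:.2f}")
print(f"controlled quiz 'effect': {controlled.params['quiz_attempts']:.2f}")
# The naive coefficient looks large and 'significant'; once prior_score
# is in the model it collapses towards zero. Pure self-selection.
```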

The other thing about the paper that annoys me is that the authors seem to think that students can only use "online multiple-choice questions to check their knowledge", that is, for recall. That is the lowest possible level of Bloom's taxonomy, and not really the point of university education.

In my opinion (based on observing what works at the OU) the most effective use of online assessment is in practicing skills (so, maths, science calculations, languages). There, being able to practice the skill with immediate feedback is very powerful for learning.

It is also the case that multiple choice is the least useful sort of question type for practice. Rather than selected response, it is much better to set open-ended questions (numerical, short-answer, calculated, pattern-match, STACK, ...), ideally with lots of variants of each question, so that when students repeat the quiz to practice, they get different questions. This paper was about accounting, after all; numerical questions should have been easy.
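Just to illustrate the idea (a toy sketch only; in Moodle itself the calculated question type does this for you with wildcards and datasets), generating variants of a numerical accounting question takes a few lines:

```python
# Toy sketch: generating variants of a numerical accounting question,
# so that repeat practice attempts see different numbers.
# The question text and number ranges are invented for illustration.
import random

def depreciation_variant(seed):
    rng = random.Random(seed)          # deterministic numbers per variant
    cost = rng.randrange(10_000, 50_000, 1_000)   # asset cost
    salvage = rng.randrange(1_000, 5_000, 500)    # residual value
    years = rng.choice([4, 5, 8, 10])             # useful life
    answer = (cost - salvage) / years             # straight-line charge
    text = (f"An asset costs ${cost}, has a residual value of ${salvage} "
            f"and a useful life of {years} years. What is the annual "
            f"straight-line depreciation charge?")
    return text, round(answer, 2)

for seed in range(3):
    question, answer = depreciation_variant(seed)
    print(question, "->", answer)
```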

The paper worries about what happens if "quizzes are used to learn course content rather than to test knowledge". This does not worry me at all. Generally, active learning > passive learning. Read Richard Lobb's rant "How programming quizzes should work" in the CodeRunner documentation: https://github.com/trampgeek/moodle-qtype_coderunner/blob/master/Readme.md#appendix-how-programming-quizzes-should-work. Amen to that, say I.

Anyway, given that their quizzes seem to have been quite weakly designed, I am not that surprised that many of their other findings were weak. Also, it is very well established in the literature that what students like, or what they think will be effective, often bears no relationship to what actually helps them, which further weakens the results of this paper.

In reply to Tim Hunt

Re: Fourth Friday: Moodle Research Review for January 2020

by Elizabeth Dalton
Hi Tim,
Thanks for the link to OU research! I encourage anyone who is doing well-designed research in Moodle to post to our research repository. :) Maybe I’ll do a special edition of “Fourth Friday” all about OU research!

Regarding the paper you critiqued, yeah, I agree with all your points (which probably won’t surprise you). It wouldn’t have occurred to me that someone would seriously be worried about students using quizzes as a study tool (even limited ones like these). But then, I always feel any quiz or other assessment should be testing real application, not memorization, and if people want to start with a quiz to find out which material they need to spend time on, why not? Evidently some colleagues of these researchers objected to that idea, so their research was designed to show that, for the students who were likely to retake quizzes, there was no harm done. (They didn’t mention the usual problem that the highest-performing students tend, if anything, to over-study, while the weakest students don’t realize they need to study more.)

At least, that was my take on it. It wasn’t the strongest paper I’ve read in the past year by a long shot, but hey, it was a paper published in the past month in a relatively significant journal that used Moodle in the study... as you can see, I only found 4 such papers this month. ;) (I need to finish up some of my papers and submit them!)