When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Gavin Henrick -
Number of replies: 11

I remember when I wrote my blog post on rollovers and how people did them, it was based on experience with many organisations - all different, and pretty much all with their own local culture of how they
a) used a Moodle course
b) enrolled people
c) decided how much of their coursework existed in Moodle

So some of the things I noticed were:

  • the course start date field may exist, but it was not the actual start date of the specific module for all users in it - often it was just the date the course was created, manually or by an external bulk create process
  • the end date probably does not have a value - it's new in 3.2, so most SIS integrations (manual or automatic) would not have it yet
  • a course may have the same shortname/id year on year
  • most do not use activity completion
  • assignment activities are liable to be hidden after end dates, and sometimes unhidden again - but not always


About course enrolments

  • There may be multiple sets of students from different modules in the course at the same time, or back to back without resets - some students are left with access for 3-4 years while in college
  • a course may be reused year on year with just a reset
  • a course may exist for years in the Moodle site and just be used as a reference, while it is cloned for each new running of the module


Grades

  • The final grade for the overall course is most likely not in Moodle (as most courses are blended, not fully online)
  • The gradebook may have formative and summative items mixed together - many institutions export the summative or genuinely high-stakes graded activities into their student record system


What does this mean for doing generic analytics?

  • Can we rely on the course start/end dates - or on the enrolment start/end dates?
  • Can we use activity completion - or is view / submit / graded more dependable?
  • The gradebook is only what's in it - and it's all we have to go with - how do we know what it means?
  • Some users may be in one assignment and some in others if groups are used
  • Some gradebook entries may not actually contribute to a real final grade


So what can an analytics engine potentially and reliably use across most types of course?

Dates

  • Perhaps ongoing "periods of time" can be used, looking at who and what is happening within a period of time
  • Students who are active during a set period (if analysing by week, then only those active in that week, for example) - comparing only within that window (see the sketch after this list)
  • Grading of activities - we can only go on what is there or not
  • Activity on activities (or lack of it)
  • the detail of rubric responses on assignments
  • the detail of marking guide responses on assignments
  • overall access patterns to non-activity items (messaging, gradebook, calendar)
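
To make the "activity within a set period" idea concrete, here is a minimal sketch (plain Python rather than Moodle code; the log format and function names are assumptions for illustration only) of counting activity in a single week and comparing students only within that window:

```python
from collections import defaultdict
from datetime import timedelta

def weekly_activity(events, week_start):
    """Count log events per student within one week; students inactive that week are ignored.

    `events` is assumed to be an iterable of (userid, timestamp) pairs pulled
    from the activity log - an assumed shape, for illustration only.
    """
    week_end = week_start + timedelta(days=7)
    counts = defaultdict(int)
    for userid, ts in events:
        if week_start <= ts < week_end:
            counts[userid] += 1
    return counts

def relative_engagement(events, week_start):
    """Express each active student's count relative to the median for that week."""
    counts = weekly_activity(events, week_start)
    if not counts:
        return {}
    ordered = sorted(counts.values())
    median = ordered[len(ordered) // 2]  # at least 1, since only active students are counted
    return {userid: n / median for userid, n in counts.items()}
```

The point of the sketch is only that the comparison group is limited to students active in the same window, rather than everyone ever enrolled.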


In short: what is reliable, consistent data within inconsistent course structures, design and implementation?

My 2 cents.

What do you think?

Average of ratings: Useful (5)
In reply to Gavin Henrick

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Elizabeth Dalton -

Thanks, Gavin.

All of these points are valid. We do have a way of estimating course start and end for courses that are used once (based on majority of activity in course), but this gets much harder with courses that are reused without reset. We have also been hoping to develop a bulk grade import tool so we have some valid measures of success (for academic courses) on which to base predictions.
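
For illustration, here is one rough way such an estimate could be made (a hypothetical sketch, not the actual implementation; the only input assumed is a list of event timestamps for the course):

```python
def estimate_course_window(timestamps, trim=0.05):
    """Estimate course start/end from activity timestamps.

    Returns the window containing the central share of logged activity, so a
    few early setup clicks or late stragglers do not skew the estimate. The
    5% trim on each side is an arbitrary illustrative choice.
    """
    if not timestamps:
        return None, None
    ordered = sorted(timestamps)
    n = len(ordered)
    lo = int(n * trim)
    hi = max(lo, int(n * (1 - trim)) - 1)
    return ordered[lo], ordered[hi]
```

As noted above, this kind of estimate breaks down when a course is reused without reset, because the activity of several cohorts is folded into one distribution.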

We are looking into adding "expect start" and "expect complete" date fields to enrolment records as one way of improving predictions, but obviously these fields won't help if they aren't filled in.

There is an unavoidable tradeoff between data quality for predictions and effort involved in maintaining the data. Much of this effort can be automated if courses and enrolments are generated from SIS data, but changing the automation scripts is an effort in itself.

Our question in the analytics team is, what tradeoff are people willing to make? How much data would site admins, teachers, etc. be willing to enter to improve predictions, and conversely, how low an accuracy rate on predictions are people willing to accept if their data is incomplete?

We have no assumptions about this in the Learning Analytics team. We need to hear from the Moodle community on this.

In reply to Elizabeth Dalton

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Mary Cooch -

From a K12 point of view, I can definitely relate to much of what Gavin says. In my former establishment, course start dates were indeed meaningless as they were the date the admin made the course. They'd enrol the students then or later (eg at the start of a term). For courses with younger, non-exam students, the courses would simply be reset each year, previous students kicked out, new classes brought in and none of the data saved. For older, exam-class students, the students would be retained in an archived group (Class of 2015) and the new students brought in, meaning Moodle courses with years of students together.

The gradebook was used inconsistently across departments, and (as this was a school) much grading was done offline and not synced with any Moodle grades. The greatest use of Moodle assignments was (and still is) merely to record that work had been submitted. One department made a custom scale - handed in / not handed in - for that purpose. They hardly ever use(d) activity completion, and that's a shame because I think such reports are really useful. Course completion, however, is kind of meaningless in such environments.

During the time I was there and we had Moodle, final exam results went up. Management was often asked "what evidence do you have that Moodle helped improve the grades?" but there wasn't any, really. It would have been (and will be) excellent to have something easily used by schools and colleges. As an aside, my old school, and I suspect many others, is in love with SISRA Analytics - but I doubt anything Moodley is taken into account.

In reply to Gavin Henrick

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Marcus Green -
My background is in UK FE (mainly technical education, possibly similar to Australian TAFE) and university degree courses. I don't recall if start and end dates were stored in FE, but they would not have been particularly meaningful, as a large percentage of students would trickle onto courses up to weeks after the start and a small percentage would hang around for up to a month after the official course end.

In both organisations courses were not re-used; a new course was created automatically, typically close to the course start date. The final course grade was not stored in Moodle but was hand-entered into the School Management System. The SMS was the final arbiter of a student's overall grade.

The gradebook contained graded activities that were formative, summative, and sometimes just a way of getting the students to submit stuff that was never expected to be graded.

To address Gavin's final question about "what is reliable data within inconsistent course structures": my answer is that there is none, and to get it you need to change organisational policies and individuals' attitudes.

I rather like activity completion as a measure of progress because it is turned off by default, meaning that where it is turned on there has been a deliberate decision to use it, and also because it is binary, i.e. you have either completed or not completed an activity. This is in contrast to other areas of grading, which could vary wildly between two different teachers.
Average of ratings: Useful (1)
In reply to Marcus Green

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Elizabeth Dalton -

"a large percentage of students would trickle on to courses up to weeks after the start and a small percentage would hang around for up to a month after the official course end"

Do you think there's any correlation to be found between these later starts and eventual success in the course? I'm not familiar enough with that kind of institution to be able to offer a guess.

We know that different institutions will use Moodle differently, and I think that should be fine, as long as we know HOW each institution is using Moodle so we don't make predictions based on some other use. If we have a way to store that value at the site or course level, I think it would greatly help in making predictions.

In reply to Gavin Henrick

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Tom Murdock -

The questions you raise make perfect sense to me as conversations I always want to have with Moodlers.  

  • Do you know what settings are necessary to make the best use of a Moodle dashboard?
  • What learning outcomes do you want to address within Moodle?
  • How will you know if outcomes have been fulfilled?
  • What elements are consistently designed across a set of courses so that they can be compared?
  • What have your teachers/educators/trainers agreed upon as "consistent practice"?
  • What factors make certain elements more or less successful than other elements? 
  • Do the various course designs create value for students, or are the different patterns confusing?
  • If you teach hybrid courses, what activities do you want to save for face-to-face meetings?

Backward Design (Wiggins) can't be ignored in online courses, especially when you intend to compare apples to apples with analytics.  Plus, it is really liberating to find some agreement among peers on what they want to see in online instruction.

-Tom  

In reply to Tom Murdock

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Dan McGuire -

I agree with your points, Tom. I do think this discussion is in the wrong forum, however. The points you make are best addressed and analyzed (thereby producing learning analytics) with competencies and thorough and detailed reporting on the competencies.

IMO, analyzing behaviors and course characteristics that aren't necessarily (as a normal and necessary part of the course) related to authentic evidence of student learning is not learning analytics, even though the term 'learning analytics' in current parlance usually does mean analyzing behaviors and course characteristics. The problem Mary points out - that faculty don't frequently use Moodle to record assessments, especially of off-line activities, even though doing so is actually quite efficient - illustrates the problem with analyzing learning. It is only partially a software feature development issue. It is more of a professional services / professional development issue, which will be made easier as competency reporting gets easier to use. Software developers will need to listen long and frequently to professional services people - anathema for some developers.

And one other, somewhat related point: all teaching is either online or hybrid (certainly for our purposes on this forum); it's just a matter of how much time is face to face. Here's an example from my days in the classroom, which I will be missing this time of year, for a week or so: http://dangerouslyirrelevant.org/2010/09/writing-the-elephant-in-the-living-room.html


In reply to Dan McGuire

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Elizabeth Dalton -

There are certainly "best practices" we can offer to institutions about how to improve their predictions by more complete capture of data, e.g. competencies, but I think we also have to acknowledge that not every institution works the same way, and competencies are not always appropriate. For example, in a Social Reconstruction curriculum using Social Constructivist pedagogy, bringing students into participation with the community of practice may be the main goal, and participation patterns in the forums, etc. may be the most important predictor.

In reply to Gavin Henrick

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by David Monllaó -

Hi,

Thanks for reminding us of the chaos that reigns over rollover processes in Moodle, Gavin. It is important to consider all these scenarios during the analytics API design. Lack of course structure, missing course start and end dates, and no standard procedures to roll over courses... these are problems we face not only when designing analytics models but also when designing the analytics API itself. We are coding the API according to those use cases, but there are limits. I don't think we can have a good solution that works for all courses regardless of their structure or the correctness of their start and end dates, simply because the information we need to make predictions is not available. As for what an analytics engine can reliably use: I agree that user activity is something we can rely on, I don't think marking guide responses or rubrics are available in most types of courses, and I also agree that we cannot rely on Moodle's gradebook either.

The prediction model we currently have in 3.4 (master branch) for students at risk of dropping out of courses is based on students' activity and does not look at student grades, so it is aligned with what you think an analytics engine can use across most types of courses. The indicators we calculate and use to get predictions are based on the community of inquiry model; there is info about them in https://moodle.org/course/view.php?id=17233 (I don't know much about it). The summary is that they are based on students' activity in course activities: posting stuff, reflection, updating submissions.... I am personally not a fan of limiting prediction models' indicators to specific paradigms and I would be keen to test different indicators (e.g. rubrics, if available) or whatever community members propose; in any case I think that this set of indicators is a good start and we can expand the list in future releases - of course, contributions are more than welcome.


To give some more info about the API design and the included students at risk of dropping out model design:

  • Activity (in different shapes, grouped...) is the most reliable indicator across courses, but it is important to consider that dealing with activity logs is not cheap in terms of system resources. We cannot just check each student's activity across a site on a weekly basis, performing complex calculations that include other course members' averages (which we need in order to set some context for each student) - at least not on a mid-size site, and we should aim to have a solution for all site sizes. The API allows each model to control what should be analysed and what should be ignored.
  • In the students at risk of dropping out model included in 3.4 we ignore all courses that do not have a start date and an end date. We do this because without a proper start and end it is impossible to know which activities a student should have interacted with. There are some cases where we could do something (availability is set, assignments have due dates...) but again, we want a solution that works for as many courses as possible. We considered that in most cases people have not bothered setting a proper course start date because nobody cared if it was not used, and the same with the course end date. Now they have two reasons to set them: the dashboard and this. I would even vote to include a message at the top of the main course region if course start and end are not set, if the start date has not yet arrived, or if the end date has already passed.
  • We are aware that there are courses that do not really have a start and an end, as Mary said, and I agree that we should try to have a solution for them. I have been thinking a lot about it since I started reading this chain of messages and I will propose something here during this week; sorry, no guarantee that it will be ready for 3.4.
  • The prediction model for students at risk of dropping out of courses that is included in master is an experimental model that serves as an example of what you can do with this new analytics API. The API is composed of different entities that can be extended separately without changing core code, so anyone can override these behaviours with their own. For example, if you want to get predictions on a weekly basis from the beginning of a course, regardless of how much you will stress your servers, just code a new time splitting method for it; all indicators will continue working as they do when splitting the course into 4 parts (see the sketch below). If you want to look for students who will fail the course instead of students who will drop out of it (good luck with that), just extend the students at risk target and change the calculation. More info in https://docs.moodle.org/dev/Analytics_API
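
To illustrate the time-splitting idea above (a language-agnostic sketch in Python rather than the Moodle PHP analytics API; all names here are invented for illustration), splitting a course into four equal ranges versus weekly ranges could look like this, with the engine computing indicators and making a prediction at the end of each range:

```python
from datetime import timedelta

def split_into_equal_parts(start, end, parts=4):
    """Split a course's [start, end) window into `parts` equal ranges (the 'quarters' style)."""
    length = (end - start) / parts
    return [(start + i * length, start + (i + 1) * length) for i in range(parts)]

def split_weekly(start, end):
    """An alternative splitting: one range per week from course start to course end."""
    ranges = []
    cursor = start
    while cursor < end:
        ranges.append((cursor, min(cursor + timedelta(days=7), end)))
        cursor += timedelta(days=7)
    return ranges
```

Swapping one splitting function for the other changes how often predictions are generated (and how much load is placed on the server) without touching how individual indicators are calculated, which is the separation described above.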
Average of ratings: Useful (1)
In reply to David Monllaó

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Elizabeth Dalton -

The problem with trying to make predictions for a course with no start and end dates is: what are you trying to predict? If you are making a prediction about student success by a certain point in time, you need to know what that point in time is.

There are two cases in which a course has no end date, but we want to make some kind of prediction of student success.

In the first case, if the course stays open perpetually, but the students are enrolled for fixed points in time, those times (enrollment start and end) should be used as a basis for the predictions. Moodle supports this by allowing the teacher to exclude students who don't have an active enrollment from course reports, so it would be safe to leave the students enrolled. You can reset a course and leave students enrolled while removing their data from the course, I believe: https://docs.moodle.org/33/en/Reset_course

The second case is for self-paced courses. In those courses, there really is no end date. We can develop a different sort of predictive model for that case: is the student likely to ever complete the course, based on their activity to date? This kind of model would need very clear completion criteria (preferably using Course Completion).
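
As a rough, hypothetical sketch of that second case (not part of any Moodle code; the inputs and feature names are assumptions), a self-paced model could frame the target as "will this student ever complete?" and build features from activity to date against clear completion criteria:

```python
def self_paced_features(completed_activities, required_activities, event_times, enrol_time, now):
    """Build simple features for a 'will the student ever complete?' target.

    `required_activities` is assumed to come from explicit completion criteria
    (ideally Course Completion); everything here is illustrative only.
    """
    done = len(set(completed_activities) & set(required_activities))
    days_enrolled = max((now - enrol_time).days, 1)
    recent = [t for t in event_times if (now - t).days <= 28]
    return {
        "fraction_required_done": done / max(len(required_activities), 1),
        "events_per_day": len(event_times) / days_enrolled,
        "active_in_last_4_weeks": bool(recent),
    }
```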

In reply to David Monllaó

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by David Monllaó -

The best I can think of for self-paced courses, to get an effective insight, is to use the new action events API (currently used for the dashboard) to retrieve the student's action events from the last week that are displayed in the dashboard (attempt an open quiz, submit an assignment whose due date is in xxx, activities that require completion...) and see how many of them are still actionable, meaning that there are still actions the student is supposed to do. I would add some other basic indicators that are already coded to the model, to "try to" give some insight if there are no activities with action events, which is perfectly possible.

This model would work in parallel to the current students at risk of dropping out model based on the community of inquiry. There shouldn't be any changes required at the analytics API level. I have been looking at the action events API and talking with Cameron Ball (who participated in the design), and the action events API supports what I am proposing. I just need to check how this use of it will perform (in speed terms), because the API is quite single-user focused and iterating through potentially all student enrolments on a site on a weekly basis is not a quick task; we need this to work on all site sizes. In any case, if this model does not perform well we could restrict its use to self-paced courses.
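
A minimal sketch of the "still actionable" count described above, assuming each student's action events are already available as plain records (the record shape is invented for illustration and is not the real action events API):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ActionEvent:
    """A simplified stand-in for a dashboard action event (assumed shape)."""
    userid: int
    actionable_from: datetime
    completed: bool

def remaining_actionable(events, now):
    """Count, per student, the events that are open for action but not yet done."""
    counts = {}
    for ev in events:
        if ev.actionable_from <= now and not ev.completed:
            counts[ev.userid] = counts.get(ev.userid, 0) + 1
    return counts
```

The expensive part, as noted above, would be gathering those events for every enrolled student on a large site every week, not the counting itself.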

In reply to David Monllaó

Re: When people have different uses for Moodle - what are the consistent concepts that can be analysed?

by Elizabeth Dalton -

This is a good strategy. Some members of MUA would like the Dashboard display to be guided more by which activities must be completed as part of course completion, but if I understand you correctly, that enhancement would already feed into the Action Events API. So the prediction would be: the student will finish the remaining required Dashboard events (at some undefined point in the future), and probably the number of actions completed will be the best predictor, though we may find that some actions predict completion better than others (I'm betting that submission of an artifact for feedback will still be a stronger predictor than reading a page).

There would be no "quarterly" or "tenth/decile" timesplitting. Could we set up a timesplitting method of "weekly"?