Sounds like a tough position you're in! If your supervisor isn't giving you any support, you may just need to look at a different project.
To run the training side of the analytics you will need a copy of a Moodle database with a number of courses and historical logs. Ideally the courses will already have start and end dates set in the course table; if not, you can run some scripts that will set/guess the end dates of some of the courses.
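To make the "guess the end date" idea concrete, here is a rough sketch (not the actual Moodle script, and the sample rows are made up): take the timestamp of the last log entry recorded against each course as its guessed end date.

```python
from datetime import datetime

# Hypothetical sample of (courseid, timecreated) rows, as you might pull
# them from a logstore table in a Moodle database export.
log_rows = [
    (2, datetime(2020, 3, 1)),
    (2, datetime(2020, 6, 14)),
    (3, datetime(2020, 2, 10)),
    (3, datetime(2020, 5, 30)),
]

def guess_end_dates(rows):
    """Guess each course's end date as the time of its last log entry."""
    end_dates = {}
    for courseid, timecreated in rows:
        if courseid not in end_dates or timecreated > end_dates[courseid]:
            end_dates[courseid] = timecreated
    return end_dates

print(guess_end_dates(log_rows))
```

The real scripts work against the database directly, but the heuristic is the same: without a plausible end date the analytics can't tell which activity belongs to a finished course.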
The "prediction" process currently requires active logs - or a way to generate user activity after the training process has been run (which is not all that easy to do on a testing site).
There is a research data set here which contains a logstore and user data export, but it doesn't contain the course table that the analytics processes need. You might be able to fake it by generating a set of courses that correspond to the courseids in the logstore table - if you're lucky it might even be just one course in that data set, but I haven't looked at using it myself.
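Faking the course table could look something like this sketch - purely illustrative, with made-up field values; you'd still need sensible start/end dates for the analytics to work:

```python
# Hypothetical sketch: build minimal course rows from the distinct
# courseids found in a logstore export.
def fake_course_rows(courseids):
    """Generate one placeholder course row per distinct courseid."""
    rows = []
    for cid in sorted(set(courseids)):
        rows.append({
            "id": cid,
            "fullname": f"Generated course {cid}",
            "shortname": f"GEN{cid}",
            "startdate": 0,  # placeholder - analytics needs real values here
            "enddate": 0,    # placeholder - see the end-date guessing above
        })
    return rows

# courseids as they might appear (with repeats) in the logstore table
print(fake_course_rows([2, 2, 3, 5, 3]))
```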
Depending on your skill set and how much time you have to work on this, I have a number of "development"-related ideas that would be useful:
One task that might be useful from a research perspective, and would produce something useful for Moodle, would be to implement a better process for testing a model in Moodle. At the moment the training process collects 100% of the available data and sends it to the ML backend. When we were testing this we'd store a copy of that full data set on the TensorFlow server, train a model manually using 80% of the data, and then send the remaining 20% as a "prediction" run to see what sort of response we got and how accurate the model was.
If there was a way to do that within Moodle it would be really nice - you could make some changes to a new model, add some indicators etc., then run a "test" process which would send 80% of the data collected from Moodle as a training run, run a prediction process on the remaining 20%, and give some feedback on the accuracy of the model.
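The evaluation loop itself is simple - here's a minimal sketch of the 80/20 idea with a deliberately trivial "model" (a majority-class baseline) standing in for the real ML backend; none of this is the actual Moodle analytics API:

```python
import random

def train_test_split(rows, train_fraction=0.8, seed=42):
    """Shuffle and split rows into a training set and a hold-out set."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

def accuracy(predictions, labels):
    """Fraction of hold-out predictions that match the true labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Made-up binary labels (e.g. "student at risk" yes/no per enrolment).
labels = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
train, holdout = train_test_split(labels)

# "Train" a trivial majority-class baseline in place of the real model.
majority = max(set(train), key=train.count)
preds = [majority] * len(holdout)
print(f"hold-out accuracy: {accuracy(preds, holdout):.2f}")
```

The interesting work is not the split - it's wiring this into Moodle so the "test" run reuses the same indicator collection and prediction plumbing as a normal training run.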
There is quite a bit of work involved there though - you'd need to set up the system, understand it, and have some PHP development skills to build this.
Other, simpler tasks might be to look at the 3rd party activity plugins used by your organisation and see if you can add machine learning analytics indicators to those plugins (not many 3rd party plugins include support for the Community of Inquiry analytic indicators). Here's a couple I've written for 3rd party plugins recently: