So, I'd be interested in community feedback on the following (Derek Chirnside's views in particular):
We need some means of comparing our current LMS, Moodle (remembering that we're moving to 2.8 over the summer, which will introduce new features and streamline others), with a 'hosted' LMS (that is, an offsite system maintained by a 3rd party, e.g. Canvas) in objective and replicable ways that compare apples to apples as far as possible.
Firstly, I think feature lists are irrelevant: every system we look at will claim to offer the same or equivalent features, so comparing lists of features doesn't get us very far.
Here's my suggestion:
Compare ease of completing common tasks
Make a list of reasonably frequent tasks that different stakeholders execute, and compare their execution in terms of number of mouse clicks, cognitive overhead ('easy' -> 'difficult'), and flexibility (options to configure to personal taste).
1. Stakeholders
- Students
- Teaching Faculty
- Administrative Assistants
- System administrators
2. Experimental setup
a) Design tasks
- Each task should have a single clearly prescribed goal which mimics a commonly executed task performed by a stakeholder in the real world.
- The description should include all the details normally needed to complete the task, so the subject doesn't have to spend time making up dates, times, and other required inputs.
- The tasks should be generic enough to be useful across systems, and not focussed on any characteristic particular to one LMS.
- Different tasks for different stakeholders (total # of tasks ?)
b) Equipment Setup
- Mac & PC with latest browsers (Firefox & Chrome), and screen recording software.
- Access to all LMSes being compared
c) Methodology
- Screen recording s/w switched on and subject logged in to LMS.
- Subject reads the task (online & paper copy available) and attempts to complete it within a certain time.
- Online help documentation available open in a separate browser window
- Repeat same task with other LMSes under comparison.
3. Comparison metrics
- Level of expertise (self-assessed). The subject self-assesses their level of expertise with the LMS at hand on a Likert scale (expert / confident / need help / utter novice or not used this LMS before).
- Task completion. Self-assessed (were you able to complete the task to your satisfaction?) and assessed by the experimenter (% of task goals actually completed).
- Cognitive overhead. Ease of accomplishing the task as rated by the subject (very easy / easy / OK / difficult / very difficult), compared to the experimenter's rating.
- Flexibility. Number & usefulness of options to configure to personal taste, rated by subject & experimenter.
- # clicks / drags required to complete task (assessed by experimenter from recording)
- Total time taken to complete task (assessed by experimenter from recording)
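To aggregate and compare these metrics later, the subjective ratings need numeric values; one way is to map each Likert label to an integer. A minimal sketch, assuming the labels above and an illustrative 1-4 / 1-5 mapping (the exact scale would be the group's choice, not something fixed by this proposal):

```python
# Map each Likert response to a number so the metrics can be aggregated.
# The 1-4 and 1-5 mappings below are illustrative assumptions.

EXPERTISE = {"utter novice": 1, "not used this LMS before": 1,
             "need help": 2, "confident": 3, "expert": 4}

# Ease of accomplishing the task ("cognitive overhead"): higher = easier.
EASE = {"very difficult": 1, "difficult": 2, "OK": 3,
        "easy": 4, "very easy": 5}

def score_trial(trial):
    """Convert one subject's recorded trial into numeric metrics."""
    return {
        "expertise": EXPERTISE[trial["expertise"]],
        "ease": EASE[trial["ease"]],
        "completed_pct": trial["completed_pct"],  # experimenter-assessed %
        "clicks": trial["clicks"],                # counted from the recording
        "time_s": trial["time_s"],                # measured from the recording
    }

print(score_trial({"expertise": "confident", "ease": "easy",
                   "completed_pct": 100, "clicks": 12, "time_s": 95}))
# -> {'expertise': 3, 'ease': 4, 'completed_pct': 100, 'clicks': 12, 'time_s': 95}
```

Keeping the raw labels in the recording sheets and converting to numbers afterwards means the scale can be re-mapped later without re-running any sessions.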
4. Statistics & Outcomes
- Test each task with a number of subjects.
- Aggregate each metric to mean ± standard deviation.
- Assign a relative weight to each metric and compile an overall Usefulness value for the Task per LMS.
- Compare Usefulness values between LMSes and determine the level of difference required for significance.
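The weighting and aggregation steps above could be sketched as follows. This is only a sketch: the metric names, the weights, and the min-max normalization (needed because clicks and time run in the opposite direction to the ratings, and the metrics are on different scales) are all assumptions for illustration, not part of the proposal:

```python
import statistics

# Illustrative metric weights -- an assumption to be agreed on by the group.
WEIGHTS = {"ease": 0.3, "completed_pct": 0.3, "clicks": 0.2, "time_s": 0.2}

# Whether a larger raw value is better for each metric.
HIGHER_IS_BETTER = {"ease": True, "completed_pct": True,
                    "clicks": False, "time_s": False}

def normalize(values, higher_is_better):
    """Rescale raw values to [0, 1], flipping metrics where lower is better."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def usefulness(trials_by_lms):
    """Mean +/- standard deviation of the weighted Usefulness per LMS.

    `trials_by_lms` maps an LMS name to a list of per-subject metric dicts
    for one task (at least two subjects per LMS).  Metrics are normalized
    across the pooled trials of all LMSes so the scores are comparable.
    """
    all_trials = [t for ts in trials_by_lms.values() for t in ts]
    norm = {m: normalize([t[m] for t in all_trials], HIGHER_IS_BETTER[m])
            for m in WEIGHTS}
    scores = [sum(WEIGHTS[m] * norm[m][i] for m in WEIGHTS)
              for i in range(len(all_trials))]
    result, i = {}, 0
    for lms, ts in trials_by_lms.items():
        chunk = scores[i:i + len(ts)]
        i += len(ts)
        result[lms] = (statistics.mean(chunk), statistics.stdev(chunk))
    return result

# Made-up example data: one task, two subjects per LMS.
trials = {
    "Moodle": [{"ease": 4, "completed_pct": 100, "clicks": 10, "time_s": 60},
               {"ease": 3, "completed_pct": 80, "clicks": 14, "time_s": 90}],
    "Canvas": [{"ease": 2, "completed_pct": 60, "clicks": 20, "time_s": 120},
               {"ease": 3, "completed_pct": 70, "clicks": 18, "time_s": 100}],
}
print(usefulness(trials))
```

For the last bullet, deciding whether a difference in Usefulness is significant would then be a standard two-sample test (e.g. a t-test) on the per-subject scores, which is also what determines how many subjects per task we need.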