Incidentally, this type of model is pretty much exactly what I've done in my own Master's thesis, which will be published in just three weeks (most likely on August 24), so I feel kind of beaten to the punch, but ok.
The good news for Moodle: they gave some example ratings, including Moodle and Sakai, and Moodle scores well (Sakai only moderately):
Moodle's BRR (Business Readiness Rating) is 4.19 out of 5, with a functionality rating of 5 out of 5 (functionality is included in the BRR but for some reason also mentioned separately).
Sakai scores 3.23 (beware, they obviously reused Moodle's sheet and changed the names, except on one sheet, which still says Moodle), with a functionality rating of 3.
My thesis also gives Moodle high honors; more on that in three weeks.
Edit: this should be in Comparisons and advocacy. Can threads be moved to other forums? If not, I'll repost it.
Good find, Karin!
Looks like Moodle has the highest rating of all (Mambo, WebGUI, etc.). Or am I mistaken about that?
-- Art
Anyway, I don't want to give my whole thesis away, so I'll get back to you on that.
How is the category ranking determined in OpenBRR? Our selection of Moodle for the NZOSVLE project had very different weighting. Architecture, modularity, potential for scalability, performance, quality of code - these things were paramount, yet they have a zero weighting in OpenBRR (or is it simply a sample?).
Functionality and documentation were lesser criteria - they're more easily fixed!
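If I understand the model correctly, a BRR-style rating is just a weighted average of per-category scores, so the choice of weights decides the outcome as much as the scores do. Here is a minimal sketch in Python; the category names, scores and weights are invented for illustration and are not the actual OpenBRR values:

    # Minimal sketch of a weighted-average readiness score in the style of
    # OpenBRR. All categories, scores and weights here are hypothetical.
    def weighted_score(scores, weights):
        # Combine per-category scores (1-5) into one overall rating.
        assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(scores[cat] * weights[cat] for cat in weights)

    # Two different weightings over the same per-category scores:
    scores = {"functionality": 5, "documentation": 4, "architecture": 2}
    brr_style = {"functionality": 0.5, "documentation": 0.3, "architecture": 0.2}
    nz_style = {"functionality": 0.2, "documentation": 0.1, "architecture": 0.7}
    print(round(weighted_score(scores, brr_style), 2))  # 4.1
    print(round(weighted_score(scores, nz_style), 2))   # 2.8

The same product scores 4.1 under a functionality-heavy weighting and 2.8 under an architecture-heavy one, which is why the weights (and how the scores were determined) matter so much.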
The problem with this sample, as with a lot of other comparison reports I think, is that although the scores are given in quite some detail, how the scores were determined is hardly explained, if at all. This makes it difficult to reproduce or reuse the method. The value of examples is that they make you understand the method. That's why, in my thesis, I've explained how I came to each score, what I observed, etc.
I recently gave an impromptu talk on this matter when one of the DebConf5 speakers failed to show up: http://dc5video.debian.net/2005-07-15/
It covers the criteria we used to evaluate LMSs last year, in a document you can find here:
"LMS Technical Evaluation"
http://eduforge.org/docman/?group_id=7
I have applied similar techniques before. Another good published example, though quite dated now, is the evaluation of the Midgard CMS for an e-government portal. We used similar criteria in the evaluation process:
http://www.midgard-project.org/midgard/1.6/casestudies/govtnz.html
(Edit: removed the direct video link, as it triggered the multimedia filter and tried to embed the 50 MB video inline.)
I used the NZ project as one of the examples that justify my results for Moodle.
Hi Karin,
They're sort of sister projects. NZOSVLE is very much focused on code development and system integration work. Catalyst did the technical evaluation work and is a Moodle partner - the evaluation work was commissioned as part of the NZOSVLE project. OSCINZ is more focused on contextualising the VLE for cultural requirements in New Zealand, specifically Maori and Pacific Island communities. They've been working on graphical interfaces and Maori, Tongan and Samoan language packs.
NZOSVLE now goes into Phase II with funding through to July 2006. OSCINZ has finished its work but now the same team is focused on FOSS Learning Object Repository infrastructure. See OSLOR.
Also, we've upgraded Eduforge, which is an output of the NZOSVLE project. Anyone is welcome to use and contribute to this resource.
regards
Richard Wyles
Project Leader
NZOSVLE