" This is the most interesting news I learned at the conference. I asked Rory McGreal, Athabasca's Associate VP Research, for the story. It was rather dramatic. Athabasca faculties were using three different LMS systems at the same time. The University finally decided that it would only support a single system. The question was which one. Sound familiar? A selection committee was formed, and it decided on WebCT. Bear in mind that Alberta has a province-wide licence for WebCT. Faculty members didn't like the decision, so they formed their own committee and conducted their own evaluation of the three systems: WebCT, Moodle, and Lotus Notes Learning Space.
Moodle won, head and shoulders above the rest. The University accepted the faculty's decision.
Now Moodle is the choice at Athabasca."
Can anyone confirm this? I especially like the part about the teachers rising up and requesting Moodle...
Very interesting news. Thanks for sharing it and following up on it.
Wow, that is good news. I wonder if any of the professors from that committee are active here at Moodle.org.
Hey, I just realized I am in the middle of a good book about distance learning published by Athabasca University: http://cde.athabascau.ca/online_book/index.html
That would be me.
I, along with several colleagues, was instrumental in leading the charge against our CIO's choice of WebCT Vista.
We formulated our objections in a position paper that challenged the choice of WebCT Vista on several levels.
The paper was presented to AU's Academic Council and the support it received convinced our Vice President Academic to direct the CIO to re-open the LMS selection process, make his research into LMSs public, and conduct a broader evaluation. We were particularly concerned that open source LMSs had not been given serious consideration.
Our problem was how to showcase Moodle's strengths and flexibility without the "dog and pony show" that vendors such as WebCT employ to impress CIOs and the like. Fortunately, Michael Penney agreed to spend 3 days at AU to assist us in our struggle to promote Moodle. Michael did an outstanding job of showcasing Moodle's features.
We still have a lot of work ahead of us, in terms of integrating Moodle with our other systems, and transitioning from our existing LMSs, but the move to Moodle is, indeed, underway.
You are a good man. And you owe Michael a beer.
This is indeed wonderful news. I've been forwarding the news around a bit, as Athabasca has a high profile here; it's about the same size as The Open Polytechnic, which is also a specialist distance-learning provider. It's nice 'validation' in a sense. If there's anything the NZOSVLE project team can do to help, just let us know -- e.g. how we've set up the hosting architecture, Postgres vs. MySQL, single sign-on for webmail, etc. Actually, probably more importantly, the actual migration plan... I need to write a paper or report on this one day.
All the best,
Beware everyone, the loss of such a huge contract will wake up sleeping giants. Expect all out "Revenge of the ..." Derek, have you heard any response from WebCT?
Oh and by the way, if Athabasca Univ. could send in $50 to moodle.org, Irene will send you a cool badge, "I support Moodle!". Feel free to add a few zeros to that amount.
That would be great! Thanks, Richard.
On the performance side, Postgres requires a bit more up-front configuration than MySQL. A well-tuned Postgres is pretty close to MySQL on SELECT performance with small databases. With large tables, MySQL has some bad performance problems, and Postgres performs much better.
Write performance is also an issue with MySQL -- with a lot of traffic, it has serious problems with concurrent writes. Under heavy load, Postgres performs much better.
But to tell you the truth, the real reason for choosing Postgres is reliability. We maintain a lot of databases, and Postgres is rock-solid reliable and has a focus on ACID-correctness: when it returns from a commit, the data is safely on disk and won't be lost -- barring actual disk problems, which we offset using RAID-1.
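The commit guarantee described above can be sketched in a few lines of Python. This uses SQLite purely because it ships with the standard library -- it is not Postgres, but it illustrates the same ACID principle: data only becomes part of the database once the commit returns.

```python
import sqlite3

# Illustration of the ACID commit guarantee, using SQLite (not Postgres)
# because it ships with Python. The principle is the same: data is part
# of the database only once commit() has returned successfully.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student TEXT, score INTEGER)")

# A write that is rolled back never becomes visible.
conn.execute("INSERT INTO grades VALUES ('alice', 90)")
conn.rollback()

# A committed write is durable (safely on disk, for a file-backed DB).
conn.execute("INSERT INTO grades VALUES ('bob', 75)")
conn.commit()

rows = conn.execute("SELECT student FROM grades").fetchall()
print(rows)  # only the committed row survives
```

With a file-backed Postgres database, a crash between the two statements would leave only the committed row on disk -- which is exactly the "won't be lost" guarantee being discussed.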
No matter how hard we try, MySQL databases with a lot of usage have recurring index corruption issues. If you look at the startup scripts for MySQL on most Linux distributions, they check for data corruption on every startup -- this is to mask the fact that it is a frequent occurrence.
And while this is passable with small installations where the data isn't mission-critical, you have to consider how much you can trust such an approach. And with large datasets, running isamchk/myisamchk can take hours -- we cannot afford that.
The clustering solution for MySQL is being touted a lot, and I think it is a red herring. My main concern about it is that it writes "asynchronously" -- that is, there is no guarantee that your data is safely on disk. It'll get to the disk... sometime. It'll get to the slaves... sometime. Hmmm.
Given that the MySQL cluster uses async writes, splitting reads and writes between the master and the slaves breaks down in cases where we write some data and read it back immediately (or soon after). And this does happen in quite a few places.
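The read-after-write problem described above can be simulated in a few lines. This is a toy model -- the class names and the replication queue are invented for illustration, and a real MySQL replica lags by a variable amount -- but it shows why reading a just-written value from an asynchronously updated slave fails.

```python
from collections import deque

# Toy simulation of read-after-write under asynchronous replication.
# The master acknowledges the write immediately; the replica applies
# it "sometime" later. Names here are invented for illustration.

class Master:
    def __init__(self):
        self.data = {}
        self.replication_queue = deque()

    def write(self, key, value):
        self.data[key] = value
        self.replication_queue.append((key, value))  # shipped later

class Replica:
    def __init__(self, master):
        self.master = master
        self.data = {}

    def apply_pending(self):
        # In reality this happens on the replica's own schedule.
        while self.master.replication_queue:
            k, v = self.master.replication_queue.popleft()
            self.data[k] = v

    def read(self, key):
        return self.data.get(key)

master = Master()
replica = Replica(master)

master.write("quiz:42:answer", "B")
stale = replica.read("quiz:42:answer")  # read immediately after write
replica.apply_pending()                 # replication catches up later
fresh = replica.read("quiz:42:answer")

print(stale, fresh)  # None B
```

The stale `None` read is exactly the failure mode: the application wrote the data, got an acknowledgement, and still cannot read it back from the slave.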
And you also have to consider the performance boost of using async writes: if you tell a standalone Postgres or MySQL to use async writes, it'll scale much better (it should be able to handle 3-4 times more simultaneous writes). Once you do that, the performance advantage of the MySQL cluster mostly vanishes. It still has semi-hot takeover in case the master goes down, but Postgres can do that using Slony, and with better guarantees of consistency of the data in the slave.
In a nutshell, MySQL isn't normally very solid when it comes to ensuring my data is safely stored on disk, even if it theoretically guarantees that it's been saved. And MySQL Cluster says up-front that there isn't a guarantee any more. Riiiiiight.
Michael is talking about having UPSs. We have a car-sized UPS and a container-sized on-site generator that auto-starts. And yet, I wouldn't depend on that for my DB consistency on a large installation. So many things other than power can (and do) go amiss. If a process has a problem storing the data, the right thing is to tell that back to the user. With async writes, you end up with a queue of data that hasn't been stored yet, but you already told the user it was.
That's not what a database is supposed to do.
I am currently exploring some techniques similar to those being used in livejournal and slashdot. We should be able to increase Moodle scalability by cutting down on DB load by about 50%. This is happening slowly in the gaps between more urgent projects. Feel free to ping Richard or me if you're interested in that track.
(Wow! Did I say "brief outline"?)
This is a gem of a server strategy guide for large installations that *has* to go into the Moodle documentation--easy to understand and full of valuable experience. Przemek, how do we integrate this into the docs?
It may well be that MySQL is spinning this and it doesn't really work that way, but as I understand the concept, it makes sense to me as a way to scale the DB in a similar way to how Apache scales. Otherwise your cluster data transfer is always going to be limited by the speed of your disk subsystem.
But this brings up another question. Don Hinkleman reports adding servers to scale Moodle to handle hundreds of concurrent quiz takers. Was this done with a single DB server, unconnected DB servers, or were they using a DB cluster?
> Otherwise your cluster data transfer is always going to be limited by the speed of your disk subsystem.
I agree. Call me old-fashioned... it makes sense to me to have the speed of safe disk writes (on a RAID-1 based on fast SCSI disks) be the unavoidable bottleneck for a DB -- the RDBMS is the safe, reliable data store. All this "in-memory" stuff is good for a cache, but not for a data store.
So I'm looking into caching smartly at the app level (using memcached and/or Turck MMCache), and touching the DB as little as possible.
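The app-level caching idea mentioned above is a read-through cache: check the cache first, and hit the database only on a miss. A minimal sketch, with a plain dict standing in for memcached and an invented `fetch_course` function and fake data:

```python
# Read-through cache sketch. The dict `cache` stands in for memcached;
# fetch_course and fake_db are invented for illustration, not Moodle APIs.

db_queries = 0
fake_db = {1: "Intro to Moodle", 2: "Databases 101"}
cache = {}

def fetch_course(course_id):
    global db_queries
    if course_id in cache:       # cache hit: no DB round trip
        return cache[course_id]
    db_queries += 1              # cache miss: query the database once
    value = fake_db[course_id]
    cache[course_id] = value     # populate the cache for later readers
    return value

for _ in range(10):
    fetch_course(1)
fetch_course(2)

print(db_queries)  # 2 -- eleven reads cost only two DB queries
```

The trade-off is staleness: cached values must be invalidated or expired when the underlying row changes, which is why this works best for read-heavy data like course descriptions.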
There are some things we can do to break writes down and make them more scalable, without losing any reliability. For example, I'm wondering about splitting the log table out. MD will probably hunt me down and kill me, but it makes sense -- writing to the log table is a major expense in large installs.
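One way the log-table expense could be reduced is by buffering entries in the application and flushing them in a single batched INSERT, rather than one INSERT and one commit per page view. A sketch of the idea, using SQLite and an invented table layout (not Moodle's actual log schema):

```python
import sqlite3

# Sketch of batched log writes: buffer entries and flush them with one
# executemany + one commit per batch. Table layout is invented, not
# Moodle's real log schema.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (userid INTEGER, action TEXT)")

buffer = []
BATCH_SIZE = 100

def log_action(userid, action):
    buffer.append((userid, action))
    if len(buffer) >= BATCH_SIZE:
        flush_log()

def flush_log():
    conn.executemany("INSERT INTO log VALUES (?, ?)", buffer)
    conn.commit()                # one commit per batch, not per entry
    buffer.clear()

for i in range(250):
    log_action(i, "view")
flush_log()                      # flush the remaining tail on shutdown

count = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 250
```

The trade-off is that a crash loses the buffered tail -- which is exactly why logs, unlike grades, are a reasonable candidate for relaxed write guarantees.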
They were four unconnected Moodle servers -- stock dual-Xeon / 2GB-memory Linux servers at $1500 each. Each had a different URL and a different DB. We estimate each could handle about 120-150 concurrent heavy audio-quiz users. We would like to use a DB cluster or something more sophisticated, but that is beyond our ability. ML's comments suggest a DB cluster is not necessarily the best approach.
My server guys are hot to try this anyway, so we'll probably be setting up a dummy cluster with a dupe of our Moodle site for testing, and hammer it for a few weeks with some load testers, probably not until the fall of this year though.
By all means, play with it. I may well be wrong on some counts, and it's always fun to tinker. Don't forget to test what happens when things break. Let us know how it goes!
> MySQL Cluster wasn't designed to be faster than a single machine.
> It was designed to be more reliable. If you read the main page
> for MySQL cluster you will see that the strengths listed are
> centered around high availability not raw speed (although it's
> still very fast).
Perhaps we're all confused by the spin? It's true, their "cluster" page talks a lot about "five nines", but then it goes on to talk about performance, async writes, and in-memory databases, which is the wrong thing to say if you're trying to convince people that you're "reliable first".
Hi - now, in 2011, are these arguments still valid?
Thanks for your attention,
I am not a data base person, but your comments concerning the greater stability of Postgres over MySQL lead me to ask the question: how difficult is it to migrate a live Moodle from MySQL to Postgres?
Is this publicly available? I'd love to see it if so.
A number of us were certainly supportive of an open-source solution, but we avoided framing the debate as proprietary vs. open-source to avoid reducing the debate to an ideological mud-slinging match.
The key was finding a flexible solution that would meet our everyday LMS needs but also foster pedagogical creativity and support innovation in research. If we could have found such a solution in a proprietary product, we would have given it serious consideration.
It wasn't about cost, because the money to implement WebCT Vista was already on the table.
The choice of an open source product that could meet our needs as opposed to a proprietary one seemed obvious to some of us, but others were not so easily convinced, so there was certainly a political dimension to the debate.
We should probably talk more about this, perhaps by email?
Congrats again and could you please share as much as you can with the forum. Last year we lost our little campaign to get Moodle as the centrally supported LMS, but we are going to have another opportunity soon and it would be great to get some tips from you. BTW, I think getting Michael Penney on campus was a great idea.
Mark: I'd be happy to share information re: the LMS evaluation process we went through at AU.
The most challenging task was convincing the administration to give serious consideration to an open source LMS.
We framed the open source argument in terms of licensing costs, flexibility (access to code), pedagogical focus, and a dynamic development community. We're already using some open source products at AU (uPortal and Plone, for instance), and these products have already been integrated with our existing proprietary systems (e.g., Banner), so the worth of open source couldn't be dismissed out of hand.
Once the administration agreed to evaluate an open source product, the challenge was identifying and promoting the best candidate. I promoted Moodle over other products such as ATutor because I thought it the all-around strongest open-source candidate available.
The evaluation process involved 2 teams: a test group and a core group. The test group comprised hands-on users--course designers, web developers, technical support, helpdesk, administrators, instructors, etc.; the core group comprised managers and administrators. Representatives from across the University were invited to participate. The test group were tasked with piloting the LMS under consideration; they then reported to the core group, who considered the technical strengths/shortcomings of the LMSs in a more political light.
The test group took on WebCT Vista first, because negotiations between the administration and WebCT were already underway. The WebCT Vista test drive helped the test group develop a more explicit needs assessment, which was further refined into a list of evaluation criteria. Each of the 3 LMSs was then evaluated in terms of those criteria.
I don't want to give the impression that this was a neat, technical exercise: it wasn't. There was a lot of sometimes contentious debate/disagreement re: the evaluation criteria and process. Our CIO did an excellent job of keeping the evaluators on task and moving the evaluation ahead--something that had been missing in past efforts to resolve the LMS issue. In some respects, the CIO's promotion of WebCT Vista served to hold people's feet to the fire, forcing the compromise and consolidation necessary to promote alternatives. In fact, the most difficult/trying part of the whole ordeal was identifying the evaluation criteria and process, not choosing the LMS.
The choice of criteria and an evaluation process had the side benefit of introducing those in administration to the concerns of those in course production and delivery and vice versa, so the evaluators ended up learning a lot about aspects of the University they hadn't been familiar with.
Although we ended up with a list of criteria to evaluate the competing LMSs, all the parties involved realized this was not simply a matter of comparing LMSs in terms of feature sets. The spreadsheet of criteria we ended up with was more properly a complex tapestry than a simple tool: the front of the tapestry looks to tell a simple, uncomplicated story; the reverse, however, contains the untidy and complex mass of threads that make the simple tale possible.
Consequently, although it may be tempting, it would be a mistake to take the list of criteria we developed for AU and simply apply that list in another context: the choice of an LMS is fraught with not just technical but also political, economic, cultural, and personal factors. These concerns need to be aired and worked through -- they should not be avoided simply because they can't be accommodated in a spreadsheet of technical features.

My point, I suppose, is that although our choice of LMS was ultimately based on a list of discrete criteria, those who were scoring the list were well aware that this was much more than a purely quantitative exercise. Even when we introduced weightings to represent the greater or lesser significance of some criteria, the scoring was based more on the discussions and debates that arrived at a particular criterion than on the criterion in and of itself.
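For readers running their own evaluation, the mechanical part of a weighted-criteria comparison is simple enough to sketch. The criteria, weights, and scores below are invented for illustration -- they are not AU's actual evaluation data -- and, as the post above stresses, the numbers matter far less than the discussions behind them.

```python
# Weighted-criteria scoring sketch. All criteria, weights, and scores
# here are invented for illustration; they are not AU's real data.

weights = {"pedagogy": 3, "flexibility": 2, "integration": 2, "support": 1}

scores = {  # hypothetical evaluator scores per criterion, 1-5
    "Moodle":      {"pedagogy": 5, "flexibility": 5, "integration": 3, "support": 4},
    "WebCT Vista": {"pedagogy": 3, "flexibility": 2, "integration": 4, "support": 4},
    "Lotus Notes": {"pedagogy": 2, "flexibility": 2, "integration": 3, "support": 3},
}

def weighted_total(lms):
    # Sum of (weight * score) over all criteria for one LMS.
    return sum(weights[c] * scores[lms][c] for c in weights)

ranked = sorted(scores, key=weighted_total, reverse=True)
for lms in ranked:
    print(lms, weighted_total(lms))
```

Shifting the weights reshuffles the ranking, which is a concrete way of seeing the post's point: the spreadsheet encodes a judgment about priorities, and that judgment is where the real debate happens.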
I can share more with those who are interested but that's probably more than enough for now.
Michael, you are a Johnny Appleseed in the best sense of that word.
Thanks for taking the time.
Would you be willing to share your position paper/report? I am presently engaged in a similar situation, to replace WebCT with Moodle. The decision will be made hopefully before the end of the year. (2005)
Did you get an answer from Derek about sharing his position paper/report?
I am also in the same situation -- replacing WebCT with Moodle -- and would like to have this report.