We are planning on rolling out on Linux, flavour as yet undecided.
Our best guess at present is 250 concurrent users, but this is pure guesswork. We currently have 10 departments, each running in the region of 25 courses in the lecture rooms; how many of those will be ported into Moodle we don't know. Our student base is 12,000, though many are enrolled through external universities and take no part in courses actually run here, so potentially we have 6,000 users to sign up.
We have been given a budget to roll it out and to maintain it for up to 4 years; we will not be given additional funds 2 years down the road if we need to scale it up. With that in mind, we were thinking of the following setup:
1 load balancing server: HP DL360 with an Intel® Xeon® X5260 dual-core processor at 3.33 GHz and 4 GB RAM
2 web servers of similar spec to the load balancer
1 database server: HP DL380 with two Intel® Xeon® X5450 quad-core processors at 3 GHz and 8 GB RAM
I'm not yet 100% sure about the disk layout in each of the boxes, but I was considering going with RAID 1+0 as much as possible for resilience.
Any feedback and/or suggestions would be gratefully received.
Thanks
Fil
Where do you plan to hold the common moodledata fileset? Note that the web cluster nodes do not need much disk, just enough to hold the Moodle code and log files, but make sure there is sufficient space on the commonly accessed volume where user files are stored.
The load balancer host need not be as well provisioned as the web nodes: the load balancer just acts as an intermediate proxy, so it has no need of much disk space and its memory demands are modest. We are using HAProxy as a load balancer, but we have no measured results yet on how it behaves under real load.
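For illustration only, a minimal HAProxy setup for two web nodes could look something like the fragment below. The hostnames, addresses, and the cookie-based stickiness are my assumptions for the sketch, not our actual configuration; check the HAProxy manual for your version before using it.

```
# Hypothetical haproxy.cfg fragment (addresses are examples)
frontend moodle_http
    bind *:80
    default_backend moodle_web

backend moodle_web
    balance roundrobin
    # Insert a cookie so each user sticks to one web node,
    # which helps if PHP sessions are stored locally.
    cookie SERVERID insert indirect nocache
    server web1 192.168.1.11:80 cookie web1 check
    server web2 192.168.1.12:80 cookie web2 check
```

If sessions are stored in the database or on shared storage instead, the cookie lines can be dropped and plain round-robin is enough.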
We planned a system similar to yours for 1,000 concurrent users, validated by a recent Catalyst analysis. Moving to 3 web servers plus 2 clustered DB servers would have taken us to 2,500 concurrent users, according to Catalyst.
The key point may be the speed of your internal backbone network, especially if you share the Moodle code over a network-mounted file volume. From what we tried, the code should be physically replicated on each server, because accessing PHP code through an NFS mount loses a lot of performance (we observed this directly in an experiment with phpMyAdmin). If only user files are stored on an NFS mount, it does not impact global performance much.
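To make that split concrete, each web node would mount only the shared user-file area over NFS while keeping the PHP code on local disk. The server name and paths below are hypothetical:

```
# /etc/fstab on each web node (example names/paths).
# Only moodledata (user files) is on NFS; the Moodle PHP code
# stays on local disk, e.g. under /var/www/moodle.
nfsserver:/export/moodledata  /var/moodledata  nfs  rw,hard,intr  0 0
```

In config.php, $CFG->dataroot would then point at /var/moodledata on every node, so all web servers see the same uploaded files.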
Cheers.
I am more from the IT strategy side than from the teaching side. The general trend for Internet-based data centers (which is what I understand you will be hosting) that need to expand continuously as the load builds up is "cloud computing", or grid computing as the earlier buzzword had it. The advantage is that you could start off with just one physical server and, as the load builds up, keep adding more. You can add up to 25 servers in the Amazon EC2 cloud, and you pay only for the capacity and time you actually use. So if you have a peak load of, say, 10 servers during the day and an off-peak load of 5, you could set up the cloud to add/remove servers depending on the load, and pay accordingly.
Currently, as I understand it, Amazon's low-end server, equivalent to one P4 with 1.5 GB of RAM, costs about 16 US cents per hour while it is kept on; the range runs up through dual-core AMD and quad-core machines to a mammoth 16-processor server at, I think, around 70 cents per hour. Then you pay for bandwidth out of the cloud to your users, including your staff, at about 17 cents per gigabyte, and there are storage costs on top. This is just to give you an idea of the costs; please google Amazon EC2 and look up the details.
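As a rough back-of-the-envelope check of the pay-per-use idea, here is a small sketch using the approximate rates quoted above (16 c/hour for the low-end instance, 17 c/GB out; these figures and the 12-hour peak window and 200 GB/month transfer are assumptions for the example, not real pricing advice):

```python
# Back-of-the-envelope EC2 cost sketch using the ~2008 rates quoted above.
SMALL_RATE = 0.16      # USD per instance-hour, low-end instance (assumed)
BANDWIDTH_RATE = 0.17  # USD per GB transferred out of the cloud (assumed)

def monthly_compute_cost(peak_servers, offpeak_servers,
                         peak_hours_per_day=12, days=30):
    """Compute cost for a fleet that scales between peak and off-peak sizes."""
    offpeak_hours = 24 - peak_hours_per_day
    hours = days * (peak_servers * peak_hours_per_day
                    + offpeak_servers * offpeak_hours)
    return hours * SMALL_RATE

compute = monthly_compute_cost(10, 5)   # 10 instances by day, 5 by night
bandwidth = 200 * BANDWIDTH_RATE        # assuming 200 GB/month outbound
print(round(compute, 2), round(bandwidth, 2))  # 864.0 34.0
```

So a fleet that breathes between 10 and 5 small instances would run on the order of 900 USD a month at those rates, which is the kind of figure to weigh against buying the four HP boxes outright.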
You would just need a local LAN and good Internet connections to the cloud; there is no need to run your own data center, and it scales.
Though the above scenario is possible, I have not done it myself. But it might make a good joint project which, once operational, could become a global resource.
Vipen Mahajan
New Delhi/Chicago.
In other words, Amazon is not application-transparent: you have to reprogram your application to some degree to make it work on Amazon. This may be OK if you're writing the application yourself. If you want application transparency, cloud computing services that implement virtual private data centers, such as those from my company, ENKI, make more sense. A virtual private data center allows you to deploy Moodle in the data center architecture it was designed for, while enjoying automatic, transparent failover when hardware faults occur, and easy scaling. There is an increasing number of VPDC cloud computing providers, which will make putting Moodle on the web a lot easier than Amazon: they simply give you your own data center composed of scalable compute and storage servers, and you install Moodle as though it were physical hardware. Or, as in the case of ENKI, they may be able to install it for you.