I have a dual 3.0 with 4G of RAM running LAMP with APC for caching. The
school has recently made a big push with the faculty and I am happily
seeing usage at record highs. Currently, the one server handles
everything for Moodle (apache, php, mysql, and postfix). I am curious
what kind of load a veteran system administrator would expect this kind
of configuration to handle. Thankfully, I am not seeing any performance
problems at this point. We have about 1000 students and 100+ faculty in a
secondary education environment. So far today we have had 500,000+ hits
and a little more than 3G of traffic (according to webalizer). I'm just
looking for ball park numbers to use as a guide or anything else that I
might watch for and when I should start pushing for load balancing. I
think as reliance on this has increased it is prudent to start building
in a little protective redundancy. Thanks for sharing your experience
with me. Peace.
(IANASA, I am not a system administrator, but I do monitor our server usage and tweak Moodle performance...) How many page loads do you have? We are not logging every hit anymore (the logs were getting rather large), but we have around 100,000 page loads per day on a similarly spec'd server, with no acceleration even, and it peaks at around 30% system usage. I'd estimate that we could go up to 500,000 page loads per day without slowing down, more with acceleration. In Webalizer terms that's usually around 1,500,000 hits. Obviously there can be peak times that slow the system down, but I'm mostly interested in keeping the "load average for the last 15 minutes" under 1.00 per (virtual) processor.
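If you want to watch that threshold from a script rather than eyeballing `uptime`, here is a minimal sketch (assuming a Linux host, reading the standard /proc files):

```shell
#!/bin/sh
# Read the 15-minute load average and compare it to the CPU count.
# /proc/loadavg fields: 1-min 5-min 15-min running/total last-pid
load15=$(awk '{print $3}' /proc/loadavg)
cpus=$(grep -c '^processor' /proc/cpuinfo)
echo "15-min load: $load15 over $cpus CPU(s)"
# Warn when the per-CPU load exceeds 1.00
awk -v l="$load15" -v c="$cpus" 'BEGIN { exit !(l / c > 1.0) }' \
  && echo "WARNING: 15-min load average above 1.00 per CPU"
```

Dropped into cron, something like this could mail you before users start noticing slowdowns.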
We are running SUSE Enterprise Linux 9 with ReiserFS, which seems to perform very nicely for web server purposes. Also, MySQL is faster than PostgreSQL, though less robust (a whole lotta backing up going on).
Our strategy is in having a similar specced server serving as a live mirror for both load balancing (if and hopefully "when" required) and failover purposes.
I think it depends a lot on the type of usage. If you don't have lots of heavy filters running in Moodle (mimetex, algebra, auto-glossary, to name a few), CPU usage won't be an issue unless you get high concurrent access (250 users trying to submit a quiz at once, for example).
Having a look at memory usage, mysql tends to keep all of its tables in core memory, which can be an issue if you have really large tables (we had an issue with the logs table in the past). Tuning mysql and apache/php to make the most of your memory can lower CPU and disk usage (less forking and swapping). And you should try to get your disk usage to a minimum, as disks are really the bottleneck these days.
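As an illustration only (the right values depend entirely on your RAM and your table sizes, so treat these as placeholders rather than recommendations), the mysql knobs in question live in my.cnf:

```ini
# my.cnf -- illustrative values only; size these to your own hardware
[mysqld]
key_buffer_size  = 256M   # MyISAM index cache, the main memory knob for Moodle's tables
table_cache      = 512    # open table handles kept around between queries
query_cache_size = 32M    # caches results of repeated identical SELECTs
```

On the apache side, the usual companion step is capping MaxClients in httpd.conf so that (MaxClients × per-process PHP memory) stays under physical RAM, which is what keeps the box out of swap under load.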
I guess using some kind of monitoring tools, like sar, iostat and the like, should give you an accurate image of the real usage of your server, at least more accurate than just web hits. If you couple that with systems like zabbix/jffnms/cacti/nagios/put-your-preferred-monitoring-system-here, you'll get an accurate trend analysis of your server performance, which can let you estimate when your server isn't going to perform as you need.
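Even without sar or iostat installed, the counters those tools summarise can be read straight from /proc; a rough sketch of the calculation (assuming a Linux kernel that reports iowait in /proc/stat):

```shell
#!/bin/sh
# Sample the kernel CPU counters twice, one second apart, and report
# the busy and iowait percentages -- iowait separately, since disk
# stalls are the usual bottleneck.  These are the same counters that
# "sar -u" and "iostat -c" summarise.
snap() { awk '/^cpu /{print $2+$3+$4, $5, $6}' /proc/stat; }  # busy idle iowait
set -- $(snap); b1=$1; i1=$2; w1=$3
sleep 1
set -- $(snap); b2=$1; i2=$2; w2=$3
total=$(( (b2-b1) + (i2-i1) + (w2-w1) ))
if [ "$total" -gt 0 ]; then
  echo "CPU busy:   $(( 100 * (b2-b1) / total ))%"
  echo "CPU iowait: $(( 100 * (w2-w1) / total ))%"
fi
```

A sustained high iowait figure here is the sign that the disks, not the CPU, are the thing to upgrade or spread across machines.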
Saludos. Iñaki.
Thank you both for your responses which have pointed me in the
direction of useful tools to continue monitoring the performance and
also given me the rough idea of capacity that I might be able to
expect. I do have several filters running (TeX, auto-linking, and
algebra), so we shall see, but this information has been very helpful.
Peace.