I was wondering what methods are used to optimize the delivery of Moodle. Moodle runs fine for us on our shared server — we rarely have more than 20 users active at the same time — but as we work with Moodle I want to make sure it stays this snappy.
I've enabled CFG->dbpersist and internal caching, but they really don't seem to make a noticeable difference in our situation.
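For anyone hunting for it, dbpersist is a one-liner in config.php (the internal caching options are switched on from the admin pages rather than in this file):

```php
<?php
// config.php -- reuse database connections between page loads.
// (The internal/record caching I mentioned is toggled from the admin
// interface, not here.)
$CFG->dbpersist = true;
```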
Two things that do make a very noticeable difference for us are zlib compression and a script I created that minifies all of Moodle's CSS and JS files before delivery. I had worried that these might be increasing our server load (it hadn't been a problem so far), so I decided to test it today.
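For the curious: the zlib half is just PHP's zlib.output_compression directive (a php_flag line in .htaccess under mod_php, or an ini_set() call before any output), which is why it only covers the .php requests. The minifier is a one-off pass over the stylesheets. Below is a stripped-down sketch of the CSS half, not my exact script: the file name and path are placeholders, the regexes are naive (they can mangle edge cases like comment markers inside content strings), and the JS side really needs a proper tool like JSMin.

```php
<?php
// minify_css.php -- run once from the Moodle root after each upgrade.
// Naive regex minification: fine for Moodle's stock CSS, but not a
// general-purpose minifier.
$moodleroot = '/path/to/moodle';   // adjust to your install

$iter = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($moodleroot));

foreach ($iter as $file) {
    if (substr($file->getFilename(), -4) !== '.css') {
        continue;
    }
    $css = file_get_contents($file->getPathname());
    $css = preg_replace('!/\*.*?\*/!s', '', $css);  // strip /* comments */
    $css = preg_replace('/\s+/', ' ', $css);        // collapse whitespace
    $css = str_replace(array(' {', '{ ', '; ', ': ', ', ', ';}'),
                       array('{',  '{',  ';',  ':',  ',',  '}'), $css);
    file_put_contents($file->getPathname(), trim($css));
}
```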
With "performance info" on, I loaded five pages three times each and recorded the Document Size from the FF Web Developer plug-in, the tick count, and the average load. I cleared the cache before each first load to make sure I was getting new data. Here are the results:
| Setup | Avg Page Size | % reduced | Avg Ticks | % reduced | Avg Load | % reduced |
|---|---|---|---|---|---|---|
| Raw (no zlib or minification) | 358.4 KB | - | 80 | - | 1.67 | - |
| zlib (only .php extension) | 165.0 KB | 54% | 70 | 12% | 1.41 | 16% |
| zlib + minification | 66.2 KB | 82% | 74 | 7% | 1.32 | 21% |
I expected the big reduction in page size, but I was really surprised that zlib, and zlib plus minification, also beat the default setup on tick count and server load; I had assumed the compression would cost us some CPU rather than save it.
Another thing I have done is to create archive MySQL tables (using the ARCHIVE storage engine instead of MyISAM). I have a weekly script that dumps all of the records from mdl_log and the various grade history tables into the archive tables, and I set Moodle to keep only six months of records in its live tables. This way the active tables stay fairly lean, but I still have the records if I ever need them. We've only used Moodle for just over six months, but to give you an idea of the space savings: our mdl_log table has 387,820 rows taking up 35 MB, while archive_log holds 400,711 rows in just 7.8 MB (about 1/5 of the original data size).
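In case it's useful, this is the shape of the weekly job. It's a sketch rather than my exact script: the credentials and the archive_log name are placeholders, the single-table example assumes a default mdl_ prefix, and the grade history tables get the same treatment.

```php
<?php
// archive_log.php -- weekly cron sketch. Moodle's own log-lifetime
// setting does the pruning of mdl_log; this just copies new rows out
// to the compressed ARCHIVE table before they expire.
$db = new mysqli('localhost', 'moodleuser', 'secret', 'moodle');

// One-time setup: CREATE TABLE ... SELECT copies the columns but not
// the indexes, which suits ARCHIVE (it allows no secondary indexes).
$db->query("CREATE TABLE IF NOT EXISTS archive_log ENGINE=ARCHIVE
            SELECT * FROM mdl_log WHERE 1 = 0");

// Find the newest row already archived. This is a full scan because
// ARCHIVE has no indexes, but that's acceptable once a week.
$res = $db->query("SELECT COALESCE(MAX(id), 0) FROM archive_log");
list($maxid) = $res->fetch_row();

// Append everything newer. mdl_log.id is AUTO_INCREMENT, so id order
// matches insertion order.
$db->query("INSERT INTO archive_log SELECT * FROM mdl_log WHERE id > " . (int)$maxid);

$db->close();
```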
The same script that dumps our logs and grade histories into the archive tables also checks every table for excessive unreclaimed space (MySQL's Data_free figure) and runs OPTIMIZE TABLE on any table past a certain threshold.
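The Data_free check is just a loop over SHOW TABLE STATUS, something like the sketch below (the 5 MB threshold is an arbitrary stand-in, not the value I actually use):

```php
<?php
// optimize_tables.php -- reclaim space when MySQL reports too much
// of it sitting unused in a table.
$db = new mysqli('localhost', 'moodleuser', 'secret', 'moodle');
$threshold = 5 * 1024 * 1024;   // arbitrary 5 MB cutoff

$res = $db->query("SHOW TABLE STATUS");
while ($row = $res->fetch_assoc()) {
    // OPTIMIZE TABLE write-locks the table while it runs, so this
    // belongs in an off-hours cron job.
    if ($row['Engine'] === 'MyISAM' && $row['Data_free'] > $threshold) {
        $db->query("OPTIMIZE TABLE `" . $row['Name'] . "`");
    }
}
$db->close();
```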
I also tried something, purely as a test, that didn't save nearly as much space as I thought it would. In my Moodle install nearly every ID field is a BIGINT(10), and there are a number of places where large VARCHAR fields hold mostly repetitive data. I made big changes to the Moodle install code so that each data type was scaled appropriately for its purpose, used ENUMs where possible, and changed all VARCHAR fields to CHAR. This gave me mostly smaller tables, smaller indexes, and more fixed-row-length tables, so I expected a nice boost in speed. I didn't go all-out with the test installation (perhaps it would make a bigger difference past a certain scale), but with a few dummy courses dumped in and about 2,000 user accounts the speed difference seemed non-existent.
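To make that concrete, the changes were along these lines. This is illustrative only, not a tested recipe: the column choices and the ENUM value list below are stand-ins, and anything changed in the install code has to be re-applied on every upgrade.

```php
<?php
// schema_shrink.php -- the flavour of the (experimental!) changes.
$db = new mysqli('localhost', 'moodleuser', 'secret', 'moodle');

$changes = array(
    // No table here will come near 2^32 rows, so BIGINT(10) ids can
    // shrink to unsigned INT: half the bytes per id and per index entry.
    "ALTER TABLE mdl_log MODIFY userid INT UNSIGNED NOT NULL DEFAULT 0",
    // Fixed-width CHAR instead of VARCHAR: once a MyISAM table has no
    // variable-length columns it can use the faster fixed row format.
    "ALTER TABLE mdl_log MODIFY ip CHAR(15) NOT NULL DEFAULT ''",
    // Low-cardinality strings stored as one-byte ENUMs (this value
    // list is a stand-in, not the real set of log actions).
    "ALTER TABLE mdl_log MODIFY action ENUM('view','add','update','delete') NOT NULL",
);

foreach ($changes as $sql) {
    $db->query($sql);
}
$db->close();
```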
I'm curious what other non-standard optimization methods users have employed and how they've worked.