Dear all,
I'm aware of "4.6. Parameters for limiting the resources used by the requested tasks" and "Re: possible to have more than 1GB in MAXMEMORY" (vpl-jail-system.conf). Still, after the observations made in "Collecting the VPL usage and displaying in a running graph", I would like to understand these parameters better, so we can get the last bit out of our jail server.
Let us say, the jail server config has:
#MAXTIME set the maximum time for a request in seconds
MAXTIME=1800
# Maximum memory size in bytes
MAXMEMORY=2000 Mb
# Maximum number of process
MAXPROCESSES=500
#SHMSIZE The size of the "/dev/shm" directory. The default value is 30% of the system memory
#This option is applicable if using tmpfs file system for the "/dev/shm" directory
SHMSIZE=30%
and the VPL Moodle plug-in is set to:
Maximum execution time mod_vpl | maxexetime = 16 minutes
Maximum memory used mod_vpl | maxexememory = 1 GB
Maximum number of processes mod_vpl | maxexeprocesses = 600
Maximum default execution time mod_vpl | defaultexetime = 4 minutes
Maximum default memory used mod_vpl | defaultexememory = 128 MB
Maximum default number of processes mod_vpl | defaultexeprocesses = 200
and the individual VPL activity is set to:
Maximum execution time = Default (4 minutes)
Maximum memory used = Default (128 MB)
Maximum number of processes = empty, Default (200)
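My current mental model, which I'd like confirmed or corrected, is that a job's effective limit is the minimum of the activity setting (or, when left empty, the plugin default) and the jail server's cap. A toy sketch of that assumption follows; the function and dictionary names are mine, not anything from VPL:

```python
# Toy model of how I *assume* the limits combine; please correct me if wrong.
# Effective per-job limit = min(activity value or mod_vpl default, jail cap).
# (The mod_vpl maxexe* settings only cap what an activity form may request,
# so they don't appear here once the activity is saved.)

MB = 1024 * 1024

jail = {"time": 1800, "memory": 2000 * MB, "processes": 500}       # vpl-jail-system.conf
defaults = {"time": 4 * 60, "memory": 128 * MB, "processes": 200}  # mod_vpl default*

def effective(activity, resource):
    """Resolve the limit a single job would actually run under."""
    requested = activity.get(resource) or defaults[resource]
    return min(requested, jail[resource])

activity = {}  # everything left at "Default" in the activity settings
for r in ("time", "memory", "processes"):
    print(r, effective(activity, r))
```

Under this model, our activities above would run with 240 s, 128 MB and 200 processes per job, far below the jail caps.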
Scenario 1. An idling VPL server with plenty of CPU and 20 GB of free RAM is hit with a burst of 100 jobs in the very same second, each needing 100 MB.
Result: assuming the CPUs can handle it, the server will start 100 processes of 100 MB each, 10 GB in total.
- If the server can process them in less than 4 minutes, all ends well.
- What happens when they need 10 minutes? Since maxexetime is 16 minutes, will they finish without incident?
- What happens when they need 20 minutes or more? Our maxexetime is 16 minutes. Will they be killed?
Scenario 2. An idling VPL server with plenty of CPU and 20 GB of free RAM is hit with a burst of 400 jobs in the very same second, each needing 100 MB.
Result: assuming the CPUs can handle it, the server will try to start 400 processes of 100 MB each, 40 GB in total. Since only 20 GB is available, only half of the processes can be started.
- Will the 200 that were started run to completion, provided they finish within 16 minutes?
- If they need longer, will they all be killed?
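Just to make the arithmetic behind the two scenarios explicit, here is the back-of-the-envelope calculation I used (decimal units, which is close enough for this estimate):

```python
# Back-of-the-envelope check: does a burst of N jobs fit in free RAM?
MB, GB = 10**6, 10**9  # decimal units for a rough estimate

free_ram = 20 * GB
job_size = 100 * MB

for jobs in (100, 400):
    demand = jobs * job_size
    fits = demand <= free_ram
    print(f"{jobs} jobs -> {demand / GB:.0f} GB demanded, fits in free RAM: {fits}")
```

So scenario 1 demands 10 GB and fits comfortably, while scenario 2 demands 40 GB, twice the free RAM.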
Having started this topic, I see that it is getting rather theoretical. If so, you don't have to torture yourselves. As mentioned at the beginning, the idea is to tune these parameters to get more out of the server; we have hit a ceiling as it is. We plan to change them, but need some understanding of how they react first. I was surprised by how little RAM our VPL processes need, but also by the peak CPU bursts they create.
