Scheduling the Middle Ground

As more and more financial services firms make the jump to HPC, some thought needs to go into getting the most out of the cluster resources employed. Financial applications are typically embarrassingly parallel, made up of jobs that each account for only a few seconds of computing time. HPC scheduling engines, because they operate by polling the resource managers, are built around longer-running jobs that take hours or days to complete. Nitro solves this problem by giving the scheduling engine one or several tasks to schedule instead of thousands or hundreds of thousands of short-lived jobs. But what do you do if your jobs are somewhere in the middle, between 5 seconds and 2 minutes in length?

Here’s the problem: Let’s say you have six nodes that you’re going to schedule jobs on, but the jobs are short, say 30 seconds long. The schedule iteration takes 10 seconds, and the resource manager polling interval is set to 45 seconds. You can see a representation of this scenario below.

[Figure: Scheduling small jobs without Nitro]

As you can see in the graphic, if a job completes after the resource manager polling begins, the resource manager will report that the node is busy and that the job has not completed yet. Normally, with lots of very large jobs, you will get one or a few nodes that are idle between scheduling iterations. With shorter jobs, however, a large portion of the cluster may sit idle waiting for work, because scheduling many thousands of jobs takes a long time. Compare this to short jobs running on Nitro, as shown below:

[Figure: Scheduling small jobs with Nitro]

In this case all Moab has to do is schedule the one Nitro job, and Nitro will continue running until all of the jobs in the job set are completed; none of the nodes will be waiting around for a job to work on. While Nitro specializes in running short jobs and is tuned for extremely high throughput, there is no reason Nitro couldn’t be employed to manage a set of jobs each taking up to one or two minutes in length. To get the best use of Nitro, the jobs should be fairly homogeneous: of a reasonably similar duration and using the same number of resources (typically a single processor).
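To see why those idle gaps add up, here is a rough back-of-the-envelope sketch in Python using the numbers from the scenario above. It is only an illustrative approximation: the half-poll-interval wait and the zero-gap pilot case are simplifying assumptions for the example, not a description of Moab’s or Nitro’s actual internals.

```python
# Back-of-the-envelope model of node idle time for short jobs, using the
# numbers from the example above (30 s jobs, 10 s scheduling iteration,
# 45 s resource-manager poll interval). Illustrative only.

JOB_LENGTH = 30.0        # seconds of real work per job
SCHED_ITERATION = 10.0   # seconds per scheduling iteration
POLL_INTERVAL = 45.0     # seconds between resource-manager polls

def idle_per_job_polled() -> float:
    """Average idle seconds a node accumulates per job when every job is
    scheduled individually: after a job finishes, the node waits (on average)
    half a poll interval before the resource manager notices it is free,
    then waits through the next scheduling iteration for new work."""
    return POLL_INTERVAL / 2 + SCHED_ITERATION

def utilization(work: float, idle: float) -> float:
    """Fraction of wall-clock time a node spends doing useful work."""
    return work / (work + idle)

if __name__ == "__main__":
    idle = idle_per_job_polled()
    print(f"Per-job scheduling: ~{idle:.0f} s idle per {JOB_LENGTH:.0f} s job "
          f"-> ~{utilization(JOB_LENGTH, idle):.0%} utilization per node")
    # With a Nitro-style pilot job, the scheduler places one job per node and
    # the worker pulls tasks directly, so the per-task idle gap is near zero.
    print(f"Pilot-style (Nitro): ~{utilization(JOB_LENGTH, 0):.0%} utilization per node")
```

Under these assumptions a node doing 30-second jobs spends roughly as much time waiting as working, which is exactly the middle-ground problem described above.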

If you have jobs that lie in the middle ground between short and long, you can use one of the following strategies to make the most of your resources. First, if you have jobs that can be grouped together and they fit Nitro’s requirements, use Nitro; it will give you the best throughput with no lag between jobs. Second, if you have jobs that cannot be grouped but can be chained one after the other, chain the scripts together into a single job. Third, if you can’t chain jobs together either, set Moab’s RMPollInterval to a small number (perhaps 10 seconds) so that scheduling iterations happen close together. Use Moab’s new “UIMANAGEMENTPOLICY” setting so that Moab will respond to client commands during scheduling iterations. And keep an eye on the average job time and the average schedule iteration time: iterations lasting longer than the average job length are a sign that you need to shorten the poll interval or use one of the other two strategies, as in the sketch below.
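As a small illustration of that “keep an eye on it” step, the sketch below simply compares average job length with average scheduling iteration time. The sample values and the plain mean-versus-mean check are assumptions made for the example; in a real setup you would feed it numbers taken from your own scheduler statistics.

```python
# Illustrative heuristic for the third strategy: flag the case where
# scheduling iterations start to last longer than the average job.
# The sample data and threshold are assumptions for the example.

from statistics import mean

def needs_attention(job_runtimes: list[float],
                    iteration_times: list[float]) -> bool:
    """Return True when the average scheduling iteration is longer than the
    average job, the warning sign described above."""
    return mean(iteration_times) > mean(job_runtimes)

if __name__ == "__main__":
    recent_jobs = [28.0, 31.5, 30.2, 29.8]        # seconds per job (example data)
    recent_iterations = [12.0, 41.0, 38.5, 40.0]  # seconds per iteration (example data)
    if needs_attention(recent_jobs, recent_iterations):
        print("Iterations now outlast the average job: shorten RMPollInterval, "
              "or move these jobs to Nitro or chained scripts.")
```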
