Understanding Moab Scheduling: Part II

Why is the system slow?

In Understanding Moab Scheduling: Part I, I discussed the Moab scheduling iteration and the different stages it passes through. Today, we are going to answer the question above by learning about the command that allows an administrator to monitor these different stages. I will also cover some of the common issues seen with the scheduling iteration and how to identify and troubleshoot them.

The “magic” command:   mdiag -S -v -v

If you run mdiag -S -v -v from the command line as an administrator, it will print very verbose diagnostic information for the scheduler. You will see something similar to the following:

Moab Workload Manager Version '7.2.6' License Information:
  Current License:  Max Procs   = 13200
  Current License:  Valid Until - Tue Sep 23 08:00:00 2014

Moab Server 'moab' running on headnode:42559  (Mode: NORMAL)
  Version: 7.2.6  (revision 12, changeset e6e0bd4a2c0d82147fab5b8b6d07b2f03d9383a0)
  Build date: Thu Nov  7 10:58:52 MST 2013
  Build Info:  THREADSAFE,64,MCOMMTHREAD
  Process Info:  pid:10917 uid:0 euid:0 gid:0 egid:0
  Scheduler FeatureGRes: GPUS:off
  Time(sec) Sched: 19.751  RMLoad: 2.137  RMProcess: 3.015  RMAction: 0.028
            Triggers: 0.201  User: 0.021  Idle: 60.956 Total: 86.109
  Load(5m)  Sched: 32.57%  RMLoad: 3.46%  RMProcess: 4.68%  RMAction: 0.02%
            Triggers: 0.21%  User: 0.01%  Idle: 59.04%
  Load(24h) Sched: 32.68%  RMLoad: 3.66%  RMProcess: 4.55%  RMAction: 0.19%
            Triggers: 0.24%  User: 0.25%  Idle: 58.43%
  Activity[JobStart]         Count=4   Duration=4s
  Activity[RMResourceLoad]   Count=3   Duration=4s
  Activity[RMWorkloadLoad]   Count=1   Duration=3s
  Activity[Schedule]         Count=5   Duration=15s
  Activity[UIProcess]        Count=5   Duration=0s
  Activity[UIWait]           Count=1   Duration=0s
  PollInterval: 00:01:00  (Avg Sched Interval: 00:00:52  Iterations: 13286)
  JobStarts: 44722  (Avg Starts/Iteration: 3.37  Last Iteration: 4)
  Hist. Max Number Waiting Client Connections: 4

Let’s look at some specific parts of this output.

Licensing Information

Moab Workload Manager Version '7.2.6' License Information:
  Current License:  Max Procs   = 13200
  Current License:  Valid Until - Tue Sep 23 08:00:00 2014

Information about how Moab is currently licensed for your site is contained in the top part of the output.

Moab Server 'moab' running on headnode:42559  (Mode: NORMAL)

The host name of the server on which Moab is running (headnode) and the port to which it is bound (42559) are displayed here.

  Current License:  Max Procs   = 13200

Here is the total number of processors for which the system is licensed. This number should be as large as or larger than the number of physical processor cores in your cluster. Keep in mind that if hyperthreading is enabled, the processor count reported to Moab doubles, so your license needs to cover the doubled number.

  Current License:  Valid Until - Tue Sep 23 08:00:00 2014

Finally, we have the license's expiration date and time. Make sure you have renewed and installed a new license before this date and time. Note that Moab will look in both /opt/moab/ and /opt/moab/etc/ for its license file; make sure you haven't installed it in both places, as this can cause confusion about which one Moab is actually using.
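
A quick way to check for this (a sketch that assumes a default /opt/moab installation and the usual moab.lic file name; adjust the paths for your site):

  # List any license files in both locations Moab searches; if both
  # exist, remove or rename the one you do not intend to use.
  ls -l /opt/moab/moab.lic /opt/moab/etc/moab.lic 2>/dev/null

  # Then confirm what Moab itself reports:
  mdiag -S -v -v | grep -i license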

Usage Information

[Figure: Moab Scheduling Cycle]

As was discussed before, Moab passes through several different phases as part of the scheduling iteration. The high-level phases are:

  1. Update Information from Resource Managers
  2. Manage Workload
  3. Refresh Reservation
  4. Update Statistics
  5. Handle User Requests

Moab keeps track of the amount of time it spends in each of these phases, and that information is reported in the output of mdiag -S. It is important to understand how it is displayed: for display purposes, some of the high-level phases are split into multiple sub-phases. Here is the relevant section of the output:

  Time(sec) Sched: 19.751  RMLoad: 2.137  RMProcess: 3.015  RMAction: 0.028
            Triggers: 0.201  User: 0.021  Idle: 60.956 Total: 86.109
  Load(5m)  Sched: 32.57%  RMLoad: 3.46%  RMProcess: 4.68%  RMAction: 0.02%
            Triggers: 0.21%  User: 0.01%  Idle: 59.04%
  Load(24h) Sched: 32.68%  RMLoad: 3.66%  RMProcess: 4.55%  RMAction: 0.19%
            Triggers: 0.24%  User: 0.25%  Idle: 58.43%

Here we see this information is presented in three different sections:

  • Time(sec) – Average amount of time spent in each sub-phase, measured in seconds
  • Load(5m) – Percentage of time over the last 5 minutes associated with each sub-phase
  • Load(24h) – Percentage of time over the last 24 hours associated with each sub-phase

As a quick sanity check, note that the Time(sec) values sum to the reported Total: 19.751 + 2.137 + 3.015 + 0.028 + 0.201 + 0.021 + 60.956 = 86.109 seconds in the example above.

As we are mainly concerned with troubleshooting in this particular article, I'm going to focus on the Load(5m) metrics, as I've found they are the most useful for this purpose. When debugging a problem, the Time(sec) metric makes it difficult (for me) to get a good feel for exactly what is going on, and if you've noticed the problem soon enough, the Load(24h) metric may not have been sufficiently affected yet to be useful.
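
When chasing an active slowdown, it also helps to sample these numbers over time rather than eyeballing a single snapshot. A minimal sketch (the grep pattern matches the output shown above; adjust the interval to taste):

  # Print the 5-minute load breakdown once a minute; -A1 picks up the
  # continuation line that holds Triggers, User, and Idle.
  while true; do
    date
    mdiag -S -v -v | grep -A1 'Load(5m)'
    sleep 60
  done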

Let's go through each sub-phase individually. Note that they are not necessarily displayed in the same order as listed above.

The Sched sub-phase relates to the amount of time Moab spends doing the actual scheduling, i.e., determining which jobs are going to run where. Spikes here are uncommon, though they can occur when overall system policies are changed or there is a major change in the workload patterns coming into the system.

The RMLoad sub-phase is the amount of time Moab is spending reading information from the resource managers. If Moab is having a problem contacting the resource managers (a blocking operation), the value for this sub-phase will spike. If that’s the case, the troubleshooting investigation should look at what is happening with the resource managers and the communication channels between them and Moab.

The other instance where this number may spike is if there is a significant increase in the number of resources being reported by a resource manager. This is very, very uncommon in an established system.

The RMProcess sub-phase records the amount of time it takes Moab to move the information received from its resource managers into its internal data structures. It is very uncommon for this metric to spike, as that would only happen if the resource managers started reporting a significantly larger number of resources.

The RMAction sub-phase is the amount of time Moab spends sending its scheduling decisions to the resource managers (e.g., “Start Job X”). Spikes in this metric are generally caused by some issue with either the resource managers or the communication channel between them and Moab. A spike seen here will usually correlate with a spike in RMLoad.

While triggers are handled by a separate scheduling cycle, their use does have some overhead, which is recorded here as the Triggers sub-phase. Most HPC systems do not have a significant number of triggers, so this number remains fairly small and steady. Only in specialized cases where a large number of triggers (i.e., hundreds or thousands) are constantly being added and deleted is this metric likely to spike. If it does, trigger debugging is the next step; see the sketch below.
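
A rough way to size the problem using Moab's trigger diagnostics (the one-line-per-trigger count is an approximation of the output format):

  # Per-trigger detail:
  mdiag -T -v

  # A crude count of how many triggers Moab is tracking; watch whether
  # it keeps growing across iterations:
  mdiag -T | wc -l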

The User sub-phase is a very common source of problems, as it represents the amount of time Moab spends answering end-user requests, including those coming through web portals and Moab Web Services (MWS). Spikes are often caused by user scripts or jobs constantly issuing commands to Moab, such as repeated checkjob requests to poll job status.

If this metric is found to have spiked, the logs can be consulted to determine the source of the unusually high number of requests.
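
For example, a rough way to spot chatty clients is to tally how often the common client commands appear in the Moab log (the log location and message wording vary by site and version, so treat the pattern purely as a starting point):

  # Count mentions of the usual client commands in the current log:
  grep -ohE 'checkjob|showq|mdiag|showres|showstart' /opt/moab/log/moab.log \
    | sort | uniq -c | sort -rn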

The final sub-phase is Idle, which signifies the amount of time Moab is simply waiting around for user requests. Some might think we want this number to be 0, but in fact that would indicate a problem. When Moab can't fulfill everything it needs to do in a scheduling iteration, it first sacrifices Idle time and then sacrifices User time if that wasn't sufficient. You always want the Idle metric to be positive and not particularly close to zero.
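
A small sketch for keeping an eye on this (the 5% threshold is an arbitrary example, not an official value):

  # Extract the 5-minute Idle percentage and warn when it gets close
  # to zero; -A1 grabs the continuation line that carries Idle.
  idle=$(mdiag -S -v -v | grep -A1 'Load(5m)' \
           | grep -oE 'Idle: [0-9.]+' | cut -d' ' -f2)
  echo "Idle(5m) = ${idle}%"
  awk -v i="$idle" 'BEGIN { exit (i >= 5.0) }' \
    && echo "WARNING: Idle is nearly exhausted; Moab may be falling behind"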

Scheduling Iteration Tuning

Just a little farther down in the mdiag -S -v -v output, one will find the following section. Right off, there are two important numbers here: PollInterval and Avg Sched Interval.

  PollInterval: 00:01:00  (Avg Sched Interval: 00:00:52  Iterations: 13286)
  JobStarts: 44722  (Avg Starts/Iteration: 3.37  Last Iteration: 4)
  Hist. Max Number Waiting Client Connections: 4

  Time spent processing triggers last iteration: 201 (ms)
  Num. triggers evaluated last iteration: 245599
  Num. triggers started last iteration: 1

The first number, PollInterval, is the configured length of the scheduling iteration (set with the RMPOLLINTERVAL parameter). The second, Avg Sched Interval, is the actual/effective average time of each iteration. It is very common for these two numbers not to match. Let's look at the two possibilities:

  • Actual is less than Configured – This is generally not a problem. There are a number of different things that may cause a scheduling iteration to start early. As long as the other sub-phase metrics are good, there isn’t a problem.
  • Actual is more than Configured – This may be a problem. The first several scheduling iterations after Moab starts will generally be extraordinarily long, as Moab is doing additional start-up activities. After these are completed, one should see the average move back down toward the configured value. On stable, long-running systems, we really shouldn't see the average exceed the configured value; when it does, it will usually be accompanied by the Idle metric being at or near zero. If this occurs, one needs to determine which of the other sub-phases is the problem. Alternatively, in some cases the configured scheduling iteration time simply needs to be increased, as there just plain isn't enough time for Moab to complete all of its tasks in the allotted window (see the sketch after this list).
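
If you conclude the interval is too short, the change is a single parameter in moab.cfg (the two-minute value below is purely illustrative; pick one based on your own measurements and restart Moab afterward):

  # moab.cfg
  # Lengthen the scheduling iteration from the 1-minute setting shown
  # in the example output above:
  RMPOLLINTERVAL  00:02:00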

Correlating all of this information will provide a good starting point for understanding what is going on when things appear to be running slowly from an end-user command point of view.

Steps to Take Today

There is one final step that is very important. In order to know when one of the sub-phase metrics is out of whack, one needs to know the stable baseline for each of them.

ACTION: Run mdiag -S -v -v today to get your baseline sub-phase metrics. Write them down and keep them in a safe place.

Every system is unique, with different numbers and types of resources and different workload patterns, so your numbers will be particular to your system. Then, if a slowdown is noticed, you'll be able to run this command again and compare against what is normal for your system.
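
One low-effort way to keep such a baseline (the paths here are only a suggestion) is to save timestamped snapshots, perhaps from a daily cron job:

  # Save a dated snapshot of the scheduler diagnostics for later
  # comparison:
  mkdir -p /var/log/moab-baselines
  mdiag -S -v -v > "/var/log/moab-baselines/mdiag-S-$(date +%F-%H%M).txt"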

Now you know, and knowing is half the battle.
~ G.I. Joe
