TORQUE provides several unique advantages in the world of open source resource management. It integrates closely with both a free scheduler, Maui, and a world-class commercial scheduler, Moab. The TORQUE community is broad and active, spanning multiple industries that use high performance computing. Finally, TORQUE is backed by a tremendous amount of industry experience.
Resource managers provide some utility on their own, but their utility is maximized when paired with a scheduler. TORQUE has the unique benefit of integrating well with both a free and a commercial scheduler. Together with Maui, TORQUE satisfies the needs of smaller installations; between TORQUE and its ancestors, this combination has been a reliable solution for about 20 years. For larger or more demanding installations, TORQUE and Moab provide a powerful solution. If you are trying to get the best Moab has to offer, TORQUE is the clear choice as a resource manager: all of Moab's features are developed and tested with TORQUE, so if you want to get the most out of Moab's vast feature set, you want to go with TORQUE.
In previous posts, I have explained how broad and diverse our community is. Ultimately, that matters because TORQUE is actively used on all kinds of systems. The feature set TORQUE provides isn't built for a narrow set of use cases, and it isn't developed to run on only a few machines. The diversity of viewpoints and use cases gives us a clear vision for the future and constantly verifies that we're moving in the right direction.
Finally, TORQUE has the advantage of being backed by a tremendous amount of HPC experience. We have people who know the industry backwards and forwards, giving us the conviction and confidence to pursue things we know will be needed in the future. A prime example is making pbs_server multi-threaded in the TORQUE 4 series. When we announced this initiative, we could tell it would be very difficult to achieve, and many people speculated that it couldn't be done; however, we knew it was the only way to make pbs_server viable for newer systems. Our experience gave us confidence that we could see where the industry was headed, and knowing what had to be done helped us stick to it despite the challenges. Today, pbs_server runs on machines it simply could not handle without being multi-threaded. We will continue to lean on our experience, community, and vision for scheduling to guide us forward.