The Dawn of Parallel Moab

One of the great ironies of the HPC industry is that the scheduling technologies that keep massively parallel supercomputers busy are, themselves, mostly serial. Moab and its competitors have their roots in theoretical work first productized in the 1980s and 1990s. At that time, computer science applied parallelism mainly to enormous matrix-math problems, not to multithreading the daemons that managed that computation.

As we mentioned previously, a few months back Adaptive Computing launched a major initiative to overhaul the foundations of Moab and TORQUE, bringing them to full, mature use of modern hardware. This effort has already borne a lot of fruit; the scale and performance of scheduling are improving dramatically. Come to the Moab or TORQUE Ascent progress report sessions at MoabCon for more details.

Today I’m pleased to announce that we’ve just completed another major initiative in this process: Moab is now (in our unreleased builds) truly parallel at its heart. Significant portions of the core logic have been rewritten or refactored to make this possible. For example, last week I completed some major improvements to const correctness that allow the compiler to tell us where concurrency is safe. Dozens of other tweaks have also been committed.

Parallelism doesn’t just produce symmetrical designs; it means that subsets of Moab’s scheduling computations can run independently and concurrently, all feeding into a final, optimized answer. (Image credit: aloshbennett, Flickr.)

Short-Term Benefits

This has two immediate and delightful consequences for customers:

  1. Moab will run faster.
  2. Moab will scale up effectively, so you can run it on a beefier machine to improve its performance.

How much faster, exactly, is the newly parallelized Moab? We are still quantifying that, and since this refactor has made many future enhancements feasible, we expect to push well beyond these early numbers. But even at this early stage, we have already seen improvements of more than 300%.

Ramifications for Big Workflow

Besides being excited about the performance and scale improvements for familiar use cases, I’m pleased at what these changes enable in the context of Adaptive’s overall push for Big Workflow. Tomorrow’s computing problems will require a scheduler that can solve problems far more complex than the ones we’re used to. We’ll need to not just schedule nodes and computation, but data movement, transformation, replication, and the workflows that turn raw information into insight with deep business value.

The dawn of a truly parallel Moab is a major step in that direction.

Come join us at MoabCon 2014 to learn more about our progress, our Big Workflow vision, and the exciting features that are slated for our Kilby release later this spring.
