Tag: Ascent initiative

Returns From Lincoln Ascent

Some of the numbers have started to come in from work done for Lincoln Ascent, and things are looking promising so far. The work wrapping up right now focuses on speeding up Moab’s scheduling iteration and reducing the network overhead between TORQUE and Moab. One of the excesses in network communication between […]

Announcing the Beta Release of Moab 8.1

This entry is part of the 7-part series Adaptive Computing at SC14

We’re proud to announce the beta release of Moab HPC Suite – Enterprise Edition (Moab 8.1) at the SC14 conference in New Orleans. The Adaptive Computing team is providing sneak peek demos of Moab 8.1, Viewpoint admin portal, elastic computing functionality and advancements […]

Game Changer for HPC Scale and Performance

The exascale wave in today’s HPC market is creating an inflection point, where familiar solutions are simply inadequate. Modern supercomputers have so many internal network interconnects and coordinate so many calculations at such a rate that traditional scheduling cannot keep up. Jobs sit idle when they should be running; policy constraints are lost in the […]

Tuning The Engine: Moab Speed Improvements

Did you ever ride one of those single-piston two-stroke motorcycles? We used to call ours “Thumper”: it had a lot of torque and did a great job getting you up a hill, but it wasn’t necessarily the fastest from zero to 60 mph and wasn’t very pleasant on the highway. Modern sport motorcycles have four or six […]

The Dawn of Parallel Moab

One of the great ironies of the HPC industry is that the scheduling technologies that keep massively parallel supercomputers busy are, themselves, mostly serial. Moab and its competitors have their roots in theoretical work first productized in the 1980s and 1990s. At that time, computer science mainly used parallelism for enormous matrix math problems—not for […]

Put Some Nitro In Your HPC Engine — Announcing MTM “Early Availability” Preview

All of us need to get more work done, faster. This is certainly true in traditional HPC, and it’s also imperative in map/reduce clusters, render clusters, and clouds. The dawn of exascale computing and the juggernaut of Big Data both demand that we get comfortable with millions of jobs in a queue, and thousands of […]