
Advanced Workflows with the Nitro API

Nitro includes an application programming interface (API) that lets users create interactive clients (clients that submit tasks based on an external stimulus) or clients that execute a complex workflow, all while using a single Moab job. If your workflow contains thousands of tasks, you’ll avoid the scheduling overhead of running […]
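The excerpt is cut off there, but the "interactive client" idea can be sketched in a few lines: a loop that waits for an external stimulus (here, new files landing in a directory) and turns each one into a task. Everything named below, including submit_task(), the watch directory, and the process_sample command, is a hypothetical placeholder standing in for whatever the Nitro API actually provides.

```python
# Minimal sketch of an interactive client, under assumed names: it polls a
# directory for new input files (the external stimulus) and hands each one
# off as a task.  submit_task() is a placeholder, not the real Nitro API.
import time
from pathlib import Path

WATCH_DIR = Path("/data/incoming")   # hypothetical drop directory
POLL_SECONDS = 5

def submit_task(command: str) -> None:
    """Stand-in for whatever task-submission call the real client uses."""
    print(f"submitting task: {command}")

def watch_and_submit() -> None:
    seen = set()
    while True:
        for path in WATCH_DIR.glob("*.dat"):
            if path not in seen:
                seen.add(path)
                # Each new file becomes one short task inside the same job,
                # so the batch scheduler only ever sees a single submission.
                submit_task(f"process_sample {path}")
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch_and_submit()
```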

Nitro Use Cases

OK, so you’ve got a few of your cluster’s users who, if they’re allowed to run wild, will fill up your scheduler’s queue with thousands or millions of jobs. But you can’t let them do it, because putting a million (or even several thousand) jobs into the queue causes a lot of scheduling overhead and […]
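The post continues past the cut, but the pattern it motivates is easy to sketch: instead of one scheduler job per command, a single job reads a task list and fans the commands out to local worker processes, so the scheduler only ever handles one submission. The task-file name, its format, and the worker count below are illustrative assumptions, not Nitro's actual mechanics.

```python
# Sketch of the batching idea: one scheduler job consumes an entire task
# list instead of each command being its own job.  The file name, its
# one-command-per-line format, and the worker count are assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TASK_FILE = "tasks.txt"   # hypothetical: one shell command per line
WORKERS = 16              # sized to the cores of the allocated node

def run_task(command: str) -> int:
    """Run a single task and return its exit code."""
    return subprocess.call(command, shell=True)

def main() -> None:
    with open(TASK_FILE) as fh:
        commands = [line.strip() for line in fh if line.strip()]
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(run_task, commands))
    failed = sum(1 for code in results if code != 0)
    print(f"{len(results)} tasks run, {failed} failed")

if __name__ == "__main__":
    main()
```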

What’s Better Than Catching Bad Guys?

Before coming to Adaptive Computing a couple of years ago I worked as a cyber research engineer.  I had the opportunity to do some crazy things that helped to catch some seriously bad people.  As much as it looks like great fun taking down bad guys on TV shows, what is even more fun (at least […]

High Throughput Computing: Bigger isn’t always better

I recently had the opportunity to play with a bunch of hardware to test and optimize Nitro to blast as many tasks per second through as possible. What I found led to some interesting insights into selecting servers for HTC. I ran tests on five different Intel Core i7 systems. Here’s the configuration of the […]
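The configuration details are trimmed here, but the kind of comparison the post describes can be reproduced with a very small harness that simply times how many trivial tasks per second a machine can launch and reap. The sketch below is a generic stand-in, not the benchmark used in the post.

```python
# Generic tasks-per-second harness: repeatedly launch a no-op command and
# report throughput.  This is an illustrative stand-in, not the actual
# Nitro benchmark described in the post.
import subprocess
import time

TASK_COUNT = 1000
COMMAND = ["true"]   # a no-op task; a real test would run real work

def measure() -> float:
    start = time.perf_counter()
    for _ in range(TASK_COUNT):
        subprocess.run(COMMAND, check=True)
    return TASK_COUNT / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"{measure():.1f} tasks/second")
```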

Next Generation High Throughput Computing – Nitro Version 1.1

We’ve been busy this summer giving Nitro its first upgrade and doing a lot of performance testing on various systems. We’ve added some features and also found some significant performance improvements. Here are some highlights from our summer of development. Multi-core tasks: Nitro was originally conceived to launch single-processor tasks as quickly as possible. […]
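The multi-core discussion is cut off above, but the basic shift from single-processor to multi-core tasks can be illustrated by attaching a requested core count to each task and passing it to the launched program, for example through OMP_NUM_THREADS. The (command, cores) task format and the ./solver command below are assumptions made for the sketch, not Nitro's task syntax.

```python
# Sketch of launching tasks that each request several cores: the requested
# count is handed to the program via OMP_NUM_THREADS.  The (command, cores)
# tuples and the ./solver command are assumptions, not Nitro's task syntax.
import os
import subprocess

TASKS = [
    ("./solver input_a.dat", 4),   # hypothetical 4-core task
    ("./solver input_b.dat", 8),   # hypothetical 8-core task
]

def run_multicore_task(command: str, cores: int) -> int:
    env = dict(os.environ, OMP_NUM_THREADS=str(cores))
    return subprocess.call(command, shell=True, env=env)

if __name__ == "__main__":
    for command, cores in TASKS:
        code = run_multicore_task(command, cores)
        print(f"{command!r} on {cores} cores exited with {code}")
```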

FPGAs for High Throughput Computing

A few years ago I attended a week-long seminar on programming FPGAs. The training left me with a lot of questions, but I gained an appreciation for the power of this flexible little chip. What is an FPGA? OK, here’s the official definition from Xilinx (one of the two big manufacturers of FPGAs): […]

High Throughput Computing Toolset

A couple of years ago I visited the Arlington National Cemetery. It was a humbling experience to see so many graves of those who so bravely fought for and defended the freedoms that enabled a once oppressed people to found and build a great nation. I was impressed with the precision and respect on display […]

What Doesn’t Need to be Said When Moving Forward

Looming over the skiing village of Kleine Scheidegg on the northern edge of the Swiss Alps is an iconic mountain of brittle limestone covered with snow and ice and blasted by the wind of oft-occurring ferocious storms. The Eiger is a legendary climbing destination that has seen triumph and tragedy, heroism and heartbreak, daring feats […]

A New Age in Supercomputing: The Manufacturing Compute Co-Op

The National Digital Engineering and Manufacturing Consortium (NDEMC) has been doing some great work pioneering efforts to give small to medium-sized manufacturing enterprises (SMEs) access to sophisticated simulation and modeling programs. SMEs can hardly afford to build their own supercomputers, but giving these businesses access to computational tools has a large number of benefits. Because computational […]

Scheduling the Middle Ground

As more and more financial services firms make the jump to HPC, some consideration needs to be given to getting the most out of the cluster resources employed. Financial applications are typically embarrassingly parallel jobs that only account for a few seconds of computing time each. HPC scheduling engines typically rely on longer-running jobs […]
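The teaser stops there, but the mismatch is easy to quantify with back-of-the-envelope numbers. The per-job overhead and task runtime in the sketch below are assumed figures chosen only to illustrate the point, not measurements from the post.

```python
# Back-of-the-envelope comparison: submitting each short task as its own
# scheduler job versus packing all of them into one job.  The runtime and
# overhead figures are assumptions used only for illustration.
TASKS = 100_000
TASK_SECONDS = 5.0          # assumed runtime of one short financial task
JOB_OVERHEAD_SECONDS = 2.0  # assumed scheduling overhead per job

compute = TASKS * TASK_SECONDS
unbatched_overhead = TASKS * JOB_OVERHEAD_SECONDS
batched_overhead = JOB_OVERHEAD_SECONDS   # a single job carries all tasks

print(f"compute work           : {compute / 3600:.1f} hours")
print(f"overhead, one job/task : {unbatched_overhead / 3600:.1f} hours "
      f"({unbatched_overhead / (compute + unbatched_overhead):.0%} of total)")
print(f"overhead, single job   : {batched_overhead:.0f} seconds")
```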