It’s been a little over a month since Adaptive Computing announced our newest workload management solution, Big Workflow. Internally, we’ve been raving about this solution and its ability to solve big data challenges for a while, and now that the news is public, the industry is taking notice.
Take a look at the recent articles featuring Big Workflow and how the solution accelerates insights:
In this article, HPCwire takes a closer look at how Big Workflow aims to “balance, stabilize, and shatter siloed environments.” The enterprise no longer needs one solution for HPC, another for cloud, and yet another for big data—Big Workflow is primed for custom workflow models that incorporate all three.
Adaptive Computing Senior Architect Daniel Hardman said, “Most people in IT think about equilibrium—keep things humming; if things get broken, they get fixed. The problem is that big data is not friendly to that; it has an interesting relationship to storage in that it may not be convenient to think about those silo boundaries anymore.”
EnterpriseTech: Adaptive Computing Spans the DigitalGlobe
In this article, EnterpriseTech explains why customers such as DigitalGlobe need data management software to support homegrown tools. The article outlines the basics of DigitalGlobe’s systems and the management issues the company is trying to solve with the help of Adaptive Computing.
Just like the headline says, Adaptive Computing CEO Rob Clyde discusses the lessons that experienced HPC professionals can bring to a new generation of big data problems. The article gives a brief overview of the situation and how the Big Workflow solution fits in. In addition, Paul Miller of Cloud of Data hosts a podcast discussion with Rob Clyde.
Don’t miss the dialogue: