Tag: hadoop

What’s This “Big Workflow Mindset” Thing?

Adaptive Computing’s announcement about Big Workflow claims that we need a fundamentally new approach to computing problems at the nexus of Big Data, Cloud, and HPC. Why? IT has already made huge progress since the days of rack-and-stack datacenters. This is the age of software-defined everything. Hadoop’s picking up steam. HPC folks are exploring cloudbursting. So […]

Quiz: Do You Really Know Big Data?

As part of the HPC community, it’s not too surprising that Adaptive Computing has been boning up on big data. Trends are emerging where big data applications and HPC are starting to merge, which is one reason Adaptive announced a partnership with Intel to integrate Moab and TORQUE with their Hadoop distribution […]

SC13: HPC Evolving

As I mentioned last week, this afternoon I was able to speak to a wonderful audience at the SC13 Exhibitor Forum. This particular track at the conference is rather interesting: it’s essentially an opportunity for vendors to brag about what they’re doing. There’s the perfunctory nod to […]

Cloud, meet HPC. HPC, meet Cloud. Cloud and HPC, meet Big Data.

Like three strangers who meet and discover that they’re long-lost identical triplets, the “separate” domains of supercomputing, cloud computing, and big data mining have vast amounts in common, but seem not to know it yet. Cloud’s value proposition is that you can abstract away the low-level details of hardware and treat all your resources as […]