Creating a Better Infrastructure to Manage Big Data

At Supercomputing 2013, Adaptive Computing Principal Solutions Strategist Trev Harmon discussed where the HPC industry has come from and where it is going.

“While traditional HPC is at the core of supercomputing, recent advancements are allowing HPC centers to offer new and exciting compute services to their users and other customers,” said Harmon.

Adaptive Computing announced two such advancements, topology-based scheduling and HPC cloud, which together provide an infrastructure that lets HPC centers offer big data services alongside their traditional offerings.

Harmon provided more insight into how HPC is evolving, offering these predictions:

  • In the future, many HPC centers will evolve into technical computing centers, with the hardware and software inside the center dynamically repurposed to suit the needs of the workload being run.
  • Convergence into workflows will become more commonplace. When a calculation runs as an HPC job and then hands its results to a visualization system provided on the cluster, a basic workflow has been created. Going forward, these workflows will grow more complex, pulling in big data, cloud computing, and more (see the sketch after this list).
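
To make that concrete, here is a minimal sketch of such a two-step compute-then-visualize workflow, assuming a PBS/TORQUE-style batch system where qsub accepts afterok job dependencies. The script names are hypothetical illustrations, not part of Adaptive Computing's announcement:

    import subprocess

    def submit(script, depends_on=None):
        """Submit a job script with qsub, optionally holding it until another job finishes OK."""
        cmd = ["qsub"]
        if depends_on:
            # PBS/TORQUE dependency: start only after the prior job exits successfully
            cmd += ["-W", f"depend=afterok:{depends_on}"]
        cmd.append(script)
        # qsub prints the new job's ID on stdout
        return subprocess.check_output(cmd, text=True).strip()

    # Step 1: the compute job; Step 2: visualization of its output
    sim_id = submit("run_simulation.pbs")
    viz_id = submit("render_results.pbs", depends_on=sim_id)
    print(f"workflow submitted: {sim_id} -> {viz_id}")

Chaining jobs this way keeps the scheduler, rather than an external script, responsible for ordering, which is what allows a workflow to later grow to include big data or cloud stages.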

Watch more videos about Adaptive Computing announcements and speaking engagements on our YouTube channel.
