High Performance Computing is a well-established industry, yet we still see constant advancement in technology, speed and size. HPC remains extremely relevant and cutting edge, even though the industry at large is decades old. I wanted to take a look at some of the key buzzwords heard around the HPC industry right now, and at how they are changing and evolving.
- Big Data: For such a simple name, Big Data is more an ambiguous and complicated problem than a marketing term. It was also designated the top tech buzzword that “everyone uses but doesn’t quite understand.” The Global Language Monitor said, “Soon Human Knowledge will be doubling every second. ’Big’ does not begin to describe what’s coming at us.” So ‘big’ might be a bit of an understatement; maybe we can lobby to change it to Gargantuan Data (a marketeer can only dream). Many HPC industry experts agree that the biggest (pun intended) obstacle to reaching exascale is the storage and movement of Big Data, largely in the form of opportunity cost. That’s right: it’s not just a descriptive adjective, or a buzzword. It’s a Big problem.
- Green Computing or Energy-Efficient Computing: We’ve noticed a rise in the use of the term Energy-Efficient Computing, especially when it comes to HPC and the datacenter. A key factor in the future of large-scale HPC systems, energy efficiency is emerging as likely the second big obstacle to reaching exascale. The reason is that the cost of powering an exascale system would be dramatically higher than that of current petascale systems, and power isn’t getting any cheaper. As an industry, HPC will have to weigh the benefits of, or need for, exascale against the cost to house and power such systems. The good news is that many European sites are already thinking green because of their higher energy costs. And we’re seeing a stronger presence for Green Computing in the United States, with systems like NICS’ Beacon reaching the top of the Green500, a list that has picked up significant steam since its initial release in 2007.
- Exascale: Green Computing and Big Data precede this buzzword on the list deliberately because, as noted, both happen to be huge challenges to reaching exascale, even if indirect ones (e.g. the opportunity cost of focusing on Big Data). Neither should be a surprising pain point, since jumping from one quadrillion floating point operations per second (FLOPS) to one quintillion FLOPS is no insignificant task. We’ve only just started hearing this buzzword, but we’ll continue to hear more about it as exascale gets closer to becoming a reality. Intel predicts that we’ll start seeing exascale systems by 2018, and our very own founder David Jackson predicted the release some time between 2017 and 2020. While buzzwords like Big Data might fade over time, exascale won’t, at least not until the first system is rolled out. That being said, the way we measure performance on HPC systems could perhaps make exascale a moot point…
- Petaflop Race: Of all the buzzwords on this list, the petaflop race probably least qualifies to be here. So why give it credence? With NCSA Blue Waters’ decision to opt out of the Top500 by not submitting its Linpack benchmarks, we could start to see a significant shift in the way supercomputing performance is measured. Many estimated that Blue Waters would easily take the top spot in the Top500 (naysayers argue the team knew it wouldn’t, and therefore bowed out), but by not submitting, NCSA left that designation to Oak Ridge’s Titan. Blue Waters director Bill Kramer’s ideas for more aptly judging the performance of a supercomputer include measuring I/O and the use of real-world applications. Given the weight the project carries, we could see some changes in the way the petaflop race is quantified. That being said, the petaflop race will eventually evolve into the exaflop race once the first exascale systems are implemented.
- HPC Cloud: By my estimation, HPC Cloud could become the future of HPC. HPC Cloud relies heavily on virtualization, and many workloads cannot yet be fully virtualized. Looking ahead, moving workloads from bare metal to virtualized environments is only an eventuality, but not for everyone: research and government will probably hold out the longest, while commercial sectors and industries like bioinformatics might adopt earlier, since their workloads can be highly parallel without relying too heavily on I/O. We see more interest in public cloudbursting for HPC cloud, but there seems to be plenty of interest in private HPC cloud as well.
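To make the Green Computing point concrete: the Green500 list mentioned above ranks systems by energy efficiency, i.e. performance per watt (reported as MFLOPS per watt). Here is a minimal sketch of that calculation; the function name and the system figures are illustrative assumptions on my part, not measured data from any real machine:

```python
# Sketch of a Green500-style efficiency calculation: sustained
# performance divided by power draw. All numbers are illustrative
# assumptions, not measurements from a real system.

def efficiency_mflops_per_watt(rmax_gflops: float, power_kw: float) -> float:
    """Green500-style metric: MFLOPS delivered per watt consumed."""
    mflops = rmax_gflops * 1e3   # GFLOPS -> MFLOPS
    watts = power_kw * 1e3       # kW -> W
    return mflops / watts

# Hypothetical petascale-class system: 1 PFLOPS sustained at 2 MW.
eff = efficiency_mflops_per_watt(rmax_gflops=1_000_000, power_kw=2_000)
print(f"{eff:.0f} MFLOPS/W")  # 500 MFLOPS/W
```

Whichever exact metric wins out, the point stands: as systems grow, performance per watt, not raw performance, becomes the number that decides what gets built.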
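The scale jump behind the exascale discussion is easy to put in numbers: a quintillion FLOPS is a thousand times a quadrillion. The following back-of-the-envelope sketch shows why the power bill becomes an obstacle at that scale; the machine power draw and electricity price are illustrative assumptions, not projections for any actual system:

```python
# Back-of-the-envelope look at the petascale -> exascale jump and its
# power cost. All figures here are illustrative assumptions.

PETA = 1e15  # 1 petaFLOPS = one quadrillion operations per second
EXA = 1e18   # 1 exaFLOPS = one quintillion operations per second

print(EXA / PETA)  # 1000.0 -- exascale is a 1000x jump over petascale

def annual_power_cost(megawatts: float, dollars_per_mwh: float = 100.0) -> float:
    """Rough yearly electricity bill for a machine running continuously."""
    hours_per_year = 24 * 365
    return megawatts * hours_per_year * dollars_per_mwh

# Hypothetical: a 20 MW exascale machine at an assumed $100/MWh.
print(f"${annual_power_cost(20):,.0f} per year")  # $17,520,000 per year
```

Even under friendly assumptions, the electricity alone runs into the tens of millions of dollars per year, which is why efficiency, not just peak FLOPS, dominates the exascale conversation.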
Comment and share: Even though buzzwords can come and go, and even evolve, High Performance Computing is still proving its mettle as very relevant and cutting edge. What HPC buzzword would you add to our list? How do you see the future of HPC evolving?