It’s “Flexascale,” not just “Exascale”

If that title raises your eyebrows, have no fear—I believe the HPC industry is right to pursue extreme scale. Some computational problems will remain stubbornly intractable until we have tomorrow’s horsepower at our disposal.

However, as an outsider who recently jumped into the HPC space, I’ve come to the conclusion that, as an industry, we don’t spend enough time talking about other dimensions of scale. We want our horsepower! :-)

We never have the horsepower we’d like. But we find clever ways to succeed despite this limitation.

Raw horsepower, by itself, isn’t an adequate answer. Besides scaling up, we need to think seriously about scaling over, scaling out, and scaling down. Scaling needs to be a dynamic and ongoing characteristic of our systems, not just a hurdle that we cross once, when a system first runs Linpack. This is what I mean by the coined term “flexascale” (which, you’ll note, contains “exascale” inside it).

What do I mean by phrases like “scale over” or “scale down”?

To answer, let me walk through a thought experiment with you. Have you ever considered how much computational power is devoted to social networking on smartphones? We know that Facebook has roughly 1B users [1]. Between 50% and 80% of them access the site from smartphones [2]; let’s use the low end, 500M. The average mobile user checks FB 14 times per day [3], and the average smartphone may be capable of roughly 100 mflops [4] of compute power. If we guess that by the time those users reshare “Charlie bit my finger” and respond to their messages, each of those brief visits adds up to 5 minutes, then we get math that looks like this:

14 visits / day * 5 min / visit = 70 FB min / day / mobile user
70 FB min / day / mobile user * 60 sec / min = 4,200 FB sec / day / mobile user
4,200 FB sec / day / mobile user * 500M mobile users = 
    2,100,000M FB sec / day = 
    2.1 trillion FB sec / day
2.1T FB sec / day * 100M floating-point ops / sec = 
    210 quintillion floating-point ops / day
210 quintillion floating-point ops / day / 86,400 sec / day = 
    2.43 quadrillion floating-point ops / sec = 
    2.43 petaflops
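If you want to check my arithmetic, the whole estimate fits in a few lines of Python, using the same assumed inputs as the text:

```python
# Back-of-envelope estimate of aggregate smartphone compute spent on
# Facebook. All inputs are the assumptions from the paragraph above.

MOBILE_USERS = 500e6       # low end of the 50-80% of ~1B users
VISITS_PER_DAY = 14        # average FB check-ins per mobile user
MINUTES_PER_VISIT = 5      # guessed visit length
DEVICE_FLOPS = 100e6       # ~100 mflops per smartphone
SECONDS_PER_DAY = 86_400

seconds_per_user = VISITS_PER_DAY * MINUTES_PER_VISIT * 60  # 4,200
total_fb_seconds = seconds_per_user * MOBILE_USERS          # 2.1e12
ops_per_day = total_fb_seconds * DEVICE_FLOPS               # 2.1e20
sustained_flops = ops_per_day / SECONDS_PER_DAY             # ~2.43e15

print(f"{sustained_flops / 1e15:.2f} petaflops")  # → 2.43 petaflops
```

Change any of the guessed constants and the sustained-flops figure moves proportionally, which is the point: even conservative inputs land in petaflops territory.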

This sort of compute power is enough to equal the #12 computer on the current TOP500 list. And we haven’t even added in the compute power of desktops and laptops…

Of course, these smartphones are not doing supercomputing, and this is not an HPC cluster—but here’s my point: all of that smartphone computation offloads work from Facebook’s central datacenters. Because smartphones can render CSS, execute JavaScript, and manage TLS-encrypted sessions themselves, big iron somewhere is spared those tasks.

Some of Facebook’s “big iron” that gets to offload its work to smartphones. This is in FB’s Prineville datacenter. Image credit: ramereth (Flickr)

Think about the scaling implications of that for a second. If FB needed to do centralized compute for all the users that today get their experience through smartphones, we’d need a massive supercomputer to compensate. Facebook’s scale is enabled by throwing the compute over the wall for the subset of the problem where that’s feasible.

That’s “scale over.”

“Scale down” is about going small to go big. Without going into a lot of detail, I’ll just say that YouTube is able to serve 1.8B views of “Gangnam Style” not because it builds ever-larger central datacenters, but because it pushes its most popular videos out to small datacenters it has built near your local ISP. The savings in bandwidth and central compute are massive.
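A rough sketch makes the bandwidth math concrete. The numbers below are illustrative assumptions (video size and cache hit rate are mine, not YouTube’s), but the shape of the savings holds for any values:

```python
# Why edge caching ("scale down") saves central bandwidth.
# VIDEO_MB and EDGE_HIT_RATE are assumed figures for illustration only.

VIEWS = 1.8e9          # "Gangnam Style" view count cited above
VIDEO_MB = 30          # assumed megabytes served per view
EDGE_HIT_RATE = 0.95   # assumed fraction served from ISP-local caches

total_tb = VIEWS * VIDEO_MB / 1e6          # all traffic, in terabytes
origin_tb = total_tb * (1 - EDGE_HIT_RATE) # what still hits central DCs

print(f"total: {total_tb:,.0f} TB, from origin: {origin_tb:,.0f} TB")
```

With these guesses, roughly 95% of ~54,000 TB of traffic never touches a central datacenter. The hit rate is the whole game—popular content caches extremely well.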

Perhaps you’re saying to yourself, “Well, those examples don’t really tell me how to scale a pure HPC system; I’m not sure how they apply, if at all.”

Fair enough. Not every internet scale technique can be brought to bear directly on HPC. But how about this:

  • Like Facebook, can we offload from a centralized, intelligent scheduler the responsibility for thousands of short-lived tasks—tasks that clog up the pipeline but don’t need perfect prioritization? This is the value proposition of Moab Task Manager, which we’ve blogged about recently. Stay tuned for an announcement about early availability…
  • Like YouTube, can we become more aware of how placement inside a supercomputer is affected by network topology, and in the process make big gains in the efficiency of interconnect traffic? Adaptive announced its topology-aware research project at Supercomputing 2013…
  • Can we build a better grid feature, and do integrations with Hadoop and OpenStack, to allow work to flow toward the home that is most able to handle it efficiently? Adaptive announced Hadoop and OpenStack integration work at Supercomputing 2013 as well…
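To make the topology idea in the second bullet concrete, here’s a toy sketch of topology-aware placement. This is a hypothetical illustration of the general technique—greedily keeping a job’s ranks under as few switches as possible so interconnect traffic stays local—not Moab’s or anyone’s actual algorithm, and the switch/node names are made up:

```python
# Toy topology-aware placement for a flat two-level tree interconnect.
# Map each switch to the free nodes beneath it (hypothetical layout).
free = {"sw0": ["n0", "n1", "n2"], "sw1": ["n3"], "sw2": ["n4", "n5"]}

def place(job_size):
    """Greedily pick nodes from the fewest switches possible,
    starting with the switch that has the most free capacity."""
    chosen = []
    for sw in sorted(free, key=lambda s: -len(free[s])):
        while free[sw] and len(chosen) < job_size:
            chosen.append(free[sw].pop())
        if len(chosen) == job_size:
            break
    return chosen

# A 3-node job lands entirely under sw0, so all of its traffic
# stays on one switch instead of crossing the core.
print(place(3))
```

A real scheduler would weigh many more constraints (fragmentation, job shape, fault domains), but the payoff is the same: fewer hops per message, less core-network contention.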

What other ways can you see to get to exascale by flexibly realigning our perspective, to make maximum use of all the technological arrows in our collective quivers?

