As computers become cheaper and more powerful, researchers worldwide are coming to grips with the reality that they need a second supercomputer just to run their analysis and visualization. InsideHPC points us to a nice presentation from Adaptive Computing pitching its new Moab suite, tuned specifically for remote visualization workloads.
“Moab HPC Suite – Remote Visualization Edition significantly reduces the hardware, network and management costs of visualization technical computing. It enables you to create easy, centralized access to shared visualization compute, application and data resources. These compute, application, and data resources reside in a technical compute cloud in your data center instead of in expensive and underutilized individual workstations.”
Remote visualization tasks are very different from typical HPC tasks. They are immediate-need jobs, and therefore a poor fit for typical batch-processing systems, and they require interactivity at runtime, which can cause plenty of problems with network firewalls. They also tend to have markedly "spiky" CPU utilization, sitting idle for long stretches while the user simply looks at what's on screen. Add in the complexity of graphics cards, multiple operating systems (Windows vs. Linux), and the varied requirements of specific analysis packages, and you can almost throw everything you knew about HPC workloads out the window.