Oil and Gas

Few industries are as renowned for massive investment in the highest end of computing technology as the oil and gas sector. High Performance Computing (HPC) capabilities, paired with sophisticated, top-of-the-line modeling and simulation tools that manage and analyze unprecedented volumes of diverse data, mean that infrastructure complexity mounts quickly. Yet despite the bleeding-edge technology behind everyday HPC, big data, and cloud-based operations, a great deal of outdated manual work is still required to orchestrate priority, access, and outcomes, especially at the world’s largest oil and gas companies. Within these organizations, productivity is hindered by cluttered, hand-managed workflows, siloed clusters that sit underutilized or oversubscribed, and resources stretched thin by budgetary constraints.

The pressure of lower margins will necessarily focus energy producers’ attention on efficiency and productivity. Although no one knows when demand will again overtake supply and push prices back up, some OPEC and non-OPEC insiders suggest that lower prices are not temporary. This leaves upstream oil and gas operations needing to adapt, perhaps for only a year or two, to the reality of lower margins on existing wells and higher stakes for new drilling sites. HPC is already used extensively to maximize the efficiency of existing reservoir operations and to reduce risk when identifying new well sites. More data will be collected, analyzed, and simulated to maximize efficiency as different O&G organizations leverage their competitive advantages.

In oil and gas, High Performance Computing delivers significant ROI in seismic analysis, reservoir simulation, visualization, and related fields. Although investment in new HPC deployments may slow somewhat, existing HPC systems will be leveraged heavily. As purchasing cycles lengthen, petroleum engineers and data scientists will be asked to produce more with the computational resources they already have, and efficient workload and resource orchestration will ensure those resources are used to maximum effect.

In all cases, the budgetary squeeze will shift focus from traditional HPC metrics such as utilization and queue time to the more valuable ones: throughput and accountability to business goals. This is the direction Moab HPC Suite has been heading for quite some time. By placing the most important jobs on the most capable resources, Moab ensures they are completed first. This approach can also trim branches from complex workflows, avoiding jobs that simulate scenarios unlikely to be fruitful. Additionally, Moab’s intelligent workload and resource orchestration platform reduces the burden on HPC users and system administrators by automating common tasks, letting them focus on their well and depletion plans and spend less of their precious time interacting with the HPC resources themselves.
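To make the idea concrete, here is a minimal sketch in Python of value-driven scheduling under those assumptions: jobs carry a business-value score, the highest-value jobs land on the most capable nodes first, and low-value scenario branches are pruned before they consume cycles. The names here (Job, Node, schedule, value_floor) are hypothetical illustrations of the concept, not Moab’s actual interface or configuration.

```python
# Illustrative sketch only: a toy value-driven scheduler. It ranks jobs
# by an assumed business-value score rather than queue time or
# utilization. None of these names come from the Moab API.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Job:
    # Negated value so heapq pops the highest-value job first.
    sort_key: float = field(init=False, repr=False)
    name: str = field(compare=False)
    business_value: float = field(compare=False)  # hypothetical score
    cores_needed: int = field(compare=False)

    def __post_init__(self):
        self.sort_key = -self.business_value

@dataclass
class Node:
    name: str
    free_cores: int

def schedule(jobs, nodes, value_floor=0.0):
    """Place the highest-value jobs on the most capable nodes first.

    Jobs whose expected business value falls below value_floor are
    pruned, mirroring the idea of trimming workflow branches that
    simulate scenarios unlikely to be fruitful.
    """
    queue = [j for j in jobs if j.business_value >= value_floor]
    heapq.heapify(queue)
    # Most capable (largest) nodes are considered first.
    nodes = sorted(nodes, key=lambda n: n.free_cores, reverse=True)
    placements = []
    while queue:
        job = heapq.heappop(queue)
        for node in nodes:
            if node.free_cores >= job.cores_needed:
                node.free_cores -= job.cores_needed
                placements.append((job.name, node.name))
                break
    return placements

if __name__ == "__main__":
    jobs = [
        Job("reservoir-sim-A", business_value=9.0, cores_needed=64),
        Job("seismic-reproc", business_value=6.5, cores_needed=128),
        Job("long-shot-scenario", business_value=0.4, cores_needed=256),
    ]
    nodes = [Node("hpc-big", 256), Node("hpc-small", 64)]
    # long-shot-scenario is pruned by the value floor; the remaining
    # jobs are placed, highest value first, on the largest node.
    print(schedule(jobs, nodes, value_floor=1.0))
```

The design choice the sketch captures is the one the paragraph describes: the ordering criterion is business value delivered, not queue fairness, and pruning happens before scheduling so that unpromising scenarios never occupy the cluster at all.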

To speak to an Adaptive Computing solutions advisor, email us at info@adaptivecomputing.com or call us at +1 (239) 330-6093.