10 years on, and HPC is taken for granted
In recent weeks we’ve looked back at several areas of IT. In many, we concluded that while today may look technically very different from yesterday, that’s just superficial: the underlying business issues and principles remain the same. When it comes to High Performance Computing (HPC), the changes are much broader, but also more subtle.
As our contribution to the bit of fun that is Throwback Thursday, we’re taking a weekly stroll through the Freeform Dynamics archives.
Things have moved forward enormously in terms of raw processing power and memory, of course, and the compute capabilities available today make those of more than a decade ago look somewhat pedestrian. And apart from the most leading-edge applications, we can quickly and easily deploy software to support a greatly expanded range of workloads.
Indeed, this is where the biggest, but perhaps less obvious, changes have occurred. Back in 2010, Freeform Dynamics carried out a survey of over 250 IT professionals with direct or indirect experience of high-end server computing environments. The results showed that many organisations were running HPC workloads in batch mode, as the compute resources were not available ‘on demand’. Perhaps more importantly, we also saw that many of the compute-intensive needs of small and mid-size organisations (i.e. those with under 5,000 staff) were simply not being met.
Compute-intensive work is now mainstream
Today, things have quietly changed. IT systems capable of running compute-intensive workloads such as data analytics, modelling and simulation, and security and forensics are readily available off the shelf.
Sure, some tasks still require specialist platforms, such as high-end video work that demands ever-higher resolution, definition and frame rates. But thanks in part to the growing adoption of GPUs as compute accelerators, most tasks that were challenging back in 2010 can now be handled by packaged, pre-configured mainstream systems.
More significantly, they can also be provided by cloud services, which both changes the economics of HPC and dramatically lowers the barriers to entry. The cloud model won’t suit all HPC tasks, but it makes it easier to experiment and run proofs of concept. It also makes feasible some applications that would not be cost-effective on-site.
HPC as a tool for business insight
Perhaps one of the most interesting consequences of this mainstreaming of HPC is the way it’s being embedded into standard enterprise applications. For example, ERP and CRM systems can now perform complex historical analysis of large data sets without requiring special software, or indeed specialist business analysts. Those analysts can instead focus on more strategic, forward-looking enterprise analytics.
Indeed, we are at a point where the use of real-time analytics to make operational business decisions can genuinely impact how organisations work from day to day. When we add the potential of Machine Learning – a workload that was almost unknown in mainstream business in 2010, not least because training an ML model is highly compute-intensive – it is clear that HPC has changed dramatically.
All in all, HPC has changed so much in the last decade that much of it today isn’t even thought of as compute-intensive. This is one Throwback Thursday where the changes are significant, if not always glaringly obvious.
If you want to take a giant leap back in time to see how much HPC has changed in a decade, please seek out the original report.