Building Optimized High Performance Computing (HPC) Architectures and Applications

New technologies and software development tools unleash the power of a full range of HPC architectures and compute models for users, system builders, and software developers.

Building Blocks of an HPC System

  • Designing your HPC system may involve a combination of parallel computing, cluster computing, and grid/distributed computing strategies.

  • A hybrid cloud approach that combines your on-premises infrastructure with public cloud resources lets you scale up as needed, reducing the risk of lost opportunities.

  • Intel® HPC technologies include processors, memory, Intel® high performance networking, and software, providing a foundation for high-performance, incredibly scalable systems.

  • oneAPI open, standards-based, cross-architecture programming enables HPC applications that run optimally across a variety of heterogeneous architecture types and distributed computing models.

  • Intel® libraries and tools help customers get the most out of our systems through efficient use of code and optimizations.


Oleh

In today’s accelerated business environment, the foundation for successful HPC technology adoption begins with a well-defined HPC architecture. Depending on your organization’s workloads and computing goals, different HPC system designs and supporting resources are available to help you achieve productivity gains and scalable performance.

Designing HPC Systems

HPC architecture takes many forms. Depending on its needs, an organization can design an HPC system around one or more of the following approaches.

Parallel Computing across Multiple Architectures

HPC parallel computing splits large workloads into separate computational tasks and executes them simultaneously across the nodes of an HPC cluster.

These systems can be designed to either scale up or scale out. Scale-up designs involve taking a job within a single system and breaking it up so that individual cores can perform the work, using as much of the server as possible. In contrast, scale-out designs involve taking that same job, splitting it into manageable parts, and distributing those parts to multiple servers or computers with all work performed in parallel.
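The scale-up pattern described above can be illustrated with a minimal Python sketch using the standard library's multiprocessing module: one job is split into chunks that worker processes on a single machine compute in parallel. The function names and chunk sizes are illustrative, not part of any Intel toolkit.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Compute one piece of the job on a single core."""
    return sum(x * x for x in chunk)

def scale_up_sum_of_squares(data, workers=4):
    """Scale-up: break one job into chunks so individual cores
    on a single server can work on them in parallel."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    # Produces the same result as the serial sum(x * x for x in data),
    # but the chunks are computed concurrently.
    print(scale_up_sum_of_squares(data))
```

A scale-out design follows the same split-and-combine logic, except the chunks are dispatched to multiple servers (for example via MPI) rather than to cores of one machine.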

As intensive workloads such as simulations, modeling, and advanced analytics become more commonplace, HPC systems are being designed to incorporate accelerators in addition to CPUs. These accelerators have introduced a wider, heterogeneous range of possible configurations that developers need to support.

Developers can use oneAPI cross-architecture programming to create a single codebase that can be used across CPU, GPU, and FPGA architectures for more productive and performant development. oneAPI can accelerate HPC innovation by removing the constraints of proprietary programming models, easing adoption of new hardware, and reducing code maintenance. oneAPI and Intel® oneAPI toolkits support existing open standards and languages that HPC developers need, including C++, SYCL, Fortran, OpenMP, MPI, and Python. Get more details on oneAPI and Intel® oneAPI toolkits.

Cluster Computing

High performance computing clusters link multiple computers, or nodes, through a local area network (LAN). These interconnected nodes act as a single computer—one with cutting-edge computational power. HPC clusters are uniquely designed to solve one problem or execute one complex computational task by spanning it across the nodes in a system. HPC clusters have a defined network topology and allow organizations to tackle advanced computations with uncompromised processing speeds.

Grid and Distributed Computing

HPC grid computing and HPC distributed computing are synonymous computing architectures. These involve multiple computers, connected through a network, that share a common goal, such as solving a complex problem or performing a large computational task. This approach is ideal for addressing jobs that can be split into separate chunks and distributed across the grid. Each node within the system can perform tasks independently without having to communicate with other nodes.
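As a minimal sketch of this approach, the snippet below simulates grid nodes with independent worker processes: each "node" handles its own chunk without communicating with the others, and only the final results are combined. The node count and task are illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

def node_task(chunk):
    """Each 'node' works on its chunk independently --
    no communication with other nodes is needed."""
    return max(chunk)

def grid_max(data, nodes=3):
    """Split a job into separate chunks, distribute one per node,
    and combine the independent results at the end."""
    size = max(1, len(data) // nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=nodes) as pool:
        return max(pool.map(node_task, chunks))

if __name__ == "__main__":
    print(grid_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```

Jobs that decompose this cleanly (parameter sweeps, Monte Carlo runs, per-file processing) are the ones best suited to grid and distributed architectures.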

Common HPC Application Compatibility

Intel collaborates with the HPC community to define best practices for deploying HPC applications and cluster systems built on Intel® architecture. The Intel® HPC Platform Specification provides common software and hardware requirements that application developers can use to build foundations for cluster solutions. A system that complies with these requirements provides a defined set of characteristics to the application layer, including the Intel® software runtime components that provide the best performance paths. The platform specification includes configuration and compliance information across a wide domain of common community applications.

HPC Cloud Infrastructure

In the past, HPC systems were limited to the capacity and design that on-premises infrastructure could provide. Today, the cloud extends local capacity with additional resources.

The latest cloud management platforms make it possible to take a hybrid cloud approach, which blends on-premises infrastructure with public cloud services so that workloads can flow seamlessly across all available resources. This enables greater flexibility in how HPC systems are deployed and how quickly they can scale up, along with the opportunity to optimize total cost of ownership (TCO).

Typically, an on-premises HPC system offers a lower TCO than the equivalent HPC system reserved 24/7 in the cloud. However, an on-premises solution sized for peak capacity is fully utilized only when that peak is reached; much of the time it may sit underutilized, leaving resources idle. Meanwhile, a workload that can't be computed for lack of available capacity can mean a lost opportunity.
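The trade-off above can be made concrete with a simple cost sketch. All hourly rates and usage figures below are illustrative assumptions, not vendor pricing: the point is only that bursting occasional peaks to the cloud can cost less than owning peak-sized hardware that sits idle.

```python
def hybrid_tco(base_hours, burst_hours, onprem_hourly, cloud_hourly):
    """Cost of running the steady ('base') load on-premises
    and bursting peak demand to the public cloud."""
    return base_hours * onprem_hourly + burst_hours * cloud_hourly

def all_onprem_tco(peak_capacity_hours, onprem_hourly):
    """Cost of on-prem hardware sized for peak capacity,
    paid for whether the capacity is used or not."""
    return peak_capacity_hours * onprem_hourly

# Hypothetical year: 8,000 steady compute-hours plus 500 burst hours,
# with cloud hours costing three times on-prem hours.
hybrid = hybrid_tco(8000, 500, 1.00, 3.00)        # 9,500 cost units
peak_sized = all_onprem_tco(12000, 1.00)          # 12,000 cost units, idle time included
print(hybrid < peak_sized)  # True
```

With different ratios of burst to steady load, the comparison can flip, which is why the article frames this as a TCO optimization rather than a universal rule.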

In short, using the cloud to augment your on-premises HPC infrastructure for time-sensitive jobs can mitigate the risk of missing big opportunities.

To drive HPC innovation in the cloud, Intel works closely with cloud service providers to maximize performance, apply technologies such as Intel® Software Guard Extensions (Intel® SGX) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and simplify onboarding. Read more about our HPC cloud technologies and how they can help enhance results.

Selecting HPC Processors for Scalability and Performance

With our breadth of expertise in HPC technologies, Intel delivers the performance requirements for handling the most demanding future workloads. Intel® Xeon® Scalable processors provide a highly versatile platform that can seamlessly scale to support the diverse performance requirements of critical HPC workloads. Our upcoming data center GPUs based on the Xe HPC microarchitecture will make an ideal complement to Intel® Xeon® Scalable processors, helping to drive performance even further.

Working with our ecosystem, Intel has prioritized creating blueprints that enable optimized HPC system designs. To validate performance requirements, Intel® Cluster Checker (part of the Intel® oneAPI HPC Toolkit) verifies that your HPC cluster is intact and correctly configured to run parallel applications, with portability for moving between on-premises and HPC cloud systems.

With Intel® CoFluent™ technology, you can speed up the deployment of complex systems and help determine optimal settings by modeling simulated hardware and software interactions.

A Breakthrough in HPC Memory

Memory is an integral component of HPC system design. Responsible for a system’s short-term data storage, memory can be a limiting factor to your workflow performance. Intel® Optane™ technology helps overcome these bottlenecks in the data center by bridging gaps in the storage and memory hierarchy, so you can keep compute fed.

High-bandwidth memory has demonstrated success when included in the GPUs used in HPC programming and will soon be available on x86 processors as well. In many cases, high-bandwidth memory can accelerate codes that are memory bandwidth bound without code changes. It can also help reduce DDR memory costs.
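Whether a code is memory bandwidth bound, and thus likely to benefit from high-bandwidth memory, can be estimated with a simplified roofline-style check: compare the kernel's arithmetic intensity (FLOPs per byte moved) to the machine balance (peak FLOPs divided by peak bandwidth). The peak numbers below are illustrative assumptions, not measurements of any specific processor.

```python
def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of memory traffic for a kernel."""
    return flops / bytes_moved

def bandwidth_bound(flops, bytes_moved, peak_flops, peak_bandwidth):
    """Roofline-style check: a kernel is bandwidth bound when its
    arithmetic intensity falls below the machine balance."""
    machine_balance = peak_flops / peak_bandwidth
    return arithmetic_intensity(flops, bytes_moved) < machine_balance

# STREAM triad a[i] = b[i] + s * c[i]: 2 FLOPs per 24 bytes
# (three 8-byte doubles touched per element).
# Peak FLOPs and bandwidth here are hypothetical round numbers.
print(bandwidth_bound(flops=2, bytes_moved=24,
                      peak_flops=2e12, peak_bandwidth=200e9))  # True
```

Kernels far below the machine balance, like the triad above, spend most of their time waiting on memory, which is exactly where higher-bandwidth memory can speed them up without code changes.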

Scaling Performance with HPC Fabric

To scale smaller HPC systems effectively, organizations need a high-performance fabric designed to support HPC clusters.

Intel® high performance networking (HPN) provides optimized performance using familiar and cost-effective Ethernet technologies. This gives cluster managers an end-to-end solution that covers their HPC and machine learning training needs. Existing popular HPC and AI middleware and frameworks, including oneAPI, can be used with Intel® HPN through the OpenFabrics Interfaces (OFI, also known as libfabric) and the Intel® Ethernet Fabric Suite.

An Easier Path to HPC Success

Intel provides the expertise to deeply understand HPC applications, architectures, and what an HPC system—whether on-premises, in the cloud, or hybrid—requires to help users produce results and maximize accomplishments. With HPC architecture based on a foundation of Intel® technologies, you can be ready to meet the HPC, exascale, and zettascale needs of the future.

Plus, our oneAPI toolkits are ready to help developers simplify their HPC programming efforts, allowing them to support more hardware types and maximize business results.

Frequently Asked Questions

What is high performance computing architecture?

High performance computing architecture refers to the various components employed to build HPC systems and how they are packaged together. Often, these components include a CPU and an accelerator such as an FPGA or GPU, plus memory, storage, and networking components. Nodes or servers of various architectures work together in unison, either as parallel or clustered nodes, to handle complex computational tasks.

What are the most critical components of an HPC system?

Three of the most critical components for any HPC system are the processor, any required accelerator such as an FPGA or GPU, and the required networking connectivity. High-bandwidth memory is another critical consideration.