Perlmutter Interconnect

The interconnect is HPE Cray Slingshot and consists of network switches and network interface cards (NICs). All node types, compute and service alike, are connected via the Slingshot network fabric. The Slingshot interconnect provides fully dynamic routing with congestion management and Quality of Service (QoS) capabilities.

A network switch is an HPC Ethernet switch chip with adaptive routing and congestion management capabilities. A switch also provides fabric support for multicast and reduction, and the ability to partition network traffic, either absolutely or by traffic class.

The Slingshot interconnect uses a three-hop Dragonfly network topology; a short routing sketch after the list below illustrates the three-hop property. Some features are as follows.

  • A Phase 1 compute cabinet is segmented into 8 chassis, each containing 8 compute blades and 4 switch blades.
  • A GPU compute blade contains 2 GPU-accelerated nodes.
  • Each node is connected to 2 NICs, allowing each node to have 2 injection points into the network. This configuration is sometimes described as dual injection or dual rail.
  • GPU cabinets contain one Dragonfly group per cabinet, with 32 switch blades, making a total of 12 groups (the arithmetic sketch after this list works out the per-cabinet totals).
  • Unlike Cori, there is no backplane in the chassis to provide the network connections between the compute blades. Instead, switch blades at the rear of the cabinet provide the interconnection between the compute blades at the front.
  • A full all-to-all electrical network is provided within each group. All switches in a switch group are directly connected to all other switches in the group.
    • Copper Level 0 (L0) cables connect nodes to network switches. L0 cables carry two links and are split to provide two single-link node connections. L0 links are called "host links" or "edge links".
    • Copper Level 1 (L1) cables are used to interconnect the switches within a group. The 16-switch groups are internally interconnected with two cables (four links) per switch pair. L1 links are called "group links" or "local links".
  • Optical Level 2 (L2) cables interconnect groups within a subsystem (e.g., the compute subsystem consisting of compute nodes). L2 links are called "global links". Each optical cable carries two links per direction.
  • L2 cables also interconnect subsystems. There are 3 subsystems on an HPE Cray EX system, each with its own Dragonfly interconnect topology: compute, IO, and service.
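
To make the figures above concrete, the short Python sketch below derives a few totals from the numbers quoted in this list (chassis and blade counts, dual NICs, and the two-cables-per-switch-pair group wiring). The constants come straight from this page; the derived totals are simple consequences of them, not separately verified figures.

```python
from math import comb

# Figures quoted on this page for a Phase 1 (GPU) compute cabinet.
CHASSIS_PER_CABINET = 8          # 8 chassis per compute cabinet
COMPUTE_BLADES_PER_CHASSIS = 8   # 8 compute blades per chassis
SWITCH_BLADES_PER_CHASSIS = 4    # 4 switch blades per chassis
NODES_PER_GPU_BLADE = 2          # 2 GPU-accelerated nodes per compute blade
NICS_PER_NODE = 2                # dual injection ("dual rail")

nodes_per_cabinet = (CHASSIS_PER_CABINET
                     * COMPUTE_BLADES_PER_CHASSIS
                     * NODES_PER_GPU_BLADE)
injection_points = nodes_per_cabinet * NICS_PER_NODE
switch_blades_per_cabinet = CHASSIS_PER_CABINET * SWITCH_BLADES_PER_CHASSIS

# Intra-group wiring for a 16-switch group: all-to-all, with two L1
# cables (four links) per switch pair.
SWITCHES_PER_GROUP = 16
switch_pairs = comb(SWITCHES_PER_GROUP, 2)   # 120 pairs
l1_cables_per_group = 2 * switch_pairs       # 240 cables
l1_links_per_group = 4 * switch_pairs        # 480 links

print(f"nodes per GPU cabinet:              {nodes_per_cabinet}")          # 128
print(f"injection points per cabinet:       {injection_points}")           # 256
print(f"switch blades per cabinet:          {switch_blades_per_cabinet}")  # 32
print(f"L1 cables within a 16-switch group: {l1_cables_per_group}")        # 240
print(f"L1 links within a 16-switch group:  {l1_links_per_group}")         # 480
```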
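
The "three-hop" property mentioned above means a minimal route needs at most one local (L1) hop in the source group, one global (L2) hop between groups, and one local hop in the destination group. The toy Python sketch below illustrates only that hop count; the `Switch` type and its identifiers are hypothetical, and real Slingshot routing is adaptive, so it may take longer, non-minimal paths to avoid congestion.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Switch:
    group: int   # Dragonfly group the switch belongs to (hypothetical IDs)
    index: int   # switch index within its group

def minimal_route_hops(src: Switch, dst: Switch) -> list[str]:
    """Worst-case minimal Dragonfly route between two switches."""
    if src == dst:
        return []                      # same switch: no fabric hops
    if src.group == dst.group:
        return ["L1 (local)"]          # groups are all-to-all internally
    # Worst case across groups: the global link to dst's group hangs off
    # another switch in src's group, so one local hop is needed on each
    # side of the single global hop. If src or dst holds the global link
    # directly, the corresponding local hop disappears.
    return [
        "L1 (local, source group)",
        "L2 (global, between groups)",
        "L1 (local, destination group)",
    ]

print(minimal_route_hops(Switch(group=0, index=3), Switch(group=7, index=12)))
# ['L1 (local, source group)', 'L2 (global, between groups)',
#  'L1 (local, destination group)']
```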