Kestrel is an HPE Cray EX supercomputer. The system is made of individual compute “blades,” which carry all the components that make the supercomputer go: central processing units, fabric connections, printed circuit boards, and cooling and power components. There are also blades that hold HPE’s Slingshot switching elements. Cooling is built into each blade.
Liquid cooling loops running through the compute infrastructure cool the cabinets and components. A cooling distribution unit cools the liquid itself and removes heat from the system via a heat exchanger with data center water. The incoming water can be relatively warm, which means chilling isn’t necessary and less electricity is required.
HPE manufactured the boards and blades using chips from Intel, NVIDIA and AMD. The onboard communications infrastructure and cooling system are proprietary.
“A lot of that stuff is unique to HPE,” Damkroger says.
Preparing for Kestrel’s Second Phase: GPUs
Kestrel will have 2,436 compute nodes available for high-performance computing tasks.
Phase two will begin in December with the installation of 132 graphics processing unit nodes, each with four NVIDIA H100 GPUs. Originally created for video rendering in computer games and simulators, GPUs have revolutionized supercomputing.
A CPU executes tasks serially at very high speed; at best, it can handle a handful of operations at once. By contrast, a GPU uses parallel processing, performing thousands of calculations simultaneously.
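To make the contrast concrete, here is a minimal CUDA sketch of the same array addition done both ways. This is purely illustrative and not code from Kestrel or NREL: the CPU version walks the array one element at a time on a single core, while the GPU kernel assigns one thread to each element so the whole array is processed in parallel.

```cuda
// Illustrative sketch only: serial CPU loop vs. parallel GPU kernel.
#include <cstdio>

#define N (1 << 20)  // roughly one million elements

// GPU kernel: each thread adds a single pair of elements,
// so thousands of additions happen at the same time.
__global__ void add_parallel(const float *a, const float *b, float *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) c[i] = a[i] + b[i];
}

// CPU version: one core handles the elements one after another.
void add_serial(const float *a, const float *b, float *c) {
    for (int i = 0; i < N; ++i) c[i] = a[i] + b[i];
}

int main() {
    float *a, *b, *c;
    // Unified memory is accessible from both the CPU and the GPU.
    cudaMallocManaged(&a, N * sizeof(float));
    cudaMallocManaged(&b, N * sizeof(float));
    cudaMallocManaged(&c, N * sizeof(float));
    for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all N elements at once.
    add_parallel<<<(N + 255) / 256, 256>>>(a, b, c);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);  // prints 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same pattern scales up in scientific computing: workloads that can be split into many independent calculations, such as physics simulations and machine learning, are what make GPU nodes so valuable to a system like Kestrel.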