This is the first time we have talked with Hammerspace, represented by Michael Kade (Hammerspace on X), Senior Solutions Architect. We have known about Hammerspace for years now, and over the last couple of years, as large AI clusters have come into use, Hammerspace’s popularity has gone through the roof.
Mike’s been benchmarking storage for decades and recently submitted results for MLperf Storage v1.0, an AI benchmark that focuses on storage activity for AI training and inferencing work. We have written previously about v0.5 of the benchmark (see: AI benchmark for storage, MLperf Storage). Listen to the podcast to learn more.
Some of the changes between v0.5 and v1.0 of MLperf’s Storage benchmark include:
- Workload changes: they dropped BERT NLP, kept U-net3D (3D volumetric image segmentation) and added ResNet-50 and CosmoFlow. ResNet-50 is a 2D image classification model and CosmoFlow uses a “3D convolutional neural network on N-body cosmology simulation data to predict physical parameters of the universe.” Both ResNet-50 and CosmoFlow are TensorFlow batch inferencing activities; U-net3D is a PyTorch training activity.
- Accelerator (GPU simulation) changes: they dropped the V100 and added A100 and H100 emulation to the benchmark.
MLperf Storage benchmarks must be run 5 times in a row, and the reported results are the average of those 5 runs. Metrics include samples/second (~files processed/second), overall storage bandwidth (MB/sec), and the number of accelerators kept busy during the run (accelerators must be at least 90% busy for U-net3D & ResNet-50 and 70% for CosmoFlow).
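As a back-of-the-envelope illustration of the reporting rules (this is not the real MLperf harness; the function, workload keys, and run dictionaries below are invented for the sketch):

```python
# Rough sketch of MLperf Storage reporting rules -- not the real harness.
# Workload names and dict keys are made up for illustration.
REQUIRED_AU = {"unet3d": 0.90, "resnet50": 0.90, "cosmoflow": 0.70}

def reported_result(workload, runs):
    """Average 5 consecutive runs, enforcing the accelerator-utilization
    (AU) floor on every individual run."""
    assert len(runs) == 5, "MLperf Storage requires 5 consecutive runs"
    for r in runs:
        if r["au"] < REQUIRED_AU[workload]:
            raise ValueError(f"AU {r['au']:.0%} is below the floor; run invalid")
    return {
        "samples_per_sec": sum(r["samples_per_sec"] for r in runs) / 5,
        "storage_mb_per_sec": sum(r["mb_per_sec"] for r in runs) / 5,
    }
```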
Hammerspace submitted 8 benchmarks: 2 workloads (U-net3D & ResNet-50) X 2 accelerators (A100 & H100 GPUs) X 2 client configurations (1 & 5 clients). Clients are workstations that perform the training or inferencing work for the models, and they can be any size. GPUs or accelerators are not physically used during the benchmark; they are simulated as dead time that depends on the workload and GPU type (note: it doesn’t depend on client size).
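A minimal sketch of what “simulated as dead time” means (the sleep values here are invented; the benchmark ships real per-batch compute times for each workload/accelerator pair):

```python
import time

# Invented per-batch compute times in seconds; the real values come from
# the benchmark's workload configurations.
BATCH_COMPUTE_SEC = {("unet3d", "a100"): 0.45, ("unet3d", "h100"): 0.30}

def emulated_accelerator(batch_files, workload, gpu):
    """Storage IO is real; the GPU is pure dead time.

    The sleep depends only on (workload, GPU type), which is why client
    size doesn't matter -- a client just has to feed data fast enough to
    keep its simulated accelerators busy."""
    for path in batch_files:
        with open(path, "rb") as f:
            f.read()                                    # real storage read
        time.sleep(BATCH_COMPUTE_SEC[(workload, gpu)])  # simulated GPU work
```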
Hammerspace also ran their benchmarks with 5 and 22 DSX storage servers. Storage configurations matter for MLperf Storage benchmarks, and for v0.5 they weren’t well documented. V1.0 was intended to fix this, but it seems there’s more work needed to get it right.
For ResNet-50 inferencing, Hammerspace drove 370 simulated A100s and 135 simulated H100s and for U-net3D training, Hammerspace drove 35 simulated A100s and 10 simulated H100s. Storage activity for training demands a lot more data than inferencing.
It turns out that training IO also includes checkpointing, which occasionally writes out the model to save it in case of a run failure. The rest of training IO is reads of randomly selected samples, with each individual read essentially sequential. Inferencing IO is much more randomized.
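A hypothetical checkpoint helper makes the write side of training IO concrete (the file naming, interval, and path below are made up for illustration):

```python
import pickle

def maybe_checkpoint(model_state, step, every_n_steps=1000,
                     prefix="/mnt/hs/ckpt/run1"):
    """Occasionally dump model state so a failed run can resume.

    These bursts are the main write traffic in a training run; nearly
    everything else is read traffic against the sample data."""
    if step % every_n_steps == 0:
        with open(f"{prefix}.step{step}.ckpt", "wb") as f:
            pickle.dump(model_state, f)  # large, mostly sequential write
```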
Hammerspace is a parallel file system (PFS) that uses NFSv4.2, which is natively available in the Linux kernel. The main advantages of a PFS are that IO activity can be parallelized by spreading it across many independent storage servers, and that data can move around without operational impact.
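Because the client is just the stock kernel NFS client, attaching a compute node needs no special software. A hypothetical mount (the server name and export path are invented), wrapped in Python to keep the examples in one language:

```python
import subprocess

# Hypothetical Hammerspace mount: "anvil" and the export path are made up.
# NFSv4.2 support is native in the Linux kernel, so no client-side agent
# is required; the client can then drive the DSX storage servers in
# parallel through the standard mount.
subprocess.run(
    ["mount", "-t", "nfs4", "-o", "vers=4.2", "anvil:/exports/ai", "/mnt/hs"],
    check=True,
)
```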
Mike ran their benchmarks in AWS. I asked about cloud noisy neighbors and network congestion, and he said that if you ask for a big enough (EC2) instance, high-speed networking comes with it, and neither noisy neighbors nor network congestion is a problem.
Michael Kade, Senior Solutions Architect, Hammerspace
Michael Kade has over 45 years of history in the computer industry and over 35 years of experience working with storage vendors. He has held various positions with EMC, NetApp, Isilon, Qumulo, and Hammerspace.
He specializes in writing software that bridges different vendors and allows their software to work harmoniously together. He also enjoys benchmarking and discovering new ways to improve performance through the correct use of software tuning.
In his free time, Michael has been an EMS helicopter flight instructor for over 25 years.