HPE Developer Community Portal
A set of tools to guide the choice of the best hardware/software environment for a given deep learning workload.
It is common wisdom today that to start a deep learning exploration one needs a GPU-enabled system and one of the existing open source deep learning frameworks. But which GPU box to choose? How many GPUs to put in a system? How many systems to put in a cluster, and which interconnect to use? Which framework to pick? Answers to these questions are not obvious. That's why we decided to create the HPE Deep Learning Cookbook, a set of tools to characterize deep learning workloads and to recommend an optimal hardware/software (HW/SW) stack for any given workload. The Cookbook consists of three key assets:
HPE Deep Learning Benchmarking Suite: an automated benchmarking tool that collects performance measurements on various HW/SW configurations in a unified way.
HPE Deep Learning Performance Guide: a web-based tool that provides access to a knowledge base of benchmarking results. It enables querying and analysis of measured results, as well as performance prediction based on analytical performance models.
Reference Designs: hardware/software recipes for selected workloads.
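To make the idea of "unified" benchmarking concrete, here is a minimal sketch of what such a harness does. This is an illustration, not the actual Benchmarking Suite API: the harness name, the `fake_training_step` workload, and the result schema are all hypothetical. The point is that every HW/SW configuration is exercised the same way (warm-up runs discarded, steady-state iterations timed) and the results land in one common format.

```python
import json
import time


def benchmark(workload, batch_size, warmup=2, iters=5):
    """Time a workload and report throughput in a unified record.

    Hypothetical harness, not the DLBS API: warm-up iterations are
    discarded, then `iters` steady-state iterations are timed.
    """
    for _ in range(warmup):  # warm-up runs are not measured
        workload(batch_size)
    start = time.perf_counter()
    for _ in range(iters):
        workload(batch_size)
    elapsed = time.perf_counter() - start
    return {
        "batch_size": batch_size,
        "iterations": iters,
        "throughput_samples_per_sec": batch_size * iters / elapsed,
    }


def fake_training_step(batch_size):
    # Stand-in for a framework's forward/backward pass.
    total = 0
    for i in range(batch_size * 1000):
        total += i * i
    return total


if __name__ == "__main__":
    results = [benchmark(fake_training_step, b) for b in (16, 32, 64)]
    print(json.dumps(results, indent=2))
```

Because every run emits the same record, results from different frameworks, GPU counts, and interconnects can be compared or queried side by side.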
Recommendations from the Deep Learning Cookbook are based on a massive collection of performance results for various deep learning workloads on different HW/SW stacks, combined with analytical performance models. Together, real measurements and analytical models let us estimate the performance of any workload and recommend an optimal hardware/software stack for it. Additionally, we use the Cookbook internally to detect bottlenecks in existing hardware and to guide the design of future systems for artificial intelligence and deep learning.
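To illustrate how measurements and an analytical model can combine to predict performance, here is a toy sketch (not the Cookbook's actual model): we assume a step time of the form t(batch) = a + b·batch, fit the fixed overhead a and per-sample cost b from a few measured points by ordinary least squares, and then extrapolate throughput to an unmeasured batch size. The timing numbers are made up for illustration.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for t = a + b*x (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b


# Measured step times (seconds) at a few batch sizes -- illustrative numbers.
batches = [16, 32, 64]
times = [0.020, 0.036, 0.068]

a, b = fit_linear(batches, times)


def predict_throughput(batch):
    """Predicted samples/sec from the fitted analytical model."""
    return batch / (a + b * batch)


print(f"overhead = {a * 1000:.1f} ms, per-sample cost = {b * 1000:.3f} ms")
print(f"predicted throughput at batch 128: "
      f"{predict_throughput(128):.0f} samples/sec")
```

A real performance model also has to account for communication and I/O terms, but even this two-parameter fit shows the mechanism: a handful of measurements calibrate the model, which then answers "what if" questions about configurations that were never benchmarked.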