Built on the new AMD CDNA architecture, the AMD Instinct MI100 accelerator enables a new class of accelerated systems for high-performance computing (HPC) and artificial intelligence (AI) when paired with 2nd Gen AMD EPYC processors.
"Engineered to power AMD GPUs for the exascale era and at the heart of the MI100 accelerator, the AMD CDNA architecture offers exceptional performance and power efficiency," the company said in a statement.
The MI100 offers up to 11.5 TFLOPS (trillion floating-point operations per second) of peak FP64 performance for HPC and up to 46.1 TFLOPS peak FP32 Matrix performance for AI and machine learning workloads.
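The quoted FP64 figure is consistent with the MI100's publicly listed specs. As a hedged sketch (the per-CU FLOP rate and boost clock below are assumptions drawn from published spec sheets, not from this article), the peak can be reproduced as compute units × FLOPs per CU per clock × clock rate:

```python
# Sketch: derive the ~11.5 TFLOPS peak FP64 figure from listed MI100 specs.
# Assumptions: 120 compute units, ~1.502 GHz peak clock, and 64 FP64 FLOPs
# per CU per clock (32 fused multiply-adds, each counted as 2 FLOPs).
compute_units = 120
peak_clock_hz = 1.502e9
fp64_flops_per_cu_per_clock = 64

peak_fp64_tflops = compute_units * fp64_flops_per_cu_per_clock * peak_clock_hz / 1e12
print(f"Peak FP64: {peak_fp64_tflops:.1f} TFLOPS")  # ~11.5
```

The FP32 Matrix figure scales the same way, with the Matrix Core units sustaining a higher per-clock FLOP rate.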
With its new AMD Matrix Core technology, the MI100 also delivers a nearly 7x boost in theoretical peak FP16 floating-point performance for AI training workloads compared with AMD's prior-generation accelerators.
The accelerator's memory subsystem features 32GB of high-bandwidth HBM2 memory at a clock rate of 1.2 GHz, delivering an ultra-high 1.23 TB/s of memory bandwidth to support large data sets and help eliminate bottlenecks in moving data in and out of memory, the company said.
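The 1.23 TB/s figure follows directly from the quoted 1.2 GHz clock. A minimal sketch, assuming a 4096-bit interface (four 1024-bit HBM2 stacks, a common configuration not stated in the article) and HBM2's double-data-rate signaling:

```python
# Sketch: derive the 1.23 TB/s bandwidth figure from the quoted 1.2 GHz clock.
# Assumptions: four HBM2 stacks with 1024-bit interfaces each (4096 bits total)
# and two transfers per clock (double data rate).
bus_width_bits = 4 * 1024
memory_clock_hz = 1.2e9
transfers_per_clock = 2

bandwidth_tb_s = (bus_width_bits / 8) * memory_clock_hz * transfers_per_clock / 1e12
print(f"Memory bandwidth: {bandwidth_tb_s:.2f} TB/s")  # ~1.23
```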