WHAT EXACTLY IS MLPERF?
The goal of MLPerf is to provide a broad approach to machine-learning benchmarking
that is supported by both industry and academic research. MLPerf aims to create fair
and meaningful benchmarks for the training and inference performance of ML hardware
and software. MLPerf maintains a standard set of benchmarks for ML workloads, and it
is widely used for measuring AI performance.
MLPERF TEST ENVIRONMENT
1. Initiate a test
2. Randomly select a query sample ID (from 50,000 images)
3. Send the inference request with the sample ID
4. Return the result: the max-scoring class ID (of 1,000 classes)
5. Transmit the results
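The test flow above can be sketched as a simple query loop. This is an illustrative sketch only, not the actual MLPerf harness: `run_npu_inference`, the dataset/class sizes taken from the text, and the result tuples are all assumptions made for the example.

```python
import random

DATASET_SIZE = 50_000   # ImageNet 2012 image count quoted in the text
NUM_CLASSES = 1_000     # ImageNet class count quoted in the text

def run_npu_inference(sample_id: int) -> list[float]:
    # Hypothetical stand-in for the FPGA/NPU call; returns per-class scores.
    rng = random.Random(sample_id)
    return [rng.random() for _ in range(NUM_CLASSES)]

def run_test(num_queries: int) -> list[tuple[int, int]]:
    """Steps 1-5: pick a random sample ID, send the inference request,
    and report back the max-scoring class ID for each query."""
    results = []
    for _ in range(num_queries):
        sample_id = random.randrange(DATASET_SIZE)      # step 2
        scores = run_npu_inference(sample_id)           # step 3
        max_class_id = max(range(NUM_CLASSES),          # step 4: argmax
                           key=scores.__getitem__)
        results.append((sample_id, max_class_id))       # step 5
    return results
```

In the real benchmark the scoring happens on the NPU and only the sample IDs and class IDs cross the host/device boundary, which is why the steps are phrased as transmissions.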
MLPerf is an AI/machine-learning hardware benchmarking organization backed by
industry members including Google. For each benchmark version it accepts
submissions from companies, runs AI performance tests on their hardware, and
publishes the results on its official website to show each product's
processing speed and performance.
The FPGA-based implementation from DEEPX demonstrates
about 1,000 FPS (frames per second) on MobileNet v1 in the
MLPerf AI benchmark category. DEEPX's first chip, slated for
release in 2022, is expected to reach over 10,000 FPS on
MobileNet v1 and to outperform current AI hardware server solutions.
This video shows how fast the DEEPX NPU processes the 50,000 images
of the ImageNet 2012 dataset on an FPGA board. The demo system
consists of a Windows PC and a Xilinx Alveo U250 FPGA board.
The FPGA-based implementation runs at 260 MHz and demonstrates
992 IPS (inferences per second), about 1.0 ms per inference,
on MobileNet v1 in the MLPerf AI benchmark category.
The ASIC implementation is expected to increase throughput FIVEFOLD without
any NPU design change, simply through a higher clock frequency
(top-ranked in the MLPerf AI benchmark category).
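The quoted figures can be checked with back-of-the-envelope arithmetic. This sketch uses only the numbers stated in the text (992 IPS, 260 MHz, fivefold clock scaling); the resulting ASIC estimate reflects the clock-scaling claim alone, not a measured result.

```python
# Figures quoted in the text for the FPGA demo.
fpga_ips = 992                     # inferences per second on the Alveo U250
latency_ms = 1000 / fpga_ips       # ~1.01 ms per inference, matching "1.0 ms"

fpga_clock_mhz = 260               # FPGA clock frequency from the text
clock_scale = 5                    # fivefold clock improvement claimed for the ASIC
asic_ips = fpga_ips * clock_scale  # throughput estimate from clock scaling alone

print(f"FPGA: {latency_ms:.2f} ms/inference; "
      f"ASIC estimate: {asic_ips} IPS at {fpga_clock_mhz * clock_scale} MHz")
```

Note that throughput scales with clock frequency only if the NPU pipeline, not memory bandwidth, is the bottleneck, which is consistent with the text's "without any NPU design change" claim.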