NeurIPS 2019 Expo Demo
Dec. 8, 2019
Industry leading performance per watt with Intel® Nervana™ Neural Network Processor for Inference
Sponsor: Intel AI
The speed and accuracy of modern neural networks have enabled broad industry deployments, driving up the volume of inference cycles in the datacenter. More complex models and applications demand ever-increasing performance-power efficiency. The Intel® Nervana™ Neural Network Processor for Inference (Intel® Nervana™ NNP-I) was built from the ground up to deliver on this new reality: it processes 360 images per second per watt on ResNet50, achieving 4.8 TOPS per watt.
Each Intel® Nervana™ NNP-I has 12 Inference Compute Engine (ICE) cores that can work autonomously, with each core running one network, multiple networks, or a separate copy of the same network. ICE cores can also work collaboratively to solve a larger network or to reduce latency. Alongside the 12 ICE cores, 2 CPU cores built on Intel's latest 10nm architecture share the same fabric and cache, enabling fast communication between the two.
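The autonomous per-core mode described above can be illustrated with a generic scheduling sketch. This is not the NNP-I SDK; `run_network` and the core-to-work mapping are hypothetical stand-ins for a driver that binds a compiled network to an ICE core:

```python
from concurrent.futures import ThreadPoolExecutor

NUM_ICE_CORES = 12  # ICE cores per NNP-I device

def run_network(core_id, network, batch):
    # Hypothetical stand-in: a real driver would submit `batch` to the
    # compiled `network` bound to ICE core `core_id`. Here we just tag
    # the result so the dispatch pattern is visible.
    return (core_id, network, len(batch))

# Autonomous mode: each core independently serves its own copy of a network.
with ThreadPoolExecutor(max_workers=NUM_ICE_CORES) as pool:
    futures = [
        pool.submit(run_network, core, "resnet50", list(range(8)))
        for core in range(NUM_ICE_CORES)
    ]
    results = [f.result() for f in futures]

print(len(results))  # one result per ICE core -> 12
```

In collaborative mode the same pool would instead carry partitions of a single large network, trading per-core independence for lower latency on one model.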
This demo will showcase 32 Intel® Nervana™ NNP-I M.2 cards running in a 1U chassis on a variety of workloads: image classification (ResNet50), object detection (SSD-ResNet34), and recommendation (NCF).
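Serving three workload types across 32 cards requires some request-to-device assignment. A minimal round-robin sketch (illustrative only; the device count and workload names come from the demo description, the scheduling policy is an assumption):

```python
from itertools import cycle

NUM_DEVICES = 32  # NNP-I M.2 cards in the 1U chassis
WORKLOADS = ["resnet50", "ssd-resnet34", "ncf"]  # demo workloads

# Round-robin: request i goes to device i mod 32, carrying workload i mod 3.
device_iter = cycle(range(NUM_DEVICES))
assignments = [(next(device_iter), WORKLOADS[i % len(WORKLOADS)])
               for i in range(96)]

# With 96 requests, every device receives exactly three, one per workload type.
per_device = {}
for dev, wl in assignments:
    per_device.setdefault(dev, []).append(wl)
print(all(sorted(v) == sorted(WORKLOADS) for v in per_device.values()))  # True
```

A production dispatcher would weight assignments by per-workload cost rather than strict round-robin, but the toy schedule shows how heterogeneous workloads spread evenly over the chassis.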