Demonstration
Accelerating Deep Neural Networks on Mobile Processor with Embedded Programmable Logic
Eugenio Culurciello · Aysegul Dundar · Jonghoon Jin · Vinayak Gokhale · Berin Martini
Tahoe A+B, Harrah’s Special Events Center 2nd Floor
We present a live demonstration of a mobile platform aimed at accelerating deep convolutional neural networks (DCNNs). DCNNs are a powerful method for categorizing images: they have achieved state-of-the-art performance on many visual classification benchmarks and have won numerous competitions. However, their computational cost prevents them from being deployed in real-time applications. We implemented a hardware accelerator on the Xilinx Zynq SoC that runs DCNNs in real time. The platform combines programmable logic (PL, the FPGA fabric) with a processing system (PS) of two ARM Cortex-A9 cores. The PL and PS share the same DDR3 memory, which allows very high throughput when transferring data between the software and the co-processor. We will demonstrate live applications of DCNNs running on our hardware.
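As context for the kind of computation the co-processor offloads from the ARM cores, the sketch below shows the core operation of a DCNN layer: convolving an input with a bank of kernels followed by a non-linearity. This is a minimal, illustrative Python/NumPy example, not the authors' accelerator code; the function name, shapes, and the ReLU non-linearity are assumptions for illustration only.

import numpy as np

def conv2d(image, kernels, bias):
    """Valid-mode 2D convolution of a single-channel image with a bank of kernels.
    Illustrative sketch of a DCNN layer's core computation (not the demo's implementation)."""
    kh, kw = kernels.shape[1], kernels.shape[2]
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((kernels.shape[0], oh, ow))
    for k in range(kernels.shape[0]):          # one output feature map per kernel
        for i in range(oh):
            for j in range(ow):
                patch = image[i:i + kh, j:j + kw]
                out[k, i, j] = np.sum(patch * kernels[k]) + bias[k]
    return np.maximum(out, 0)                  # ReLU-style non-linearity (assumed here)

# Example: a 16x16 input filtered by four 3x3 kernels yields four 14x14 feature maps.
image = np.random.rand(16, 16)
kernels = np.random.rand(4, 3, 3)
bias = np.zeros(4)
feature_maps = conv2d(image, kernels, bias)
print(feature_maps.shape)  # (4, 14, 14)

The nested multiply-accumulate loops above are exactly the kind of dense, regular arithmetic that maps well onto FPGA fabric, which is why offloading them from the ARM cores to the PL yields real-time performance.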