Shallow RNN: Accurate Time-series Classification on Resource Constrained Devices
Don Dennis · Durmus Alp Emre Acar · Vikram Mandikal · Vinu Sankar Sadasivan · Venkatesh Saligrama · Harsha Vardhan Simhadri · Prateek Jain

Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #107

Recurrent Neural Networks (RNNs) capture long dependencies and context, and hence are the key component of typical sequential-data tasks. However, the sequential nature of RNNs dictates a large inference cost for long sequences, even if the hardware supports parallelization. To induce long-term dependencies, and yet admit parallelization, we introduce novel shallow RNNs (SRNNs). In this architecture, the first layer splits the input sequence into bricks and runs several independent RNNs, one per brick. The second layer consumes the outputs of the first layer using a second RNN, thus capturing long dependencies. We provide theoretical justification for our architecture under weak assumptions that we verify on real-world benchmarks. Furthermore, we show that for time-series classification, our technique leads to substantially improved inference time over standard RNNs without compromising accuracy. For example, we can deploy audio-keyword classification on tiny Cortex M4 devices (100 MHz processor, 256 KB RAM, no DSP available), which was not possible using standard RNN models. Similarly, using SRNN in the popular Listen-Attend-Spell (LAS) architecture for phoneme classification [4], we can reduce the lag in phoneme classification by 10-12x while maintaining state-of-the-art accuracy.
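The two-layer structure described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the RNN cell, dimensions, and parameter names are assumptions, and the brick size k is a hypothetical hyperparameter. The key property is that the first-layer loop over bricks is embarrassingly parallel, while only the short second-layer scan over T/k brick summaries is sequential.

```python
import numpy as np

def rnn_scan(x, W, U, bias):
    """Run a simple tanh RNN over a sequence x of shape (T, d_in);
    return the final hidden state (an assumed cell, for illustration)."""
    h = np.zeros(W.shape[0])
    for x_t in x:
        h = np.tanh(W @ h + U @ x_t + bias)
    return h

def shallow_rnn(x, params1, params2, k):
    """Shallow RNN sketch: split x (T, d_in) into T/k bricks of length k,
    run the first-layer RNN on each brick independently (parallelizable),
    then run the second-layer RNN over the T/k brick summaries to
    capture long-range dependencies."""
    T = x.shape[0]
    assert T % k == 0, "sequence length must be divisible by brick size"
    bricks = x.reshape(T // k, k, -1)
    # First layer: each brick is processed independently of the others.
    summaries = np.stack([rnn_scan(brick, *params1) for brick in bricks])
    # Second layer: a short sequential scan over the summaries.
    return rnn_scan(summaries, *params2)

# Hypothetical dimensions, purely for illustration.
rng = np.random.default_rng(0)
d_in, d1, d2, T, k = 8, 16, 16, 32, 8
params1 = (0.1 * rng.normal(size=(d1, d1)),
           0.1 * rng.normal(size=(d1, d_in)),
           np.zeros(d1))
params2 = (0.1 * rng.normal(size=(d2, d2)),
           0.1 * rng.normal(size=(d2, d1)),
           np.zeros(d2))
h = shallow_rnn(rng.normal(size=(T, d_in)), params1, params2, k)
print(h.shape)
```

With these settings the second layer only takes T/k = 4 sequential steps instead of T = 32, which is where the inference-time savings on constrained devices come from.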

Author Information

Don Dennis (Carnegie Mellon University)
Durmus Alp Emre Acar (Boston University)
Vikram Mandikal (The University of Texas at Austin)
Vinu Sankar Sadasivan (Indian Institute of Technology Gandhinagar)
Venkatesh Saligrama (Boston University)
Harsha Vardhan Simhadri (Microsoft Research)
Prateek Jain (Microsoft Research)