Poster
Deep Learning of Invariant Features via Tracked Video Sequences
Will Y Zou · Andrew Y Ng · Shenghuo Zhu · Kai Yu

Thu Dec 06 02:00 PM -- 12:00 AM (PST) @ Harrah’s Special Events Center 2nd Floor

We use video sequences produced by tracking as training data to learn invariant features. These features are spatial rather than temporal, and are well suited to extraction from still images. Trained with a temporal coherence objective, a multi-layer neural network encodes invariances that grow increasingly complex up the layer hierarchy. Without fine-tuning on labels, we achieve competitive performance on five non-temporal image datasets and state-of-the-art classification accuracy of 61% on the STL-10 object recognition dataset.
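The temporal coherence idea can be illustrated with a minimal sketch: features computed on consecutive frames of the same track are penalized for changing, which pushes the encoder toward invariance. The L1 penalty form, the function name, and the linear encoder below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def temporal_coherence_loss(feat_t, feat_t1, lam=1.0):
    # L1 "slowness" penalty: features of consecutive tracked frames
    # should change little, so minimizing this term encourages
    # invariance to the small appearance changes within a track.
    return lam * np.abs(feat_t - feat_t1).sum()

# Toy illustration with a hypothetical linear encoder W.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))                      # 16-dim patch -> 8-dim feature
frame_t = rng.normal(size=16)                     # patch at time t
frame_t1 = frame_t + 0.01 * rng.normal(size=16)   # same patch, slightly perturbed
loss = temporal_coherence_loss(W @ frame_t, W @ frame_t1)
```

In a full model this penalty would be combined with a reconstruction or sparsity term and minimized over many tracked frame pairs; here it only shows the shape of the objective.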

Author Information

Will Y Zou (Stanford University)
Andrew Y Ng (Baidu Research)
Shenghuo Zhu (NEC Laboratories America)
Kai Yu (Baidu)
