
Learning From Weakly Supervised Data by The Expectation Loss SVM (e-SVM) algorithm
Jun Zhu · Junhua Mao · Alan Yuille

Mon Dec 08 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D

In many situations we have some measurement of confidence on "positiveness" for a binary label. The "positiveness" is a continuous value in a bounded interval that quantifies the affiliation of each training sample to the positive class. We propose a novel learning algorithm, called the expectation loss SVM (e-SVM), for problems where only the "positiveness", rather than a binary label, of each training sample is available. The e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision, where the exact positiveness value of each training example is unobserved. In experiments, we show that e-SVM effectively addresses the segment proposal classification task under both strong supervision (e.g., pixel-level annotations are available) and weak supervision (e.g., only bounding-box annotations are available), and outperforms alternative approaches. We further validate the method on two major computer vision tasks: semantic segmentation and object detection. Our method achieves state-of-the-art object detection performance on the PASCAL VOC 2007 dataset.
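The core idea described above, taking the expectation of the hinge loss under the soft "positiveness" labels, can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: each sample contributes its positive-class hinge loss weighted by its positiveness p_i and its negative-class hinge loss weighted by (1 - p_i); the regularization constant C, the toy data, and the subgradient-descent settings are all assumptions made for the example.

```python
import numpy as np

def esvm_loss(w, X, p, C=1.0):
    """Expectation hinge loss: sample i contributes p_i times the hinge
    loss toward label +1 plus (1 - p_i) times the hinge loss toward -1,
    on top of the usual L2 regularizer on w."""
    scores = X @ w
    hinge_pos = np.maximum(0.0, 1.0 - scores)  # loss if the label were +1
    hinge_neg = np.maximum(0.0, 1.0 + scores)  # loss if the label were -1
    data_term = np.sum(p * hinge_pos + (1.0 - p) * hinge_neg)
    return 0.5 * w @ w + C * data_term

# Toy data: soft "positiveness" labels in [0, 1] driven by feature 0,
# standing in for, e.g., a segment's overlap with a ground-truth region.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
p = 1.0 / (1.0 + np.exp(-3.0 * X[:, 0]))

# Plain subgradient descent on the objective above.
C, lr = 1.0, 1e-2
w = np.zeros(2)
for _ in range(200):
    scores = X @ w
    # Subgradient of the data term: -p_i x_i where the +1 hinge is active,
    # +(1 - p_i) x_i where the -1 hinge is active.
    g = w + C * (X.T @ (-(p * (scores < 1.0)) + (1.0 - p) * (scores > -1.0)))
    w -= lr * g
```

Because p is monotone in feature 0, the learned weight vector ends up with a positive coefficient on that feature, and the objective drops well below its value at w = 0.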

Author Information

Jun Zhu (University of California, Los Angeles)
Junhua Mao (UCLA)
Alan Yuille (JHU)
