Most existing algorithms for zero-shot classification rely on attribute-based semantic relations among categories to recognize novel categories without observing any of their instances. However, training these zero-shot classification models still requires attribute labeling for each class (or even each instance) in the training dataset, which is also expensive. To this end, in this paper we pose a new problem scenario: "Can we derive zero-shot learning for novel attribute detectors/classifiers and use them to automatically annotate the dataset for labeling efficiency?" Given only a small set of detectors trained to recognize some manually annotated attributes (i.e., the seen attributes), we aim to synthesize detectors of novel attributes in a zero-shot learning manner. Our proposed method, Zero-Shot Learning for Attributes (ZSLA), the first of its kind to the best of our knowledge, tackles this new research problem by applying set operations: it first decomposes the seen attributes into their basic attributes and then recombines these basic attributes into novel ones. Extensive experiments verify that our synthesized detectors accurately capture the semantics of the novel attributes, and show their superior performance in detection and localization compared to other baseline approaches. Moreover, we demonstrate automatic annotation using our synthesized detectors on the Caltech-UCSD Birds-200-2011 dataset: various generalized zero-shot classification algorithms trained on the dataset re-annotated by ZSLA achieve performance comparable to those trained with the manual ground-truth annotations.
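For illustration, below is a minimal PyTorch sketch of the decompose-and-recombine idea described above; it is not the authors' implementation. Learned intersection and union operators act on embeddings of attribute detectors, first extracting a shared base attribute from two seen attributes (e.g., "yellow" from "yellow wing" and "yellow belly") and then composing it with another base attribute into a novel one (e.g., "yellow leg"). The module names, embedding dimension, and MLP parameterization are all illustrative assumptions.

```python
# Hypothetical sketch of ZSLA's decompose-and-recombine idea. Module names,
# dimensions, and the MLP parameterization are assumptions, not the paper's code.
import torch
import torch.nn as nn

EMB_DIM = 64  # assumed embedding size of an attribute detector


class SetOp(nn.Module):
    """A learned binary set operator over attribute-detector embeddings."""

    def __init__(self, dim: int = EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Concatenate the two operand embeddings and map them to a new one.
        return self.net(torch.cat([a, b], dim=-1))


intersect = SetOp()  # decompose: extract the shared base attribute
union = SetOp()      # recombine: compose base attributes into a novel one

# Embeddings of two seen attribute detectors that share the color "yellow",
# plus a base "part" embedding (random placeholders for illustration).
yellow_wing = torch.randn(EMB_DIM)
yellow_belly = torch.randn(EMB_DIM)
leg = torch.randn(EMB_DIM)

# "yellow wing" ∩ "yellow belly" -> base attribute "yellow"
yellow = intersect(yellow_wing, yellow_belly)

# "yellow" ∪ "leg" -> detector embedding for the novel attribute "yellow leg"
yellow_leg = union(yellow, leg)
print(yellow_leg.shape)  # torch.Size([64])
```

In the actual method, such operators would be trained so that synthesized detector embeddings reproduce the behavior of held-out seen attribute detectors; the sketch only shows the data flow of decomposition and recombination.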
Author Information
Yu-Hsuan Li (Software Development Department)
Tzu-Yin Chao
Ching-Chun Huang (National Yang Ming Chiao Tung University)
Pin-Yu Chen (IBM Research)
Wei-Chen Chiu (National Chiao Tung University)