Poster
A Surprisingly Simple Approach to Generalized Few-Shot Semantic Segmentation
Tomoya Sakai · Haoxiang Qiu · Takayuki Katsuki · Daiki Kimura · Takayuki Osogami · Tadanobu Inoue
East Exhibit Hall A-C #1810
Wed 11 Dec 11 a.m. PST — 2 p.m. PST
Abstract:
The goal of *generalized* few-shot semantic segmentation (GFSS) is to recognize *novel-class* objects by training with a few annotated examples together with a *base-class* model that has learned knowledge about the base classes. Unlike classic few-shot semantic segmentation, GFSS aims to classify pixels into both base and novel classes, making it a more practical setting. Current GFSS methods rely on several techniques, such as combinations of customized modules, carefully designed loss functions, meta-learning, and transductive learning. However, we found that a simple rule and standard supervised learning substantially improve GFSS performance. In this paper, we propose a simple yet effective method for GFSS that does not use the techniques mentioned above. We also theoretically show that our method perfectly maintains the segmentation performance of the base-class model over most of the base classes. Through numerical experiments, we demonstrated the effectiveness of our method: it improved novel-class segmentation performance in the $1$-shot scenario by $6.1$% on the PASCAL-$5^i$ dataset, $4.7$% on the PASCAL-$10^i$ dataset, and $1.0$% on the COCO-$20^i$ dataset.
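The abstract does not spell out the method beyond "a simple rule and standard supervised learning", so the following is only a minimal illustrative sketch of the GFSS setting it describes, not the authors' approach: a frozen base-class segmenter is extended with a novel-class head trained by plain supervised learning on a few annotated examples, and pixels are then scored over base and novel classes jointly. All names (`BaseSegmenter`, `novel_head`) and sizes are hypothetical.

```python
# Illustrative GFSS setup (assumed, not the paper's method): freeze a
# base-class segmenter, train a novel-class head on a few labeled images,
# and classify every pixel over base + novel classes at inference.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_BASE, NUM_NOVEL, FEAT_DIM = 15, 5, 32  # hypothetical class/feature sizes

class BaseSegmenter(nn.Module):
    """Stand-in for a pretrained base-class segmentation network."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, FEAT_DIM, kernel_size=3, padding=1)
        self.base_head = nn.Conv2d(FEAT_DIM, NUM_BASE, kernel_size=1)

    def forward(self, x):
        feats = F.relu(self.backbone(x))
        return feats, self.base_head(feats)

base_model = BaseSegmenter()
for p in base_model.parameters():          # keep base-class knowledge fixed
    p.requires_grad_(False)

novel_head = nn.Conv2d(FEAT_DIM, NUM_NOVEL, kernel_size=1)
optimizer = torch.optim.SGD(novel_head.parameters(), lr=1e-2)

# Few-shot support set: pixel labels in {-1 (ignore), 0..NUM_NOVEL-1}.
support_imgs = torch.randn(5, 3, 64, 64)
support_lbls = torch.randint(-1, NUM_NOVEL, (5, 64, 64))

for _ in range(100):                       # standard supervised training
    feats, _ = base_model(support_imgs)
    novel_logits = novel_head(feats)
    loss = F.cross_entropy(novel_logits, support_lbls, ignore_index=-1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: score every pixel over base and novel classes together.
with torch.no_grad():
    feats, base_logits = base_model(torch.randn(1, 3, 64, 64))
    all_logits = torch.cat([base_logits, novel_head(feats)], dim=1)
    prediction = all_logits.argmax(dim=1)  # per-pixel label in [0, NUM_BASE + NUM_NOVEL)
```

Because the base model's parameters and base-class head are untouched in this sketch, its predictions over the base classes are preserved, which mirrors the abstract's claim that the proposed method maintains base-class performance.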