

Poster
in
Workshop: Workshop on Machine Learning Safety

Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning

Gyojin Han · Jaehyun Choi · HyeongGwon Hong · Junmo Kim


Abstract:

Regularization-based continual learning models generally restrict access to data from previous tasks, imitating real-world settings with memory and privacy constraints. However, this restriction prevents these models from tracking their performance on each previous task; in other words, current continual learning methods are vulnerable to attacks that target a previous task. We demonstrate this vulnerability of regularization-based continual learning methods by presenting a simple task-specific training-time adversarial attack that can be applied during the learning of a new task. Training data generated by the proposed attack degrades performance on the specific task targeted by the attacker. Experimental results confirm the vulnerability described in this paper and demonstrate the importance of developing continual learning models that are robust to adversarial attacks.
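The abstract does not spell out the attack procedure, but one common way to realize a task-targeted training-time attack is to poison the new task's training inputs with PGD-style perturbations computed against the model state learned on the targeted task, so that training on the poisoned data forces large, conflicting parameter updates. The sketch below is an illustrative assumption of that idea, not the authors' algorithm; the function `poison_batch`, its hyperparameters, and the shared-output-head setup are all hypothetical.

```python
# Illustrative sketch only: a PGD-style data-poisoning step for a task-targeted
# training-time attack on a regularization-based continual learner (e.g., EWC).
# Names, hyperparameters, and the shared-head assumption are NOT from the paper.

import torch
import torch.nn as nn


def poison_batch(x_new, y_new, prev_model, steps=10, eps=8 / 255, alpha=2 / 255):
    """Perturb new-task inputs so that training on them conflicts with the
    parameters important for the targeted previous task.

    Assumptions: the attacker can query the model state learned up to the
    targeted task (prev_model), and the output space is shared across tasks.
    """
    criterion = nn.CrossEntropyLoss()
    x_adv = x_new.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Loss of the previously trained model on the (perturbed) new-task batch
        loss = criterion(prev_model(x_adv), y_new)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss so that fine-tuning on these inputs pushes the model
        # far from the solution learned for the targeted task
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x_new + (x_adv - x_new).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv


if __name__ == "__main__":
    # Toy usage with a hypothetical small classifier (illustrative only)
    prev_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x_new = torch.rand(8, 1, 28, 28)       # new-task inputs in [0, 1]
    y_new = torch.randint(0, 10, (8,))     # new-task labels
    x_poisoned = poison_batch(x_new, y_new, prev_model)
    print(x_poisoned.shape)
```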
