

Poster in Workshop: OPT 2022: Optimization for Machine Learning

Optimization for Robustness Evaluation beyond ℓp Metrics

Hengyue Liang · Buyun Liang · Ying Cui · Tim Mitchell · Ju Sun


Abstract: Empirical evaluations of neural network models against adversarial attacks entail solving nontrivial constrained optimization problems. Popular algorithms for solving these constrained problems rely on projected gradient descent (PGD) and require careful tuning of multiple hyperparameters. Moreover, PGD can handle only $\ell_1$, $\ell_2$, and $\ell_\infty$ attacks due to its use of analytical projectors. In this paper, we introduce an alternative algorithmic framework that blends a general-purpose constrained-optimization solver, PyGRANSO With Constraint-Folding (PWCF), to add reliability and generality to existing adversarial evaluations. PWCF 1) finds good-quality solutions without delicate tuning of multiple hyperparameters, and 2) can handle general attack models that are inaccessible to existing algorithms, e.g., $\ell_{p>0}$ and perceptual attacks.
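
To illustrate the "analytical projector" limitation the abstract refers to, below is a minimal PyTorch sketch of a standard $\ell_\infty$ PGD attack (this is generic background, not the paper's PWCF method; the model, radius eps, step size, and iteration count are placeholder assumptions). The key point is the projection step, which has a closed form only for a few norms such as $\ell_1$, $\ell_2$, and $\ell_\infty$.

import torch

def pgd_linf_attack(model, x, y, eps=8/255, step=2/255, n_steps=10):
    """Maximize cross-entropy loss subject to ||x' - x||_inf <= eps and x' in [0, 1]."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x)
    for _ in range(n_steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + step * grad.sign()       # gradient-ascent (sign) step
            delta = delta.clamp(-eps, eps)           # analytical projection onto the l_inf ball
            delta = (x + delta).clamp(0, 1) - x      # keep the perturbed input in valid pixel range
    return (x + delta).detach()

For general attack models (e.g., arbitrary $\ell_{p>0}$ or perceptual distances), no such closed-form projection exists, which is why the paper turns to a general-purpose constrained-optimization solver instead.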
