Enabling Fast and Universal Audio Adversarial Attack Using Generative Model

Abstract

Recently, the vulnerability of deep neural network (DNN)-based audio systems to adversarial attacks has attracted increasing attention. However, existing audio adversarial attacks assume that the adversary possesses the user's entire audio input and has a sufficient time budget to generate the adversarial perturbation. These idealized assumptions make existing audio adversarial attacks largely impossible to launch in a timely fashion in practice (e.g., playing an unnoticeable adversarial perturbation along with the user's streaming input). To overcome these limitations, in this paper we propose the fast audio adversarial perturbation generator (FAPG), which uses a generative model to produce the adversarial perturbation for an audio input in a single forward pass, thereby drastically improving the perturbation generation speed. Built on top of FAPG, we further propose the universal audio adversarial perturbation generator (UAPG), a scheme that crafts a universal adversarial perturbation that can be imposed on arbitrary benign audio input to cause misclassification. Extensive experiments on DNN-based audio systems show that our proposed FAPG achieves a high attack success rate with up to 214× speedup over existing audio adversarial attack methods. Moreover, our proposed UAPG generates universal adversarial perturbations that achieve much better attack performance than state-of-the-art solutions.
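The core idea can be illustrated with a minimal sketch: a trained generator maps an audio clip to a bounded perturbation in one forward pass (FAPG), and a fixed, input-agnostic perturbation can be applied to any clip (UAPG). The generator below is a hypothetical stand-in (a single random linear layer); the frame length, perturbation budget, and all names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

AUDIO_LEN = 1024   # length of one audio frame (illustrative assumption)
EPS = 0.01         # L-infinity perturbation budget (illustrative assumption)

# Hypothetical toy "generator": one random linear layer plus tanh, standing in
# for the trained generative model. In FAPG the generator is trained offline;
# at attack time a single forward pass yields the perturbation.
W = (rng.standard_normal((AUDIO_LEN, AUDIO_LEN)) * 1e-3).astype(np.float32)

def generate_perturbation(audio: np.ndarray) -> np.ndarray:
    """One forward pass: map benign audio to a perturbation bounded by EPS."""
    raw = np.tanh(W @ audio)   # generator output squashed into (-1, 1)
    return EPS * raw           # scale to the allowed budget

def attack(audio: np.ndarray) -> np.ndarray:
    """Return the adversarial example x + G(x), clipped to the valid range."""
    delta = generate_perturbation(audio)
    return np.clip(audio + delta, -1.0, 1.0)

# UAPG-style universal perturbation: a single fixed vector, generated once,
# then added to arbitrary benign inputs (sketch only).
UNIVERSAL_DELTA = EPS * np.tanh(rng.standard_normal(AUDIO_LEN).astype(np.float32))

benign = rng.uniform(-0.5, 0.5, AUDIO_LEN).astype(np.float32)
adv = attack(benign)                                   # input-specific attack
adv_universal = np.clip(benign + UNIVERSAL_DELTA, -1.0, 1.0)  # universal attack
```

The speedup claim follows from this structure: instead of running an iterative optimization per input, the attack cost is one generator forward pass (or, for UAPG, a single vector addition).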

Publication
In the 2021 AAAI Conference on Artificial Intelligence