attack.Bpp
- class Bpp
Bases:
BadNet
BppAttack: Stealthy and Efficient Trojan Attacks Against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
basic structure of main:
config args, save_path, fix random seed
set the clean train data and clean test data
set the device, model, criterion, optimizer, training schedule.
set the backdoor image processing: image quantization and dithering
train with backdoor modification simultaneously, including contrastive adversarial training
save attack result
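The backdoor image processing step above can be sketched in NumPy. This is a minimal illustration, not the library's own implementation: `quantize` performs uniform image quantization to `squeeze_num` levels per channel, and `floyd_steinberg_dither` shows the dithering variant via classic Floyd–Steinberg error diffusion (function names here are illustrative).

```python
import numpy as np

def quantize(img, squeeze_num):
    """Uniformly quantize a float image in [0, 255] to squeeze_num levels per channel."""
    return np.round(img / 255.0 * (squeeze_num - 1)) / (squeeze_num - 1) * 255.0

def floyd_steinberg_dither(img, squeeze_num):
    """Quantize with Floyd-Steinberg error diffusion: each pixel's rounding
    error is spread to its unvisited neighbours before they are quantized."""
    out = img.astype(np.float64).copy()
    h, w, _ = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x].copy()
            new = np.round(old / 255.0 * (squeeze_num - 1)) / (squeeze_num - 1) * 255.0
            out[y, x] = new
            err = old - new
            # diffuse the quantization error with the standard 7/16, 3/16, 5/16, 1/16 weights
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return np.clip(out, 0.0, 255.0)
```

With `squeeze_num=8` (3 bits per channel), every output pixel lands on one of 8 evenly spaced levels; dithering keeps the local average intensity close to the original, which is what makes the trigger visually stealthy.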
attack = Bpp()
attack.attack()
Note
@InProceedings{Wang_2022_CVPR, author = {Wang, Zhenting and Zhai, Juan and Ma, Shiqing}, title = {BppAttack: Stealthy and Efficient Trojan Attacks Against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {15074-15084}}
- Parameters:
attack (string) – name of the attack, used to match the transform and set the save-path prefix.
attack_target (int) – target class number in the all2one attack
attack_label_trans (str) – the type of label modification used in the backdoor attack
pratio (float) – the poison rate
bd_yaml_path (string) – path to the yaml file that provides additional default attributes
neg_ratio (float) – negative sample ratio
squeeze_num (int) – number of quantization levels per channel used by floydDitherspeed for image quantization
dithering (bool) – whether to use dithering
**kwargs (optional) – Additional attributes.
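How `attack_target`, `pratio`, and `squeeze_num` interact can be sketched as a hypothetical all2one poisoning routine (the `poison_batch` and `quantize` names below are illustrative, not the library's API): a `pratio` fraction of the batch receives the quantization trigger and is relabeled to `attack_target`.

```python
import numpy as np

def quantize(img, squeeze_num):
    """Uniform image quantization to squeeze_num levels per channel."""
    return np.round(img / 255.0 * (squeeze_num - 1)) / (squeeze_num - 1) * 255.0

def poison_batch(images, labels, attack_target=0, pratio=0.1, squeeze_num=8):
    """Sketch: apply the quantization trigger to a pratio fraction of the
    batch and relabel those samples to attack_target (all2one)."""
    rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(len(images) * pratio))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = quantize(images[i], squeeze_num)
        labels[i] = attack_target
    return images, labels
```

In the actual attack the dithering variant may replace plain quantization when `dithering=True`, and `neg_ratio` controls how many negative samples are generated for the contrastive adversarial training stage.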