attack.LIRA
- class LIRA
Bases: BadNet
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks
basic structure:
1. config args, save_path, fix random seed
2. set the clean train data and clean test data
3. set the device, model, generator, criterion, optimizer, and training schedule
4. train the generator and classifier jointly (see the sketch after the usage example below)
5. fix the generator, then train the classifier alone
6. save the attack result for defense
attack = LIRA()
attack.attack()
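The two training stages (steps 4 and 5 above) can be pictured with the minimal sketch below. It is not the library's implementation: classifier, generator, and train_loader are hypothetical stand-ins, the paper's alternating generator/classifier updates are collapsed into a single joint update for brevity, and the eps-clamped additive trigger and alpha-weighted clean/poison loss only illustrate where the eps, alpha, both_train_epochs, and train_epoch parameters enter the procedure.

import torch
import torch.nn.functional as F

def apply_trigger(generator, x, eps):
    # Additive trigger: bound the generator output, then perturb x by at most eps.
    noise = torch.clamp(generator(x), -1.0, 1.0)
    return torch.clamp(x + eps * noise, 0.0, 1.0)

def lira_train(classifier, generator, train_loader, attack_target,
               eps=0.01, alpha=0.5, both_train_epochs=10, train_epoch=10,
               lr=0.01, lr_atk=1e-4, device="cpu"):
    opt_cls = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)
    opt_gen = torch.optim.Adam(generator.parameters(), lr=lr_atk)

    # Stage 1: train the trigger generator and the classifier together.
    classifier.train(); generator.train()
    for _ in range(both_train_epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            y_bd = torch.full_like(y, attack_target)          # all-to-one relabeling
            x_bd = apply_trigger(generator, x, eps)
            loss = (alpha * F.cross_entropy(classifier(x), y)
                    + (1.0 - alpha) * F.cross_entropy(classifier(x_bd), y_bd))
            opt_cls.zero_grad(); opt_gen.zero_grad()
            loss.backward()
            opt_cls.step(); opt_gen.step()

    # Stage 2: fix the generator and train (fine-tune) the classifier only.
    generator.eval()
    for p in generator.parameters():
        p.requires_grad_(False)
    for _ in range(train_epoch):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            y_bd = torch.full_like(y, attack_target)
            with torch.no_grad():
                x_bd = apply_trigger(generator, x, eps)
            loss = (alpha * F.cross_entropy(classifier(x), y)
                    + (1.0 - alpha) * F.cross_entropy(classifier(x_bd), y_bd))
            opt_cls.zero_grad()
            loss.backward()
            opt_cls.step()
    return classifier, generator

In the actual attack, the fine-tuning stage would presumably use the configured finetune_optimizer, finetune_lr, and StepLR schedule, and evaluation would use test_eps and test_alpha instead of the training values.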
Note
@inproceedings{Doan2021lira,
  title     = {LIRA: Learnable, Imperceptible and Robust Backdoor Attacks},
  author    = {Khoa D. Doan and Yingjie Lao and Weijie Zhao and Ping Li},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
  year      = {2021}
}
- Parameters:
attack (string) – name of the attack, used to match the transform and to set the prefix of the save path.
attack_target (int) – target class index in the all-to-one attack
attack_label_trans (str) – type of label modification used in the backdoor attack
bd_yaml_path (string) – path to the YAML file that provides additional default attributes
random_crop (int) – random crop size
random_rotation (int) – random rotation degree
attack_model (string) – which generator model to use
lr_atk (float) – learning rate for the generator
eps (float) – epsilon bound on the generated trigger noise during training
test_eps (float) – epsilon bound on the generated trigger noise during testing
alpha (float) – ratio between the clean loss and the poison loss during training
test_alpha (float) – ratio between the clean loss and the poison loss during testing
finetune_lr (float) – learning rate for the fine-tuning stage
finetune_steplr_gamma (float) – gamma for the fine-tuning StepLR scheduler
finetune_steplr_milestones (list) – milestones for the fine-tuning StepLR scheduler
finetune_optimizer (string) – optimizer used in the fine-tuning stage
both_train_epochs (int) – number of epochs for joint training of the generator and classifier
train_epoch (int) – number of epochs for training the classifier with the generator fixed
verbose (bool) – verbosity of logging
avoid_clsmodel_reinitialization (bool) – whether to avoid reinitializing the classifier model (clsmodel) during training
**kwargs (optional) – Additional attributes.
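For reference, the sketch below shows how these attributes might be collected, whether in the YAML file pointed to by bd_yaml_path or supplied as additional keyword attributes. Every value is an illustrative placeholder, not a default shipped with the library.

# Illustrative only: a dict mirroring the attributes the LIRA config can supply.
# All values are hypothetical examples, not the library's defaults.
lira_config = {
    "attack": "lira",
    "attack_target": 0,                    # all-to-one target class
    "attack_label_trans": "all2one",
    "random_crop": 4,
    "random_rotation": 10,
    "attack_model": "autoencoder",         # hypothetical generator name
    "lr_atk": 1e-4,
    "eps": 0.01,
    "test_eps": 0.01,
    "alpha": 0.5,
    "test_alpha": 0.5,
    "finetune_lr": 0.01,
    "finetune_optimizer": "sgd",
    "finetune_steplr_gamma": 0.1,
    "finetune_steplr_milestones": [30, 60],
    "both_train_epochs": 10,
    "train_epoch": 50,
    "verbose": True,
    "avoid_clsmodel_reinitialization": False,
}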