attack.PoisonInk
- class PoisonInk
Bases: BadNet
Poison ink: Robust and invisible backdoor attack
basic structure:
config args, save path, fix random seed
set the clean train data and clean test data
set the attack img transform and label transform
set the backdoor attack data and backdoor test data
set the device, model, criterion, optimizer, training schedule.
save the attack result for defense
attack = PoisonInk()
attack.attack()
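A minimal usage sketch of the pipeline above, assuming the class is importable as attack.poison_ink.PoisonInk (the module path is an assumption; only the constructor and attack() call are documented here):

from attack.poison_ink import PoisonInk  # module path is an assumption

attack = PoisonInk()  # instantiate with defaults; see Parameters below
attack.attack()       # run the pipeline: build clean and backdoor data, train, and save the attack result for defense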
Note
@article{zhang2022poison, title={Poison ink: Robust and invisible backdoor attack}, author={Zhang, Jie and Chen, Dongdong and Huang, Qidong and Liao, Jing and Zhang, Weiming and Feng, Huamin and Hua, Gang and Yu, Nenghai}, journal={IEEE Transactions on Image Processing}, year={2022}}
- Parameters:
attack (string) – name of the attack; used to match the transform and to set the prefix of the save path.
attack_target (int) – index of the target class in the all-to-one attack.
attack_label_trans (str) – type of label modification applied in the backdoor attack.
pratio (float) – poisoning ratio, i.e. the fraction of training samples that are poisoned.
bd_yaml_path (string) – path to the YAML file that provides additional default attributes.
attack_train_replace_imgs_path (str) – path where the processed training images (used to replace the originals) are saved.
attack_test_replace_imgs_path (str) – path where the processed testing images (used to replace the originals) are saved.
**kwargs (optional) – Additional attributes.
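A hedged instantiation sketch showing how the documented parameters might be supplied as keyword arguments (via **kwargs); all values and file paths below are illustrative placeholders, not defaults:

attack = PoisonInk(
    attack="poison_ink",                                   # transform key and save-path prefix
    attack_target=0,                                       # target class of the all-to-one attack
    attack_label_trans="all2one",                          # label modification type
    pratio=0.1,                                            # poison 10% of the training set
    bd_yaml_path="path/to/poison_ink_default.yaml",        # extra default attributes
    attack_train_replace_imgs_path="path/to/train_imgs/",  # processed training images
    attack_test_replace_imgs_path="path/to/test_imgs/",    # processed testing images
)
attack.attack()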