attack.TrojanNN
- class TrojanNN
Bases: BadNet
Trojaning Attack on Neural Networks
basic structure:
1. config args, save_path, fix random seed
2. set the clean train data and clean test data, and load the pretrained model
3. find a good trigger perturbation pattern, then set the attack image transform and label transform (a sketch of this step follows the usage example below)
4. set the backdoor attack data and backdoor test data
5. set the device, model, criterion, optimizer, and training schedule
6. attack, or fine-tune the model with 5% clean data
7. save the attack result for defense
attack = TrojanNN()
attack.attack()
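Step 3 above is the heart of TrojanNN: pick a few well-connected neurons in a chosen layer and optimize a masked trigger that drives their activations toward a target value. The following PyTorch sketch only illustrates that idea and is not this module's implementation; the function name generate_trojan_trigger, the 224x224 input size, the assumption that the selected layer is fully connected, and the weight-magnitude neuron scoring are illustrative stand-ins for what selected_layer_name, num_neuron, neuron_target_values, mask_path, and mask_update_iters configure.

import torch
import torch.nn.functional as F

def generate_trojan_trigger(model, layer, mask, num_neuron=2,
                            target_value=10.0, iters=400, lr=0.1):
    # Illustrative sketch only: assumes `layer` is a fully connected layer of
    # `model` and `mask` is a 0/1 tensor broadcastable to the input shape.
    model.eval()

    # Select the neurons with the largest incoming-weight magnitude
    # (a simple stand-in for TrojanNN's connectivity-based selection).
    scores = layer.weight.detach().abs().sum(dim=1)
    selected = torch.topk(scores, num_neuron).indices

    # Capture the selected layer's activations with a forward hook.
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))

    # Optimize only the masked trigger region, starting from random noise.
    trigger = torch.rand(1, 3, 224, 224, requires_grad=True)  # assumed input size
    opt = torch.optim.Adam([trigger], lr=lr)
    target = torch.full((num_neuron,), target_value)
    for _ in range(iters):
        opt.zero_grad()
        model(trigger * mask)                      # forward pass fills acts["out"]
        loss = F.mse_loss(acts["out"][0, selected], target)
        loss.backward()
        opt.step()
        trigger.data.clamp_(0.0, 1.0)              # keep pixels in a valid range

    handle.remove()
    return (trigger * mask).detach()

In the actual pipeline, the resulting trigger pattern is what the attack image transform from step 3 stamps onto poisoned samples.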
Note
@inproceedings{Trojannn,
  author    = {Yingqi Liu and Shiqing Ma and Yousra Aafer and Wen-Chuan Lee and Juan Zhai and Weihang Wang and Xiangyu Zhang},
  title     = {Trojaning Attack on Neural Networks},
  booktitle = {25th Annual Network and Distributed System Security Symposium, {NDSS} 2018, San Diego, California, USA, February 18-21, 2018},
  publisher = {The Internet Society},
  year      = {2018},
}
- Parameters:
attack (string) – name of attack, use to match the transform and set the saving prefix of path.
attack_target (int) – target class number in the all2one attack
attack_label_trans (str) – type of label modification used in the backdoor attack
pratio (float) – the poison rate
bd_yaml_path (string) – path to the yaml file that provides additional default attributes
pretrain_model_path (string) – path to the pretrained model
mask_path (string) – path to the mask image
selected_layer_name (string) – name of the selected layer in the target model
selected_layer_param_name (string) – name of the selected layer’s parameter in the target model
num_neuron (int) – number of neurons to be selected in the target layer
neuron_target_values (float) – target value for the selected neurons; change it to a list in the yaml if necessary
mask_update_iters (int) – number of iterations used to update the mask
resource_folder_path (string) – path to the resource folder, which contains the mask image
**kwargs (optional) – Additional attributes.
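Most of these attributes are normally supplied through the yaml file referenced by bd_yaml_path rather than passed by hand. The sketch below only shows what such a file might contain: the keys mirror the parameter names above, but every value and path is a placeholder, and the exact schema expected by the attack is not guaranteed here.

import yaml  # PyYAML

# Placeholder values only; the real defaults live in the project's own yaml files.
trojannn_cfg = {
    "attack": "trojannn",
    "attack_target": 0,
    "attack_label_trans": "all2one",
    "pratio": 0.1,
    "pretrain_model_path": "path/to/pretrained_model.pt",
    "mask_path": "path/to/mask.png",
    "selected_layer_name": "fc",               # assumed layer name
    "selected_layer_param_name": "fc.weight",  # assumed parameter name
    "num_neuron": 2,
    "neuron_target_values": 10.0,
    "mask_update_iters": 400,
    "resource_folder_path": "path/to/resource/trojannn",
}

with open("trojannn_config.yaml", "w") as f:
    yaml.safe_dump(trojannn_cfg, f, sort_keys=False)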