Source code for attack.trojannn

from .badnet import BadNet
class TrojanNN(BadNet):
    r"""Trojaning Attack on Neural Networks

    basic structure:

    1. config args, save_path, fix random seed
    2. set the clean train data and clean test data and load the pretrained model
    3. find a good trigger perturbation pattern, set the attack img transform and label transform
    4. set the backdoor attack data and backdoor test data
    5. set the device, model, criterion, optimizer, training schedule
    6. attack, or use the model to fine-tune with 5% clean data
    7. save the attack result for defense

    .. code-block:: python

        attack = TrojanNN()
        attack.attack()

    .. Note::

        @inproceedings{Trojannn,
            author    = {Yingqi Liu and Shiqing Ma and Yousra Aafer and Wen-Chuan Lee and Juan Zhai and Weihang Wang and Xiangyu Zhang},
            title     = {Trojaning Attack on Neural Networks},
            booktitle = {25th Annual Network and Distributed System Security Symposium, {NDSS} 2018, San Diego, California, USA, February 18-21, 2018},
            publisher = {The Internet Society},
            year      = {2018},
        }

    Args:
        attack (string): name of the attack, used to match the transform and set the saving prefix of the path
        attack_target (int): target class number in the all2one attack
        attack_label_trans (str): type of label modification in the backdoor attack
        pratio (float): the poison rate
        bd_yaml_path (string): path to the yaml file that provides additional default attributes
        pretrain_model_path (string): path to the pretrained model
        mask_path (string): path to the mask image
        selected_layer_name (string): name of the selected layer in the target model
        selected_layer_param_name (string): name of the selected layer's parameter in the target model
        num_neuron (int): number of neurons to be selected in the target layer
        neuron_target_values (float): target value for the selected neurons; change it to a list in the yaml if necessary
        mask_update_iters (int): number of iterations to update the mask
        resource_folder_path (string): path to the resource folder, which contains the mask image
        **kwargs (optional): Additional attributes.
    """
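

# --------------------------------------------------------------------------
# Illustrative sketch (not part of the library source): the idea behind
# selected_layer_name, num_neuron and neuron_target_values above is to
# reverse-engineer a trigger that drives a few chosen neurons of a chosen
# layer toward a large target activation. The helper below, its name
# `generate_trigger`, and its arguments are hypothetical; it only assumes a
# standard PyTorch model, a binary mask marking the trigger region, and the
# indices of the selected neurons.
# --------------------------------------------------------------------------
import torch
import torch.nn.functional as F


def generate_trigger(model, layer, mask, neuron_idx, target_value=10.0,
                     steps=500, lr=0.1, img_shape=(1, 3, 32, 32), device="cpu"):
    """Optimize the trigger pixels inside `mask` so that the selected neurons
    of `layer` reach `target_value`; returns the masked trigger pattern."""
    model.eval().to(device)
    mask = mask.to(device)                    # 1 inside the trigger region, 0 elsewhere
    trigger = torch.rand(img_shape, device=device, requires_grad=True)
    activation = {}

    def hook(_module, _inputs, output):       # capture the selected layer's output
        activation["out"] = output

    handle = layer.register_forward_hook(hook)
    optimizer = torch.optim.Adam([trigger], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        model(trigger * mask)                 # only the masked pixels matter
        act = activation["out"].flatten(1)[:, neuron_idx]
        loss = F.mse_loss(act, torch.full_like(act, float(target_value)))
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            trigger.clamp_(0.0, 1.0)          # keep pixels in a valid image range

    handle.remove()
    # Poisoned images would then blend this pattern in, e.g.
    # img * (1 - mask) + trigger_pattern * mask, before training the backdoor.
    return (trigger * mask).detach()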