Source code for defense.nad

'''
This file is modified based on the following source:
link : https://github.com/bboylyg/NAD/.
The defense method is called nad.

The update includes:
	1. data preprocessing and dataset setting
	2. model setting
	3. args and config
	4. save process
	5. new metric: robust accuracy
	6. additional backbones such as resnet18 and vgg19
	7. the method to get the activation of the model
basic structure for the defense method:
	1. basic setting: args
	2. attack result (model, train data, test data)
	3. nad defense:
		a. create student models, set training parameters and determine loss functions
		b. train the student model using the teacher model with the activation of the model and the result
	4. test the result and get ASR, ACC, RC
'''
from defense.base import defense
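The "activation of the model" used above refers to spatial attention maps in the attention-transfer style of distillation that NAD builds on. A minimal sketch of how such a map and the AT distance could be computed, written with NumPy for self-containedness (the actual repository operates on PyTorch feature maps; the function names and the exact normalization here are assumptions):

```python
import numpy as np

def attention_map(fm, p=2.0):
    """Spatial attention map from a feature map of shape (N, C, H, W):
    |a|^p pooled over channels, flattened, then L2-normalized per sample."""
    am = np.mean(np.abs(fm) ** p, axis=1)            # (N, H, W)
    am = am.reshape(am.shape[0], -1)                 # (N, H*W)
    return am / (np.linalg.norm(am, axis=1, keepdims=True) + 1e-12)

def at_loss(fm_student, fm_teacher, p=2.0):
    """Mean squared distance between student and teacher attention maps."""
    diff = attention_map(fm_student, p) - attention_map(fm_teacher, p)
    return float(np.mean(diff ** 2))
```

Because the maps are L2-normalized, pooling by mean or by sum over channels gives the same distance; `p` corresponds to the `p` argument described in the class docstring below.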


class nad(defense):
    r"""Neural Attention Distillation: Erasing Backdoor Triggers From Deep Neural Networks

    basic structure:

    1. config args, save_path, fix random seed
    2. load the backdoor attack data and backdoor test data
    3. load the backdoor model
    4. nad defense:
        a. create student models, set training parameters and determine loss functions
        b. train the student model using the teacher model with the activation of the model and the result
    5. test the result and get ASR, ACC, RC

    .. code-block:: python

        parser = argparse.ArgumentParser(description=sys.argv[0])
        nad.add_arguments(parser)
        args = parser.parse_args()
        nad_method = nad(args)
        if "result_file" not in args.__dict__:
            args.result_file = 'one_epochs_debug_badnet_attack'
        elif args.result_file is None:
            args.result_file = 'one_epochs_debug_badnet_attack'
        result = nad_method.defense(args.result_file)

    .. Note::
        @inproceedings{li2020neural,
        title={Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks},
        author={Li, Yige and Lyu, Xixiang and Koren, Nodens and Lyu, Lingjuan and Li, Bo and Ma, Xingjun},
        booktitle={International Conference on Learning Representations},
        year={2020}}

    Args:
        basic args: in the base class
        ratio (float): the ratio of training data
        index (str): the index of clean data
        te_epochs (int): the number of epochs for training the teacher model using the clean data
        beta1 (int): the beta of the first layer
        beta2 (int): the beta of the second layer
        beta3 (int): the beta of the third layer
        p (float): the power of the activation of the model for the AT loss function
        teacher_model_loc (str): the location of the teacher model (if None, train the teacher model)
    """
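Step 4b above, training the student against the teacher's activations, combines a classification loss on the clean subset with one attention term per distilled layer, weighted by `beta1`..`beta3`. A hedged NumPy sketch of how such a total objective could be assembled (all names are illustrative; the repository uses PyTorch modules and extracts feature maps from specific layers):

```python
import numpy as np

def _at_distance(fm_s, fm_t, p=2.0):
    # Attention-transfer distance between L2-normalized, channel-pooled
    # |a|^p spatial maps of student and teacher feature maps (N, C, H, W).
    def amap(fm):
        am = np.mean(np.abs(fm) ** p, axis=1).reshape(fm.shape[0], -1)
        return am / (np.linalg.norm(am, axis=1, keepdims=True) + 1e-12)
    return float(np.mean((amap(fm_s) - amap(fm_t)) ** 2))

def cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy, averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(labels)), labels].mean())

def nad_total_loss(logits, labels, student_fms, teacher_fms, betas, p=2.0):
    # Total objective: clean-data classification loss plus a beta-weighted
    # attention term for each distilled layer (beta1, beta2, beta3).
    loss = cross_entropy(logits, labels)
    for fs, ft, beta in zip(student_fms, teacher_fms, betas):
        loss += beta * _at_distance(fs, ft, p)
    return loss
```

When student and teacher activations agree, the attention terms vanish and only the classification loss remains, which is why distillation on clean data can erase trigger-sensitive attention without destroying clean accuracy.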