defense.fp
- class fp[source]
Bases:
defense
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
basic structure:
config args, save_path, fix random seed
load the backdoor attack data and backdoor test data
load the backdoor attack model
- fp defense:
hook the activation-layer representation of each clean input
rank neurons by their mean activation
prune neurons in ascending order of mean activation, testing accuracy after each pruning step
save the model with the greatest difference between ACC and ASR
test the result and get ASR, ACC, RC
import argparse
import sys

parser = argparse.ArgumentParser(description=sys.argv[0])
FinePrune.add_arguments(parser)
args = parser.parse_args()
FinePrune_method = FinePrune(args)
if "result_file" not in args.__dict__ or args.result_file is None:
    args.result_file = 'one_epochs_debug_badnet_attack'
result = FinePrune_method.defense(args.result_file)
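The pruning steps above (hook activations on clean data, rank neurons by mean activation, prune the lowest-ranked ones) can be sketched as follows. This is a minimal illustrative sketch, not the repository's implementation: the tiny model, the hooked layer index, and num_prune are hypothetical stand-ins for the real backdoored network, activation layer, and once_prune_ratio.

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the backdoored network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),  # layer whose channels we prune
    nn.ReLU(),                      # activation layer we hook
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)

# 1. Hook the activation layer and record its output on clean data.
activations = []
hook = model[1].register_forward_hook(
    lambda mod, inp, out: activations.append(out.detach())
)
clean_batch = torch.randn(16, 3, 8, 8)  # stand-in for the clean data loader
model(clean_batch)
hook.remove()

# 2. Rank channels by mean activation over the clean batch
#    (dormant channels, with low mean activation, are pruned first).
mean_act = torch.cat(activations).mean(dim=(0, 2, 3))  # one value per channel
prune_order = torch.argsort(mean_act)                  # ascending

# 3. Prune the lowest-activation channels by zeroing their conv filters
#    (num_prune plays the role of once_prune_ratio * channel count).
num_prune = 4
with torch.no_grad():
    for idx in prune_order[:num_prune]:
        model[0].weight[idx].zero_()
        model[0].bias[idx].zero_()

# Pruned channels now output exactly zero for any input; in the full
# defense one would re-evaluate ACC/ASR here and keep the best model.
out_act = []
hook = model[1].register_forward_hook(
    lambda mod, inp, out: out_act.append(out.detach())
)
model(clean_batch)
hook.remove()
pruned_channels = prune_order[:num_prune]
```

After this loop, the defense would measure clean accuracy (ACC) and attack success rate (ASR) at each pruning level and keep the checkpoint with the largest ACC-ASR gap, as described above.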
Note
@inproceedings{liu2018fine,
  title={Fine-pruning: Defending against backdooring attacks on deep neural networks},
  author={Liu, Kang and Dolan-Gavitt, Brendan and Garg, Siddharth},
  booktitle={International symposium on research in attacks, intrusions, and defenses},
  pages={273--294},
  year={2018},
  organization={Springer}
}
- Parameters:
args (basic) – arguments inherited from the base defense class
ratio (float) – the ratio of clean data used by the clean data loader
index (str) – the index of the clean data
acc_ratio (float) – the tolerated drop ratio in clean accuracy while pruning
once_prune_ratio (float) – the fraction of neurons pruned in each iteration, in 0 to 1