defense.mbns

class mbns[source]

Bases: defense

Pre-activation Distributions Expose Backdoor Neurons

basic structure:

  1. config args, save_path, fix random seed

  2. load the backdoor attack data and backdoor test data

  3. bnp defense:
    1. calculate, for each normalization layer, the KL divergence between the pre-activation statistics on clean data and those on poisoned data

    2. prune the model based on the KL divergence, using the threshold u (see the sketch after this list)

  4. test the result and get ASR, ACC, RC
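
A minimal sketch of the KL-based pruning step described above, assuming a PyTorch model with BatchNorm2d layers whose running statistics were estimated on the poisoned training data and a small clean data loader. The helper names (kl_gauss, bnp_style_prune) and the layer-wise rule "keep channels whose KL divergence stays below mean + u * std" are illustrative assumptions, not the exact implementation of this class.

import torch
import torch.nn as nn

def kl_gauss(mu1, var1, mu2, var2, eps=1e-5):
    # KL( N(mu1, var1) || N(mu2, var2) ) per channel, for univariate Gaussians
    return 0.5 * (torch.log((var2 + eps) / (var1 + eps))
                  + (var1 + (mu1 - mu2) ** 2) / (var2 + eps) - 1)

@torch.no_grad()
def bnp_style_prune(model, clean_loader, device, u=3.0):
    model.eval().to(device)
    bn_layers = [m for m in model.modules() if isinstance(m, nn.BatchNorm2d)]

    # collect pre-activation (BN input) statistics on clean data via forward hooks
    stats = {m: {"sum": 0.0, "sq": 0.0, "n": 0} for m in bn_layers}
    def make_hook(m):
        def hook(_, inputs, __):
            x = inputs[0].transpose(0, 1).flatten(1)  # (C, N*H*W)
            stats[m]["sum"] = stats[m]["sum"] + x.sum(1)
            stats[m]["sq"] = stats[m]["sq"] + (x ** 2).sum(1)
            stats[m]["n"] += x.shape[1]
        return hook
    handles = [m.register_forward_hook(make_hook(m)) for m in bn_layers]
    for x, _ in clean_loader:
        model(x.to(device))
    for h in handles:
        h.remove()

    # mask out channels whose clean-vs-poisoned KL divergence is abnormally high
    for m in bn_layers:
        mu_c = stats[m]["sum"] / stats[m]["n"]
        var_c = stats[m]["sq"] / stats[m]["n"] - mu_c ** 2
        kl = kl_gauss(mu_c, var_c, m.running_mean, m.running_var)
        mask = (kl < kl.mean() + u * kl.std()).float()
        m.weight.mul_(mask)
        if m.bias is not None:
            m.bias.mul_(mask)
    return model

The defense itself also sweeps a grid of candidate thresholds (from u_min to u_max, u_num values), as described in the Parameters and Update sections below.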

import argparse
import sys

from defense.mbns import mbns

parser = argparse.ArgumentParser(description=sys.argv[0])
mbns.add_arguments(parser)
args = parser.parse_args()
mbns_method = mbns(args)
# fall back to a default attack record when no result_file is given
if getattr(args, "result_file", None) is None:
    args.result_file = 'one_epochs_debug_badnet_attack'
result = mbns_method.defense(args.result_file)

Note

@article{zheng2022pre, title={Pre-activation Distributions Expose Backdoor Neurons}, author={Zheng, Runkai and Tang, Rongjun and Li, Jianze and Liu, Li}, journal={Advances in Neural Information Processing Systems}, volume={35}, pages={18667–18680}, year={2022}}

Parameters:
  • args (basic) – basic arguments defined in the base class

  • u (float) – the pruning threshold u used in the bnp defense (a usage sketch follows this list)

  • u_min (float) – the default minimum value of u

  • u_max (float) – the default maximum value of u

  • u_num (float) – the number of candidate values of u evaluated between u_min and u_max

  • ratio (float) – the ratio of clean data used to build the clean data loader

  • index (str) – index of clean data
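
A hedged usage sketch for these parameters, continuing the snippet above. How mbns.add_arguments exposes them on the command line is not documented in this section, so setting them directly on the parsed namespace, with illustrative values, is an assumption.

# assumption: the defense reads these attributes from the parsed args
args.u = 3.0        # fixed threshold actually applied when pruning
args.u_min = 0.0    # lower end of the evaluated threshold grid
args.u_max = 5.0    # upper end of the evaluated threshold grid
args.u_num = 51     # number of candidate thresholds in the grid
args.ratio = 0.05   # fraction of clean data used to build the clean loader
result = mbns_method.defense(args.result_file)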

Update:

All threshold evaluation results will be saved in the save_path folder as a picture, and the results of the model pruned with the selected fixed threshold will be saved to defense_result.pt
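
A minimal sketch of inspecting the saved output, assuming defense_result.pt was written with torch.save into the configured save_path folder; the attribute name args.save_path and the contents of the checkpoint are assumptions.

import os
import torch

# assumption: args.save_path points to the run's save folder
checkpoint = torch.load(os.path.join(args.save_path, "defense_result.pt"), map_location="cpu")
print(list(checkpoint.keys()))  # keys depend on what the defense stores, e.g. the pruned model and final metrics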