defense.abl

class abl

Bases: defense

Anti-backdoor learning: Training clean models on poisoned data.

basic structure:

  1. config args, save_path, fix random seed

  2. load the backdoor attack data and backdoor test data

  3. abl defense:
    1. pre-train model

    2. isolate the samples with low loss as suspected backdoor data

    3. unlearn the isolated backdoor data and continue learning on the remaining data

  4. test the result and get ASR, ACC, RC
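The isolation and unlearning steps above can be sketched as follows. This is an illustrative outline of the idea, not the repository's actual implementation: backdoor samples are learned fastest, so the lowest-loss fraction is isolated, and during unlearning the loss on those samples is negated (gradient ascent).

```python
# Hypothetical sketch of ABL's core steps (illustrative, not the repo's code).

def isolate_backdoor(per_sample_losses, isolation_ratio):
    """Return indices of the isolation_ratio fraction of samples with the
    lowest loss; poisoned samples tend to be fitted first, so their loss
    drops earliest during pre-training."""
    n_isolate = int(len(per_sample_losses) * isolation_ratio)
    ranked = sorted(range(len(per_sample_losses)),
                    key=lambda i: per_sample_losses[i])
    return set(ranked[:n_isolate])

def unlearning_loss(loss, is_isolated):
    """During unlearning, negate the loss on isolated samples (gradient
    ascent) and keep it unchanged on the remaining clean samples."""
    return -loss if is_isolated else loss

# Toy per-sample losses after pre-training: samples 0 and 3 fit suspiciously well.
losses = [0.02, 1.3, 0.9, 0.01, 1.1]
iso = isolate_backdoor(losses, isolation_ratio=0.4)
print(sorted(iso))  # → [0, 3]
```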

import argparse
import sys

parser = argparse.ArgumentParser(description=sys.argv[0])
abl.add_arguments(parser)
args = parser.parse_args()
abl_method = abl(args)
if "result_file" not in args.__dict__ or args.result_file is None:
    args.result_file = 'one_epochs_debug_badnet_attack'
result = abl_method.defense(args.result_file)

Note

@article{li2021anti,
  title={Anti-backdoor learning: Training clean models on poisoned data},
  author={Li, Yige and Lyu, Xixiang and Koren, Nodens and Lyu, Lingjuan and Li, Bo and Ma, Xingjun},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={14900--14912},
  year={2021},
}

Parameters:
  • args (basic) – arguments defined in the base class

  • tuning_epochs (int) – number of initial pre-training (tuning) epochs to run

  • finetuning_ascent_model (bool) – whether to fine-tune the model after separating the poisoned data

  • finetuning_epochs (int) – number of the finetuning epochs to run

  • unlearning_epochs (int) – number of the unlearning epochs to run

  • lr_finetuning_init (float) – initial finetuning learning rate

  • lr_unlearning_init (float) – initial unlearning learning rate

  • momentum (float) – momentum of SGD during finetuning and unlearning

  • weight_decay (float) – weight decay of SGD during finetuning and unlearning

  • isolation_ratio (float) – fraction of the poisoned training set to isolate as suspected backdoor data

  • gradient_ascent_type (str) – type of gradient ascent (LGA, Flooding)

  • gamma (float) – value of gamma for LGA

  • flooding (float) – value of flooding for Flooding
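The two gradient-ascent types named above can be expressed as small loss transforms, following the formulas in the ABL paper; `gamma` and `flooding` correspond to the parameters listed here. This is an illustrative sketch on scalar losses, not the repository's exact code.

```python
# Sketch of the LGA and Flooding loss variants (illustrative).

def lga_loss(ce_loss, gamma):
    """Local Gradient Ascent: once the cross-entropy loss drops below
    gamma, its sign flips, so descent turns into ascent and the loss is
    held around gamma during pre-training."""
    diff = ce_loss - gamma
    sign = 1.0 if diff > 0 else (-1.0 if diff < 0 else 0.0)
    return sign * ce_loss

def flooding_loss(ce_loss, flooding):
    """Flooding: reflect the loss about the flooding level so the
    effective loss can never fall below `flooding`."""
    return abs(ce_loss - flooding) + flooding

print(lga_loss(0.8, 0.5))        # → 0.8  (above gamma: unchanged)
print(lga_loss(0.3, 0.5))        # → -0.3 (below gamma: sign flipped)
print(flooding_loss(0.2, 0.5))   # → 0.8  (reflected about the flood level)
```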