defense.i_bau

class i_bau[source]

Bases: defense

Adversarial unlearning of backdoors via implicit hypergradient

basic structure:

  1. config args, save_path, fix random seed

  2. load the backdoor attack data and backdoor test data

  3. load the backdoor model

  4. i-bau defense:
    1. train the adversarial perturbation on the clean data using the hypergradient

    2. unlearn the backdoor model with the perturbation

    3. repeat steps 1 and 2 for several rounds

  5. test the result and get ASR, ACC, RC
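The alternating structure of step 4 can be sketched with a toy model. This is a minimal NumPy stand-in, not the actual implementation: the real defense uses PyTorch and an implicit-hypergradient solver, whereas here plain gradient ascent/descent and a numeric gradient stand in for both. The model, data, and learning rates are all assumptions for illustration; `n_rounds` and `K` mirror the parameters documented below.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                 # stand-in for the clean data
y = (X @ rng.normal(size=8) > 0).astype(float)
w = rng.normal(size=8) * 0.1                 # stand-in "backdoored" model weights

def loss(w, delta):
    # logistic loss of the model on data shifted by a universal perturbation
    p = 1.0 / (1.0 + np.exp(-(X + delta) @ w))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def num_grad(f, v, eps=1e-5):
    # central-difference gradient; the real code uses autograd instead
    g = np.zeros_like(v)
    for i in range(v.size):
        e = np.zeros_like(v)
        e[i] = eps
        g[i] = (f(v + e) - f(v - e)) / (2 * eps)
    return g

n_rounds, K = 10, 10      # outer unlearning rounds / inner iterations
lr_in, lr_out = 0.5, 0.1  # assumed step sizes
for _ in range(n_rounds):
    delta = np.zeros(8)
    for _ in range(K):    # step 4.1: train the adversarial perturbation
        delta += lr_in * num_grad(lambda d: loss(w, d), delta)
    # step 4.2: unlearn by descending the loss under the found perturbation
    w -= lr_out * num_grad(lambda v: loss(v, delta), w)
```

The inner loop approximates the worst-case trigger the current model is vulnerable to; the outer step updates the weights so that trigger stops working, which is what removes the backdoor over several rounds.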

parser = argparse.ArgumentParser(description=sys.argv[0])
i_bau.add_arguments(parser)
args = parser.parse_args()
i_bau_method = i_bau(args)
if getattr(args, 'result_file', None) is None:
    args.result_file = 'one_epochs_debug_badnet_attack'
result = i_bau_method.defense(args.result_file)

Note

@inproceedings{zeng2021adversarial, title={Adversarial Unlearning of Backdoors via Implicit Hypergradient}, author={Zeng, Yi and Chen, Si and Park, Won and Mao, Zhuoqing and Jin, Ming and Jia, Ruoxi}, booktitle={International Conference on Learning Representations}, year={2021}}

Parameters:
  • args (basic) – arguments inherited from the base class

  • ratio (float) – the ratio of clean data available to the defense's data loader

  • index (str) – index of clean data

  • optim (str) – type of outer-loop optimizer used to train the adversarial perturbation (default: Adam)

  • n_rounds (int) – the maximum number of unlearning rounds (default: 10)

  • K (int) – the maximum number of fixed point iterations (default: 10)
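The parameters above are registered on the parser by i_bau.add_arguments before parse_args is called. The following is a hypothetical sketch of what that registration might look like; the flag names and the defaults for ratio and index are assumptions drawn only from this page, not from the actual implementation.

```python
import argparse

# Hypothetical sketch of i_bau.add_arguments; flag names and the
# ratio/index defaults are assumptions, not the real defaults.
parser = argparse.ArgumentParser(description="i-bau defense (sketch)")
parser.add_argument('--ratio', type=float, default=0.05,
                    help='ratio of clean data available to the defense')
parser.add_argument('--index', type=str, default=None,
                    help='index of the clean data subset')
parser.add_argument('--optim', type=str, default='Adam',
                    help='outer-loop optimizer for the adversarial perturbation')
parser.add_argument('--n_rounds', type=int, default=10,
                    help='maximum number of unlearning rounds')
parser.add_argument('--K', type=int, default=10,
                    help='maximum number of fixed-point iterations')

# parse defaults only (a real script would use parser.parse_args())
args = parser.parse_args([])
```

Running the script with no flags would then use the documented defaults, e.g. ten unlearning rounds with Adam as the outer-loop optimizer.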