defense.i_bau
class i_bau
Bases: defense
Adversarial unlearning of backdoors via implicit hypergradient
basic structure:
- config args, save_path, fix random seed
- load the backdoor attack data and backdoor test data
- load the backdoor model
- i-bau defense:
    a. train the adversarial perturbation on the clean data using the hypergradient
    b. unlearn the backdoor model with the perturbation
    c. repeat steps a and b for several rounds
- test the result and get ASR, ACC, RC
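The i-bau steps above can be sketched as an alternating min-max loop. Below is a hypothetical toy illustration: a linear model with squared loss stands in for the backdoored network, and plain gradient steps stand in for the implicit-hypergradient solver used by the real method; all data, step sizes, and the perturbation budget are made up for the sketch.

```python
import numpy as np

# Toy stand-ins for the backdoored model and the clean data loader.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))           # clean inputs
y = X @ rng.normal(size=8)             # clean targets
w = rng.normal(size=8)                 # "backdoored" model weights

def loss(w, X, y):
    r = X @ w - y
    return float(r @ r) / len(y)

n_rounds, K = 10, 10                   # mirror the n_rounds / K arguments
eps, lr_in, lr_out = 0.1, 0.5, 0.02    # hypothetical budget / step sizes
loss0 = loss(w, X, y)

for _ in range(n_rounds):
    # (a) inner loop: train an adversarial perturbation on the clean data
    delta = np.zeros(8)
    for _ in range(K):
        r = (X + delta) @ w - y
        delta = delta + lr_in * 2.0 * r.mean() * w  # ascend the loss
        nrm = np.linalg.norm(delta)
        if nrm > eps:
            delta *= eps / nrm                      # keep ||delta|| <= eps
    # (b) outer step: unlearn by minimizing the loss at the perturbed data
    r = (X + delta) @ w - y
    w = w - lr_out * 2.0 * (X + delta).T @ r / len(y)
```

Repeating (a) and (b) trains the model to stay correct even under the worst bounded perturbation of clean inputs, which is the mechanism i-bau relies on to remove backdoor behaviour.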
parser = argparse.ArgumentParser(description=sys.argv[0])
i_bau.add_arguments(parser)
args = parser.parse_args()
i_bau_method = i_bau(args)
if getattr(args, 'result_file', None) is None:
    args.result_file = 'one_epochs_debug_badnet_attack'
result = i_bau_method.defense(args.result_file)
Note
@inproceedings{zeng2021adversarial,
  title={Adversarial Unlearning of Backdoors via Implicit Hypergradient},
  author={Zeng, Yi and Chen, Si and Park, Won and Mao, Zhuoqing and Jin, Ming and Jia, Ruoxi},
  booktitle={International Conference on Learning Representations},
  year={2021}
}
- Parameters:
args (basic) – the arguments from the base class
ratio (float) – the ratio of clean data loader
index (str) – index of clean data
optim (str) – type of outer loop optimizer used to train the adversarial perturbation (default: Adam)
n_rounds (int) – the maximum number of unlearning rounds (default: 10)
K (int) – the maximum number of fixed point iterations (default: 10)
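To show how the parameters above map onto the command line, here is a hedged sketch of an argument-registration helper. The option names and defaults follow the list above, but `add_arguments` is written as a standalone function for illustration; the real class method may register additional options.

```python
import argparse

# Hypothetical sketch: register the documented i-bau parameters.
def add_arguments(parser):
    parser.add_argument('--ratio', type=float,
                        help='ratio of the clean data loader')
    parser.add_argument('--index', type=str,
                        help='index of clean data')
    parser.add_argument('--optim', type=str, default='Adam',
                        help='outer loop optimizer for the perturbation')
    parser.add_argument('--n_rounds', type=int, default=10,
                        help='maximum number of unlearning rounds')
    parser.add_argument('--K', type=int, default=10,
                        help='maximum number of fixed point iterations')
    return parser

parser = add_arguments(argparse.ArgumentParser())
args = parser.parse_args(['--n_rounds', '5'])   # example invocation
```

Unspecified options fall back to the documented defaults, so `args.optim` is `'Adam'` and `args.K` is `10` here, while `args.n_rounds` takes the supplied value `5`.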