Source code for defense.abl

'''
This file is modified based on the following source:
link: https://github.com/bboylyg/ABL
The defense method is called abl.

The updates include:
    1. data preprocessing and dataset setting
    2. model setting
    3. args and config
    4. save process
    5. new metric: robust accuracy
basic structure of the defense method:
    1. basic setting: args
    2. attack result (model, train data, test data)
    3. abl defense:
        a. pre-train model
        b. isolate the low-loss samples as suspected backdoor data
        c. unlearn the backdoor data and learn the remaining data
    4. test the result and get ASR, ACC, RC 
'''
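The pre-training stage above relies on a gradient-ascent loss (LGA or Flooding, per the `gradient_ascent_type` argument) that stops the model from over-fitting the fast-learning backdoor samples. A minimal sketch of both losses is given below; the function names `lga_loss` and `flooding_loss` are hypothetical and this is an illustration of the idea, not the repository's implementation.

```python
import torch
import torch.nn.functional as F

def lga_loss(logits, targets, gamma=0.5):
    """Loss-Guided Ascent (LGA) sketch: while the batch cross-entropy is
    above gamma the sign is +1 (normal descent); once it drops below
    gamma the sign flips to -1, turning the update into gradient ascent."""
    ce = F.cross_entropy(logits, targets)
    return torch.sign(ce - gamma) * ce

def flooding_loss(logits, targets, flooding=0.5):
    """Flooding sketch: keeps the effective loss hovering around the
    flooding level, so it cannot collapse to zero on easy (often
    backdoored) samples."""
    ce = F.cross_entropy(logits, targets)
    return (ce - flooding).abs() + flooding
```

With either loss, poisoned samples that would normally reach near-zero loss stay near the `gamma`/`flooding` level, which is what makes the later loss-based isolation step separable.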

from defense.base import defense


[docs]class abl(defense):
    r"""Anti-backdoor learning: Training clean models on poisoned data.

    basic structure:

    1. config args, save_path, fix random seed
    2. load the backdoor attack data and backdoor test data
    3. abl defense:
        a. pre-train model
        b. isolate the low-loss samples as suspected backdoor data
        c. unlearn the backdoor data and learn the remaining data
    4. test the result and get ASR, ACC, RC

    .. code-block:: python

        parser = argparse.ArgumentParser(description=sys.argv[0])
        abl.add_arguments(parser)
        args = parser.parse_args()
        abl_method = abl(args)
        if "result_file" not in args.__dict__:
            args.result_file = 'one_epochs_debug_badnet_attack'
        elif args.result_file is None:
            args.result_file = 'one_epochs_debug_badnet_attack'
        result = abl_method.defense(args.result_file)

    .. Note::

        @article{li2021anti,
            title={Anti-backdoor learning: Training clean models on poisoned data},
            author={Li, Yige and Lyu, Xixiang and Koren, Nodens and Lyu, Lingjuan and Li, Bo and Ma, Xingjun},
            journal={Advances in Neural Information Processing Systems},
            volume={34},
            pages={14900--14912},
            year={2021},
        }

    Args:
        basic args: in the base class
        tuning_epochs (int): number of initial tuning epochs to run
        finetuning_ascent_model (bool): whether to finetune the model after separating the poisoned data
        finetuning_epochs (int): number of finetuning epochs to run
        unlearning_epochs (int): number of unlearning epochs to run
        lr_finetuning_init (float): initial finetuning learning rate
        lr_unlearning_init (float): initial unlearning learning rate
        momentum (float): SGD momentum during finetuning and unlearning
        weight_decay (float): SGD weight decay during finetuning and unlearning
        isolation_ratio (float): ratio of data isolated from the whole poisoned dataset
        gradient_ascent_type (str): type of gradient ascent (LGA, Flooding)
        gamma (float): value of gamma for LGA
        flooding (float): value of flooding for Flooding
    """
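Steps 3b and 3c of the docstring (isolation by loss, then unlearning) can be sketched as follows. This is a hypothetical illustration, not the module's own code: the helper names are invented, and the data loader is assumed to yield ``(input, label, index)`` triples so that sample indices can be tracked, which mirrors the role of ``isolation_ratio``.

```python
import torch

@torch.no_grad()
def isolate_low_loss(model, loader, isolation_ratio=0.01, device="cpu"):
    """Rank samples by per-example cross-entropy under the ascent-trained
    model and split off the lowest-loss fraction as suspected backdoor
    data. Returns (isolated_indices, remaining_indices)."""
    model.eval()
    crit = torch.nn.CrossEntropyLoss(reduction="none")
    losses, indices = [], []
    for x, y, idx in loader:  # loader assumed to yield (x, y, index)
        losses.append(crit(model(x.to(device)), y.to(device)).cpu())
        indices.append(idx)
    losses, indices = torch.cat(losses), torch.cat(indices)
    k = max(1, int(isolation_ratio * len(losses)))
    order = torch.argsort(losses)  # ascending: lowest loss first
    return indices[order[:k]], indices[order[k:]]

def unlearning_step(model, optimizer, x, y):
    """One gradient-ascent step on isolated data: minimising the negated
    cross-entropy drives the model to unlearn those samples."""
    optimizer.zero_grad()
    loss = -torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return -loss.item()  # the (positive) cross-entropy being ascended
```

In the ABL schedule described above, `isolate_low_loss` is run once after the tuning epochs, after which the isolated indices are trained with `unlearning_step` for `unlearning_epochs` while the remaining data is trained normally.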