Source code for defense.d_st

'''
This file implements the defense method called D-ST from Effective Backdoor Defense by Exploiting Sensitivity of Poisoned Samples.
It trains a secure model from scratch with a poisoned dataset.
This file is modified based on the following source:
link: https://github.com/SCLBD/Effective_backdoor_defense
The defense method is called d-st.


The updates include:
    1. data preprocess and dataset setting
    2. model setting
    3. args and config
    4. save process
    5. new evaluation metric: robust accuracy (RC) (see the sketch below)
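
In this codebase's sense, robust accuracy (RC) is the accuracy of the defended model on poisoned test samples measured against their original (true) labels, reported alongside clean accuracy (ACC) and attack success rate (ASR). The helper below is a minimal, hypothetical sketch of how the three numbers can be computed; the function and loader names are illustrative, not this repository's API.

.. code-block:: python

    import torch

    @torch.no_grad()
    def evaluate_acc_asr_rc(model, clean_loader, poison_loader, device="cpu"):
        # ``poison_loader`` is assumed to yield
        # (poisoned_input, attack_target, original_label) per batch.
        model.eval().to(device)

        correct_clean, n_clean = 0, 0
        for x, y in clean_loader:
            pred = model(x.to(device)).argmax(1).cpu()
            correct_clean += (pred == y).sum().item()
            n_clean += y.size(0)

        hit_target, recovered, n_poison = 0, 0, 0
        for x, y_target, y_original in poison_loader:
            pred = model(x.to(device)).argmax(1).cpu()
            hit_target += (pred == y_target).sum().item()    # backdoor still fires
            recovered += (pred == y_original).sum().item()   # true label recovered
            n_poison += y_original.size(0)

        acc = correct_clean / max(n_clean, 1)   # clean accuracy (ACC)
        asr = hit_target / max(n_poison, 1)     # attack success rate (ASR)
        rc = recovered / max(n_poison, 1)       # robust accuracy (RC)
        return acc, asr, rc
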
basic structure of the defense method:
    1. basic setting: args
    2. attack result(model, train data, test data)
    3. d-st defense: two main steps, SD and ST (Sample-Distinguishment and two-stage Secure Training)
        a. train a backdoored model from scratch on the poisoned dataset without any data augmentation.
        b. fine-tune the backdoored model with the intra-class loss L_intra.
        (SD:)
        c. calculate values of the FCT metric for all training samples.
        d. calculate thresholds for choosing clean and poisoned samples.
        e. separate the training samples into clean samples D_c, poisoned samples D_p, and uncertain samples D_u (steps c-e are sketched after this list).
        (ST:)
        f. train the feature extractor via semi-supervised contrastive learning.
        g. train the classifier by minimizing a mixed cross-entropy loss.
    4. test the result and report ASR, ACC, and RC.

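As a rough illustration of steps c-e, the sketch below scores every training sample by the feature distance between two differently transformed views (the sensitivity idea behind the FCT metric) and then splits the training set with the clean_ratio / poison_ratio thresholds. This is a simplified, hypothetical reading of the procedure rather than the exact code in this file; ``feature_extractor``, ``trans1``, and ``trans2`` are stand-ins.

.. code-block:: python

    import torch

    @torch.no_grad()
    def separate_samples(feature_extractor, dataset, trans1, trans2,
                         clean_ratio=0.20, poison_ratio=0.05, device="cpu"):
        # Score each sample: squared feature distance between two transformed views.
        # Poisoned samples tend to be more sensitive to transformations,
        # so they receive larger scores.
        feature_extractor.eval().to(device)
        scores = []
        for idx in range(len(dataset)):
            img, _ = dataset[idx]                      # raw sample, no augmentation
            v1 = trans1(img).unsqueeze(0).to(device)   # first transformed view
            v2 = trans2(img).unsqueeze(0).to(device)   # second transformed view
            f1, f2 = feature_extractor(v1), feature_extractor(v2)
            scores.append(torch.sum((f1 - f2) ** 2).item())

        order = sorted(range(len(scores)), key=lambda i: scores[i])
        n = len(order)
        n_clean, n_poison = int(clean_ratio * n), int(poison_ratio * n)

        clean_idx = order[:n_clean]                  # least sensitive -> D_c
        poison_idx = order[n - n_poison:]            # most sensitive  -> D_p
        uncertain_idx = order[n_clean:n - n_poison]  # the rest        -> D_u
        return clean_idx, poison_idx, uncertain_idx
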
'''

from defense.base import defense


[docs]class d_st(defense):
    r"""Effective backdoor defense by exploiting sensitivity of poisoned samples

    basic structure:

    1. config args, save_path, fix random seed
    2. load the backdoor attack data and backdoor test data
    3. d-st defense: two main steps, SD and ST (Sample-Distinguishment and two-stage Secure Training)
        a. train a backdoored model from scratch on the poisoned dataset without any data augmentation.
        b. fine-tune the backdoored model with the intra-class loss L_intra.
        c. calculate values of the FCT metric for all training samples.
        d. calculate thresholds for choosing clean and poisoned samples.
        e. separate the training samples into clean samples D_c, poisoned samples D_p, and uncertain samples D_u.
        f. train the feature extractor via semi-supervised contrastive learning.
        g. train the classifier by minimizing a mixed cross-entropy loss (see the sketch below).
    4. test the result and get ASR, ACC, RC with regard to the chosen threshold and interval

    .. code-block:: python

        parser = argparse.ArgumentParser(description=sys.argv[0])
        d_st.add_arguments(parser)
        args = parser.parse_args()
        d_st_method = d_st(args)
        if "result_file" not in args.__dict__:
            args.result_file = 'one_epochs_debug_badnet_attack'
        elif args.result_file is None:
            args.result_file = 'one_epochs_debug_badnet_attack'
        result = d_st_method.defense(args.result_file)

    .. Note::

        @article{chen2022effective,
        title={Effective backdoor defense by exploiting sensitivity of poisoned samples},
        author={Chen, Weixin and Wu, Baoyuan and Wang, Haoqian},
        journal={Advances in Neural Information Processing Systems},
        volume={35},
        pages={9727--9737},
        year={2022}}

    Args:
        basic args: in the base class
        clean_ratio (float): ratio of clean data separated from the poisoned data
        poison_ratio (float): ratio of poisoned data separated from the poisoned data
        gamma (float): LR is multiplied by gamma on schedule
        schedule (int): decrease the learning rate at these epochs
        warm (int): warm-up epochs for training
        trans1 (str): the first data augmentation used in the SD step to separate the clean and poisoned data
        trans2 (str): the second data augmentation used in the SD step to separate the clean and poisoned data
        debug (bool): debug or not
    """
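
For step g of the ST stage, one common reading of "minimizing a mixed cross-entropy loss" is to minimize the usual cross-entropy on samples judged clean while pushing up (unlearning) the cross-entropy on samples judged poisoned. The snippet below is a minimal sketch of that reading only; the exact weighting and formulation in this repository may differ, and ``alpha`` is an illustrative hyperparameter.

.. code-block:: python

    import torch.nn.functional as F

    def mixed_cross_entropy(logits_clean, labels_clean,
                            logits_poison, labels_poison, alpha=1.0):
        # Learn from D_c: standard cross-entropy on samples flagged clean.
        ce_clean = F.cross_entropy(logits_clean, labels_clean)
        # Unlearn D_p: subtracting the cross-entropy on samples flagged poisoned
        # drives the classifier away from their (backdoor-targeted) labels.
        ce_poison = F.cross_entropy(logits_poison, labels_poison)
        return ce_clean - alpha * ce_poison

    # usage (illustrative): loss = mixed_cross_entropy(model(x_c), y_c, model(x_p), y_p)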