Source code for defense.i_bau

# MIT License

# Copyright (c) 2021 Yi Zeng

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
'''
This file is modified based on the following source:
link : https://github.com/YiZeng623/I-BAU/
The defense method is called i-bau.
The license is included above.

The updates include:
    1. data preprocessing and dataset setup
    2. model setup
    3. args and config
    4. save process
    5. new metric: robust accuracy
    6. use of clean samples from the training set (to align with other defense settings)
basic structure of the defense method:
    1. basic setup: args
    2. attack result (model, train data, test data)
    3. i-bau defense:
        a. get some clean data
        b. unlearn the backdoor model with the adversarial perturbation
    4. test the result and report ASR, ACC, RC (a sketch of these metrics follows this docstring)
'''
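As a rough illustration of the three metrics named in step 4 above, here is a minimal sketch. The loader names (`clean_test_loader`, `poison_test_loader_target_label`, `poison_test_loader_true_label`) and the convention that the poisoned test set can be labelled either with the attacker's target class (for ASR) or with the original class (for RC) are assumptions made for illustration, not the benchmark's actual evaluation code.

import torch

@torch.no_grad()
def accuracy(model, loader, device):
    # fraction of samples whose prediction matches the labels carried by the loader
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

# ACC: accuracy(model, clean_test_loader, device)               -- clean samples, true labels
# ASR: accuracy(model, poison_test_loader_target_label, device) -- poisoned samples, attack-target labels
# RC : accuracy(model, poison_test_loader_true_label, device)   -- poisoned samples, original labels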

from defense.base import defense


class i_bau(defense):
    r"""Adversarial unlearning of backdoors via implicit hypergradient

    basic structure:

    1. config args, save_path, fix random seed
    2. load the backdoor attack data and backdoor test data
    3. load the backdoor model
    4. i-bau defense (see the sketch after the class definition):
        a. train the adversarial perturbation on the clean data using the hypergradient
        b. unlearn the backdoor model with the perturbation
        c. repeat a and b for several rounds
    5. test the result and get ASR, ACC, RC

    .. code-block:: python

        parser = argparse.ArgumentParser(description=sys.argv[0])
        i_bau.add_arguments(parser)
        args = parser.parse_args()
        i_bau_method = i_bau(args)
        if "result_file" not in args.__dict__:
            args.result_file = 'one_epochs_debug_badnet_attack'
        elif args.result_file is None:
            args.result_file = 'one_epochs_debug_badnet_attack'
        result = i_bau_method.defense(args.result_file)

    .. Note::

        @inproceedings{zeng2021adversarial,
            title={Adversarial Unlearning of Backdoors via Implicit Hypergradient},
            author={Zeng, Yi and Chen, Si and Park, Won and Mao, Zhuoqing and Jin, Ming and Jia, Ruoxi},
            booktitle={International Conference on Learning Representations},
            year={2021}}

    Args:
        basic args: in the base class
        ratio (float): the ratio of clean data used to build the clean data loader
        index (str): index of the clean data
        optim (str): type of outer-loop optimizer used to train the adversarial perturbation (default: Adam)
        n_rounds (int): the maximum number of unlearning rounds (default: 10)
        K (int): the maximum number of fixed-point iterations (default: 10)
    """
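Step 4 of the docstring alternates an inner perturbation search with an outer unlearning update. The following is a minimal sketch of one such round under simplifying assumptions: the function name `unlearn_round`, the loss, and all hyperparameters are illustrative, and the outer update uses a plain gradient step instead of the paper's implicit hypergradient computed over K fixed-point iterations.

import torch
import torch.nn.functional as F

def unlearn_round(model, clean_loader, device, eps=0.1, inner_steps=5, outer_lr=1e-4):
    """One simplified i-bau-style round: fit a universal perturbation on each
    clean batch (inner maximization), then update the model on the perturbed
    inputs (outer minimization). This is a sketch, not the repository's
    hypergradient-based implementation."""
    model.train()
    outer_opt = torch.optim.Adam(model.parameters(), lr=outer_lr)
    step = eps / inner_steps
    for x, y in clean_loader:
        x, y = x.to(device), y.to(device)
        # inner loop: a single perturbation shared by the batch that maximizes the clean loss
        delta = torch.zeros_like(x[:1], requires_grad=True)
        for _ in range(inner_steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + step * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        # outer step: unlearn the backdoor by minimizing the loss on the perturbed batch
        outer_opt.zero_grad()
        F.cross_entropy(model(x + delta.detach()), y).backward()
        outer_opt.step()
    return model

In the full pipeline this round would be repeated n_rounds times, and the outer update would be driven by the implicit hypergradient solver (the K argument) rather than a direct backward pass.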