Source code for attack.lc

from .badnet import BadNet
class LabelConsistent(BadNet):
    r"""Label-Consistent Backdoor Attacks

    link : https://github.com/MadryLab/label-consistent-backdoor-code

    basic structure:

    1. config args, save_path, fix random seed
    2. set the clean train data and clean test data
    3. set the attack img transform and label transform
    4. set the backdoor attack data and backdoor test data
    5. set the device, model, criterion, optimizer, training schedule
    6. attack or use the model to do fine-tuning with 5% clean data
    7. save the attack result for defense

    .. code-block:: python

        attack = LabelConsistent()
        attack.attack()

    .. Note::
        @article{turner2019labelconsistent,
            title = {Label-Consistent Backdoor Attacks},
            author = {Alexander Turner and Dimitris Tsipras and Aleksander Madry},
            journal = {arXiv preprint arXiv:1912.02771},
            year = {2019}}

    Args:
        attack (string): name of the attack, used to match the transform and set the prefix of the save path.
        attack_target (int): target class index in the all2one attack
        attack_label_trans (str): the type of label modification used in the backdoor attack
        pratio (float): the poisoning ratio
        bd_yaml_path (string): path to a yaml file providing additional default attributes
        attack_train_replace_imgs_path (string): path to the adversarially attacked images, since the images need to be adversarially attacked before the patch trigger is added onto them. If not provided, the default path is used.
        reduced_amplitude (float): the alpha/transparency of the backdoor trigger added at the corners
        resource_folder_path (string): path to the resource folder
        **kwargs (optional): Additional attributes.
    """
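
# A minimal sketch of the label-consistent poisoning step, for illustration
# only (not the repository's actual implementation): target-class images are
# first adversarially perturbed (loaded from attack_train_replace_imgs_path),
# then a patch trigger is blended into the image corners with transparency
# `reduced_amplitude`, and the labels are left unchanged, which is what makes
# the poisoned samples label-consistent. The helper name `apply_lc_trigger`
# and the 3x3 checkerboard patch below are assumptions; the real trigger is
# read from the resource folder.
import numpy as np

def apply_lc_trigger(img: np.ndarray, alpha: float) -> np.ndarray:
    """Blend a small checkerboard patch into each corner of an HWC uint8 image.

    `alpha` plays the role of `reduced_amplitude`: 1.0 gives a fully opaque
    trigger, smaller values make it less visible.
    """
    img = img.astype(np.float32)
    patch = np.array([[255, 0, 255],
                      [0, 255, 0],
                      [255, 0, 255]], dtype=np.float32)  # 3x3 checkerboard
    patch = patch[..., None].repeat(img.shape[2], axis=2)  # broadcast to channels
    h, w = patch.shape[:2]
    corners = [(slice(0, h), slice(0, w)),          # top-left
               (slice(0, h), slice(-w, None)),      # top-right
               (slice(-h, None), slice(0, w)),      # bottom-left
               (slice(-h, None), slice(-w, None))]  # bottom-right
    for ys, xs in corners:
        img[ys, xs] = (1 - alpha) * img[ys, xs] + alpha * patch
    return img.clip(0, 255).astype(np.uint8)

# Usage sketch: poison only target-class samples that were already
# adversarially perturbed; their labels stay equal to the target class.
#   adv_imgs: (N, H, W, C) uint8 array of perturbed target-class images
#   poisoned = np.stack([apply_lc_trigger(x, alpha=0.5) for x in adv_imgs])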