attack.SIG

class SIG[source]

Bases: BadNet

Implementation of the SIG attack from "A new backdoor attack in CNNs by training set corruption without label poisoning" (Barni et al., ICIP 2019): a sinusoidal signal is superimposed on a fraction of the training images without altering their labels.

Basic structure:

  1. configure args, save_path, and fix the random seed

  2. set the clean train data and clean test data

  3. set the attack image transform and label transform

  4. set the backdoor attack data and backdoor test data

  5. set the device, model, criterion, optimizer, training schedule.

  6. attack, or use the model to fine-tune with 5% clean data

  7. save the attack result for defense

attack = SIG()
attack.attack()
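
A slightly fuller usage sketch, under the assumption that the parameters documented below are forwarded to the constructor via **kwargs (or supplied through the YAML file named by bd_yaml_path); the exact entry point and argument handling may differ in the actual codebase:

from attack import SIG   # import path assumed from the attack.SIG name above

attack = SIG(
    attack="sig",                   # attack name, used for transform lookup and save prefix
    attack_target=0,                # target class of the all-to-one attack
    attack_label_trans="all2one",   # label modification type
    pratio=0.1,                     # poison 10% of the training set
    sig_f=6,                        # frequency of the sinusoidal trigger
    sig_delta=40,                   # amplitude of the sinusoidal trigger
    bd_yaml_path="./config/attack/sig/default.yaml",  # hypothetical config path
)
attack.attack()                     # poison the data, train the model, save the attack result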

Note

@inproceedings{SIG,
  title     = {A new backdoor attack in CNNs by training set corruption without label poisoning},
  author    = {Barni, Mauro and Kallas, Kassem and Tondi, Benedetta},
  booktitle = {2019 IEEE International Conference on Image Processing},
  year      = {2019}
}

Parameters:
  • attack (str) – name of the attack, used to match the trigger transform and to set the prefix of the save path.

  • attack_target (int) – index of the target class in the all-to-one attack

  • attack_label_trans (str) – type of label modification used in the backdoor attack

  • pratio (float) – poisoning ratio, i.e. the fraction of training samples that are poisoned

  • bd_yaml_path (str) – path to the YAML file that provides additional default attributes

  • sig_f (float) – the parameter f in the SIG attack, the frequency of the sinusoidal signal

  • sig_delta (float) – the parameter Δ in the SIG attack, the amplitude of the sinusoidal signal (see the trigger sketch after this parameter list)

  • **kwargs (optional) – Additional attributes.
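
The trigger itself is the horizontal sinusoidal overlay from the paper, v(i, j) = Δ · sin(2π · j · f / m), where j is the column index and m the image width; sig_f plays the role of f and sig_delta the role of Δ. A minimal NumPy sketch of that overlay (illustrative only, not the class's internal implementation; the default values shown are merely typical choices):

import numpy as np

def sig_trigger(img, sig_f=6, sig_delta=40):
    # Overlay v(i, j) = delta * sin(2 * pi * j * f / m) on an image, where
    # m is the image width and j the column index (Barni et al., 2019).
    h, w = img.shape[:2]
    signal = sig_delta * np.sin(2.0 * np.pi * np.arange(w) * sig_f / w)
    if img.ndim == 3:                   # H x W x C color image
        signal = signal[None, :, None]  # broadcast over rows and channels
    else:                               # H x W grayscale image
        signal = signal[None, :]
    poisoned = img.astype(np.float64) + signal
    return np.clip(poisoned, 0, 255).astype(np.uint8)

Because the signal is added to the pixels while the label is left untouched, the poisoned samples remain plausibly labeled, which is what "without label poisoning" in the paper title refers to.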