attack.Blind

class Blind[source]

Bases: BadNet

Blind Backdoors in Deep Learning Models

basic structure:

  1. config args, save_path, fix random seed

  2. set the clean train data and clean test data

  3. set the attack img transform and label transform

  4. set the backdoor attack data and backdoor test data

  5. set the device, model, criterion, optimizer, training schedule

  6. use the designed blind_loss to train a poisoned model (see the loss sketch below)

  7. save the attack result for defense

attack = Blind()
attack.attack()
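
Step 6's blind_loss trains the main task and the backdoor task at the same time: each batch contributes a clean-task loss and a backdoor-task loss, and the two are balanced either with fixed weights or with MGDA. The sketch below only illustrates that idea and is not the library's blind_loss; the function name, signature, and the two-task closed-form MGDA step are assumptions.

import torch
import torch.nn.functional as F

def blind_loss_sketch(model, x_clean, y_clean, x_bd, y_bd,
                      mode="fixed", w_normal=1.0, w_backdoor=1.0):
    # Illustrative only: balance the clean-task loss and the backdoor-task
    # loss on every batch (names and signature are assumptions, not the
    # library's actual API).
    loss_normal = F.cross_entropy(model(x_clean), y_clean)    # main task
    loss_backdoor = F.cross_entropy(model(x_bd), y_bd)        # backdoor task

    if mode == "fixed":
        # presumably what fix_scale_normal_weight / fix_scale_backdoor_weight control
        return w_normal * loss_normal + w_backdoor * loss_backdoor

    # MGDA-style balancing for two tasks: choose the convex combination of
    # the per-task gradients with minimal norm (closed form for two tasks).
    params = [p for p in model.parameters() if p.requires_grad]
    g_n = torch.cat([g.flatten() for g in
                     torch.autograd.grad(loss_normal, params, retain_graph=True)])
    g_b = torch.cat([g.flatten() for g in
                     torch.autograd.grad(loss_backdoor, params, retain_graph=True)])
    # alpha minimizing ||alpha * g_n + (1 - alpha) * g_b||^2, clipped to [0, 1]
    alpha = torch.dot(g_b - g_n, g_b) / (g_n - g_b).pow(2).sum().clamp_min(1e-12)
    alpha = alpha.clamp(0.0, 1.0)
    return alpha * loss_normal + (1 - alpha) * loss_backdoor

Under the “fixed” mode the two weights presumably map to fix_scale_normal_weight and fix_scale_backdoor_weight; under MGDA they are recomputed from the two task gradients on every batch, with the gradient normalization selected by mgda_normalize.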

Note

@inproceedings{bagdasaryan2020blind,
  author    = {Eugene Bagdasaryan and Vitaly Shmatikov},
  title     = {Blind Backdoors in Deep Learning Models},
  booktitle = {30th {USENIX} Security Symposium ({USENIX} Security 21)},
  year      = {2021},
  isbn      = {978-1-939133-24-3},
  pages     = {1505--1521},
  url       = {https://www.usenix.org/conference/usenixsecurity21/presentation/bagdasaryan},
  publisher = {USENIX Association},
  month     = aug,
}

Parameters:
  • attack (string) – name of the attack; used to match the transform and set the prefix of the save path.

  • attack_target (int) – target class for the all2one attack

  • attack_label_trans (string) – type of label modification used in the backdoor attack

  • bd_yaml_path (string) – path of yaml file to load backdoor attack config

  • weight_loss_balance_mode (string) – mode used to balance the normal (clean) loss and the backdoor loss (e.g. “fixed”)

  • mgda_normalize (string) – MGDA gradient-normalization mode (e.g. “l2”, “loss+”)

  • fix_scale_normal_weight (float) – fixed scaling weight applied to the normal (clean) loss when weight_loss_balance_mode is “fixed”

  • fix_scale_backdoor_weight (float) – fixed scaling weight applied to the backdoor loss when weight_loss_balance_mode is “fixed”

  • batch_history_len (int) – length of the tracked loss history used to decide when training has stabilized and the attack should start

  • backdoor_batch_loss_threshold (float) – threshold on the backdoor batch loss used to decide when training has stabilized and the attack should start

  • **kwargs (optional) – Additional attributes.
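
batch_history_len and backdoor_batch_loss_threshold together decide when the attack kicks in: the backdoor objective is only activated once the tracked loss history suggests training has stabilized. The gating check below is a hypothetical illustration of that idea (the helper name and the exact stability criterion are assumptions, not the library's code):

from collections import deque

batch_history_len = 100                 # example value
backdoor_batch_loss_threshold = 1.0     # example value
loss_history = deque(maxlen=batch_history_len)

def training_is_stable():
    # Attack only once the history is full and every recent backdoor batch
    # loss sits below the threshold (hypothetical criterion).
    return (len(loss_history) == batch_history_len
            and max(loss_history) < backdoor_batch_loss_threshold)

# Inside the training loop (pseudo-usage):
#   loss_history.append(float(current_backdoor_batch_loss))
#   if training_is_stable():
#       loss = blind_loss(...)    # combined clean + backdoor objective
#   else:
#       loss = normal_loss(...)   # plain clean-task training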