Backdoor learning is an emerging research topic that studies the adversarial
vulnerability of machine learning models during the training stage. Many backdoor attack
and defense methods have been developed in recent ML and security
conferences and journals. A benchmark is needed to review the current progress and
facilitate future research in backdoor learning.
BackdoorBench aims to provide easy implementations of both
backdoor attack and backdoor defense methods to facilitate future research, as well as a comprehensive evaluation of
existing attack and defense methods.
This benchmark will be continuously updated to track the latest
advances in backdoor learning, including implementations of more backdoor methods and
their evaluations on the leaderboard. You are welcome to contribute
your backdoor methods to BackdoorBench.
BackdoorBench defines a realistic threat model where attackers and defenders can compete with each other under unified settings, which facilitates fair comparisons of various methods.
BackdoorBench provides a coding framework with a modular design, which facilitates the implementation of all attacks, defenses, and related evaluation processes.
BackdoorBench guarantees high reproducibility of all results on the leaderboards by providing all necessary components, including implementations of methods, hyper-parameters, trained models, easy-to-use scripts, etc.
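To illustrate the kind of modular design described above, here is a minimal sketch in which attacks and defenses register themselves under a common interface, so that evaluation code can pair any attack with any defense. All names here (ATTACKS, DEFENSES, evaluate_all, and the toy methods) are illustrative assumptions, not BackdoorBench's actual API; real registered methods would poison training data and repair models rather than return toy numbers.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical registries keyed by method name. In a real framework the
# registered objects would poison datasets / repair models; here they are
# toy callables returning attack success rates (ASR).
ATTACKS: Dict[str, Callable[[], float]] = {}
DEFENSES: Dict[str, Callable[[float], float]] = {}

def register_attack(name: str):
    """Decorator that adds an attack method to the registry."""
    def wrap(fn: Callable[[], float]) -> Callable[[], float]:
        ATTACKS[name] = fn
        return fn
    return wrap

def register_defense(name: str):
    """Decorator that adds a defense method to the registry."""
    def wrap(fn: Callable[[float], float]) -> Callable[[float], float]:
        DEFENSES[name] = fn
        return fn
    return wrap

@register_attack("badnet_like")
def badnet_like() -> float:
    # Toy attack: pretend the poisoned model reaches 95% ASR.
    return 0.95

@register_defense("finetune_like")
def finetune_like(asr: float) -> float:
    # Toy defense: pretend fine-tuning cuts the ASR to 10% of its value.
    return asr * 0.1

def evaluate_all() -> List[Tuple[str, str, float]]:
    """Run every (attack, defense) pair; report the post-defense ASR."""
    rows = []
    for a_name, attack in ATTACKS.items():
        asr = attack()
        for d_name, defense in DEFENSES.items():
            rows.append((a_name, d_name, defense(asr)))
    return rows
```

Because every method conforms to the same interface, adding a new attack or defense only requires registering one function; the evaluation loop and leaderboard code stay unchanged, which is what enables fair, unified comparisons.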
Here are the related papers.