Accepted Papers
- Efficient Disruptions of Black-box Image Translation Deepfake Generation Systems
Nataniel Ruiz (Boston University); Sarah Bargal (Boston University); Stan Sclaroff (Boston University)
- Bridging the Gap Between Adversarial Robustness and Optimization Bias
Fartash Faghri (University of Toronto); Cristina Vasconcelos (Google); David J Fleet (University of Toronto); Fabian Pedregosa (Google); Nicolas Le Roux (Google)
- Covariate Shift Adaptation for Adversarially Robust Classifier
Jay Nandy (National University of Singapore); Sudipan Saha (Technical University of Munich); Wynne Hsu (National University of Singapore); Mong Li Lee (National University of Singapore); Xiaoxiang Zhu (Technical University of Munich, Germany)
- Poisoned classifiers are not only backdoored, they are fundamentally broken
Mingjie Sun (Carnegie Mellon University); Siddhant Agarwal (Indian Institute of Technology, Kharagpur); Zico Kolter (Carnegie Mellon University)
- Reliably fast adversarial training via latent adversarial perturbation
Geon Yeong Park (KAIST); Sang Wan Lee (KAIST)
- Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness
Linxi Jiang (Fudan University); Xingjun Ma (Deakin University); Zejia Weng (Fudan University); James Bailey (The University of Melbourne); Yu-Gang Jiang (Fudan University)
- Shift Invariance Can Reduce Adversarial Robustness
David Jacobs (University of Maryland, USA); Ronen Basri (Weizmann Institute of Science); Vasu Singla (University of Maryland); Songwei Ge (University of Maryland)
- What is Wrong with One-Class Anomaly Detection?
JuneKyu Park (Ajou University); Jeong-Hyeon Moon (Ajou University); Namhyuk Ahn (Ajou University); Kyung-Ah Sohn (Ajou University)
- Safe Model-based Reinforcement Learning with Robust Cross-Entropy Method
Zuxin Liu (Carnegie Mellon University); Hongyi Zhou (Carnegie Mellon University); Baiming Chen (Tsinghua University); Sicheng Zhong (University of Toronto); Ding Zhao (Carnegie Mellon University)
- GateNet: Bridging the gap between Binarized Neural Network and FHE evaluation
Cheng Fu (University of California, San Diego); Hanxian Huang (UC San Diego); Xinyun Chen (UC Berkeley); Jishen Zhao (University of California, San Diego)
- High-Robustness, Low-Transferability Fingerprinting of Neural Networks
Siyue Wang (Northeastern University); Xiao Wang (Boston University); Pin-Yu Chen (IBM Research); Pu Zhao (Northeastern University); Xue Lin (Northeastern University)
- Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
Kiran Karra (JHU/APL); Chace Ashcraft (JHU/APL)
- Non-Singular Adversarial Robustness of Neural Networks
Yu-Lin Tsai (National Chiao Tung University); Chia-Yi Hsu (National Yang Ming Chiao Tung University); Chia-Mu Yu (National Chiao Tung University); Pin-Yu Chen (IBM Research)
- Adversarial Examples Make Stronger Poisons
Liam Fowl (University of Maryland); Micah Goldblum (University of Maryland, College Park); Ping-yeh Chiang (University of Maryland, College Park); Jonas A. Geiping (University of Siegen); Tom Goldstein (University of Maryland, College Park)
- What Doesn’t Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors
Jonas A. Geiping (University of Siegen); Liam Fowl (University of Maryland); Gowthami Somepalli (University of Maryland); Micah Goldblum (University of Maryland); Michael Moeller (University of Siegen); Tom Goldstein (University of Maryland, College Park)
- Byzantine-Robust and Privacy-Preserving Framework for FedML
Hanieh Hashemi (University of Southern California); Yongqin Wang (University of Southern California); Chuan Guo (Facebook AI Research); Murali Annavaram (University of Southern California)
- Data Augmentation Can Improve Robustness
Sylvestre-Alvise Rebuffi (DeepMind); Sven Gowal (DeepMind); Dan Andrei Calian (DeepMind); Florian Stimberg (DeepMind); Olivia Wiles (DeepMind); Timothy Arthur Mann (DeepMind)
- Baseline Pruning-Based Approach to Trojan Detection in Neural Networks
Peter Bajcsy (NIST); Michael Majurski (NIST)
- Extracting Hyperparameter Constraints From Code
Ingkarat Rak-amnouykit (Rensselaer Polytechnic Institute); Ana Milanova (Rensselaer Polytechnic Institute); Guillaume Baudart (Inria Paris, École normale supérieure - PSL University); Martin Hirzel (IBM Research); Julian Dolby (IBM Research)
- Coordinated Attacks Against Federated Learning: A Multi-Agent Reinforcement Learning Approach
Wen Shen (Tulane University); Henger Li (Tulane University); Zizhan Zheng (Tulane University)
- Regularization Can Help Mitigate Poisoning Attacks… with the Right Hyperparameters
Javier Carnerero-Cano (Imperial College London); Luis Muñoz-González (Imperial College London); Phillippa Spencer (Defence Science and Technology Laboratory); Emil Lupu (Imperial College London)
- Ditto: Fair and Robust Federated Learning Through Personalization
Tian Li (Carnegie Mellon University); Shengyuan Hu (Carnegie Mellon University); Ahmad Beirami (Facebook AI); Virginia Smith (Carnegie Mellon University)
- Low Curvature Activations Reduce Overfitting in Adversarial Training
Vasu Singla (University of Maryland); Sahil Singla (University of Maryland); David Jacobs (University of Maryland, USA); Soheil Feizi (University of Maryland)
- Examining Trends in Out-of-Domain Confidence
Hamza Qadeer (University of California, Berkeley); Michael Chau (University of California, Berkeley); Eric Zhu (University of California, Berkeley); Matthew A Wright (University of California Berkeley); Richard Liaw (UC Berkeley)
YouTube presentation
- Doing More with Less: Improving Robustness using Generated Data
Sven Gowal (DeepMind); Sylvestre-Alvise Rebuffi (DeepMind); Olivia Wiles (DeepMind); Florian Stimberg (DeepMind); Dan Andrei Calian (DeepMind); Timothy Arthur Mann (DeepMind)
- Hidden Backdoor Attack against Semantic Segmentation Models
Yiming Li (Tsinghua University); Yanjie Li (Tsinghua University); Yalei Lv (Tsinghua University); Yong Jiang (Tsinghua University); Shutao Xia (Tsinghua University)
- $\delta$-CLUE: Diverse Sets of Explanations for Uncertainty Estimates
Dan Ley (University of Cambridge); Umang Bhatt (University of Cambridge); Adrian Weller (University of Cambridge)
- Safe Exploration Method for Reinforcement Learning under Existence of Disturbance
Yoshihiro Okawa (Fujitsu Laboratories Ltd.); Yusuke Kato (Keio University); Tomotake Sasaki (Fujitsu Laboratories Ltd.); Hitoshi Yanami (Fujitsu Laboratories Ltd.); Toru Namerikawa (Keio University)
- Provable defense by denoised smoothing with learned score function
Kyungmin Lee (Agency for Defense Development)
- Boosting black-box adversarial attack via exploiting loss smoothness
Hoang Tran (Oak Ridge National Lab); Dan Lu (Oak Ridge National Laboratory); Guannan Zhang (Oak Ridge National Laboratory)
YouTube presentation
- PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches
Chong Xiang (Princeton University); Prateek Mittal (Princeton University)
- Deep Gradient Attack with Strong DP-SGD Lower Bound for Label Privacy
Sen Yuan (Facebook); Min Xue (Facebook); Kaikai Wang (Facebook); Milan Shen (Facebook)
- Measuring Adversarial Robustness using a Voronoi-Epsilon Adversary
Hyeongji Kim (Institute of Marine Research); Pekka Parviainen (University of Bergen); Ketil Malde (University of Bergen)
- Accelerated Policy Evaluation with Adaptive Importance Sampling
Mengdi Xu (Carnegie Mellon University); Peide Huang (Carnegie Mellon University); Fengpei Li (Columbia University); Jiacheng Zhu (Carnegie Mellon University); Xuewei (Tony) Qi (Toyota North America Research Labs); Zhiyuan Huang (Tongji University); Henry Lam (Columbia University); Ding Zhao (Carnegie Mellon University)
- Sparse Coding Frontend for Robust Neural Networks
Can Bakiskan (University of California, Santa Barbara); Metehan Cekic (University of California, Santa Barbara); Ahmet D Sezer (University of California, Santa Barbara); Upamanyu Madhow (University of California, Santa Barbara)
- Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
Xiangyu Qi (Zhejiang University); Jifeng Zhu (Tencent); Chulin Xie (University of Illinois at Urbana-Champaign); Yong Yang (Tencent)
- Speeding Up Neural Network Verification via Automated Algorithm Configuration
Matthias König (Leiden University); Holger Hoos (Leiden Institute of Advanced Computer Science, Leiden University); Jan van Rijn (Leiden University)
YouTube presentation
- Incorporating Label Uncertainty in Intrinsic Robustness Measures
Xiao Zhang (University of Virginia); David Evans (University of Virginia)
- FIRM: Detecting Adversarial Audios by Recursive Filters with Randomization
Guanhong Tao (Purdue University); Xiaowei Chen (Baidu X-Lab); Yunhan Jia (Bytedance Inc.); Zhenyu Zhong (Baidu); Shiqing Ma (Rutgers University); Xiangyu Zhang (Purdue University)
- On Improving Adversarial Robustness Using Proxy Distributions
Vikash Sehwag (Princeton University); Saeed Mahloujifar (Princeton University); Sihui Dai (California Institute of Technology); Tinashe Handina (Princeton University); Chong Xiang (Princeton University); Mung Chiang (Purdue University); Prateek Mittal (Princeton University)
- Detecting Adversarial Attacks through Neural Activations
Graham Annett (Boise State University); Hoda Mehrpouyan (Boise State University); Tim Andersen (Boise State University); Casey R Kennington (Boise State University); Craig Primer (Boise State University)
- Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
Liam Fowl (University of Maryland); Ping-yeh Chiang (University of Maryland, College Park); Micah Goldblum (University of Maryland, College Park); Jonas A. Geiping (University of Siegen); Arpit Bansal (University of Maryland - College Park); Wojciech Czaja (University of Maryland, College Park); Tom Goldstein (University of Maryland, College Park)
- Robustness from Perception
Saeed Mahloujifar (Princeton University); Chong Xiang (Princeton University); Vikash Sehwag (Princeton University); Sihui Dai (California Institute of Technology); Prateek Mittal (Princeton University)
- Mitigating Adversarial Training Instability with Batch Normalization
Arvind Sridhar (UC Berkeley); Chawin Sitawarin (UC Berkeley); David Wagner (UC Berkeley)
YouTube presentation
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
Dequan Wang (UC Berkeley); Evan Shelhamer (Imaginary Number); An Ju (University of California, Berkeley); David Wagner (UC Berkeley); Trevor Darrell (UC Berkeley)
- Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers
Francesco Croce (University of Tübingen); Matthias Hein (University of Tübingen)
- DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Eitan Borgnia (University of Maryland); Jonas A. Geiping (University of Siegen); Valeriia Cherepanova (University of Maryland); Liam Fowl (University of Maryland); Arjun Gupta (University of Maryland College Park); Amin Ghiasi (University of Maryland); Furong Huang (University of Maryland); Micah Goldblum (University of Maryland); Tom Goldstein (University of Maryland, College Park)
- RobustBench: a standardized adversarial robustness benchmark
Francesco Croce (University of Tübingen); Maksym Andriushchenko (EPFL); Vikash Sehwag (Princeton University); Edoardo Debenedetti (EPFL); Nicolas Flammarion (EPFL); Mung Chiang (Princeton University); Prateek Mittal (Princeton University); Matthias Hein (University of Tübingen)
- Simple Transparent Adversarial Examples
Jaydeep Jitendra Borkar (Savitribai Phule Pune University); Pin-Yu Chen (IBM Research)
- Moral Scenarios for Reinforcement Learning Agents
Dan Hendrycks (UC Berkeley); Mantas Mazeika (UIUC); Andy Zou (UC Berkeley); Sahil Patel (UC Berkeley); Christine Zhu (UC Berkeley); Jesus Navarro (UC Berkeley); Bo Li (UIUC); Dawn Song (UC Berkeley); Jacob Steinhardt (UC Berkeley)