Overview
Date: May 7, 2021
Location: The workshop will be held virtually. The internal ICLR workshop website is here (ICLR registration required).
While machine learning (ML) models have achieved great success in many applications, concerns have been raised about their potential vulnerabilities and risks when applied to safety-critical applications. On the one hand, from the security perspective, studies have explored worst-case attacks against ML models, thereby inspiring both empirical and certifiable defense approaches. On the other hand, from the safety perspective, researchers have investigated safety constraints that safe AI systems should satisfy (e.g., autonomous driving vehicles should not hit pedestrians).
In this workshop, we aim to bridge the gap between these two communities and discuss principles for developing secure and safe ML systems. We will bring together experts from the machine learning, computer security, and AI safety communities. We aim to highlight recent related work from these communities, clarify the foundations of secure and safe ML, and chart out important directions for future work and cross-community collaborations.
Paper Awards
Best Paper Award
- Ditto: Fair and Robust Federated Learning Through Personalization
Tian Li (Carnegie Mellon University); Shengyuan Hu (Carnegie Mellon University); Ahmad Beirami (Facebook AI); Virginia Smith (Carnegie Mellon University)
Best Paper Honorable Mention Award
- RobustBench: a standardized adversarial robustness benchmark
Francesco Croce (University of Tübingen); Maksym Andriushchenko (EPFL); Vikash Sehwag (Princeton University); Edoardo Debenedetti (EPFL); Nicolas Flammarion (EPFL); Mung Chiang (Princeton University); Prateek Mittal (Princeton University); Matthias Hein (University of Tübingen)
Travel Award Recipients
- Fartash Faghri (University of Toronto)
- Jay Nandy (National University of Singapore)
- Mingjie Sun (Carnegie Mellon University)
- Linxi Jiang (Fudan University)
- Siyue Wang (Northeastern University)
- Liam Fowl (University of Maryland)
- Seyedeh Hanieh Hashemi (University of Southern California)
- Sylvestre-Alvise Rebuffi (DeepMind)
- Ingkarat Rak-Amnouykit (Rensselaer Polytechnic Institute)
- Wen Shen (Tulane University)
- Vasu Singla (University of Maryland)
- Daniel Ley (University of Cambridge)
- Yoshihiro Okawa (Fujitsu Laboratories Ltd.)
- Chong Xiang (Princeton University)
- Kyungmin Lee (Agency for Defense Development)
- Mengdi Xu (Carnegie Mellon University)
- Can Bakiskan (University of California, Santa Barbara)
- Xiangyu Qi (Zhejiang University)
- Xiao Zhang (Leiden University)
- Guanhong Tao (Purdue University)
- Vikash Sehwag (Princeton University)
- Dequan Wang (UC Berkeley)
- Eitan Borgnia (University of Maryland)
- Jaydeep Borkar (Savitribai Phule Pune University)
- Mantas Mazeika (UIUC)
The workshop is sponsored by Open Philanthropy. The funding covers a Best Paper Award ($1,000), a Best Paper Honorable Mention Award ($500), and multiple travel grants (complimentary ICLR conference registrations).