Workshop on Adversarial Machine
Learning in Real-World Computer Vision
Systems and Online Challenges (AML-CV)

Workshop at CVPR 2021

Overview

As computer vision models are increasingly deployed in the real world, including in safety-critical applications such as self-driving vehicles, it is imperative that these models be robust and secure even when subjected to adversarial attacks.

This workshop will focus on recent research and future directions for security problems in real-world machine learning and computer vision systems. We aim to bring together experts from the computer vision, security, and robust learning communities to highlight recent work in this area and to clarify the foundations of secure machine learning. We seek to come to a consensus on a rigorous framework to formulate adversarial machine learning problems in computer vision, characterize the properties that ensure the security of perceptual models, and evaluate the consequences under various adversarial models. Finally, we hope to chart out important directions for future work and cross-community collaborations across the computer vision, machine learning, security, and multimedia communities.

The workshop consists of invited talks from experts in this area, research paper submissions, and a large-scale online competition on adversarial attacks and defenses against real-world systems. In particular, the competition includes two tracks covering two novel topics: (1) Adversarial Attacks on ML Defense Models, and (2) Unrestricted Adversarial Attacks on ImageNet.

Paper Submissions
Paper submission deadline
Notification to authors
Camera ready deadline
Competition
Registration opens
Registration deadline
Challenge award notification
Submission starts
Submission deadline
CVPR conference
Call for Papers

Topics

Topics include but are not limited to:
Real world attacks against current computer vision models
Theoretic understanding of adversarial machine learning and certifiable robustness
Vulnerabilities and potential solutions to adversarial machine learning in real-world applications, e.g., autonomous driving, 3D object recognition, and large-scale image retrieval
Repeatable experiments adding to the knowledge about adversarial examples on image, video, audio, point cloud and other data
Real world data distribution drift and its implications to model generalization and robustness
Detection and defense mechanisms against adversarial examples for computer vision systems
Novel challenges and discoveries in adversarial machine learning for computer vision systems
Submissions need to be anonymized and follow the CVPR 2021 template described here. Submissions should be at most 4 pages, excluding references and appendices. Accepted papers will not be included in the official CVPR proceedings.
Competition
Competition Track I:
Adversarial Attacks on ML Defense Models

Most machine learning classifiers today are vulnerable to adversarial examples, a topic that has been studied widely in recent years. A large number of adversarial defense methods have been proposed to mitigate this threat. However, many of them can be broken by stronger or adaptive attacks, which makes it very difficult to judge and evaluate the effectiveness of current and future defenses. Without a thorough and correct robustness evaluation of these defenses, progress in this field will be limited.

To accelerate research on reliably evaluating the adversarial robustness of current defense models in image classification, we organize this competition to motivate novel attack algorithms that evaluate adversarial robustness more effectively and reliably. Participants are encouraged to develop strong white-box attack algorithms to find the worst-case robustness of various defense models.

This competition will be conducted on an adversarial robustness evaluation platform --- ARES (https://github.com/thu-ml/ares).
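
To make the evaluation setting concrete, below is a minimal sketch of a standard PGD white-box attack in PyTorch, the kind of baseline such a robustness evaluation typically starts from. The model handle, the L_inf budget eps, and the step settings are illustrative assumptions; the actual competition interface and official attack implementations are defined by the ARES platform linked above.

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=20):
    # Projected Gradient Descent under an L_inf constraint: repeatedly step in
    # the direction of the loss gradient's sign, then project back into the
    # eps-ball around the original images and the valid pixel range [0, 1].
    x_adv = images.clone().detach()
    # a random start inside the eps-ball usually strengthens the attack
    x_adv = torch.clamp(x_adv + torch.empty_like(x_adv).uniform_(-eps, eps), 0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, images - eps), images + eps), 0, 1)
    return x_adv.detach()

A white-box evaluation of this kind reports the accuracy of a defense on the returned adversarial examples; stronger submissions typically add random restarts, adaptive loss functions, and defense-specific adjustments on top of such a baseline.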

More
Competition Track II:
Unrestricted Adversarial Attacks on ImageNet

Deep neural networks have achieved state-of-the-art performance on a variety of visual recognition problems. Despite this success, the security of deep models has also raised many concerns in industry. For example, deep neural networks are vulnerable to small, imperceptible perturbations of their inputs (such perturbed inputs are called adversarial examples). Beyond small and imperceptible perturbations, in real-world scenarios a greater threat to deep models comes from unrestricted adversarial examples: the attacker makes large and visible modifications to an image that cause the model to misclassify it, yet do not disturb normal human perception. Unrestricted adversarial attacks have become a popular research direction in recent years. We hope this competition will lead participants not only to understand and explore unrestricted adversarial attacks on ImageNet, but also to refine and summarize innovative and effective attack schemes, thereby advancing research on adversarial attacks.
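
To illustrate the difference from norm-bounded attacks, the sketch below perturbs global image attributes (a per-channel gain and bias, i.e., a color shift) instead of individual pixels, producing large but natural-looking changes. This is only one simple family of unrestricted attacks, written against an assumed differentiable PyTorch classifier; it is not the competition's required attack format.

import torch
import torch.nn.functional as F

def color_shift_attack(model, images, labels, steps=50, lr=0.05):
    # Optimize a per-channel affine color transform so the transformed images
    # are misclassified; the change is visible but the scene stays natural.
    n, c = images.shape[0], images.shape[1]
    gain = torch.ones(n, c, 1, 1, device=images.device, requires_grad=True)
    bias = torch.zeros(n, c, 1, 1, device=images.device, requires_grad=True)
    opt = torch.optim.Adam([gain, bias], lr=lr)
    for _ in range(steps):
        x_adv = torch.clamp(images * gain + bias, 0, 1)
        loss = -F.cross_entropy(model(x_adv), labels)  # maximize classification loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(images * gain + bias, 0, 1).detach()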

More
Invited Speakers
Dawn Song
University of California, Berkeley
Yisen Wang
Peking University
Zico Kolter
CMU
Ce Zhang
ETH Zurich
Lujo Bauer
CMU
Nicolas Papernot
University of Toronto
Ding Zhao
CMU
Una-May O’Reilly
MIT
Schedule
Date: June 19 · Time in PT · Location: Zoom
08:40 - 09:00
Opening Remarks: Dawn Song
Session 1
09:00 - 09:30
Invited Talk #1: Yisen Wang
09:30 - 09:45
Contributed Talk #1
09:45 - 10:15
Invited Talk #2: Nicolas Papernot
10:15 - 10:30
Coffee Break
Session 2
10:30 - 11:00
Invited Talk #3: Zico Kolter
11:00 - 11:15
Contributed Talk #2
11:15 - 11:30
Poster Spotlights #1
11:30 - 13:00
Lunch
Session 3
13:00 - 13:30
Invited Talk #4: Lujo Bauer
13:30 - 13:45
Contributed Talk #3
13:45 - 14:15
Invited Talk #5: Ce Zhang
14:15 - 15:00
Poster Session/Break
Session 4
15:00 - 15:30
Invited Talk #6: Una-May O'Reilly
15:30 - 16:00
Invited Talk #7: Ding Zhao
Session 5
16:00 - 16:45
Competition Winners' Presentations
16:45 - 17:00
Award Ceremony
17:00 - 17:45
Poster Session (cont'd)
Posters
Paper Award Recipients
Best paper
On the Benefits of Defining Vicinal Distributions in Latent Space
Distinguished papers
A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning
Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumptions
Bit Error Robustness for Energy-Efficient DNN Accelerators
Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs
Date: June 19 · Time: 14:15 - 15:00 PT
Update: We will have another poster session at 17:00 - 17:45 PT in the breakout rooms of the main workshop Zoom room.
On the Benefits of Defining Vicinal Distributions in Latent Space                 
Puneet Mangla (IIT Hyderabad); Vedant Singh (IIT, Hyderabad); Shreyas Havaldar (Indian Institute of Technology Hyderabad); Vineeth Balasubramanian (Indian Institute of Technology Hyderabad)
Understanding the Role of Adversarial Regularization in Supervised Learning                 
Litu Rout (Indian Space Research Organisation)
A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning                 
Ahmadreza Jeddi (University of Waterloo); Mohammad Javad Shafiee (University of Waterloo); Alexander Wong (University of Waterloo)
Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumptions                 
Ivan Evtimov (University of Washington Paul G. Allen School of Computer Science & Engineering); Russell Howes (Facebook AI); Brian Dolhansky (Reddit); Hamed Firooz (Facebook); Canton Cristian (Facebook AI)
Adversarial collision attacks on image hashing functions                 
Brian Dolhansky (Reddit); Canton Cristian (Facebook AI)
One Size Does Not Fit All: Transferring Adversarial Attacks Across Sizes                 
Jeremiah Duncan (University of Tennessee); Amir Sadovnik (The University of Tennessee)
Anti-Adversarial Input with Self-Ensemble Model Transformations                 
Jeremiah Rounds (Pacific Northwest National Laboratory); Kayla Duskin (PNNL); Michael Henry (Pacific Northwest National Laboratory)
Evaluating the Robustness of Bayesian Neural Networks Against Different Types of Attacks                 
Yutian Pang (Arizona State University); Sheng Cheng (Arizona State University); Jueming Hu (Arizona State University); Yongming Liu (Arizona State University)
Class retrieval of adversarial attacks                 
Jalal Al-afandi (Peter Pazmany Catholic University); Andras Horvath (Peter Pazmany Catholic University)
Relating Adversarially Robust Generalization to Flat Minima                 
David Stutz (Max Planck Institute for Informatics); Matthias Hein (University of Tübingen); Bernt Schiele (MPI Informatics)
Bit Error Robustness for Energy-Efficient DNN Accelerators                 
David Stutz (Max Planck Institute for Informatics); Nandhini Chandramoorthy (IBM T. J. Watson Research Center); Matthias Hein (University of Tübingen); Bernt Schiele (MPI Informatics) [Note: Same Zoom room as the paper above. The same author will discuss this paper in the second half of the session.]
Improving Adversarial Transferability with Gradient Refining                 
Guoqiu Wang (Beihang University)*; Huanqian Yan (Beihang University)*; Ying Guo (Beihang University)*; Xingxing Wei (Beihang University)
Adversarial Variance Attacks: Deeper Insights into Adversarial Machine Learning through the Eyes of Bias-Variance Impact
Hossein Aboutalebi (University of Waterloo); Mohammad Javad Shafiee (University of Waterloo); Michelle Karg (ADC Automotive Distance Control Systems GmbH, Continental); Christian Scharfenberger (ADC Automotive Distance Control Systems GmbH, Continental); Alexander Wong (University of Waterloo)
Unrestricted Adversarial Attacks on Vision Transformers                 
Rajat Sahay (Vellore Institute of Technology, Vellore)
Residual Error: a New Performance Measure for Adversarial Robustness                 
Hossein Aboutalebi (University of Waterloo); Mohammad Javad Shafiee (University of Waterloo); Michelle Karg (ADC Automotive Distance Control Systems GmbH, Continental); Christian Scharfenberger (ADC Automotive Distance Control Systems GmbH, Continental); Alexander Wong (University of Waterloo)
Localized Uncertainty Attacks                 
Ousmane A Dia (Facebook); Theofanis Karaletsos (Facebook); Caner Hazirbas (Facebook AI); Canton Cristian (Facebook AI); Ilknur Kaynar Kabul (Facebook); Erik Meijer (Facebook)
Semantics Preserving Adversarial Examples                 
Sanjay Kariyappa (Georgia Institute of Technology); Ousmane A Dia (Facebook)
The Triangular Trade-off Between Accuracy, Robustness, and Fairness                 
Philipp Benz (KAIST)*; Chaoning Zhang (KAIST)*; Soomin Ham (KAIST); Adil Karjauv (KAIST); Gyusang Cho (KAIST); In So Kweon (KAIST)
Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs                 
Philipp Benz (KAIST)*; Chaoning Zhang (KAIST)*; Soomin Ham (KAIST)*; Adil Karjauv (KAIST); In So Kweon (KAIST)
Is FGSM Optimal or Necessary for L∞ Adversarial Attack?                 
Chaoning Zhang (KAIST)*; Adil Karjauv (KAIST)*; Philipp Benz (KAIST)*; Soomin Ham (KAIST); Gyusang Cho (KAIST); Chan-Hyun Youn (KAIST); In So Kweon (KAIST)
Backpropagating Smoothly Improves Transferability of Adversarial Examples                 
Chaoning Zhang (KAIST)*; Philipp Benz (KAIST)*; Gyusang Cho (KAIST)*; Adil Karjauv (KAIST); Soomin Ham (KAIST); Chan-Hyun Youn (KAIST); In So Kweon (KAIST)
Organizers
Faculty Organizers
Dawn Song
UC Berkeley
Bo Li
UIUC
Jun Zhu
Tsinghua University
Hang Su
Tsinghua University
Hui Xue
Alibaba Group
Yuan He
Alibaba Group
Student Organizers
Xinyun Chen
UC Berkeley
Fan Wu
UIUC
Chulin Xie
UIUC
Boxin Wang
UIUC
Yinpeng Dong
Tsinghua University
Qi-An Fu
Tsinghua University
Program Committee
Adam Kortylewski
Johns Hopkins University
Aniruddha Saha
University of Maryland Baltimore County
Anshuman Suri
University of Virginia
Chang Xiao
Columbia University
Gaurang Sriramanan
Indian Institute of Science
Hongyang Zhang
TTIC
Jamie Hayes
UCL
Jiachen Sun
University of Michigan
Lifeng Huang
Sun Yat-sen University
Maksym Andriushchenko
EPFL
Maura Pintor
University of Cagliari
Muhammad Awais
Kyung Hee University
Nataniel Ruiz
Boston University
Rajkumar Theagarajan
University of California, Riverside
Sravanti Addepalli
Indian Institute of Science
Wenxiao Wang
Tsinghua University
Won Park
University of Michigan
Xingjun Ma
Deakin University
Xinwei Zhao
Drexel University
Yash Sharma
University of Tübingen
Yulong Cao
University of Michigan, Ann Arbor
Yuzhe Yang
MIT
Tong Wu
Washington University in St. Louis
Zhao Li
Alibaba Group
Contact
For any further questions, please contact us at lbo@illinois.edu