Workshop on Adversarial Machine
Learning in Real-World Computer Vision
Systems and Online Challenges (AML-CV)

Workshop at CVPR 2021

Overview

As computer vision models are increasingly deployed in the real world, including safety-critical applications such as self-driving vehicles, it is imperative that these models be robust and secure even when subject to adversarial attacks.

This workshop will focus on recent research and future directions for security problems in real-world machine learning and computer vision systems. We aim to bring together experts from the computer vision, security, and robust learning communities to highlight recent work in this area and to clarify the foundations of secure machine learning. We seek to reach a consensus on a rigorous framework for formulating adversarial machine learning problems in computer vision, to characterize the properties that ensure the security of perceptual models, and to evaluate the consequences under various adversarial models. Finally, we hope to chart important directions for future work and cross-community collaborations spanning the computer vision, machine learning, security, and multimedia communities.

The workshop consists of invited talks from experts in this area, research paper submissions, and a large-scale online competition on adversarial attacks and defenses against real-world systems. In particular, the competition includes two tracks covering two novel topics: (1) Adversarial Attacks on ML Defense Models, and (2) Unrestricted Adversarial Attacks on ImageNet.

Paper Submissions
Paper submission deadline
Notification to authors
Camera ready deadline
Competition
Registration opens
Submission starts
Registration deadline
Submission deadline
Challenge award notification
CVPR conference
Call for Papers

Topics

Topics include but are not limited to:
Real-world attacks against current computer vision models
Theoretical understanding of adversarial machine learning and certifiable robustness
Vulnerabilities and potential solutions to adversarial machine learning in real-world applications, e.g., autonomous driving, 3D object recognition, and large-scale image retrieval
Repeatable experiments adding to the knowledge about adversarial examples on image, video, audio, point cloud, and other data
Real-world data distribution drift and its implications for model generalization and robustness
Detection and defense mechanisms against adversarial examples for computer vision systems
Novel challenges and discoveries in adversarial machine learning for computer vision systems

Submissions must be anonymized and follow the CVPR 2021 template described here. Submissions should be at most 4 pages, excluding references and appendices. Accepted papers will not be included in the official CVPR proceedings.
Competition
Competition Track I:
Adversarial Attacks on ML Defense Models

Most machine learning classifiers today are vulnerable to adversarial examples, a topic that has been widely studied in recent years. A large number of adversarial defense methods have been proposed to mitigate this threat. However, many of them can be broken by more powerful or adaptive attacks, making it difficult to judge the effectiveness of current and future defenses. Without thorough and correct robustness evaluations of these defenses, progress in this field will be limited.

To accelerate research on reliably evaluating the adversarial robustness of current defense models in image classification, we organize this competition to motivate novel attack algorithms that evaluate adversarial robustness more effectively and reliably. Participants are encouraged to develop strong white-box attack algorithms that find the worst-case robustness of various defense models.

This competition will be conducted on ARES (https://github.com/thu-ml/ares), an adversarial robustness evaluation platform.
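
For illustration, below is a minimal sketch of projected gradient descent (PGD), a standard white-box baseline of the kind participants might build on. It is written in PyTorch; the model, epsilon, and step settings are assumed placeholders, and this is not the ARES API or the competition's official evaluation code.

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Return L-infinity bounded adversarial examples for inputs x, labels y."""
    x = x.clone().detach()
    # A random start inside the epsilon ball often makes the attack stronger.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Take an ascent step on the loss, then project back into the ball.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()
```

A competitive entry would go well beyond such a baseline, for example by adapting the loss function and step schedule to each individual defense.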

More
Competition Track II:
Unrestricted Adversarial Attacks on ImageNet

Deep neural networks have achieved state-of-the-art performance on a variety of visual recognition problems. Despite this success, the security of deep models has raised many concerns in industry. For example, deep neural networks are vulnerable to small, imperceptible perturbations of their inputs (such modified inputs are called adversarial examples). Beyond small, imperceptible perturbations, in real-world settings a greater threat to deep models comes from unrestricted adversarial examples: the attacker makes large, visible modifications to an image that cause the model to misclassify it while remaining natural to a human observer. Unrestricted adversarial attacks have become a popular research direction in recent years. We hope this competition will lead participants not only to understand and explore unrestricted adversarial attacks on ImageNet, but also to refine and distill innovative and effective unrestricted attack schemes, thereby advancing research on adversarial attacks.
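
As one toy illustration of how such attacks differ from norm-bounded ones, the sketch below optimizes a global per-channel color transform, a large but often visually plausible change, until the model misclassifies. It assumes PyTorch and inputs in [0, 1]; it is not the competition's required method, and real entries would use far more sophisticated schemes.

```python
import torch

def color_transform_attack(model, x, y, steps=100, lr=0.05):
    """Optimize a per-channel scale and shift so the model misclassifies x."""
    # One (scale, shift) pair per RGB channel, broadcast over the whole image:
    # a clearly visible change with no small-norm constraint on the pixels.
    scale = torch.ones(1, 3, 1, 1, requires_grad=True)
    shift = torch.zeros(1, 3, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([scale, shift], lr=lr)
    for _ in range(steps):
        x_adv = (x * scale + shift).clamp(0, 1)
        # Untargeted attack: push the loss on the true label upward.
        loss = -torch.nn.functional.cross_entropy(model(x_adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x * scale + shift).clamp(0, 1).detach()
```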

More
Invited Speakers
Lihi Zelnik
Technion - Israel Institute of Technology / Alibaba Group
Zico Kolter
CMU
Alina Oprea
Northeastern University
Lujo Bauer
CMU
Nicolas Papernot
University of Toronto
Ding Zhao
CMU
Trevor Darrell
UC Berkeley
Una-May O’Reilly
MIT
Schedule
08:40 - 09:00
Opening Remarks
Session 1
09:00 - 09:30
Invited Talk #1 Lihi Zelnik
09:30 - 09:45
Contributed Talk #1
09:45 - 10:15
Invited Talk #2 Trevor Darrell
10:15 - 10:30
Coffee Break
Session 2
10:30 - 11:00
Invited Talk #3 Zico Kolter
11:00 - 11:15
Contributed Talk #2
11:15 - 11:45
Invited Talk #4 Nicolas Papernot
11:45 - 12:00
Poster Spotlights #1
12:00 - 13:00
Lunch
Session 3
13:00 - 13:30
Invited Talk #5 Lujo Bauer
13:30 - 13:45
Contributed Talk #3
13:45 - 14:15
Invited Talk #6 Alina Oprea
14:15 - 15:00
Poster Session/Break
Session 4
15:00 - 15:30
Invited Talk #7 Una-May O'Reilly
15:30 - 16:00
Invited Talk #8 Ding Zhao
Session 5
16:00 - 16:45
Competition Winners' Presentations
16:45 - 17:00
Award Ceremony
Organizers
Faculty Organizers
Dawn Song
UC Berkeley
Bo Li
UIUC
Jun Zhu
Tsinghua University
Hang Su
Tsinghua University
Hui Xue
Alibaba Group
Yuan He
Alibaba Group
Student Organizers
Xinyun Chen
UC Berkeley
Fan Wu
UIUC
Chulin Xie
UIUC
Boxin Wang
UIUC
Yinpeng Dong
Tsinghua University
Qi-An Fu
Tsinghua University
Program Committee
Bhavya Kailkhura
Lawrence Livermore National Lab
Catherine Olsson
OpenAI
Chaowei Xiao
University of Michigan
David Evans
University of Virginia
Dimitris Tsipras
Massachusetts Institute of Technology
Earlence Fernandes
University of Washington
Eric Wong
Carnegie Mellon University
Fartash Faghri
University of Toronto
Yuefeng Chen
Alibaba Group
Florian Tramer
Stanford University
Hadi Abdullah
University of Florida
Hao Su
UCSD
Jonathan Uesato
DeepMind
Karl Ni
In-Q-Tel
Kassem Fawaz
University of Wisconsin-Madison
Kathrin Grosse
CISPA
Hui Xue
Alibaba Group
Zhao Li
Alibaba Group
Krishna Gummadi
MPI-SWS
Matthew Wicker
University of Georgia
Mengyi Liu
Alibaba Group
Nathan Mundhenk
Lawrence Livermore National Lab
Nicholas Carlini
Google Brain
Nicolas Papernot
Google Brain and University of Toronto
Octavian Suciu
University of Maryland
Pin-Yu Chen
IBM
Pushmeet Kohli
DeepMind
Shreya Shankar
Stanford University
Suman Jana
Columbia University
Tianyu Pang
Tsinghua University
Varun Chandrasekaran
University of Wisconsin-Madison
Xiaowei Huang
University of Liverpool
Xiao Yang
Tsinghua University
Yanjun Qi
University of Virginia
Yigitcan Kaya
University of Maryland
Yinpeng Dong
Tsinghua University
Yizheng Chen
Georgia Tech
Zhenxing Niu
Alibaba Group
Zhijie Deng
Tsinghua University
Sravanti Addepalli
Indian Institute of Science
Gaurang Sriramanan
Indian Institute of Science
Contact
For any further questions, please contact us at lbo@illinois.edu.