Workshop Schedule (Tentative)
Event Start time End time
Opening Remarks 8:30 8:40
Challenge Session 8:40 9:00
Invited Talk #1: Prof. Chaowei Xiao 9:00 9:30
Invited Talk #2: Prof. Bo Li 9:30 10:00
Invited Talk #3: Prof. Zico Kolter 10:00 10:30
Invited Talk #4: Prof. Neil Gong 10:30 11:00
Poster Session #1 11:00 12:30
Lunch 12:30 13:30
Invited Talk #5: Prof. Ludwig Schmidt 13:30 14:00
Invited Talk #6: Prof. Florian Tramèr 14:00 14:30
Invited Talk #7: Prof. Tom Goldstein 14:30 15:00
Invited Talk #8: Prof. Alex Beutel 15:00 15:30
Poster Session #2 15:30 16:30

Call for Papers

Artificial intelligence (AI) has entered a new era with the emergence of foundation models (FMs). Alongside their potential benefits, the increasing reliance on FMs has also exposed their vulnerability to adversarial attacks. The workshop will bring together researchers and practitioners from the computer vision and machine learning communities to explore the latest advances and challenges in adversarial machine learning, with a focus on the robustness of foundation models. We welcome research contributions on topics including (but not limited to):
  • Robustness of foundation models
  • Adversarial attacks on computer vision tasks
  • Improving the robustness of deep learning systems
  • Interpreting and understanding model robustness, especially foundation models
  • Adversarial attacks for social good
  • Datasets and benchmarks for evaluating foundation model robustness
Format: Submitted papers (.pdf) must use the CVPR 2024 Author Kit for LaTeX/Word, be anonymized, and follow the CVPR 2024 author instructions. The workshop considers two types of submissions: (1) Long Paper: up to 8 pages, excluding references; (2) Extended Abstract: up to 4 pages, including references.

Submission Site: https://cmt3.research.microsoft.com/CVPRAdvML2024
Submission Due: March 15, 2024, Anywhere on Earth (AoE)


Distinguished Paper Award

  • Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [Paper]
    Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao.

Accepted Long Papers

  • Large Language Models in Wargaming: Methodology, Application, and Robustness [Paper]
    Yuwei Chen (Aviation Industry Development Research Center of China)*; Shiyong Chu (Aviation Industry Development Research Center of China)
  • Enhancing Targeted Attack Transferability via Diversified Weight Pruning [Paper]
    Hung-Jui Wang (National Taiwan University)*; Yu-Yu Wu (National Taiwan University); Shang-Tse Chen (National Taiwan University)
  • Enhancing the Transferability of Adversarial Attacks with Stealth Preservation [Paper]
    Xinwei Zhang (Beihang University); Tianyuan Zhang (Beihang University); Yitong Zhang (Beihang University); Shuangcheng Liu (Beihang University)*
  • Adversarial Attacks on Foundational Vision Models
    Nathan Inkawhich (Air Force Research Laboratory)*; Ryan S Luley (Air Force Research Laboratory); Gwendolyn N McDonald (AFRL)
  • Benchmarking Robustness in Neural Radiance Fields
    Chen Wang (University of Pennsylvania)*; Angtian Wang (Johns Hopkins University); Junbo Li (UC Santa Cruz); Alan Yuille (Johns Hopkins University); Cihang Xie (University of California, Santa Cruz)
  • Sharpness-Aware Optimization for Real-World Adversarial Attacks for Diverse Compute Platforms with Enhanced Transferability [Paper]
    Muchao Ye (The Pennsylvania State University); Xiang Xu (Amazon)*; Qin Zhang (Amazon.com); Jonathan Wu (Amazon)
  • Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study [Paper]
    Chenguang Wang (Washington University)*; Ruoxi Jia (Virginia Tech); Xin Liu (University of California); Dawn Song (UC Berkeley)
  • Red-Teaming Segment Anything Model [Paper]
    Krzysztof Jankowski (University of Warsaw)*; Bartłomiej Jan Sobieski (Warsaw University of Technology); Mateusz Kwiatkowski (MI2.AI, University of Warsaw); Jakub Szulc (University of Warsaw); Michał Janik (University of Warsaw); Hubert Baniecki (University of Warsaw); Przemyslaw Biecek (Warsaw University of Technology)
  • Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities [Paper]
    SangHwa Hong (SeoulTech)*
  • Multimodal Attack Detection for Action Recognition Models [Paper]
    Furkan Mumcu (University of South Florida)*; Yasin Yilmaz (University of South Florida)

Accepted Extended Abstracts

  • Attack End-to-End Autonomous Driving through Module-Wise Noise [Paper]
    Lu Wang (Beihang University); Tianyuan Zhang (Beihang University)*; Yikai Han (Beijing University of Aeronautics and Astronautics); Muyang Fang (Beihang University); Ting Jin (Beihang University); Jiaqi Kang (Beihang University)
  • Scaling Vision-Language Models Does Not Improve Relational Understanding: The Right Learning Objective Helps [Paper]
    Haider Al-Tahan (Meta - FAIR)*; Quentin Garrido (Meta - FAIR); Randall Balestriero (Facebook AI Research); Diane Bouchacourt (Facebook AI); Caner Hazirbas (Meta AI); Mark Ibrahim (Capital One Center for Machine Learning)
  • ResampleTrack: Online Resampling for Adversarially Robust Visual Tracking
    Xuhong Ren (School of Computer Science and Engineering, Tianjin University of Technology); Jianlang Chen (Kyushu University); Yue Cao (Nanyang Technological University); Wanli Xue (Tianjin University of Technology)*; Qing Guo (A*STAR); Lei Ma (The University of Tokyo / University of Alberta); Jianjun Zhao (Kyushu University); Chen Shengyong (Zhejiang University of Technology; Tianjin University of Technology)
  • Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [Paper] ⭐Distinguished Paper Award
    Siyuan Liang (Department of Computer Science, National University of Singapore)*; Kuanrong Liu (Sun Yat-sen University); Jiajun Gong (School of Computing, National University of Singapore); Jiawei Liang (Sun Yat-sen University); Yuan Xun (Institute of Information Engineering, Chinese Academy of Sciences); Ee-Chien Chang (NUS); Xiaochun Cao (Sun Yat-sen University)

Sponsors

[Sponsor logos]