Call for Papers

Artificial intelligence (AI) has entered a new era with the emergence of foundation models (FMs). Alongside their potential benefits, the increasing reliance on FMs has also exposed their vulnerability to adversarial attacks. The workshop will bring together researchers and practitioners from the computer vision and machine learning communities to explore the latest advances and challenges in adversarial machine learning, with a focus on the robustness of foundation models. We welcome research contributions related to (but not limited to) the following topics:
  • Robustness of foundation models
  • Adversarial attacks on computer vision tasks
  • Improving the robustness of deep learning systems
  • Interpreting and understanding model robustness, especially foundation models
  • Adversarial attacks for social good
  • Datasets and benchmarks for evaluating foundation model robustness
Format: Submitted papers (PDF format) must use the CVPR 2024 Author Kit for LaTeX/Word, be anonymized, and follow the CVPR 2024 author instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 8 pages, excluding references; (2) Extended Abstract: limited to 4 pages, including references.

Submission Site: https://cmt3.research.microsoft.com/CVPRAdvML2024
Submission Due: March 15, 2024, Anywhere on Earth (AoE)


Accepted Long Papers

  • Large Language Models in Wargaming: Methodology, Application, and Robustness [Paper]
    Yuwei Chen (Aviation Industry Development Research Center of China)*; Shiyong Chu (Aviation Industry Development Research Center of China)
  • Enhancing Targeted Attack Transferability via Diversified Weight Pruning [Paper]
    Hung-Jui Wang (National Taiwan University)*; Yu-Yu Wu (National Taiwan University); Shang-Tse Chen (National Taiwan University)
  • Enhancing the Transferability of Adversarial Attacks with Stealth Preservation [Paper]
    Xinwei Zhang (Beihang University); Tianyuan Zhang (Beihang University); Yitong Zhang (Beihang University); Shuangcheng Liu (Beihang University)*
  • Adversarial Attacks on Foundational Vision Models
    Nathan Inkawhich (Air Force Research Laboratory)*; Ryan S Luley (Air Force Research Laboratory); Gwendolyn N McDonald (AFRL)
  • Benchmarking Robustness in Neural Radiance Fields
    Chen Wang (University of Pennsylvania)*; Angtian Wang (Johns Hopkins University); Junbo Li (UC Santa Cruz); Alan Yuille (Johns Hopkins University); Cihang Xie (University of California, Santa Cruz)
  • Sharpness-Aware Optimization for Real-World Adversarial Attacks for Diverse Compute Platforms with Enhanced Transferability [Paper]
    Muchao Ye (The Pennsylvania State University); Xiang Xu (Amazon)*; Qin Zhang (Amazon.com); Jonathan Wu (Amazon)
  • Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study [Paper]
    Chenguang Wang (Washington University)*; Ruoxi Jia (Virginia Tech); Xin Liu (University of California); Dawn Song (UC Berkeley)
  • Red-Teaming Segment Anything Model [Paper]
    Krzysztof Jankowski (University of Warsaw)*; Bartłomiej Jan Sobieski (Warsaw University of Technology); Mateusz Kwiatkowski (MI2.AI, University of Warsaw); Jakub Szulc (University of Warsaw); Michał Janik (University of Warsaw); Hubert Baniecki (University of Warsaw); Przemyslaw Biecek (Warsaw University of Technology)
  • Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities [Paper]
    SangHwa Hong (SeoulTech)*
  • Multimodal Attack Detection for Action Recognition Models [Paper]
    Furkan Mumcu (University of South Florida)*; Yasin Yilmaz (University of South Florida)

Accepted Extended Abstracts

  • Attack End-to-End Autonomous Driving through Module-Wise Noise [Paper]
    Lu Wang (Beihang University); Tianyuan Zhang (Beihang University)*; Yikai Han (Beijing University of Aeronautics and Astronautics); Muyang Fang (Beihang University); Ting Jin (Beihang University); Jiaqi Kang (Beihang University)
  • Scaling Vision-Language Models Does Not Improve Relational Understanding: The Right Learning Objective Helps [Paper]
    Haider Al-Tahan (Meta - FAIR)*; Quentin Garrido (Meta - FAIR); Randall Balestriero (Facebook AI Research); Diane Bouchacourt (Facebook AI); Caner Hazirbas (Meta AI); Mark Ibrahim (Capital One Center for Machine Learning)
  • ResampleTrack: Online Resampling for Adversarially Robust Visual Tracking
    Xuhong Ren (School of Computer Science and Engineering, Tianjin University of Technology); Jianlang Chen (Kyushu University); Yue Cao (Nanyang Technological University); Wanli Xue (Tianjin University of Technology)*; Qing Guo (A*STAR); Lei Ma (The University of Tokyo / University of Alberta); Jianjun Zhao (Kyushu University); Chen Shengyong (Zhejiang University of Technology; Tianjin University of Technology)
  • Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [Paper]
    Siyuan Liang (Department of Computer Science, National University of Singapore)*; Kuanrong Liu (Sun Yat-sen University); Jiajun Gong (School of Computing, National University of Singapore); Jiawei Liang (Sun Yat-sen University); Yuan Xun (Institute of Information Engineering, Chinese Academy of Sciences); Ee-Chien Chang (NUS); Xiaochun Cao (Sun Yat-sen University)

Sponsors
