4th Deep Learning and
Security Workshop
co-located with the 42nd IEEE Symposium on Security and Privacy
May 27, 2021

Keynotes

Talk Title: Lessons from adversarially attacking commercial malware detectors
Sadia Afroz

Abstract: Modern commercial antivirus systems increasingly rely on machine learning to keep up with the rampant growth of new malware. Machine learning models are vulnerable to adversarial attacks, which can be devastating when such models are used for security. However, academic adversarial attacks that focus purely on machine learning models fail to evade real-world anti-virus systems. In this talk, I will discuss our experiences performing adversarial attacks on commercial malware detectors.
Bio: Sadia Afroz is a researcher at the International Computer Science Institute (ICSI) and Avast Software. Her work focuses on anti-censorship, anonymity, and adversarial learning. Her work on adversarial authorship attribution received the Privacy Enhancing Technology (PET) award and the ACM SIGSAC dissertation award (runner-up). More about her research can be found at http://www1.icsi.berkeley.edu/~sadia/

Talk Title: Certification Against Adversarial Attacks
Martin Vechev

Abstract: In this talk, I will discuss some of the latest progress we have made on the problem of proving that deep learning models are provably secure against adversarial attacks. Concretely, I will focus on the latest advancements in new convex relaxations which can be used to provide mathematical guarantees that noise-based, geometric, audio or natural language attacks are not possible. In the process, I will also discuss various open research problems in building large-scale models that come with provable security guarantees.
Bio: Martin Vechev is an Associate Professor of Computer Science at ETH Zurich, where he leads the Secure, Reliable and Intelligent Systems Lab (https://www.sri.inf.ethz.ch/). Prior to that, he was a Research Staff Member at the IBM T.J. Watson Research Center in New York, USA. He received his PhD from Cambridge University, England. His main research interests span the intersection of symbolic and probabilistic techniques, with applications to trustworthy and robust AI, security, and systems. He has also co-founded three start-ups: DeepCode (2016), ChainSecurity (2017), and LatticeFlow (2020). The first two have recently been acquired, while the latest (LatticeFlow) is currently focused on building a product that enables the creation of trustworthy, fair, and robust AI models.

Programme - May 27, 2021

The following times are in the PT time zone. Proceedings are available (with credentials) here.
08:45–09:00 Opening and Welcome
09:00–10:00 Keynote I (Chair: Lorenzo Cavallaro)
Lessons from adversarially attacking commercial malware detectors
Sadia Afroz
10:00–11:30 Session I (Chair: TBA)
10:00: Innocent Until Proven Guilty (IUPG): Building Deep Learning Models with Embedded Robustness to Out-Of-Distribution Content
Brody Kutt (Palo Alto Networks), William Hewlett (Palo Alto Networks), Oleksii Starov (Palo Alto Networks), Yuchen Zhou (Palo Alto Networks)
10:30: SAFELearn: Secure Aggregation for private FEderated Learning
Hossein Fereidooni (TU Darmstadt), Samuel Marchal (Aalto University & F-Secure Corporation), Markus Miettinen (TU Darmstadt), Azalia Mirhoseini (Google Brain), Helen Moellering (TU Darmstadt), Thien Duc Nguyen (TU Darmstadt), Phillip Rieger (TU Darmstadt), Ahmad-Reza Sadeghi (TU Darmstadt), Thomas Schneider (TU Darmstadt), Hossein Yalame (TU Darmstadt), Shaza Zeitouni (TU Darmstadt)
11:00: Applying Deep Learning to Combat Mass Robocalls
Sharbani Pandit (Georgia Institute of Technology), Jienan Liu (University of Georgia), Roberto Perdisci (University of Georgia, Georgia Institute of Technology), Mustaque Ahamad (Georgia Institute of Technology)
11:30–12:30 Lunch Break
12:30–13:30 Keynote II (Chair: Nicholas Carlini)
Certification Against Adversarial Attacks
Martin Vechev
13:30–15:00 Session II (Chair: TBA)
13:30: MMGuard: Automatically Protecting On-Device Deep Learning Models in Android Apps
Jiayi Hua (Beijing University of Posts and Telecommunications), Yuanchun Li (Microsoft Research), Haoyu Wang (Beijing University of Posts and Telecommunications)
14:00: BODMAS: An Open Dataset for Learning based Temporal Analysis of PE Malware
Limin Yang (University of Illinois at Urbana-Champaign), Arridhana Ciptadi (Blue Hexagon), Ihar Laziuk (Blue Hexagon), Ali Ahmadzadeh (Blue Hexagon), Gang Wang (University of Illinois at Urbana-Champaign)
14:30: Binary Black-Box Attacks Against Static Malware Detectors with Reinforcement Learning in Discrete Action Spaces
Mohammadreza Ebrahimi (University of Arizona), Jason Pacheco (University of Arizona), Weifeng Li (University of Georgia), James Lee Hu (University of Arizona), Hsinchun Chen (University of Arizona)
15:00–15:30 Break
15:30–16:30 Privacy Panel (Chair: Ram Shankar Siva Kumar)
Beyond deep learning security, what is needed to make ML trustworthy?
A panel discussion with Anupam Datta, Seth Neel, Aleksandra Korolova, and Kamalika Chaudhuri
16:30–16:35 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline: January 14, 2021 (EXTENDED from January 7), 11:59 PM (AoE, UTC-12)
  • Acceptance notification: February
  • Camera-ready due: March
  • Workshop: May 27, 2021

Overview

Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a promising tool for security in academia and industry. On the other hand, the security of deep learning has gained attention in research, as the robustness of neural networks has recently been called into question.

This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security of deep learning.

Topics of Interest

DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):

Deep Learning

  • Deep learning for program embedding and similarity
  • Deep program learning
  • Modern deep NLP
  • Recurrent network architectures
  • Neural networks for graphs
  • Neural Turing machines
  • Semantic knowledge-bases
  • Generative adversarial networks
  • Relational modeling and prediction
  • Deep reinforcement learning
  • Attacks against deep learning
  • Resilient and explainable deep learning

Computer Security

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification, analysis, and similarity
  • Data anonymization/de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

You are invited to submit papers of up to six pages, plus one page for references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.

Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
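For authors starting from scratch, a minimal skeleton using the standard IEEEtran conference template (which produces the required two-column, 10-point Times, US-letter layout by default) might look as follows; the title and section names are placeholders, and authors should verify details against the latest official IEEE template:

```latex
% Minimal submission skeleton using the IEEEtran conference template.
% The "conference" option yields a two-column, 10pt, US-letter layout.
\documentclass[conference]{IEEEtran}

\title{Paper Title}
% Submissions must be anonymized for review.
\author{\IEEEauthorblockN{Anonymous Submission}}

\begin{document}
\maketitle

\begin{abstract}
Abstract text.
\end{abstract}

\section{Introduction}
Body text (up to six pages, plus one page for references).

\bibliographystyle{IEEEtran}
% \bibliography{references}
\end{document}
```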

Presentation Form

All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either talk or poster based on their review score and novelty. Nonetheless, all accepted papers should be considered as having equal importance.

One author of each accepted paper is required to attend the (virtual) workshop and present the paper for it to be included in the proceedings.

Submission Site

https://dls2021.hotcrp.com/

Committee

Workshop Chair

Program Chair

Program Co-Chair

Steering Committee

Program Committee

    Alvaro Cardenas
    Andrew Ilyas
    Baris Coskun
    Bo Li
    Chao Zhang
    Christian Wressnegger
    Daniel Arp
    Dimitris Tsipras
    Erwin Quiring
    Fabio Pierazzi
    Feargus Pendlebury
    Florian Tramèr
    Gang Wang
    Hyrum Anderson
    Kang Li
    Konrad Rieck
    Kui Ren
    Lorenzo Cavallaro
    Matthew Jagielski
    Neil Gong
    Nikolaos Vasiloglou
    Pavel Laskov
    Philip Tully
    Pin-Yu Chen
    Reza Shokri
    Roberto Perdisci
    Sai Deep Tetali
    Samuel Marchal
    Sanghyun Hong
    Shang-Tse Chen
    Sudhamsh Reddy
    Varun Chandrasekaran
    Yinzhi Cao
    Yizheng Chen
    Yufei Han
    Ziqi Yang