3rd Deep Learning and Security Workshop
co-located with the 41st IEEE Symposium on Security and Privacy
May 21, 2020

Keynotes

To Defend Against Adversarial Examples We Need to Understand Human Vision
Wieland Brendel, Tübingen AI Center, Germany

Abstract — Adversarial examples are among the most striking failure cases of modern deep neural networks, and defending against them has proven notoriously difficult. In this talk I discuss the connection between adversarial examples and a much larger class of failure cases that arise from machines exploiting shortcuts in the given task formulation. Shortcut learning is astonishingly difficult (or even impossible) to detect in the typical i.i.d. formulation of machine learning tasks. However, testing machines in more challenging out-of-distribution (o.o.d.) scenarios — such as adversarial examples — often highlights a clear difference from the decision strategies humans employ. Closing this gap is unlikely to come from a single change to the architecture or the training procedure; it will likely require a combined effort that aligns the tasks, the data, the learning strategies and the model architectures more closely with the human visual system and the evolutionary pressures that have pushed humans to develop robust visual representations and decision strategies.

Biography — Wieland Brendel is a principal investigator at the Tübingen AI Center (Germany). His research focuses on the robustness and interpretability of machine vision models and draws inspiration from human perception. He received his M.S. in Physics from the University of Regensburg in 2009 and his Ph.D. in Computational Neuroscience from the École Normale Supérieure in Paris in 2014. After a four-year post-doctoral fellowship in Machine Learning at the University of Tübingen, he joined the Tübingen AI Center as a PI in 2018.

Detecting Deep-Fake Videos from Appearance and Behavior
Hany Farid, University of California, Berkeley

Abstract — Synthetically generated audio and video -- so-called deep fakes -- continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything continues to be of concern because of its power to disrupt democratic elections, commit small- and large-scale fraud, fuel disinformation campaigns, and create non-consensual pornography. I will describe a biometric-based forensic technique for detecting face-swap deep fakes. This technique combines a static biometric based on facial recognition with a temporal, behavioral biometric based on facial expressions and head movements, where the behavioral embedding is learned using a CNN with a metric-learning objective function. I will show the efficacy of this approach across several large-scale video datasets, as well as on in-the-wild deep fakes.

Biography — Hany Farid is a Professor at the University of California, Berkeley with a joint appointment in Electrical Engineering & Computer Science and the School of Information. His research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, his M.S. in Computer Science from SUNY Albany, and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999, where he remained until 2019. He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors.

Programme - May 21, 2020

All times below are in the Pacific Time (PT) zone. Proceedings are available (with credentials) here.
09:00–09:15 Opening and Welcome
09:15–10:15 Keynote I (Chair: Battista Biggio)
To Defend Against Adversarial Examples We Need to Understand Human Vision
Wieland Brendel, Tübingen AI Center, Germany
10:15–10:45 Break
10:45–12:00 Session I: Deep Learning for Security (Chair: Lorenzo Cavallaro)
10:45: Learning from Context: A Multi-View Deep Learning Architecture for Malware Detection
Adarsh Kyadige (Sophos), Ethan Rudd (FireEye) and Konstantin Berlin (Sophos)
11:10: Detecting Cyber Threats in Non-English Hacker Forums: An Adversarial Cross-Lingual Knowledge Transfer Approach
Mohammadreza Ebrahimi (University of Arizona), Sagar Samtani (University of South Florida), Yidong Chai (Tsinghua University) and Hsinchun Chen (University of Arizona)
11:35 (recorded): Attributing and Detecting Fake Images Generated by Known GANs
Matthew Joslin (University of Texas at Dallas) and Shuang Hao (University of Texas at Dallas)
12:00–12:25 Session II: Security of Deep Learning against Poisoning (Chair: Lorenzo Cavallaro)
12:00 (recorded): Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring (TU Braunschweig) and Konrad Rieck (TU Braunschweig)
12:30–13:30 Break
13:30–14:30 Keynote II (Chair: Nicholas Carlini)
Detecting Deep-Fake Videos from Appearance and Behavior
Hany Farid, University of California, Berkeley
14:30–15:20 Session III: Weaponizing Deep Reinforcement Learning (Chair: Florian Tramèr)
14:30: On the Robustness of Cooperative Multi-Agent Reinforcement Learning
Jieyu Lin (University of Toronto), Kristina Dzeparoska (University of Toronto), Sai Qian Zhang (Harvard University), Alberto Leon-Garcia (University of Toronto) and Nicolas Papernot (University of Toronto and Vector Institute)
14:55: RTA3: A real time adversarial attack on recurrent neural networks
Christopher Serrano (HRL Laboratories, LLC), Pape Sylla (HRL Laboratories, LLC), Sicun Gao (University of California San Diego) and Michael Warren (HRL Laboratories, LLC)
15:20–15:45 Break
15:45–17:50 Session IV: Security of Deep Learning and Industry Perspectives (Chair: Nicholas Carlini)
15:45: Adversarial Machine Learning - Industry Perspectives
Ram Shankar Siva Kumar (Microsoft), Magnus Nystrom, John Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann and Sharon Xia (Microsoft)
16:10 (recorded): Adversarial Attacks Against LipNet: End-to-End Sentence Level Lipreading
Mahir Jethanandani (UC Berkeley) and Derek Tang (UC Berkeley)
16:35: Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
Chawin Sitawarin (UC Berkeley) and David Wagner (UC Berkeley)
17:00: Clipped BagNet: Defending Against Sticker Attacks with Clipped Bag-of-features
Zhanyuan Zhang (UC Berkeley), Benson Yuan, Michael McCoyd and David Wagner (UC Berkeley)
17:25: SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou (Carnegie Mellon University), Florian Tramèr (Stanford University) and Giancarlo Pellegrino (CISPA Helmholtz Center for Information Security)
17:50–18:05 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline (extended): January 20, 2020, 11:59 PM (AoE, UTC-12)
  • Acceptance notification: February 24, 2020
  • Camera-ready due: March 9, 2020
  • Workshop: May 21, 2020

Overview

Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a promising tool for security in academia and industry. On the other hand, the security of deep learning has come into focus in research, as the robustness of neural networks has recently been called into question.

This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security of deep learning.

Topics of Interest

DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):

Deep Learning

  • Deep learning for program embedding and similarity
  • Deep program learning
  • Modern deep NLP
  • Recurrent network architectures
  • Neural networks for graphs
  • Neural Turing machines
  • Semantic knowledge-bases
  • Generative adversarial networks
  • Relational modeling and prediction
  • Deep reinforcement learning
  • Attacks against deep learning
  • Resilient and explainable deep learning

Computer Security

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification, analysis, and similarity
  • Data anonymization/de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

You are invited to submit papers of up to six pages, plus one page for references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.

Papers must be formatted for US letter (not A4) paper size. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
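
For authors typesetting in LaTeX, a skeleton along the following lines is one way to meet these requirements. It is only a sketch, assuming the standard IEEEtran conference class (which produces the required US letter, two-column, 10-point Times layout); the title, author block, and bibliography file names are placeholders, not anything prescribed by the workshop.

  % Minimal sketch of a submission skeleton, assuming the standard
  % IEEEtran conference class; title and file names are placeholders.
  \documentclass[conference, letterpaper]{IEEEtran}
  \usepackage[utf8]{inputenc}
  \usepackage{graphicx}  % figures
  \usepackage{cite}      % IEEE-style citations

  \title{Anonymized Submission Title}
  \author{\IEEEauthorblockN{Anonymous Authors}
  \IEEEauthorblockA{Affiliation withheld for double-blind review}}

  \begin{document}
  \maketitle

  \begin{abstract}
  Abstract text.
  \end{abstract}

  \section{Introduction}
  Body text, limited to six pages plus one page of references.

  \bibliographystyle{IEEEtran}
  \bibliography{references}
  \end{document}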

Presentation Form

All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster, based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

Submission Site

Submission link: https://easychair.org/conferences/?conf=dls2020.

All questions about submissions should be emailed to dlsec2020@gmail.com.

Committee

Workshop Chair

Program Chair

Program Co-Chair

Web and Publicity Chair

Steering Committee

Program Committee