3rd Deep Learning and Security Workshop
co-located with the 41st IEEE Symposium on Security and Privacy

Keynotes

Detecting Deep-Fake Videos from Appearance and Behavior
Hany Farid, University of California, Berkeley

Abstract: Synthetically generated audio and video, so-called deep fakes, continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything remains a concern because of its power to disrupt democratic elections, commit small- and large-scale fraud, fuel disinformation campaigns, and create non-consensual pornography. I will describe a biometric-based forensic technique for detecting face-swap deep fakes. The technique combines a static biometric based on facial recognition with a temporal, behavioral biometric based on facial expressions and head movements, where the behavioral embedding is learned by a CNN trained with a metric-learning objective. I will show the efficacy of this approach on several large-scale video datasets, as well as on in-the-wild deep fakes.
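The abstract does not name the specific metric-learning objective; a common instance is the triplet margin loss, which pulls together embeddings of clips of the same person and pushes apart embeddings of different (or impersonated) identities. As an illustrative sketch, writing f_\theta for the CNN embedding, with an anchor clip a, a positive clip p of the same person, a negative clip n, and a margin \alpha > 0:

  \mathcal{L}(a, p, n) = \max\bigl(0,\; \|f_\theta(a) - f_\theta(p)\|_2^2 - \|f_\theta(a) - f_\theta(n)\|_2^2 + \alpha\bigr)

Under an objective of this kind, a natural decision rule is to flag a video as a face swap when its behavioral embedding lies farther than a threshold from a reference embedding of the claimed identity.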

Biography: Hany Farid is a Professor at the University of California, Berkeley, with a joint appointment in Electrical Engineering & Computer Science and the School of Information. His research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, his M.S. in Computer Science from SUNY Albany, and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999, where he remained until 2019. He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors.

Programme

All times are in the Pacific Time (PT) zone.
09:00–09:15 Opening and Welcome
09:15–10:15 Keynote
Detecting Deep-Fake Videos from Appearance and Behavior
Hany Farid, University of California, Berkeley
10:15–10:45 Break
10:45–12:00 Session: Deep Learning for Security
Learning from Context: A Multi-View Deep Learning Architecture for Malware Detection
Adarsh Kyadige (Sophos), Ethan Rudd (FireEye) and Konstantin Berlin (Sophos)
Detecting Cyber Threats in Non-English Hacker Forums: An Adversarial Cross-Lingual Knowledge Transfer Approach
Mohammadreza Ebrahimi (University of Arizona), Sagar Samtani (University of South Florida), Yidong Chai (Tsinghua University) and Hsinchun Chen (University of Arizona)
Attributing and Detecting Fake Images Generated by Known GANs
Matthew Joslin and Shuang Hao (University of Texas at Dallas)
12:00–12:25 Session: Security of Deep Learning against Poisoning
Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring and Konrad Rieck (TU Braunschweig)
12:30–13:30 Break
13:30–14:30 Keynote
TBA
14:30–15:20 Session: Weaponizing Deep Reinforcement Learning
On the Robustness of Cooperative Multi-Agent Reinforcement Learning
Jieyu Lin, Kristina Dzeparoska (University of Toronto), Sai Qian Zhang (Harvard University), Alberto Leon-Garcia (University of Toronto) and Nicolas Papernot (University of Toronto and Vector Institute)
RTA3: A real time adversarial attack on recurrent neural networks
Christopher Serrano, Pape Sylla (HRL Laboratories, LLC), Sicun Gao (University of California San Diego) and Michael Warren (HRL Laboratories, LLC)
15:20–15:45 Break
15:45–17:50 Session: Security of Deep Learning and Industry Perspectives
Adversarial Machine Learning - Industry Perspectives
Ram Shankar Siva Kumar, Magnus Nystrom, John Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann and Sharon Xia (Microsoft)
Adversarial Attacks Against LipNet: End-to-End Sentence Level Lipreading
Mahir Jethanandani and Derek Tang (University of California, Berkeley)
Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
Chawin Sitawarin and David Wagner (University of California, Berkeley)
Clipped BagNet: Defending Against Sticker Attacks with Clipped Bag-of-features
Zhanyuan Zhang, Benson Yuan, Michael McCoyd and David Wagner (University of California, Berkeley)
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou (Carnegie Mellon University), Florian Tramèr (Stanford University) and Giancarlo Pellegrino (CISPA Helmholtz Center for Information Security)
17:50–18:05 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline (extended): January 20, 2020, 11:59 PM (AoE, UTC-12)
  • Acceptance notification: February 24, 2020
  • Camera-ready due: March 9, 2020
  • Workshop: May 21, 2020

Overview

Deep learning and security have both made remarkable progress in recent years. On the one hand, neural networks have been recognized in academia and industry as a promising tool for security. On the other hand, the security of deep learning itself has become a focus of research, and the robustness of neural networks has recently been called into question.

This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security of deep learning.

Topics of Interest

DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):

Deep Learning

  • Deep learning for program embedding and similarity
  • Deep program learning
  • Modern deep NLP
  • Recurrent network architectures
  • Neural networks for graphs
  • Neural Turing machines
  • Semantic knowledge bases
  • Generative adversarial networks
  • Relational modeling and prediction
  • Deep reinforcement learning
  • Attacks against deep learning
  • Resilient and explainable deep learning

Computer Security

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification, analysis, and similarity
  • Data anonymization/de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

You are invited to submit papers of up to six pages, plus one page for references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.

Papers must be formatted for US letter (not A4) paper. The text must be set in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide, in a Times font of 10 points or larger, with line spacing of 11 points or larger. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
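For reference, here is a minimal, illustrative LaTeX skeleton assuming the standard IEEEtran conference class, whose defaults produce the required US letter, two-column, 10-point Times layout; the title and body text are placeholders:

  % Illustrative sketch only; consult the latest IEEE template for details.
  \documentclass[conference]{IEEEtran}
  \begin{document}
  \title{Paper Title}
  % Submissions must be anonymized: use a placeholder, not real author names.
  \author{\IEEEauthorblockN{Anonymous Submission}}
  \maketitle
  \begin{abstract}
  Abstract text.
  \end{abstract}
  \section{Introduction}
  Body text.
  \end{document}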

Presentation Form

All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster, based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.

One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

Submission Site

Submission link: https://easychair.org/conferences/?conf=dls2020.

All questions about submissions should be emailed to dlsec2020@gmail.com.

Committee

Workshop Chair

Program Chair

Program Co-Chair

Web and Publicity Chair

Steering Committee

Program Committee