2nd Deep Learning and Security Workshop
May 23, 2019 — San Francisco, CA
co-located with the 40th IEEE Symposium on Security and Privacy


Making and Measuring Progress in Adversarial Machine Learning
Nicholas Carlini, Google Brain

Abstract — Despite the difficulty of measuring progress in adversarial environments, the field of adversarial machine learning is undeniably making progress. After briefly considering the ways in which we have succeeded, this talk argues that there are ways in which the entire field—both attackers and defenders—could make more rapid and meaningful progress.

Biography — Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he received best paper awards at ICML and IEEE S&P. He received his PhD from the University of California, Berkeley in 2018.

Towards Learned Program Analyses with Machine Learning
Miltos Allamanis, Microsoft Research

Abstract — Static and dynamic program analysis is a flourishing area of programming language research. However, existing analyses often fail to use the readily available but ambiguous information about program behavior that software engineers routinely rely on. Recently, thanks to advances in machine learning, and deep learning in particular, a promising new kind of learned code analysis has emerged that aims to fuse all of this information and reason probabilistically about code.

In this talk, I will give a brief overview of research that lies in the intersection of machine learning and programming languages and discuss research that provides indications that the "soft" aspects of source code contain useful information that can be exploited by code analyses. Then, I will focus on some recent advances on learnable code analyses. In particular, by representing relationships of program elements with graphs, we can exploit a powerful set of deep learning algorithms that allow us to learn from "big code" to perform statistical static analyses and find real-life bugs. I will conclude by discussing open problems and challenges in this area.

Biography — I am a researcher at Microsoft Research, Cambridge, UK. My research is at the intersection of machine learning, natural language processing, and software engineering. My aim is to combine the rich structural aspects of programming languages with machine learning to create better coding tools for end users and developers, while using problems in this area to motivate machine learning research. I have published in both machine learning and software engineering conferences and recently coauthored a survey on machine learning for source code. I obtained my PhD from the University of Edinburgh, advised by Dr. Charles Sutton.


Workshop location: Bayview B
09:00–09:15 Opening and Welcome
09:15–10:15 Keynote
Making and Measuring Progress in Adversarial Machine Learning  
Nicholas Carlini, Google Brain
10:15–10:45 Coffee break (Seacliff Foyer)
10:45–12:25 Session: Security of Deep Learning (Chair: Brendan Dolan-Gavitt)
On the Robustness of Deep K-Nearest Neighbors  
Chawin Sitawarin and David Wagner (University of California, Berkeley)
Exploring Adversarial Examples in Malware Detection  
Octavian Suciu (University of Maryland, College Park), Scott E. Coull, and Jeffrey Johns (FireEye, Inc.)
Targeted Adversarial Examples for Black Box Audio Systems  
Rohan Taori, Amog Kamsetty, Brenton Chu, and Nikita Vemuri (UC Berkeley)
Activation Analysis of a Byte-Based Deep Neural Network for Malware Classification  
Scott E. Coull and Christopher Gardner (FireEye, Inc.)
12:30–13:30 Lunch (Waterfront C/D/E)
13:30–14:30 Keynote
Towards Learned Program Analyses with Machine Learning  
Miltos Allamanis, Microsoft Research
14:30–15:20 Session: Deep Learning for Security (Chair: Xinyu Xing)
MaxNet: Neural network architecture for continuous detection of malicious activity
Petr Gronat, Javier Aldana-Iuit, and Martin Balek (AVAST)
Deep in the Dark - Deep Learning-based Malware Traffic Detection without Expert Knowledge  
Gonzalo Marín (IIE-FING, Universidad de la República), Pedro Casas (AIT Austrian Institute of Technology), and Germán Capdehourat (IIE-FING, Universidad de la República)
15:20–15:45 Coffee break (Seacliff Foyer)
15:45–17:00 Session: Deep Learning and Privacy (Chair: Florian Tramer)
Defending against NN Model Stealing Attacks using Deceptive Perturbations
Taesung Lee, Benjamin Edwards, Ian Molloy, and Dong Su (IBM Research AI)
Membership Inference Attacks against Adversarially Robust Deep Learning Models  
Liwei Song (Princeton University), Reza Shokri (National University of Singapore), and Prateek Mittal (Princeton University)
Efficient Evaluation of Activation Functions over Encrypted Data  
Patricia Thaine (University of Toronto), Sergey Gorbunov (University of Waterloo), and Gerald Penn (University of Toronto)
17:00–17:15 Closing remarks

Call for Papers

Important Dates

  • Paper submission deadline: December 21, 2018 (AoE, UTC-12) — extended from December 14, 2018
  • Acceptance notification: February 15, 2019
  • Publication-ready Papers Due: March 11, 2019
  • Workshop: May 23, 2019


Deep learning and security have made remarkable progress in recent years. Neural networks have been recognized as an essential tool for security in academia and industry, for example, for detecting attacks, analyzing malicious code, or uncovering vulnerabilities in software. At the same time, the security of deep learning has gained research attention, and novel types of attacks against neural networks have been explored, such as adversarial perturbations, neural backdoors, and membership inference attacks.

This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security of deep learning. The workshop is aimed at academic and industrial researchers.

Topics of Interest

DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):

Deep Learning

  • Deep learning architectures for program embedding
  • Deep learning methods for program similarity
  • Deep program learning
  • Modern deep NLP
  • Recurrent network architectures
  • Neural networks for graphs
  • Neural Turing machines
  • Semantic knowledge-bases
  • Generative adversarial networks
  • Relational modeling and prediction
  • Deep reinforcement learning
  • Attacks against deep learning
  • Resilient and explainable deep learning

Computer Security

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification, analysis and similarity
  • Data anonymization/ de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

You are invited to submit papers of up to six pages, plus one page for references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.

Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly recommended to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review.

For any questions, contact the workshop organizers at dls2019@sec.tu-bs.de

Presentation Form

All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.

One author of each accepted paper must attend the workshop and present the paper for it to be included in the proceedings.

Submission Site

Submissions should be made online at https://dls2019.sec.tu-bs.de.


Workshop Chair

Program Chair

Program Co-Chair

Steering Committee

Program Committee