Abstract — Despite the difficulty of measuring progress in adversarial environments, the field of adversarial machine learning is undeniably making progress. After briefly considering the ways in which we have succeeded, this talk argues that there are ways in which the entire field—both the attackers and the defenders—could make more rapid and meaningful progress.
Biography — Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he received best paper awards at ICML and IEEE S&P. He received his PhD from the University of California, Berkeley in 2018.
Abstract — Static and dynamic program analysis is a flourishing area of programming language research. However, existing analyses often fail to use the ambiguous but readily available information about program behavior that software engineers routinely rely on. Recently, thanks to advances in machine learning, and deep learning in particular, a promising new kind of learned code analysis has emerged that aims to fuse all of this information and probabilistically reason about code.
In this talk, I will give a brief overview of research at the intersection of machine learning and programming languages and discuss work indicating that the "soft" aspects of source code contain useful information that can be exploited by code analyses. Then, I will focus on some recent advances in learnable code analyses. In particular, by representing relationships among program elements with graphs, we can exploit a powerful set of deep learning algorithms that allow us to learn from "big code" to perform statistical static analyses and find real-life bugs. I will conclude by discussing open problems and challenges in this area.
Biography — I am a researcher at Microsoft Research, Cambridge, UK. My research is at the intersection of machine learning, natural language processing, and software engineering. My aim is to combine the rich structural aspects of programming languages with machine learning to create better coding tools for end users and developers, while using problems in this area to motivate machine learning research. I have published in both machine learning and software engineering conferences and recently coauthored a survey on machine learning for source code. I obtained my PhD from the University of Edinburgh, advised by Dr. Charles Sutton. More information about me and my publications can be found on my website.
|Workshop location:||Bayview B|
|09:00–09:15||Opening and Welcome|
|Making and Measuring Progress in Adversarial Machine Learning
Nicholas Carlini, Google Brain
|10:15–10:45||Coffee break (Seacliff Foyer)|
|10:45–12:25||Session: Security of Deep Learning (Chair: Brendan Dolan-Gavitt)|
|On the Robustness of Deep K-Nearest Neighbors
Chawin Sitawarin and David Wagner (University of California, Berkeley)
|Exploring Adversarial Examples in Malware Detection
Octavian Suciu (University of Maryland, College Park), Scott E. Coull, and Jeffrey Johns (FireEye, Inc.)
|Targeted Adversarial Examples for Black Box Audio Systems
Rohan Taori, Amog Kamsetty, Brenton Chu, and Nikita Vemuri (UC Berkeley)
|Activation Analysis of a Byte-Based Deep Neural Network for Malware Classification
Scott E. Coull and Christopher Gardner (FireEye, Inc.)
|12:30–13:30||Lunch (Waterfront C/D/E)|
|Towards Learned Program Analyses with Machine Learning
Miltos Allamanis, Microsoft Research
|14:30–15:20||Session: Deep Learning for Security (Chair: Xinyu Xing)|
|MaxNet: Neural network architecture for continuous detection of malicious activity
Petr Gronat, Javier Aldana-Iuit, and Martin Balek (AVAST)
|Deep in the Dark - Deep Learning-based Malware Traffic Detection without Expert Knowledge
Gonzalo Marín (IIE-FING, Universidad de la República), Pedro Casas (AIT Austrian Institute of Technology), and Germán Capdehourat (IIE-FING, Universidad de la República)
|15:20–15:45||Coffee break (Seacliff Foyer)|
|15:45–17:00||Session: Deep Learning and Privacy (Chair: Florian Tramer)|
|Defending against NN Model Stealing Attacks using Deceptive Perturbations
Taesung Lee, Benjamin Edwards, Ian Molloy, and Dong Su (IBM Research AI)
|Membership Inference Attacks against Adversarially Robust Deep Learning Models
Liwei Song (Princeton University), Reza Shokri (National University of Singapore), and Prateek Mittal (Princeton University)
|Efficient Evaluation of Activation Functions over Encrypted Data
Patricia Thaine (University of Toronto), Sergey Gorbunov (University of Waterloo), and Gerald Penn (University of Toronto)
Deep learning and security have made remarkable progress in recent years. Neural networks have been recognized in academia and industry as an essential tool for security, for example for detecting attacks, analyzing malicious code, or uncovering vulnerabilities in software. At the same time, the security of deep learning itself has become a focus of research, and novel types of attacks against neural networks have been explored, such as adversarial perturbations, neural backdoors, and membership inference attacks.
This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security of deep learning. The workshop is aimed at academic and industrial researchers.
DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):
You are invited to submit papers of up to six pages, plus one page for references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.
Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review.
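For authors using LaTeX, a minimal submission skeleton along these lines would satisfy the requirements above, assuming the standard IEEEtran conference class (whose `conference` option produces the required two-column, 10-point Times layout on US letter paper); the title, author, and affiliation fields below are placeholders:

```latex
% Minimal sketch of a compliant submission, assuming the IEEEtran class.
% The "conference" option selects the two-column, 10pt Times layout.
\documentclass[conference]{IEEEtran}

\title{Placeholder Paper Title}
\author{\IEEEauthorblockN{First Author}
\IEEEauthorblockA{Placeholder Affiliation}}

\begin{document}
\maketitle

\begin{abstract}
Abstract text here. The paper body is limited to six pages,
plus one additional page for references.
\end{abstract}

% Paper content goes here.

\end{document}
```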
For any questions, contact the workshop organizers at firstname.lastname@example.org
All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
Submissions should be made online at https://dls2019.sec.tu-bs.de.