1st Deep Learning and Security Workshop
May 24, 2018
co-located with the
39th IEEE Symposium on Security and Privacy
Registration

Registration Information

To register for DLS 2018, please visit the IEEE S&P Workshops page and click on the Register button.

Workshop Agenda

Time            Title                               Authors / Speakers

7:30 - 8:30     Breakfast
8:45 - 9:00     Opening Remarks                     Nikolaos Vasiloglou, Roberto Perdisci

Session I
9:00 - 9:55     Keynote I                           David Evans
9:55 - 10:15    Best Paper Presentation             see papers list
10:15 - 10:45   Break

Session II
10:45 - 12:30   Oral presentations (7 x 15 min)     see papers list
12:30 - 1:30    Lunch

Session III
1:30 - 2:25     Keynote II                          Ian Goodfellow
2:30 - 3:20     Poster presentations                see papers list
3:20 - 3:45     Break

Session IV
3:45 - 4:40     Keynote III                         Dawn Song
4:45 - 5:30     Oral presentations (3 x 15 min)     see papers list
5:30 - 5:45     Closing Remarks                     Nikolaos Vasiloglou, Roberto Perdisci



NOTE: Due to the large number of submissions and accepted papers, some papers will be presented during a poster session while others will be presented orally. Nonetheless, all accepted papers should be considered of equal importance.

Accepted Papers

    Session I (Best Paper)
  • Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
    (Nicholas Carlini and David Wagner)

    Session II
  • A Deep Learning Approach to Fast, Format-Agnostic Detection of Malicious Web Content
    (Joshua Saxe, Richard Harang, Cody Wild and Hillary Sanders)
  • Mouse Authentication without the Temporal Aspect – What does a 2D-CNN learn?
    (Penny Chong, Yi Xiang Marcus Tan, Juan Guarnizo, Yuval Elovici and Alexander Binder)
  • Detecting Homoglyph Attacks with a Siamese Neural Network
    (Jonathan Woodbridge, Hyrum Anderson, Anjum Ahuja and Daniel Grant)
  • Machine-Learning DDoS Detection for Consumer Internet-of-Things Devices
    (Rohan Doshi, Noah Apthorpe and Nick Feamster)
  • Adversarial examples for generative models
    (Jernej Kos, Ian Fischer and Dawn Song)
  • Learning Universal Adversarial Perturbations with Generative Models
    (Jamie Hayes and George Danezis)
  • Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
    (Ji Gao, Jack Lanchantin, Mary Lou Soffa and Yanjun Qi)

    Session III (Posters)
  • Exploring the Use of Autoencoders for Botnets Traffic Representation
    (Ruggiero Dargenio, Shashank Srikant, Erik Hemberg and Una-May O'Reilly)
  • The good, the bad and the bait: Detecting and characterizing clickbait videos on YouTube
    (Savvas Zannettou, Sotirios Chatzis, Kostantinos Papadamou and Michael Sirivianos)
  • Bringing a GAN to a Knife-fight: Adapting Malware Communication to Avoid Detection
    (Maria Rigaki and Sebastian Garcia)
  • Adversarial Deep Learning for Robust Detection of Binary Encoded Malware
    (Alex Huang, Abdullah Al-Dujaili, Erik Hemberg and Una-May O'Reilly)
  • Extending Detection with Privileged Information via Generalized Distillation
    (Z. Berkay Celik and Patrick McDaniel)
  • Detecting Deceptive Reviews using Generative Adversarial Networks
    (Hojjat Aghakhani, Aravind Machiry, Shirin Nilizadeh, Christopher Kruegel and Giovanni Vigna)
  • Background Class Defense Against Adversarial Examples
    (Michael McCoyd and David Wagner)
  • Time Series Deinterleaving of DNS Traffic
    (Amir Asiaee T., Hardik Goel, Shalini Ghosh, Vinod Yegneswaran and Arindam Banerjee)
  • Extended Abstract -- Mimicry Resilient Program Behavior Modeling with LSTM based Branch Models
    (Hayoon Yi, Gyuwan Kim, Jangho Lee, Sunwoo Ahn, Younghan Lee, Sungroh Yoon and Yunheung Paek)
  • Extended Abstract -- Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
    (Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Prateek Mittal and Mung Chiang)

    Session IV
  • HeNet: A Deep Learning Approach on Intel® Processor Trace for Effective Exploit Detection
    (Li Chen, Salmin Sultana and Ravi Sahita)
  • Deep Reinforcement Fuzzing
    (Konstantin Böttinger, Patrice Godefroid and Rishabh Singh)
  • Security Risks in Deep Learning Implementations
    (Qixue Xiao, Kang Li, Deyue Zhang and Weilin Xu)

Committees

Workshop Chair
Nikolaos Vasiloglou - Symantec Center for Advanced Machine Learning
 
Program Committee Chair
Roberto Perdisci - University of Georgia and Georgia Tech
 
Program Committee Co-Chairs
Babak Rahbarinia - Auburn University at Montgomery
Andrew Gardner - Symantec Center for Advanced Machine Learning
 
Steering Committee
Dawn Song - EECS UC Berkeley
Ian Goodfellow - Google Brain
Wenke Lee - Georgia Institute of Technology
 
Technical Program Committee
Alexandros Dimakis - University of Texas at Austin
Alvaro Cardenas - University of Texas at Dallas
Alina Oprea - Northeastern University
Baris Coskun - Amazon Web Services
Battista Biggio - University of Cagliari, Italy
Benjamin Rubinstein - University of Melbourne, Australia
Bo Li - EECS UC Berkeley
Christos Dimitrakakis - Chalmers University of Technology, Sweden
Giorgio Giacinto - University of Cagliari, Italy
Javier Echauz - Symantec Research
Kang Li - University of Georgia
Kevin Roundy - Symantec Research
Konrad Rieck - TU Braunschweig, Germany
Lorenzo Cavallaro - Royal Holloway, University of London
Neil Zhenqiang Gong - Iowa State University
Nicolas Papernot - Pennsylvania State University
Philip Tully - ZeroFOX
Polo Chau - Georgia Institute of Technology
Sai Deep Tetali - Google
Tummalapalli S. Reddy - Kayak Software
Yinzhi Cao - Lehigh University
Yizheng Chen - Baidu
Yufei Han - Symantec Research
 
 

Call for Papers

1st Deep Learning and Security Workshop

Thursday, May 24, 2018
co-located with the 39th IEEE Symposium on Security and Privacy

Important Dates

NOTE: Due to network issues that affected the EasyChair.org website just hours before the deadline, the submission system will remain open until January 6, 11:00am PST, so that affected authors have an opportunity to submit their papers. Please note that email submissions will not be considered for review. If you experienced problems with the EasyChair.org website, please try again; the problem has now been resolved.

  • Paper Submission Deadline: December 22, 2017, extended to January 6, 2018 (11:00am PST)
  • Acceptance Notification to Authors: February 15, 2018 (tentative), extended to March 5, 2018
  • Camera ready papers: March 25, 2018
  • Workshop date: Thursday, May 24, 2018

Overview

Over the past decade, machine learning methods have found their way into a large variety of computer security applications, including accurate spam detection, scalable discovery of new malware families, identifying malware download events in vast amounts of web traffic, detecting software exploits, blocking phishing web pages, and preventing fraudulent financial transactions, just to name a few.

At the same time, machine learning methods themselves have evolved. In particular, deep learning methods have recently demonstrated great improvements over more “traditional” learning approaches on a number of important tasks, including image and audio classification, natural language processing, and machine translation. Moreover, areas such as program induction and neural abstract machines have made it possible to generate and analyze programs in depth. It is therefore natural to ask how the success of these deep learning methods can be translated into advances in the state of the art in security applications.

This workshop is aimed at academic and industrial researchers interested in the application of deep learning methods to computer security problems. Some of the key research questions of interest will include the following:

  • What are the strengths and shortcomings of current learning methods for representing and/or detecting security threats?
  • Can deep learning methods be successfully applied to security applications?
  • Can deep learning help to develop more efficient malware analysis by building a more accurate representation of program behaviors?
  • What are the challenges involved, and will the use of deep learning methods significantly improve over previous results?
  • Can deep learning methods better cope with problems related to learning in adversarial environments?
  • What are the big, open problems in threat representation, especially for the detection of malicious software?
  • How can generative models improve our understanding and detection of threats?

Topics

Topics of interest include (but are not limited to):

Deep Learning
- Deep learning architectures
- Deep NLP (natural language processing)
- Recurrent networks architectures
- Effective feature embedding
- Neural networks for graphs
- Generative adversarial networks
- Deep reinforcement learning
- Relational modeling and prediction
- Semantic knowledge-bases
- Neural abstract machines and program induction

Security Applications
- Program representation
- Malware identification, analysis and similarity
- Detecting malicious software downloads at scale
- Representation and detection of social engineering attacks
- Botnet detection
- Intrusion detection and response
- Spam and phishing detection
- Classification of sequences of system/network events
- Security in social networks
- Application of learning to computer forensics
- Learning in adversarial environments

Workshop Format

The workshop invites two types of submissions: full research papers and extended abstracts. Full papers are expected to present completed work and will be published in the workshop’s IEEE proceedings. On the other hand, extended abstract submissions are intended to encourage the presentation of preliminary research ideas or case studies around challenges and solutions related to the use of deep learning systems in real-world security applications. While accepted extended abstracts will not be part of the formal IEEE proceedings, they will be preserved as an online open publication (e.g., on arxiv.org) and the authors will be free to submit an extended version of their work to other venues.

One author of each accepted paper is expected to present the submitted work at the workshop. Paper presentations will follow the traditional conference-style format with questions from the audience. More information on available speaking slots and workshop format details will be provided ahead of the workshop date.

Instructions for Submission

To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.

Full research papers must be no longer than six pages, plus one page for references.
Extended abstract submissions must be no longer than four pages, plus one page for references, and need to include Extended Abstract in the title or subtitle.

Submitted papers should contain the name and affiliation of all authors. Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly recommended to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review.

Papers must be in Portable Document Format (.pdf). Authors should pay special attention to unusual fonts, images, and figures that might create problems for reviewers. Submitted documents should render correctly in Adobe Reader and when printed in black and white.

Papers must be submitted at https://easychair.org/conferences/?conf=dls2018
For any questions, please contact the workshop organizers at: dlsec2018@gmail.com

Supporters



Avast