Abstract — Synthetically generated audio and video, so-called deep fakes, continue to capture the imagination of the computer-graphics and computer-vision communities. At the same time, the democratization of access to technology that can create sophisticated manipulated video of anybody saying anything remains a concern because of its power to disrupt democratic elections, commit small- and large-scale fraud, fuel disinformation campaigns, and create non-consensual pornography. I will describe a biometric-based forensic technique for detecting face-swap deep fakes. The technique combines a static biometric based on facial recognition with a temporal, behavioral biometric based on facial expressions and head movements, where the behavioral embedding is learned by a CNN with a metric-learning objective function. I will show the efficacy of this approach on several large-scale video datasets as well as on in-the-wild deep fakes.
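The abstract mentions a behavioral embedding learned with a metric-learning objective. As a minimal illustrative sketch (not the speaker's actual method), a common metric-learning objective is the triplet margin loss: embeddings of clips from the same person are pulled together while embeddings of different people are pushed apart. The toy embeddings below are random stand-ins; in practice they would come from a CNN over facial-expression and head-movement features.

```python
import numpy as np

def l2_normalize(x):
    """Project an embedding onto the unit sphere."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss: encourage d(anchor, positive) + margin
    to be smaller than d(anchor, negative), using squared L2 distance."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

# Toy 4-D "behavioral" embeddings (hypothetical, for illustration only).
rng = np.random.default_rng(0)
a = l2_normalize(rng.normal(size=4))               # anchor clip
p = l2_normalize(a + 0.05 * rng.normal(size=4))    # same identity, perturbed
n = l2_normalize(rng.normal(size=4))               # different identity

loss = triplet_loss(a, p, n)
```

At test time, a suspect video whose behavioral embedding is far from a person's reference embeddings (while the face itself matches) is flagged as a likely face swap.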
Biography — Hany Farid is a Professor at the University of California, Berkeley with a joint appointment in Electrical Engineering & Computer Science and the School of Information. His research focuses on digital forensics, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, his M.S. in Computer Science from SUNY Albany, and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019. He is the recipient of an Alfred P. Sloan Fellowship, a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors.
|09:00–09:15||Opening and Welcome|
|Detecting Deep-Fake Videos from Appearance and Behavior
Hany Farid (University of California, Berkeley)
|10:45–12:00||Session: Deep Learning for Security|
|Learning from Context: A Multi-View Deep Learning Architecture for Malware Detection
Adarsh Kyadige (Sophos), Ethan Rudd (FireEye) and Konstantin Berlin (Sophos)
|Detecting Cyber Threats in Non-English Hacker Forums: An Adversarial Cross-Lingual Knowledge Transfer Approach
Mohammadreza Ebrahimi (University of Arizona), Sagar Samtani (University of South Florida), Yidong Chai (Tsinghua University) and Hsinchun Chen (University of Arizona)
|Attributing and Detecting Fake Images Generated by Known GANs
Matthew Joslin and Shuang Hao (University of Texas at Dallas)
|12:00–12:25||Session: Security of Deep Learning Against Poisoning|
|Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring and Konrad Rieck (TU Braunschweig)
|14:30–15:20||Session: Weaponizing Deep Reinforcement Learning|
|On the Robustness of Cooperative Multi-Agent Reinforcement Learning
Jieyu Lin, Kristina Dzeparoska (University of Toronto), Sai Qian Zhang (Harvard University), Alberto Leon-Garcia (University of Toronto) and Nicolas Papernot (University of Toronto and Vector Institute)
|RTA3: A real time adversarial attack on recurrent neural networks
Christopher Serrano, Pape Sylla (HRL Laboratories, LLC), Sicun Gao (University of California San Diego) and Michael Warren (HRL Laboratories, LLC)
|15:45–17:50||Session: Security of Deep Learning and Industry Perspectives|
|Adversarial Machine Learning - Industry Perspectives
Ram Shankar Siva Kumar, Magnus Nystrom, John Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann and Sharon Xia (Microsoft)
|Adversarial Attacks Against LipNet: End-to-End Sentence Level Lipreading
Mahir Jethanandani and Derek Tang (University of California, Berkeley)
|Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
Chawin Sitawarin and David Wagner (University of California, Berkeley)
|Clipped BagNet: Defending Against Sticker Attacks with Clipped Bag-of-features
Zhanyuan Zhang, Benson Yuan, Michael McCoyd and David Wagner (University of California Berkeley)
|SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou (Carnegie Mellon University), Florian Tramèr (Stanford University) and Giancarlo Pellegrino (CISPA Helmholtz Center for Information Security)
Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a promising tool for security in academia and industry. On the other hand, the security of deep learning itself has come into focus: the robustness of neural networks has recently been called into question.
This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security of deep learning.
DLS seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):
You are invited to submit papers of up to six pages, plus one page for references. To be considered, papers must be received by the submission deadline (see Important Dates). Submissions must be original work and may not be under submission to another venue at the time of review.
Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
All accepted submissions will be presented at the workshop and included in the IEEE workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review score and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
Submission link: https://easychair.org/conferences/?conf=dls2020.
All questions about submissions should be emailed to email@example.com.