MAY 22-24, 2017 AT THE FAIRMONT HOTEL, SAN JOSE, CA

38th IEEE Symposium on
Security and Privacy

Agenda


May 21


Registration and Reception

04:00PM - 07:00PM

May 22


Registration

07:00AM - 05:00PM

Breakfast

07:30AM - 08:30AM

Opening Remarks

08:30AM - 08:40AM

Session #1: Privacy and Learning

08:40AM - 10:20AM

Membership Inference Attacks against Machine Learning Models
Reza Shokri (Cornell Tech), Marco Stronati (INRIA), Congzheng Song (Cornell), Vitaly Shmatikov (Cornell Tech)
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model’s training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model’s predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial “machine learning as a service” providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
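The shadow-model construction at the heart of the attack can be illustrated compactly. The sketch below is a minimal illustration, not the authors' setup: it assumes synthetic data, scikit-learn models, a single shadow model, and one class-agnostic attack classifier, whereas the paper trains many shadow models and a per-class attack model.

    # Illustrative membership inference via a shadow model (simplified; see note above).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

    # Target model: trained on one half of a split; the other half is "out" data.
    target = RandomForestClassifier(random_state=0).fit(X[:1000], y[:1000])

    # Shadow model: mimics the target, trained on disjoint data the attacker controls.
    shadow = RandomForestClassifier(random_state=1).fit(X[2000:3000], y[2000:3000])

    # Attack training set: the shadow's prediction vectors, labeled member/non-member.
    p_in = shadow.predict_proba(X[2000:3000])    # records the shadow trained on
    p_out = shadow.predict_proba(X[3000:4000])   # records it did not train on
    attack = LogisticRegression().fit(np.vstack([p_in, p_out]),
                                      np.r_[np.ones(1000), np.zeros(1000)])

    # The attack model now distinguishes the target's members from non-members.
    t_in = target.predict_proba(X[:1000])
    t_out = target.predict_proba(X[1000:2000])
    guesses = attack.predict(np.vstack([t_in, t_out]))
    truth = np.r_[np.ones(1000), np.zeros(1000)]
    print("membership inference accuracy: %.2f" % (guesses == truth).mean())
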
SecureML: A System for Scalable Privacy-Preserving Machine Learning
Payman Mohassel (Visa Research), Yupeng Zhang (University of Maryland)
Machine learning is widely used in practice to produce predictive models for applications such as image processing, speech and text recognition. These models are more accurate when trained on large amounts of data collected from different sources. However, the massive data collection raises privacy concerns. In this paper, we present new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method. Our protocols fall in the two-server model where data owners distribute their private data among two non-colluding servers who train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to nonlinear functions such as sigmoid and softmax that are superior to prior work. We implement our system in C++. Our experiments validate that our protocols are several orders of magnitude faster than the state of the art implementations for privacy-preserving linear and logistic regressions, and scale to millions of data samples with thousands of features. We also implement the first privacy-preserving system for training neural networks.
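The arithmetic core of such two-server protocols is easy to sketch: fixed-point numbers are additively secret-shared over a ring, each server holds one random-looking share, and addition requires no communication. The ring size and precision below are illustrative choices, and secure multiplication (e.g., via Beaver triples) is omitted.

    # Toy additive secret sharing of fixed-point decimals between two servers.
    import random

    RING = 2 ** 64        # shares live in Z_{2^64}
    FRAC_BITS = 13        # fixed-point fractional precision (illustrative choice)

    def encode(x):        # real number -> ring element (two's-complement style)
        return int(round(x * (1 << FRAC_BITS))) % RING

    def decode(v):        # ring element -> real number, top half maps to negatives
        if v >= RING // 2:
            v -= RING
        return v / (1 << FRAC_BITS)

    def share(x):         # split a secret into two shares, each uniform alone
        s0 = random.randrange(RING)
        return s0, (encode(x) - s0) % RING

    # Each server adds its local shares: addition of shared values needs no interaction.
    a0, a1 = share(3.25)
    b0, b1 = share(-1.5)
    print(decode((a0 + b0 + a1 + b1) % RING))    # reconstructs 3.25 + (-1.5) = 1.75
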
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini (University of California, Berkeley), David Wagner (University of California, Berkeley)
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks at finding adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test that, as we show, can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.
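The attacks instantiate one optimization template: minimize the distortion plus a term that rewards the target class winning by a margin. The sketch below is purely illustrative, using a toy linear model, numerical gradients, and made-up constants c and kappa; the paper optimizes neural-network logits with Adam and a change of variables for box constraints.

    # Illustrative CW-style targeted attack on a toy linear "network".
    import numpy as np

    rng = np.random.RandomState(0)
    W, b = rng.randn(3, 5), rng.randn(3)      # toy 5-feature, 3-class model
    Z = lambda x: W @ x + b                   # logits

    def objective(xp, x, t, c=2.0, kappa=1.0):
        z = Z(xp)
        other = np.max(np.delete(z, t))       # strongest non-target logit
        f = max(other - z[t], -kappa)         # <= -kappa once class t wins by kappa
        return np.sum((xp - x) ** 2) + c * f  # distortion + attack term

    x = rng.randn(5)
    t = int((np.argmax(Z(x)) + 1) % 3)        # pick a target class != original class
    xp, eps, lr = x.copy(), 1e-5, 0.01
    for _ in range(2000):                     # plain numerical-gradient descent
        g = np.array([(objective(xp + eps * e, x, t) -
                       objective(xp - eps * e, x, t)) / (2 * eps) for e in np.eye(5)])
        xp -= lr * g

    print("original class:", np.argmax(Z(x)), "-> adversarial class:", np.argmax(Z(xp)))
    print("L2 distortion: %.3f" % np.linalg.norm(xp - x))
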
Is Interaction Necessary for Distributed Private Learning?
Adam Smith (Pennsylvania State University), Abhradeep Thakurta (University of California Santa Cruz), Jalaj Upadhyay (Pennsylvania State University)
Recent large-scale deployments of differentially private algorithms employ the local model for privacy (sometimes called PRAM or randomized response), where data are randomized on each individual’s device before being sent to a server that computes approximate, aggregate statistics. The server need not be trusted for privacy, leaving data control in users’ hands. For an important class of convex optimization problems (including logistic regression, support vector machines, and the Euclidean median), the best known locally differentially-private algorithms are highly interactive, requiring as many rounds of back and forth as there are users in the protocol. We ask: how much interaction is necessary to optimize convex functions in the local DP model? Existing lower bounds either do not apply to convex optimization, or say nothing about interaction. We provide new algorithms which are either noninteractive or use relatively few rounds of interaction. We also show lower bounds on the accuracy of an important class of noninteractive algorithms, suggesting a separation between what is possible with and without interaction.
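Classic randomized response is the simplest noninteractive algorithm in this model and fixes intuition for it: each user perturbs a single bit locally, and the server debiases the aggregate. The paper's subject, convex optimization under local DP, is far more involved; this is only the one-bit base case.

    # Randomized response: the basic noninteractive local-DP primitive.
    import math, random

    def randomize(bit, epsilon):
        # Report truthfully with probability e^eps / (e^eps + 1); flip otherwise.
        # This satisfies epsilon-differential privacy locally, on the user's device.
        p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
        return bit if random.random() < p_true else 1 - bit

    def estimate_mean(reports, epsilon):
        # E[report] = (2p - 1) * mean + (1 - p); invert for an unbiased estimate.
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

    truth = [random.random() < 0.3 for _ in range(100000)]  # 30% of users hold the bit
    reports = [randomize(b, 1.0) for b in truth]            # the server sees only these
    print("true mean: %.3f, private estimate: %.3f"
          % (sum(truth) / len(truth), estimate_mean(reports, 1.0)))
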
Pyramid: Enhancing Selectivity in Big Data Protection with Count Featurization
Mathias Lecuyer (Columbia University), Riley Spahn (Columbia University), Roxana Geambasu (Columbia University), Tzu-Kuo Huang (Uber Advanced Technologies Group), Siddhartha Sen (Microsoft Research)
Protecting vast quantities of data poses a daunting challenge for the growing number of organizations that collect, stockpile, and monetize it. The ability to distinguish data that is actually needed from data collected “just in case” would help these organizations to limit the latter’s exposure to attack. A natural approach might be to monitor data use and retain only the working-set of in-use data in accessible storage; unused data can be evicted to a highly protected store. However, many of today’s big data applications rely on machine learning (ML) workloads that are periodically retrained by accessing, and thus exposing to attack, the entire data store. Training set minimization methods, such as count featurization, are often used to limit the data needed to train ML workloads to improve performance or scalability. We present Pyramid, a limited-exposure data management system that builds upon count featurization to enhance data protection. As such, Pyramid uniquely introduces both the idea and proof-of-concept for leveraging training set minimization methods to instill rigor and selectivity into big data management. We integrated Pyramid into Spark Velox, a framework for ML-based targeting and personalization. We evaluate it on three applications and show that Pyramid approaches state-of-the-art models while training on less than 1% of the raw data.
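Count featurization itself is simple to illustrate: a high-cardinality raw feature is replaced by compact per-label counts, so models can be retrained from the count tables instead of the raw records. This toy version omits what Pyramid adds on top, notably differentially private noise and time-windowed count tables.

    # Toy count featurization for a click-prediction style task.
    from collections import defaultdict

    counts = defaultdict(lambda: [0, 0])   # raw feature value -> [non-clicks, clicks]

    def observe(value, clicked):           # fold one raw record into the count table
        counts[value][1 if clicked else 0] += 1

    def featurize(value):                  # raw value -> (click-through rate, volume)
        neg, pos = counts[value]
        total = pos + neg
        return (pos / total if total else 0.0, total)

    for ad, clicked in [("ad42", True), ("ad42", False), ("ad42", True), ("ad7", False)]:
        observe(ad, clicked)
    print(featurize("ad42"))               # (0.67, 3): trainable without raw records
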

Break (40 Minutes)

10:20AM - 11:00AM

Session #2: Getting Security Right

11:00AM - 12:40PM

SoK: Science, Security, and the Elusive Goal of Security as a Scientific Pursuit
Cormac Herley (Microsoft Research, USA), Paul C. van Oorschot (Carleton University, Canada)
The past ten years have seen increasing calls to make security research more “scientific”. On the surface, most agree that this is desirable, given universal recognition of “science” as a positive force. However, we find that there is little clarity on what “scientific” means in the context of computer security research, or consensus on what a “Science of Security” should look like. We selectively review work in the history and philosophy of science and more recent work under the label “Science of Security”. We explore what has been done under the theme of relating science and security, put this in context with historical science, and offer observations and insights we hope may motivate further exploration and guidance. Among our findings are that practices on which the rest of science has reached consensus appear little used or recognized in security, and a pattern of methodological errors continues unaddressed.
Stack Overflow Considered Harmful? --- The Impact of Copy&Paste on Android Application Security
Felix Fischer (AISEC, Fraunhofer), Konstantin Böttinger (AISEC, Fraunhofer), Huang Xiao (AISEC, Fraunhofer), Christian Stransky (CISPA, Saarland University), Yasemin Acar (CISPA, Saarland University), Michael Backes (CISPA, Saarland University & MPI-SWS), Sascha Fahl (CISPA, Saarland University)
Online programming discussion platforms such as Stack Overflow serve as a rich source of information for software developers. Available information includes vibrant discussions and oftentimes ready-to-use code snippets. Previous research identified Stack Overflow as one of the most important information sources developers rely on. Anecdotes report that software developers copy and paste code snippets from those information sources for convenience reasons. Such behavior results in a constant flow of community-provided code snippets into production software. To date, the impact of this behavior on code security is unknown. We answer this highly important question by quantifying the proliferation of security-related code snippets from Stack Overflow in Android applications available on Google Play. Access to the rich source of information available on Stack Overflow including ready-to-use code snippets provides huge benefits for software developers. However, when it comes to code security there are some caveats to bear in mind: Due to the complex nature of code security, it is very difficult to provide ready-to-use and secure solutions for every problem. Hence, integrating a security-related code snippet from Stack Overflow into production software requires caution and expertise. Unsurprisingly, we observed insecure code snippets being copied into Android applications that millions of users install from Google Play every day. To quantitatively evaluate the extent of this observation, we scanned Stack Overflow for code snippets and evaluated their security score using a stochastic gradient descent classifier. In order to identify code reuse in Android applications, we applied state-of-the-art static analysis. Our results are alarming: 15.4% of the 1.3 million Android applications we analyzed contained security-related code snippets from Stack Overflow. Of these, 97.9% contain at least one insecure code snippet.
Obstacles to the Adoption of Secure Communication Tools
Ruba Abu-Salma (University College London (UCL), UK), M. Angela Sasse (University College London (UCL), UK), Joseph Bonneau (Stanford University & Electronic Frontier Foundation (EFF), USA), Anastasia Danilova (University of Bonn, Germany), Alena Naiakshina (University of Bonn, Germany), Matthew Smith (University of Bonn, Germany)
The computer security community has advocated widespread adoption of secure communication tools to counter mass surveillance. Several popular personal communication tools (e.g., WhatsApp, iMessage) have adopted end-to-end encryption, and many new tools (e.g., Signal, Telegram) have been launched with security as a key selling point. However, it remains unclear whether users understand what protection these tools offer, and whether they value that protection. In this study, we interviewed 60 participants about their experience with different communication tools and their perceptions of the tools’ security properties. We found that the adoption of secure communication tools is hindered by fragmented user bases and incompatible tools. Furthermore, the vast majority of participants did not understand the essential concept of end-to-end encryption, limiting their motivation to adopt secure tools. We identified a number of incorrect mental models that underpinned participants’ beliefs.
Comparing the Usability of Cryptographic APIs
Yasemin Acar (CISPA, Saarland University), Michael Backes (CISPA, Saarland University & MPI-SWS), Sascha Fahl (CISPA, Saarland University), Simson Garfinkel (National Institute of Standards and Technology), Doowon Kim (University of Maryland), Michelle Mazurek (University of Maryland), Christian Stransky (CISPA, Saarland University)
Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable; however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits—reducing the decision space, as expected, prevents choice of insecure parameters—simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions; however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.
SoK: Cryptographically Protected Database Search
Benjamin Fuller (University of Connecticut), Mayank Varia (Boston University), Arkady Yerukhimovich (MIT Lincoln Laboratory), Emily Shen (MIT Lincoln Laboratory), Ariel Hamlin (MIT Lincoln Laboratory), Vijay Gadepally (MIT Lincoln Laboratory), Richard Shay (MIT Lincoln Laboratory), John Darby Mitchell (MIT Lincoln Laboratory), Robert K. Cunningham (MIT Lincoln Laboratory)
Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: 1) An identification of the important primitive operations across database paradigms. We find there are a small number of base operations that can be used and combined to support a large number of database paradigms. 2) An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality. 3) An analysis of attacks against protected search for different base queries. 4) A roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search.

Lunch

12:40PM - 01:40PM

Session #3: Attacks

01:40PM - 03:20PM

IoT Goes Nuclear: Creating a Zigbee Chain Reaction
Eyal Ronen (Weizmann Institute of Science), Colin O’Flynn (Dalhousie University), Adi Shamir (Weizmann Institute of Science), Achi-Or Weingarten (Weizmann Institute of Science)
Within the next few years, billions of IoT devices will densely populate our cities. In this paper we describe a new type of threat in which adjacent IoT devices will infect each other with a worm that will rapidly spread over large areas, provided that the density of compatible IoT devices exceeds a certain critical mass. In particular, we developed and verified such an infection using the popular Philips Hue smart lamps as a platform. The worm spreads by jumping directly from one lamp to its neighbors, using only their built-in ZigBee wireless connectivity and their physical proximity. The attack can start by plugging in a single infected bulb anywhere in the city, and then catastrophically spread everywhere within minutes. It enables the attacker to turn all the city lights on or off, to permanently brick them, or to exploit them in a massive DDoS attack. To demonstrate the risks involved, we use results from percolation theory to estimate the critical mass of installed devices for a typical city such as Paris, whose area is about 105 square kilometers: The chain reaction will fizzle if there are fewer than about 15,000 randomly located smart lamps in the whole city, but will spread everywhere when the number exceeds this critical mass (which had almost certainly been surpassed already). To make such an attack possible, we had to find a way to remotely yank already installed lamps from their current networks, and to perform over-the-air firmware updates. We overcame the first problem by discovering and exploiting a major bug in the implementation of the Touchlink part of the ZigBee Light Link protocol, which is supposed to stop such attempts with a proximity test. To solve the second problem, we developed a new version of a side channel attack to extract the global AES-CCM key (for each device type) that Philips uses to encrypt and authenticate new firmware. We used only readily available equipment costing a few hundred dollars, and managed to find this key without seeing any actual updates. This demonstrates once again how difficult it is to get security right even for a large company that uses standard cryptographic techniques to protect a major product.
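The percolation estimate can be reproduced with a small Monte Carlo experiment: scatter N lamps uniformly over a 105 km^2 city and measure the largest cluster of lamps connected by hops within infection range. The square city shape, the grid-based neighbor search, and especially the 100 m effective range are assumptions picked so the phase transition lands near the paper's 15,000 figure; they are not the paper's exact model.

    # Monte Carlo sketch of the critical-mass argument (assumptions noted above).
    import random
    from collections import defaultdict, deque

    def largest_cluster_fraction(n_lamps, area_km2=105.0, range_m=100.0):
        side = (area_km2 ** 0.5) * 1000.0          # model the city as a square, in meters
        pts = [(random.uniform(0, side), random.uniform(0, side)) for _ in range(n_lamps)]
        grid = defaultdict(list)                   # hash grid for fast neighbor lookup
        for i, (x, y) in enumerate(pts):
            grid[(int(x // range_m), int(y // range_m))].append(i)
        seen, best = [False] * n_lamps, 0
        for s in range(n_lamps):                   # BFS over each connected component
            if seen[s]:
                continue
            seen[s], queue, size = True, deque([s]), 0
            while queue:
                i = queue.popleft()
                size += 1
                xi, yi = pts[i]
                gx, gy = int(xi // range_m), int(yi // range_m)
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for j in grid[(gx + dx, gy + dy)]:
                            if not seen[j] and (pts[j][0] - xi) ** 2 + (pts[j][1] - yi) ** 2 <= range_m ** 2:
                                seen[j] = True
                                queue.append(j)
            best = max(best, size)
        return best / n_lamps

    for n in (5000, 15000, 30000):   # sub-critical, near-critical, super-critical
        print(n, "lamps -> largest infected fraction ~ %.2f" % largest_cluster_fraction(n))
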
SoK: Exploiting Network Printers
Jens Müller (Horst Görtz Institute for IT-Security, Ruhr University Bochum), Vladislav Mladenov (Horst Görtz Institute for IT-Security, Ruhr University Bochum), Juraj Somorovsky (Horst Görtz Institute for IT-Security, Ruhr University Bochum), Jörg Schwenk (Horst Görtz Institute for IT-Security, Ruhr University Bochum)
The idea of a paperless office has been dreamed of for more than three decades. However, printers are nowadays still among the most essential devices for daily work and for common Internet users. Instead of disappearing, printers have evolved from simple printing devices into complex networked computer systems, installed directly into company networks and carrying considerable confidential data in their print jobs. This makes them an attractive attack target. In this paper we conduct a large-scale analysis of printer attacks and systematize our knowledge by providing a general methodology for security analyses of printers. Based on our methodology, we implemented an open-source tool called PRinter Exploitation Toolkit (PRET). We used PRET to evaluate 20 printer models from different vendors and found all of them to be vulnerable to at least one of the tested attacks. These attacks range from simple Denial-of-Service (DoS) attacks to sophisticated attacks that extract print jobs and system files. On top of our systematic analysis we reveal novel insights that enable attacks from the Internet by using advanced cross-site printing techniques, combined with printer CORS spoofing. Finally, we show how to apply our attacks to systems beyond typical printers like Google Cloud Print or document processing websites.
How They Did It: An Analysis of Emission Defeat Devices in Modern Automobiles
Moritz Contag (Ruhr University Bochum), Guo Li (University of California, San Diego), Andre Pawlowski (Ruhr University Bochum), Felix Domke, Kirill Levchenko (University of California, San Diego), Thorsten Holz (Ruhr University Bochum), Stefan Savage (University of California, San Diego)
Modern vehicles are required to comply with a range of environmental regulations limiting the level of emissions for various greenhouse gases, toxins and particulate matter. To ensure compliance, regulators test vehicles in controlled settings and empirically measure their emissions at the tailpipe. However, the black box nature of this testing and the standardization of its forms have created an opportunity for evasion. Using modern electronic engine controllers, manufacturers can programmatically infer when a car is undergoing an emission test and alter the behavior of the vehicle to comply with emission standards, while exceeding them during normal driving in favor of improved performance. While the use of such a defeat device by Volkswagen has brought the issue of emissions cheating to the public’s attention, there have been few details about the precise nature of the defeat device, how it came to be, and its effect on vehicle behavior. In this paper, we present our analysis of two families of software defeat devices for diesel engines: one used by the Volkswagen Group to pass emissions tests in the US and Europe, and a second that we have found in Fiat Chrysler Automobiles. To carry out this analysis, we developed new static analysis firmware forensics techniques necessary to automatically identify known defeat devices and confirm their function. We tested about 900 firmware images and were able to detect a potential defeat device in more than 400 firmware images spanning eight years. We describe the precise conditions used by the firmware to detect a test cycle and how it affects engine behavior. This work frames the technical challenges faced by regulators going forward and highlights the important research agenda in providing focused software assurance in the presence of adversarial manufacturers.
The Password Reset MitM Attack
Nethanel Gelernter (Cyberpion & The College of Management Academic Studies), Senia Kalma (The College of Management Academic Studies), Bar Magnezi (The College of Management Academic Studies), Hen Porcilan (The College of Management Academic Studies)
We present the password reset MitM (PRMitM) attack and show how it can be used to take over user accounts. The PRMitM attack exploits the similarity of the registration and password reset processes to launch a man in the middle (MitM) attack at the application level. The attacker initiates a password reset process with a website and forwards every challenge to the victim, who either wishes to register with the attacking site or to access a particular resource on it. The attack has several variants, including exploitation of a password reset process that relies on the victim’s mobile phone, using either SMS or phone call. We evaluated the PRMitM attacks on Google and Facebook users in several experiments, and found that their password reset process is vulnerable to the PRMitM attack. Other websites and some popular mobile applications are vulnerable as well. Although solutions seem trivial in some cases, our experiments show that the straightforward solutions are not as effective as expected. We designed two secure password reset processes and evaluated them on users of Google and Facebook. Our results indicate a significant improvement in security. Since millions of accounts are currently vulnerable to the PRMitM attack, we also present a list of recommendations for implementing and auditing the password reset process.
An Experimental Security Analysis of an Industrial Robot Controller
Davide Quarta (Politecnico di Milano), Marcello Pogliani (Politecnico di Milano), Mario Polino (Politecnico di Milano), Federico Maggi (Trend Micro Inc.), Andrea Maria Zanchettin (Politecnico di Milano), Stefano Zanero (Politecnico di Milano)
Industrial robots, automated manufacturing, and efficient logistics processes are at the heart of the upcoming fourth industrial revolution. While there are seminal studies on the vulnerabilities of cyber-physical systems in the industry, as of today there has been no systematic analysis of the security of industrial robot controllers. We examine the standard architecture of an industrial robot and analyze a concrete deployment from a systems security standpoint. Then, we propose an attacker model and confront it with the minimal set of requirements that industrial robots should honor: precision in sensing the environment, correctness in execution of control logic, and safety for human operators. Following an experimental and practical approach, we then show how our modeled attacker can subvert such requirements through the exploitation of software vulnerabilities, leading to severe consequences that are unique to the robotics domain. We conclude by discussing safety standards and security challenges in industrial robotics.

Break (30 Minutes)

03:20PM - 03:50PM

Session #4: Systems Security and Authentication

03:50PM - 05:30PM

Protecting Bare-metal Embedded Systems with Privilege Overlays
Abraham A Clements (Purdue and Sandia National Labs), Naif Saleh Almakhdhub (Purdue), Khaled Saab (Georgia Institute of Technology), Prashast Srivastava (Purdue), Jinkyu Koo (Purdue), Saurabh Bagchi (Purdue), Mathias Payer (Purdue)
Embedded systems are ubiquitous in every aspect of modern life. As the Internet of Things expands, our dependence on these systems increases. Many of these interconnected systems are and will be low cost bare-metal systems, executing without an operating system. Bare-metal systems rarely employ any security protection mechanisms, and their development assumptions (unrestricted access to all memory and instructions) and constraints (runtime, energy, and memory) make applying protections challenging. To address these challenges we present EPOXY, an LLVM-based embedded compiler. We apply a novel technique, called privilege overlaying, wherein operations requiring privileged execution are identified and only these operations execute in privileged mode. This provides the foundation on which code integrity, adapted control-flow hijacking defenses, and protections for sensitive IO are applied. We also design fine-grained randomization schemes that work within the constraints of bare-metal systems to provide further protection against control-flow and data corruption attacks. These defenses prevent code injection attacks and ROP attacks from scaling across large sets of devices. We evaluate the performance of our combined defense mechanisms for a suite of 75 benchmarks and 3 real-world IoT applications. Our results for the application case studies show that EPOXY has, on average, a 1.8% increase in execution time and a 0.5% increase in energy usage.
NORAX: Enabling Execute-Only Memory for COTS Binaries on AArch64
Yaohui Chen (Stony Brook University), Dongli Zhang (Stony Brook University), Ruowen Wang (Samsung Research America), Rui Qiao (Stony Brook University), Ahmed M. Azab (Samsung Research America), Long Lu (Stony Brook University), Hayawardh Vijayakumar (Samsung Research America), Wenbo Shen (Samsung Research America)
Code reuse attacks exploiting memory disclosure vulnerabilities can bypass all deployed mitigations. One promising defense against this class of attacks is to enable execute-only memory (XOM) protection on top of fine-grained address space layout randomization (ASLR). However, recent works implementing XOM, despite their efficacy, only protect programs that have been (re)built with new compiler support, leaving commercial-off-the-shelf (COTS) binaries and source-unavailable programs unprotected. We present the design and implementation of NORAX, a practical system that retrofits XOM into stripped COTS binaries on AArch64 platforms. Unlike previous techniques, NORAX requires neither source code nor debugging symbols. NORAX statically transforms existing binaries so that during runtime their code sections can be loaded into XOM memory pages with embedded data relocated and data references properly updated. NORAX allows transformed binaries to leverage the new hardware-based XOM support—a feature widely available on AArch64 platforms (e.g., recent mobile devices) yet virtually unused due to the incompatibility of existing binaries. Furthermore, NORAX is designed to co-exist with other COTS binary hardening techniques, such as in-place randomization (IPR). We apply NORAX to the commonly used Android system binaries running on SAMSUNG Galaxy S6 and LG Nexus 5X devices. The results show that NORAX on average slows down the execution of transformed binaries by 1.18% and increases their memory footprint by 2.21%, suggesting NORAX is practical for real-world adoption.
Securing Augmented Reality Output
Kiron Lebeck (University of Washington), Kimberly Ruth (University of Washington), Tadayoshi Kohno (University of Washington), Franziska Roesner (University of Washington)
Augmented reality (AR) technologies, such as Microsoft’s HoloLens head-mounted display and AR-enabled car windshields, are rapidly emerging. AR applications provide users with immersive virtual experiences by capturing input from a user’s surroundings and overlaying virtual output on the user’s perception of the real world. These applications enable users to interact with and perceive virtual content in fundamentally new ways. However, the immersive nature of AR applications raises serious security and privacy concerns. Prior work has focused primarily on input privacy risks stemming from applications with unrestricted access to sensor data. However, the risks associated with malicious or buggy AR output remain largely unexplored. For example, an AR windshield application could intentionally or accidentally obscure oncoming vehicles or safety-critical output of other AR applications. In this work, we address the fundamental challenge of securing AR output in the face of malicious or buggy applications. We design, prototype, and evaluate Arya, an AR platform that controls application output according to policies specified in a constrained yet expressive policy framework. In doing so, we identify and overcome numerous challenges in securing AR output.
SysPal: System-guided Pattern Locks for Android
Geumhwan Cho (Sungkyunkwan University), Jun Ho Huh (Software R&D Center, Samsung Electronics), Junsung Cho (Sungkyunkwan University), Seongyeol Oh (Sungkyunkwan University), Youngbae Song (Sungkyunkwan University), Hyoungshick Kim (Sungkyunkwan University)
To improve the security of user-chosen Android screen lock patterns, we propose a novel system-guided pattern lock scheme called “SysPal” that mandates the use of a small number of randomly selected points while selecting a pattern. Users are given the freedom to use those mandated points at any position. We conducted a large-scale online study with 1,717 participants to evaluate the security and usability of three SysPal policies, varying the number of mandatory points that must be used (upon selecting a pattern) from one to three. Our results suggest that the two SysPal policies that mandate the use of one and two points can help users select significantly more secure patterns compared to the current Android policy: 22.58% and 23.19% fewer patterns were cracked. Those two SysPal policies, however, did not show any statistically significant inferiority in pattern recall success rate (the percentage of participants who correctly recalled their pattern after 24 hours). In our lab study, we asked participants to install our screen unlock application on their own Android device, and observed their real-life phone unlock behaviors for a day. Again, our lab study did not show any statistically significant difference in memorability for those two SysPal policies compared to the current Android policy.
Multi-touch Authentication Using Hand Geometry and Behavioral Information
Yunpeng Song (Xi'an Jiaotong University), Zhongmin Cai (Xi'an Jiaotong University), Zhi-Li Zhang (University of Minnesota)
In this paper we present a simple and reliable authentication method for mobile devices equipped with multi-touch screens such as smart phones, tablets and laptops. Users are authenticated by performing specially designed multi-touch gestures with one swipe on the touchscreen. During this process, both hand geometry and behavioral characteristics are recorded in the multi-touch traces and used for authentication. By combining both geometry information and behavioral characteristics, we overcome the problem of behavioral variability plaguing many behavior-based authentication techniques, which often leads to less accurate authentication or a poor user experience, while also ensuring the discernibility of different users with possibly similar hand shapes. We evaluate the design of the proposed authentication method thoroughly using a large multi-touch dataset collected from 161 subjects with an elaborately designed procedure to capture behavior variability. The results demonstrate that the fusion of behavioral information with hand geometry features produces effective resistance to behavioral variability over time while at the same time retaining discernibility. Our approach achieves an EER of 5.84% with only 5 training samples, and the performance is further improved to an EER of 1.88% with enough training. Security analyses are also conducted to demonstrate that the proposed method is resilient against common smartphone authentication threats such as smudge attacks, shoulder surfing attacks and statistical attacks. Finally, user acceptance of the method is illustrated via a usability study.

Poster Reception

05:30PM - 07:30PM

May 23


Registration

07:00AM - 05:00PM

Breakfast

07:30AM - 08:30AM

Awards

08:30AM - 08:40AM

Session #5: Bitcoin and Distributed Systems

08:40AM - 10:20AM

Hijacking Bitcoin: Routing Attacks on Cryptocurrencies
Maria Apostolaki (ETH Zürich), Aviv Zohar (Hebrew University), Laurent Vanbever (ETH Zürich)
As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has been left out: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic. This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack a few (<100) BGP prefixes to isolate ~50% of the mining power—even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with a few key Bitcoin messages. We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data. The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately.
Catena: Efficient Non-equivocation via Bitcoin
Alin Tomescu (MIT), Srinivas Devadas (MIT)
We present Catena, an efficiently-verifiable Bitcoin witnessing scheme. Catena enables any number of thin clients, such as mobile phones, to efficiently agree on a log of application-specific statements managed by an adversarial server. Catena implements a log as an OP_RETURN transaction chain and prevents forks in the log by leveraging Bitcoin’s security against double spends. Specifically, if a log server wants to equivocate it has to double spend a Bitcoin transaction output. Thus, Catena logs are as hard to fork as the Bitcoin blockchain: an adversary without a large fraction of the network’s computational power cannot fork Bitcoin and thus cannot fork a Catena log either. However, different from previous Bitcoin-based work, Catena decreases the bandwidth requirements of log auditors from 90 GB to only tens of megabytes. More precisely, our clients only need to download all Bitcoin block headers (currently less than 35 MB) and a small, 600-byte proof for each statement in a block. We implement Catena in Java using the bitcoinj library and use it to extend CONIKS, a recent key transparency scheme, to witness its public-key directory in the Bitcoin blockchain where it can be efficiently verified by auditors. We show that Catena can secure many systems today, such as public-key directories, Tor directory servers and software transparency schemes.
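The non-equivocation argument rests on one invariant: every log entry spends the single output of the previous entry, so publishing two conflicting entries at the same position requires a Bitcoin double spend. The toy model below substitutes an in-memory chain for Bitcoin to show the invariant; real Catena clients check SPV proofs against the actual block headers.

    # Toy model of Catena's log-as-transaction-chain invariant (not real Bitcoin).
    class Chain:
        """Stand-in for Bitcoin: tracks spent outputs and refuses double spends."""
        def __init__(self):
            self.spent = set()
            self.txs = {}

        def append(self, txid, prev_out, statement):
            if prev_out in self.spent:
                raise ValueError("double spend rejected: equivocation at " + prev_out)
            self.spent.add(prev_out)
            self.txs[txid] = (prev_out, statement)   # statement ~ OP_RETURN payload
            return txid + ":0"                       # the entry's single spendable output

    log = Chain()
    out = log.append("tx1", "genesis:0", "key for alice = A1")
    out = log.append("tx2", out, "key for bob = B1")
    try:
        log.append("tx2b", "tx1:0", "key for bob = B2")  # fork: respend tx1's output
    except ValueError as e:
        print(e)   # the fork is rejected, so every client sees the same single history
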
IKP: Turning a PKI Around with Decentralized Automated Incentives
Stephanos Matsumoto (Carnegie Mellon University/ETH Zurich), Raphael M. Reischuk (ETH Zurich)
Despite a great deal of work to improve the TLS PKI, CA misbehavior continues to occur, resulting in unauthorized certificates that can be used to mount man-in-the-middle attacks against HTTPS sites. CAs lack the incentives to invest in higher security, and the manual effort required to report a rogue certificate deters many from contributing to the security of the TLS PKI. In this paper, we present IKP, a platform that automates responses to unauthorized certificates and provides incentives for CAs to behave correctly and for others to report potentially unauthorized certificates. Domains in IKP specify criteria for their certificates, and CAs specify reactions such as financial penalties that execute in case of unauthorized certificate issuance. By leveraging smart contracts and blockchain-based consensus, we can decentralize IKP while still providing automated incentives. We describe a theoretical model for payment flows and implement IKP in Ethereum to show that decentralizing and automating PKIs with financial incentives is both economically sound and technically viable.
Augur: Internet-Wide Detection of Connectivity Disruptions
Paul Pearce (UC Berkeley), Roya Ensafi (Princeton), Frank Li (UC Berkeley), Nick Feamster (Princeton), Vern Paxson (UC Berkeley)
Anecdotes, news reports, and policy briefings collectively suggest that Internet censorship practices are pervasive. The scale and diversity of Internet censorship practices make it difficult to precisely monitor where, when, and how censorship occurs, as well as what is censored. The potential risks in performing the measurements make this problem even more challenging. As a result, many accounts of censorship begin—and end—with anecdotes or short-term studies from only a handful of vantage points. We seek to instead continuously monitor information about Internet reachability, to capture the onset or termination of censorship across regions and ISPs. To achieve this goal, we introduce Augur, a method and accompanying system that utilizes TCP/IP side channels to measure reachability between two Internet locations without directly controlling a measurement vantage point at either location. Using these side channels, coupled with techniques to ensure safety by not implicating individual users, we develop scalable, statistically robust methods to infer network-layer filtering, and implement a corresponding system capable of performing continuous monitoring of global censorship. We validate our measurements of Internet-wide disruption in nearly 180 countries over 17 days against sites known to be frequently blocked; we also identify the countries where connectivity disruption is most prevalent.
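The underlying side channel can be shown with a toy simulation: a reflector whose IP stack assigns IP IDs from one global counter leaks, through that counter, whether spoofed probes made it emit responses. Everything here is schematic (two probes, no noise); the deployed system uses sequences of probes and statistical hypothesis tests, plus the safety measures mentioned above.

    # Schematic simulation of the IP ID side channel Augur builds on.
    class Reflector:
        """Host with a single global IP ID counter, observable by anyone it answers."""
        def __init__(self):
            self.ipid = 1000
        def send_packet(self):       # every packet the reflector emits bumps the counter
            self.ipid += 1
            return self.ipid

    class Site:
        def __init__(self, blocked):
            self.blocked = blocked   # does filtering drop site<->reflector traffic?
        def receive_spoofed_syn(self, reflector):
            if not self.blocked:     # site answers; the reflector's RST consumes an IP ID
                reflector.send_packet()

    def measure(site):
        r = Reflector()
        before = r.send_packet()     # probe 1: sample the reflector's counter
        site.receive_spoofed_syn(r)  # SYN spoofed with the reflector's source address
        after = r.send_packet()      # probe 2: extra increments reveal reachability
        return "reachable" if after - before > 1 else "filtered"

    print("open site:    ", measure(Site(blocked=False)))
    print("censored site:", measure(Site(blocked=True)))
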
Scalable Bias-Resistant Distributed Randomness
Ewa Syta (Trinity College), Philipp Jovanovic (École Polytechnique Fédérale de Lausanne), Eleftherios Kokoris Kogias (École Polytechnique Fédérale de Lausanne), Nicolas Gailly (École Polytechnique Fédérale de Lausanne), Linus Gasser (École Polytechnique Fédérale de Lausanne), Ismail Khoffi (École Polytechnique Fédérale de Lausanne), Michael J. Fischer (Yale University), Bryan Ford (École Polytechnique Fédérale de Lausanne)
Bias-resistant public randomness is a critical component in many (distributed) protocols. Generating public randomness is hard, however, because active adversaries may behave dishonestly to bias public random choices toward their advantage. Existing solutions do not scale to hundreds or thousands of participants, as is needed in many decentralized systems. We propose two large-scale distributed protocols, RandHound and RandHerd, which provide publicly-verifiable, unpredictable, and unbiasable randomness against Byzantine adversaries. RandHound relies on an untrusted client to divide a set of randomness servers into groups for scalability, and it depends on the pigeonhole principle to ensure output integrity, even for non-random, adversarial group choices. RandHerd implements an efficient, decentralized randomness beacon. RandHerd is structurally similar to a BFT protocol, but uses RandHound in a one-time setup to arrange participants into verifiably unbiased random secret-sharing groups, which then repeatedly produce random output at predefined intervals. Our prototype demonstrates that RandHound and RandHerd achieve good performance across hundreds of participants while retaining a low failure probability by properly selecting protocol parameters, such as a group size and secret-sharing threshold. For example, when sharding 512 nodes into groups of 32, our experiments show that RandHound can produce fresh random output after 240 seconds. RandHerd, after a setup phase of 260 seconds, is able to generate fresh random output in intervals of approximately 6 seconds. For this configuration, both protocols operate at a failure probability of at most 0.08% against a Byzantine adversary.

Break (30 Minutes)

10:20AM - 10:50AM

Session #6: TLS Session Security

10:50AM - 12:30PM

Implementing and Proving the TLS 1.3 Record Layer
Karthikeyan Bhargavan (Inria Paris-Rocquencourt), Antoine Delignat-Lavaud (Microsoft Research), Cédric Fournet (Microsoft Research), Markulf Kohlweiss (Microsoft Research), Jianyang Pan (Inria Paris-Rocquencourt), Jonathan Protzenko (Microsoft Research), Aseem Rastogi (Microsoft Research), Nikhil Swamy (Microsoft Research), Santiago Zanella-Béguelin (Microsoft Research), Jean Karim Zinzindohoué (Inria Paris-Rocquencourt)
The record layer is the main bridge between TLS applications and internal sub-protocols. Its core functionality is an elaborate form of authenticated encryption: streams of messages for each sub-protocol (handshake, alert, and application data) are fragmented, multiplexed, and encrypted with optional padding to hide their lengths. Conversely, the sub-protocols may provide fresh keys or signal stream termination to the record layer. Compared to prior versions, TLS 1.3 discards obsolete schemes in favor of a common construction for Authenticated Encryption with Associated Data (AEAD), instantiated with algorithms such as AES-GCM and ChaCha20-Poly1305. It differs from TLS 1.2 in its use of padding, associated data and nonces. It also encrypts the content-type used to multiplex between sub-protocols. New protocol features such as early application data (0-RTT and 0.5-RTT) and late handshake messages require additional keys and a more general model of stateful encryption. We build and verify a reference implementation of the TLS record layer and its cryptographic algorithms in F*, a dependently typed language where security and functional guarantees can be specified as pre- and post-conditions. We reduce the high-level security of the record layer to cryptographic assumptions on its ciphers. Each step in the reduction is verified by typing an F* module; for each step that involves a cryptographic assumption, this module precisely captures the corresponding game. We first verify the functional correctness and injectivity properties of our implementations of one-time MAC algorithms (Poly1305 and GHASH) and provide a generic proof of their security given these two properties. We show the security of a generic AEAD construction built from any secure one-time MAC and PRF. We extend AEAD, first to stream encryption, then to length-hiding, multiplexed encryption. Finally, we build a security model of the record layer against an adversary that controls the TLS sub-protocols. We compute concrete security bounds for the AES_128_GCM, AES_256_GCM, and CHACHA20_POLY1305 ciphersuites, and derive recommended limits on sent data before re-keying. We plug our implementation of the record layer into the miTLS library, confirm that they interoperate with Chrome and Firefox, and report initial performance results. Combining our functional correctness, security, and experimental results, we conclude that the new TLS record layer (as described in RFCs and cryptographic standards) is provably secure, and we provide its first verified implementation.
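The record-protection pattern being verified matches what TLS 1.3 standardized (RFC 8446): the true content type is appended to the plaintext before encryption, the AEAD nonce is the record sequence number XORed into a static write IV, and the 5-byte record header serves as associated data. Below is a brief sketch using the Python cryptography package, with random stand-ins for the key and IV that a real stack derives from the handshake.

    # Sketch of TLS 1.3 record protection (keys are random stand-ins, not a handshake).
    import os, struct
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key, static_iv = os.urandom(16), os.urandom(12)   # from the key schedule in real TLS

    def per_record_nonce(static_iv, seq):
        # Left-pad the 64-bit sequence number and XOR it into the static IV.
        seq_bytes = struct.pack(">Q", seq).rjust(len(static_iv), b"\x00")
        return bytes(a ^ b for a, b in zip(static_iv, seq_bytes))

    def seal(seq, content, content_type=23):
        inner = content + bytes([content_type])       # optional zero padding goes here
        header = b"\x17\x03\x03" + struct.pack(">H", len(inner) + 16)  # +16: GCM tag
        return header + AESGCM(key).encrypt(per_record_nonce(static_iv, seq), inner, header)

    record = seal(0, b"hello over TLS 1.3")
    print(record[:5].hex(), "+", len(record) - 5, "bytes of ciphertext")
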
Verified Models and Reference Implementations for the TLS 1.3 Standard Candidate
Karthikeyan Bhargavan (INRIA), Bruno Blanchet (INRIA), Nadim Kobeissi (INRIA)
TLS 1.3 is the next version of the Transport Layer Security (TLS) protocol. Its clean-slate design is a reaction both to the increasing demand for low-latency HTTPS connections and to a series of recent high-profile attacks on TLS. The hope is that a fresh protocol with modern cryptography will prevent legacy problems; the danger is that it will expose new kinds of attacks, or reintroduce old flaws that were fixed in previous versions of TLS. After 18 drafts, the protocol is nearing completion, and the working group has appealed to researchers to analyze the protocol before publication. This paper responds by presenting a comprehensive analysis of the TLS 1.3 Draft-18 protocol. We seek to answer three questions that have not been fully addressed in previous work on TLS 1.3: (1) Does TLS 1.3 prevent well-known attacks on TLS 1.2, such as Logjam or the Triple Handshake, even if it is run in parallel with TLS 1.2? (2) Can we mechanically verify the computational security of TLS 1.3 under standard (strong) assumptions on its cryptographic primitives? (3) How can we extend the guarantees of the TLS 1.3 protocol to the details of its implementations? To answer these questions, we propose a methodology for developing verified symbolic and computational models of TLS 1.3 hand-in-hand with a high-assurance reference implementation of the protocol. We present symbolic ProVerif models for various intermediate versions of TLS 1.3 and evaluate them against a rich class of attacks to reconstruct both known and previously unpublished vulnerabilities that influenced the current design of the protocol. We present a computational CryptoVerif model for TLS 1.3 Draft-18 and prove its security. We present RefTLS, an interoperable implementation of TLS 1.0-1.3 and automatically analyze its protocol core by extracting a ProVerif model from its typed JavaScript code.
SymCerts: Practical Symbolic Execution For Exposing Noncompliance in X.509 Certificate Validation Implementations
Sze Yiu Chau (Purdue University), Omar Chowdhury (The University of Iowa), Endadul Hoque (Purdue University), Huangyi Ge (Purdue University), Aniket Kate (Purdue University), Cristina Nita-Rotaru (Northeastern University), Ninghui Li (Purdue University)
The X.509 Public-Key Infrastructure has long been used in the SSL/TLS protocol to achieve authentication. A recent trend of Internet-of-Things (IoT) systems employing small footprint SSL/TLS libraries for secure communication has further propelled its prominence. The security guarantees provided by X.509 hinge on the assumption that the underlying implementation rigorously scrutinizes X.509 certificate chains, and accepts only the valid ones. Noncompliant implementations of X.509 can potentially lead to attacks and/or interoperability issues. In the literature, black-box fuzzing has been used to find flaws in X.509 validation implementations; fuzzing, however, cannot guarantee coverage and thus severe flaws may remain undetected. To thoroughly analyze X.509 implementations in small footprint SSL/TLS libraries, this paper takes the complementary approach of using symbolic execution. We observe that symbolic execution, a technique proven to be effective in finding software implementation flaws, can also be leveraged to expose noncompliance in X.509 implementations. Directly applying an off-the-shelf symbolic execution engine on SSL/TLS libraries is, however, not practical due to the problem of path explosion. To this end, we propose the use of SymCerts, which are X.509 certificate chains carefully constructed with a mixture of symbolic and concrete values. Utilizing SymCerts and some domain-specific optimizations, we symbolically execute the certificate chain validation code of each library and extract path constraints describing its accepting and rejecting certificate universes. These path constraints help us identify missing checks in different libraries. To expose subtle but intricate noncompliance with the X.509 standard, we cross-validate the constraints extracted from different libraries to find further implementation flaws. Our analysis of 9 small footprint X.509 implementations has uncovered 48 instances of noncompliance. Findings and suggestions provided by us have already been incorporated by developers into newer versions of their libraries.
HVLearn: Automated Black-box Analysis of Hostname Verification in SSL/TLS Implementations
Suphannee Sivakorn (Columbia University), George Argyros (Columbia University), Kexin Pei (Columbia University), Angelos D. Keromytis (Columbia University), Suman Jana (Columbia University)
SSL/TLS is the most commonly deployed family of protocols for securing network communications. The security guarantees of SSL/TLS are critically dependent on the correct validation of the X.509 server certificates presented during the handshake stage of the SSL/TLS protocol. Hostname verification is a critical component of the certificate validation process that verifies the remote server’s identity by checking if the hostname of the server matches any of the names present in the X.509 certificate. Hostname verification is a highly complex process due to the presence of numerous features and corner cases such as wildcards, IP addresses, international domain names, and so forth. Therefore, testing hostname verification implementations presents a challenging task. In this paper, we present HVLearn, a novel black-box testing framework for analyzing SSL/TLS hostname verification implementations, which is based on automata learning algorithms. HVLearn utilizes a number of certificate templates, i.e., certificates with a common name (CN) set to a specific pattern, in order to test different rules from the corresponding specification. For each certificate template, HVLearn uses automata learning algorithms to infer a Deterministic Finite Automaton (DFA) that describes the set of all hostnames that match the CN of a given certificate. Once a model is inferred for a certificate template, HVLearn checks the model for bugs by finding discrepancies with the inferred models from other implementations or by checking against regular-expression-based rules derived from the specification. The key insight behind our approach is that the acceptable hostnames for a given certificate template form a regular language. Therefore, we can leverage automata learning techniques to efficiently infer DFA models that accept the corresponding regular language. We use HVLearn to analyze the hostname verification implementations in a number of popular SSL/TLS libraries and applications written in a diverse set of languages like C, Python, and Java. We demonstrate that HVLearn can achieve on average 11.21% higher code coverage than existing black/gray-box fuzzing techniques. By comparing the DFA models inferred by HVLearn, we found 8 unique violations of the RFC specifications in the tested hostname verification implementations. Several of these violations are critical and can render the affected implementations vulnerable to active man-in-the-middle attacks.
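The rules such templates probe are of the kind sketched below: case-insensitive label comparison, with a wildcard accepted only as the complete leftmost label and never spanning multiple labels. This simplified matcher is illustrative only; it ignores IP addresses, international domain names, and subjectAltName handling, which real implementations must also cover.

    # Simplified hostname matching of the kind hostname verifiers implement.
    def hostname_matches(pattern, hostname):
        p, h = pattern.lower().split("."), hostname.lower().split(".")
        if len(p) != len(h):            # a wildcard never spans labels
            return False
        if p[0] == "*":
            return len(p) >= 3 and p[1:] == h[1:]   # forbid overly broad "*.com"
        return p == h

    for pat, host in [("*.example.com", "www.example.com"),    # True
                      ("*.example.com", "a.b.example.com"),    # False: too many labels
                      ("www.example.com", "WWW.EXAMPLE.COM")]: # True: case-insensitive
        print(pat, "~", host, "->", hostname_matches(pat, host))
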
CRLite: A Scalable System for Pushing All TLS Revocations to All Browsers
James Larisch (Northeastern University), David Choffnes (Northeastern University), Dave Levin (University of Maryland), Bruce M. Maggs (Duke University and Akamai Technologies), Alan Mislove (Northeastern University), Christo Wilson (Northeastern University)
Currently, no major browser fully checks for TLS/SSL certificate revocations. This is largely due to the fact that the deployed mechanisms for disseminating revocations (CRLs, OCSP, OCSP Stapling, CRLSet, and OneCRL) are each either incomplete, insecure, inefficient, slow to update, not private, or some combination thereof. In this paper, we present CRLite, an efficient and easily-deployable system for proactively pushing all TLS certificate revocations to browsers. CRLite servers aggregate revocation information for all known, valid TLS certificates on the web, and store them in a space-efficient filter cascade data structure. Browsers periodically download and use this data to check for revocations of observed certificates in realtime. CRLite does not require any additional trust beyond the existing PKI, and it allows clients to adopt a fail-closed security posture even in the face of network errors or attacks that make revocation information temporarily unavailable. We present a prototype of CRLite that processes TLS certificates gathered by Rapid7, the University of Michigan, and Google’s Certificate Transparency on the server-side, with a Firefox extension on the client-side. Comparing CRLite to an idealized browser that performs correct CRL/OCSP checking, we show that CRLite reduces latency and eliminates privacy concerns. Moreover, CRLite has low bandwidth costs: it can represent all certificates with an initial download of 10 MB (less than 1 byte per revocation) followed by daily updates of 580 KB on average. Taken together, our results demonstrate that complete TLS/SSL revocation checking is within reach for all clients.
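The filter cascade deserves a sketch: because the full set of known certificates is enumerable, a sequence of Bloom filters, each holding the previous level's false positives, yields revocation answers with no errors on that set. The Bloom sizes and hash counts below are toy choices; the deployed system tunes per-level parameters.

    # Toy Bloom filter cascade: exact revocation answers over a known, closed set.
    import hashlib

    class Bloom:
        def __init__(self, items, bits, hashes=3):
            self.bits, self.k, self.v = bits, hashes, bytearray(bits)
            for it in items:
                for i in self._idx(it):
                    self.v[i] = 1
        def _idx(self, item):
            for j in range(self.k):
                d = hashlib.sha256(b"%d|" % j + item).digest()
                yield int.from_bytes(d[:8], "big") % self.bits
        def __contains__(self, item):
            return all(self.v[i] for i in self._idx(item))

    def build_cascade(revoked, valid):
        include, exclude, levels = set(revoked), set(valid), []
        while include:                       # each level captures the last level's FPs
            f = Bloom(include, bits=max(64, 16 * len(include)))
            levels.append(f)
            include, exclude = {x for x in exclude if x in f}, include
        return levels

    def is_revoked(levels, cert):
        for depth, f in enumerate(levels):
            if cert not in f:                # fell out: the level's parity decides
                return depth % 2 == 1
        return len(levels) % 2 == 1

    revoked = [b"cert%d" % i for i in range(50)]
    valid = [b"cert%d" % i for i in range(50, 2000)]
    cascade = build_cascade(revoked, valid)
    assert all(is_revoked(cascade, c) for c in revoked)
    assert not any(is_revoked(cascade, c) for c in valid)
    print(len(cascade), "levels, exact on all known certificates")
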

Lunch

12:30PM - 01:30PM

Session #7: Software Security

01:30PM - 03:10PM

Finding and Preventing Bugs in JavaScript Bindings
Fraser Brown (Stanford University), Shravan Narayan (UCSD), Riad S. Wahby (Stanford University), Dawson Engler (Stanford University), Ranjit Jhala (UCSD), Deian Stefan (UCSD)
JavaScript, like many high-level languages, relies on runtime systems written in low-level C and C++. For example, the Node.js runtime system gives JavaScript code access to the underlying filesystem, networking, and I/O by implementing utility functions in C++. Since C++’s type system, memory model, and execution model differ significantly from JavaScript’s, JavaScript code must call these runtime functions via intermediate binding layer code that translates type, state, and failure between the two languages. Unfortunately, binding code is both hard to avoid and hard to get right. This paper describes several types of exploitable errors that binding code creates, and develops both a suite of easy-to-build static checkers to detect such errors and a backwards-compatible, low-overhead API to prevent them. We show that binding flaws are a serious security problem by using our checkers to craft 81 proof-of-concept exploits for security flaws in the binding layers of Node.js and Chrome, runtime systems that support hundreds of millions of users. As one practical measure of binding bug severity, we were awarded $6,000 in bounties for just two Chrome bug reports.
Skyfire: Data-Driven Seed Generation for Fuzzing
Junjie WANG (Nanyang Technological University), Bihuan CHEN (Nanyang Technological University), Lei WEI (Nanyang Technological University), Yang LIU (Nanyang Technological University)
Programs that take highly-structured files as inputs normally process inputs in stages: syntax parsing, semantic checking, and application execution. Deep bugs are often hidden in the application execution stage, and it is non-trivial to automatically generate test inputs to trigger them. Mutation-based fuzzing generates test inputs by modifying well-formed seed inputs randomly or heuristically. Most inputs are rejected at the early syntax parsing stage. In contrast, generation-based fuzzing generates inputs from a specification (e.g., a grammar). These inputs can quickly carry fuzzing beyond the syntax parsing stage. However, most inputs fail to pass the semantic checking (e.g., violating semantic rules), which restricts their capability of discovering deep bugs. In this paper, we propose a novel data-driven seed generation approach, named Skyfire, which leverages the knowledge in the vast amount of existing samples to generate well-distributed seed inputs for fuzzing programs that process highly-structured inputs. Skyfire takes as inputs a corpus and a grammar, and consists of two steps. The first step of Skyfire learns a probabilistic context-sensitive grammar (PCSG) to specify both syntax features and semantic rules, and then the second step leverages the learned PCSG to generate seed inputs. We fed the collected samples and the inputs generated by Skyfire as seeds of AFL to fuzz several open-source XSLT and XML engines (i.e., Sablotron, libxslt, and libxml2). The results have demonstrated that Skyfire can generate well-distributed inputs and thus significantly improve the code coverage (i.e., 20% for line coverage and 15% for function coverage on average) and the bug-finding capability of fuzzers. We also used the inputs generated by Skyfire to fuzz the closed-source JavaScript and rendering engine of Internet Explorer 11. Altogether, we discovered 19 new memory corruption bugs (among which there are 16 new vulnerabilities) and 32 denial-of-service bugs.
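The toy sketch below shows the sampling half of this approach under stated assumptions: the hand-written production counts stand in for ones learned from a corpus, and the one-level parent context is a drastic simplification of a PCSG. Expansions are chosen in proportion to how often they appeared under the same context.

    import random
    from collections import Counter

    # Hypothetical production counts, keyed by (parent, nonterminal) to
    # capture a one-level context.
    counts = {
        ("ROOT", "elem"): Counter({("<a>", "attrs", "</a>"): 7, ("<b/>",): 3}),
        ("elem", "attrs"): Counter({('href="x"',): 8, ("",): 2}),
    }

    def sample(symbol, parent="ROOT"):
        key = (parent, symbol)
        if key not in counts:
            return symbol  # terminal: emit as-is
        productions = counts[key]
        choice = random.choices(list(productions), weights=productions.values())[0]
        return "".join(sample(s, parent=symbol) for s in choice)

    print(sample("elem"))   # e.g. '<a>href="x"</a>', biased toward common corpus shapes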
VUDDY: A Scalable Approach for Vulnerable Code Clone Discovery
Seulbae Kim (Korea University), Seunghoon Woo (Korea University), Heejo Lee (Korea University), Hakjoo Oh (Korea University)
The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones and this impairs their ability to accurately find “vulnerable” clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hours and 17 minutes, after which it requires a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities, with high accuracy. In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.
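A rough sketch of the fingerprinting pipeline follows, with stated assumptions: the abstraction rules and hashing details are simplified stand-ins for VUDDY’s actual security-aware abstraction. Identifiers and literals are abstracted away so renamed clones hash identically, and the length of the abstracted body serves as the filtering key so only same-length signatures are ever compared.

    import hashlib, re

    def abstract(body):
        # Simplified abstraction: strings, identifiers (except a few
        # keywords), and numeric literals become placeholders.
        body = re.sub(r'"[^"]*"', '"S"', body)
        body = re.sub(r'\b(?!(?:if|for|while|return)\b)[A-Za-z_]\w*\b', 'V', body)
        body = re.sub(r'\b\d+\b', '0', body)
        return re.sub(r'\s+', '', body)

    def fingerprint(func_body):
        a = abstract(func_body)
        return len(a), hashlib.md5(a.encode()).hexdigest()  # (length bucket, hash)

    vuln_db = {fingerprint("if (len > 16) { memcpy(dst, src, len); }")}
    candidate = "if (n > 16) { memcpy(out, in, n); }"       # renamed clone
    print("clone of known vulnerable function:", fingerprint(candidate) in vuln_db)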
NEZHA: Efficient Domain-independent Differential Testing
Theofilos Petsios (Columbia University), Adrian Tang (Columbia University), Salvatore Stolfo (Columbia University), Angelos D. Keromytis (Columbia University), Suman Jana (Columbia University)
Differential testing uses similar programs as cross-referencing oracles to find semantic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Unfortunately, existing differential testing tools are domain-specific and inefficient, requiring large numbers of test inputs to find a single bug. In this paper, we address these issues by designing and implementing Nezha, an efficient input-format-agnostic differential testing framework. The key insight behind Nezha’s design is that current tools generate inputs by simply borrowing techniques designed for finding crash or memory corruption bugs in individual programs (e.g., maximizing code coverage). By contrast, Nezha exploits the behavioral asymmetries between multiple test programs to focus on inputs that are more likely to trigger semantic bugs. We introduce the notion of δ-diversity, which summarizes the observed asymmetries between the behaviors of multiple test applications. Based on δ-diversity, we design two efficient domain-independent input generation mechanisms for differential testing, one gray-box and one black-box. We demonstrate that both of these input generation schemes are significantly more efficient than existing tools at finding semantic bugs in real-world, complex software. Nezha’s average rate of finding differences is 52 times and 27 times higher than that of Frankencerts and Mucerts, two popular domain-specific differential testing tools that check SSL/TLS certificate validation implementations, respectively. Moreover, performing differential testing with Nezha results in 6 times more semantic bugs per tested input, compared to adapting state-of-the-art general-purpose fuzzers like American Fuzzy Lop (AFL) to differential testing by running them on individual test programs for input generation. Nezha discovered 778 unique, previously unknown discrepancies across a wide variety of applications (ELF and XZ parsers, PDF viewers and SSL/TLS libraries), many of which constitute previously unknown critical security vulnerabilities. In particular, we found two critical evasion attacks against ClamAV, allowing arbitrary malicious ELF/XZ files to evade detection. The discrepancies Nezha found in the X.509 certificate validation implementations of the tested SSL/TLS libraries range from mishandling certain types of KeyUsage extensions, to incorrect acceptance of specially crafted expired certificates, enabling man-in-the-middle attacks. All of our reported vulnerabilities have been confirmed and fixed within a week from the date of reporting.
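The sketch below is a minimal, assumption-laden rendering of the black-box flavor of this idea: an input is kept for further mutation only if the tuple of per-program outputs is new, and disagreeing tuples are reported as candidate semantic bugs. The toy parsers and mutation strategy are hypothetical.

    import random

    def mutate(data: bytes) -> bytes:
        i = random.randrange(len(data))
        return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

    def run(program, inp):
        try:
            return program(inp)
        except Exception as e:
            return type(e).__name__        # crashes are just another output value

    def delta_diversity_fuzz(programs, seeds, iterations=10_000):
        # Keep an input only if its output tuple is new (output-diversity
        # guidance); programs that disagree flag a semantic-bug candidate.
        seen, corpus, discrepancies = set(), list(seeds), []
        for _ in range(iterations):
            inp = mutate(random.choice(corpus))
            outputs = tuple(run(p, inp) for p in programs)
            if outputs not in seen:
                seen.add(outputs)
                corpus.append(inp)
                if len(set(outputs)) > 1:
                    discrepancies.append((inp, outputs))
        return discrepancies

    # Two toy "parsers" that disagree on one byte value:
    p1 = lambda b: b[0] % 3
    p2 = lambda b: -1 if b[0] == 255 else b[0] % 3
    print(delta_diversity_fuzz([p1, p2], [b"\x00\x01"])[:3])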
Backward-bounded DSE: Targeting Infeasibility Questions on Obfuscated Codes
Sébastien Bardin (CEA LIST), Robin David (CEA LIST), Jean-Yves Marion (LORIA)
Software deobfuscation is a crucial activity in security analysis and especially in malware analysis. While standard static and dynamic approaches suffer from well-known shortcomings, Dynamic Symbolic Execution (DSE) has recently been proposed as an interesting alternative, more robust than static analysis and more complete than dynamic analysis. Yet, DSE addresses only certain kinds of questions encountered by a reverser, namely feasibility questions. Many issues arising during reversing, e.g., detecting protection schemes such as opaque predicates, fall into the category of infeasibility questions. We present Backward-Bounded DSE, a generic, precise, efficient and robust method for solving infeasibility questions. We demonstrate the benefit of the method for opaque predicates and call stack tampering, and give some insight into its usage for other protection schemes. In particular, the technique has successfully been used on state-of-the-art packers as well as on the government-grade X-Tunnel malware - allowing its entire deobfuscation. Backward-Bounded DSE does not supersede existing DSE approaches, but rather complements them by addressing infeasibility questions in a scalable and precise manner. Following this line, we propose sparse disassembly, a combination of Backward-Bounded DSE and static disassembly able to enlarge dynamic disassembly in a guaranteed way, hence getting the best of dynamic and static disassembly. This work paves the way for robust, efficient and precise disassembly tools for heavily-obfuscated binaries.
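An infeasibility question in miniature, assuming the z3-solver Python package: a classic opaque predicate always evaluates the same way, so asking a solver whether the other branch is satisfiable returns unsat. The paper’s tool poses such queries over bounded backward slices of binary code; this sketch only shows the final solver step.

    from z3 import BitVec, Solver, unsat

    x = BitVec("x", 32)
    # x*(x+1) is a product of consecutive integers, hence always even,
    # so this branch condition can never hold.
    predicate = ((x * (x + 1)) & 1) == 1

    s = Solver()
    s.add(predicate)
    if s.check() == unsat:
        print("infeasible on all inputs: opaque predicate, dead branch can be pruned")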

Break (30 Minutes)

03:10PM - 03:40PM

Session #8: Information-flow Channel Security

03:40PM - 05:20PM

Leakage-Abuse Attacks on Order-Revealing Encryption
Paul Grubbs (Cornell Tech), Kevin Sekniqi (Cornell University), Vincent Bindschaedler (UIUC), Muhammad Naveed (USC), Tom Ristenpart (Cornell Tech)
Order-preserving encryption and its generalization order-revealing encryption (OPE/ORE) allow sorting, performing range queries, and filtering data — all while only having access to ciphertexts. But OPE and ORE ciphertexts necessarily leak information about plaintexts, and what level of security they provide in practice has been unclear. In this work, we introduce new leakage-abuse attacks that recover plaintexts from OPE/ORE-encrypted databases. Underlying our new attacks is a framework in which we cast the adversary’s challenge as a non-crossing bipartite matching problem. This allows easy tailoring of attacks to a specific scheme’s leakage profile. In a case study of customer records, we show attacks that recover 99% of first names, 97% of last names, and 90% of birthdates held in a database, despite all values being encrypted with the OPE scheme most widely used in practice. We also show the first attack against the recent frequency-hiding Kerschbaum scheme, to which no prior attacks have been demonstrated. Our attack recovers frequently occurring plaintexts most of the time.
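The sketch below is a simplified stand-in for the paper’s non-crossing bipartite matching: each ciphertext is matched to the auxiliary-data plaintext with the closest empirical CDF, which order preservation makes a sensible monotone assignment. The toy order-preserving “encryption” and data are hypothetical.

    def cdf_matching_attack(ciphertexts, auxiliary):
        def ecdf(values):
            return {v: sum(x <= v for x in values) / len(values) for v in set(values)}
        c_cdf, p_cdf = ecdf(ciphertexts), ecdf(auxiliary)
        return {c: min(p_cdf, key=lambda p: abs(p_cdf[p] - fc)) for c, fc in c_cdf.items()}

    enc = lambda x: 7 * x + 13          # toy order-preserving "encryption"
    column = [enc(x) for x in [1990, 1990, 1985, 1972, 1990, 1985]]
    aux = [1990, 1990, 1985, 1972, 1990, 1985, 1990]   # public auxiliary data, e.g. census birth years
    print(cdf_matching_attack(column, aux))            # recovers the plaintext behind every ciphertext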
Side-Channel Attacks on Shared Search Indexes
Liang Wang (University of Wisconsin, Madison), Paul Grubbs (Cornell Tech), Jiahui Lu (SJTU), Vincent Bindschaedler (UIUC), David Cash (Rutgers University), Thomas Ristenpart (Cornell Tech)
Full-text search systems, such as Elasticsearch and Apache Solr, enable document retrieval based on keyword queries. In many deployments these systems are multi-tenant, meaning distinct users’ documents reside in, and their queries are answered by, one or more shared search indexes. Large deployments may use hundreds of indexes across which user documents are randomly assigned. The results of a search query are filtered to remove documents to which a client should not have access. We show the existence of exploitable side channels in modern multi-tenant search. The starting point for our attacks is a decade-old observation that the TF-IDF scores used to rank search results can potentially leak information about other users’ documents. To the best of our knowledge, no attacks have been shown that exploit this side channel in practice, and constructing a working side channel requires overcoming numerous challenges in real deployments. We nevertheless develop a new attack, called STRESS (Search Text RElevance Score Side channel), and in so doing show how an attacker can map out the number of indexes used by a service, obtain placement of a document within each index, and then exploit co-tenancy with all other users to (1) discover the terms in other tenants’ documents or (2) determine the number of documents (belonging to other tenants) that contain a term of interest. In controlled experiments, we demonstrate the attacks on popular services such as GitHub and Xen.do. We conclude with a discussion of countermeasures.
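To see why TF-IDF scores leak, consider the classic formula score(t, d) = tf(t, d) · log(N / df(t)). The attacker authored document d, so tf is known; observing the relevance score of a probe query then reveals df(t), the number of co-tenant documents containing term t. Real engines use variants of this formula, so the arithmetic below is illustrative.

    import math

    def infer_document_frequency(score, tf, num_docs):
        idf = score / tf                     # recover idf(t) = log(N / df(t))
        return round(num_docs / math.exp(idf))

    N, secret_df, tf = 100_000, 42, 3.0
    observed_score = tf * math.log(N / secret_df)     # what the search API reveals
    print(infer_document_frequency(observed_score, tf, N))   # prints 42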
From Trash to Treasure: Timing-Sensitive Garbage Collection
Mathias Vorreiter Pedersen (Aarhus University), Aslan Askarov (Aarhus University)
This paper studies information flows via timing channels in the presence of automatic memory management. We construct a series of example attacks that illustrate that garbage collectors form a shared resource that can be used to reliably leak sensitive information at a rate of up to 1 byte/sec on a contemporary general-purpose computer. The created channel is also observable across a network connection in a datacenter-like setting. We subsequently present a design of automatic memory management that is provably resilient against such attacks.
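The following single-process CPython toy illustrates the shared-resource principle only; the paper’s attacks cross real isolation boundaries, and the threshold here is machine-dependent and would need calibration.

    import gc, time

    gc.disable()          # let cyclic garbage accumulate until an explicit collection

    def send_bit(bit):
        # Sender: create cyclic garbage only when transmitting a 1,
        # inflating the cost of the next collection (the shared resource).
        if bit:
            for _ in range(200_000):
                cycle = []
                cycle.append(cycle)

    def recv_bit(threshold=0.01):
        # Receiver: time a collection; a slow one means the sender sent a 1.
        start = time.perf_counter()
        gc.collect()
        return time.perf_counter() - start > threshold

    for bit in (1, 0, 1):
        send_bit(bit)
        print(recv_bit())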
Verifying and Synthesizing Constant-Resource Implementations with Types
Van Chan Ngo (Carnegie Mellon University), Mario Dehesa-Azuara (Carnegie Mellon University), Matthew Fredrikson (Carnegie Mellon University), Jan Hoffmann (Carnegie Mellon University)
Side channel attacks have been used to extract critical data such as encryption keys and confidential user data in a variety of adversarial settings. In practice, this threat is addressed by adhering to a constant-time programming discipline, which imposes strict constraints on the way in which programs are written. This introduces an additional hurdle for programmers faced with the already difficult task of writing secure code, highlighting the need for solutions that give the same source-level guarantees while supporting more natural programming models. We propose a novel type system for verifying that programs correctly implement constant-resource behavior. Our type system extends recent work on automatic amortized resource analysis (AARA), a set of techniques that automatically derive provable upper bounds on the resource consumption of programs. We devise new techniques that build on the potential method to achieve compositionality, precision, and automation. A strict global requirement that a program always maintains constant resource usage is too restrictive for most practical applications. It is sufficient to require that the program’s resource behavior remain constant with respect to an attacker who is only allowed to observe part of the program’s state and behavior. To account for this, our type system incorporates information flow tracking into its resource analysis. This allows our system to certify programs that need to violate the constant time requirement in certain cases, as long as doing so does not leak confidential information to attackers. We formalize this guarantee by defining a new notion of resource-aware noninterference, and prove that our system enforces it. Finally, we show how our type inference algorithm can be used to synthesize a constant-time implementation from one that cannot be verified as secure, effectively repairing insecure programs automatically. We also show how a second novel AARA system that computes lower bounds on resource usage can be used to derive quantitative bounds on the amount of information that a program leaks through its resource use. We implemented each of these systems in Resource Aware ML, and show that it can be applied to verify constant-time behavior in a number of applications including encryption and decryption routines, database queries, and other resource-aware functionality.
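The pair of functions below illustrates the resource behavior at stake, as a minimal sketch: the early-exit comparison runs in time that depends on its secret inputs, which is exactly what a constant-resource type system would reject, while the second version consumes the same resources regardless of contents.

    import hmac

    def leaky_equals(a: bytes, b: bytes) -> bool:
        # Early exit: running time depends on the length of the matching
        # prefix, leaking information about the secret through timing.
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def constant_time_equals(a: bytes, b: bytes) -> bool:
        return hmac.compare_digest(a, b)   # resource use independent of contents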
CoSMeDis: A Distributed Social Media Platform with Formally Verified Confidentiality Guarantees
Thomas Bauereiß (German Research Center for Artificial Intelligence (DFKI) Bremen, Germany), Armando Pesenti Gritti (Global NoticeBoard, UK), Andrei Popescu (School of Science and Technology, Middlesex University, UK/Institute of Mathematics Simion Stoilow of the Romanian Academy), Franco Raimondi (School of Science and Technology, Middlesex University, UK)
We present the design, implementation and information flow verification of CoSMeDis, a distributed social media platform. The system consists of an arbitrary number of communicating nodes, deployable at different locations over the Internet. Its registered users can post content and establish intra-node and inter-node friendships, used to regulate access control over the posts. The system’s kernel has been verified in the proof assistant Isabelle/HOL and automatically extracted as Scala code. We formalized a framework for composing a class of information flow security guarantees in a distributed system, applicable to input/output automata. We instantiated this framework to confidentiality properties for CoSMeDis’s sources of information: posts, friendship requests, and friendship status.

Break (10 Minutes)

05:20PM - 05:30PM

Short Talks

05:30PM - 06:30PM

S&P TC Business Meeting

06:30PM - 07:30PM

May 24


Registration

07:00AM - 06:00PM

Breakfast

07:30AM - 08:30AM

Closing Remarks

08:30AM - 08:40AM

Session #9: Underground Economics

08:40AM - 10:20AM

How to Learn Klingon Without a Dictionary: Detection and Measurement of Black Keywords Used by Underground Economy
Hao Yang (Tsinghua University), Xiulin Ma (Tsinghua University), Kun Du (Tsinghua University), Zhou Li (IEEE Member), Haixin Duan (Tsinghua University), Xiaodong Su (Baidu Inc.), Guang Liu (Baidu Inc.), Zhifeng Geng (Baidu Inc.), Jianping Wu (Tsinghua University)
The online underground economy is an important channel connecting the merchants of illegal products with their buyers, and it is constantly monitored by legal authorities. As one common way of evading this monitoring, merchants and buyers together create a vocabulary of jargons (called “black keywords” in this paper) to disguise the transaction (e.g., “smack” is one street name for “heroin” [1]). Black keywords are often “unfriendly” to outsiders: they are created by either distorting the original meaning of common words or tweaking other black keywords. Understanding black keywords is of great importance to track and disrupt the underground economy, but it is also prohibitively difficult: the investigators have to infiltrate the inner circle of criminals to learn their meanings, a task both risky and time-consuming. In this paper, we make the first attempt towards capturing and understanding the ever-changing black keywords. We investigated the underground business promoted through blackhat SEO (search engine optimization) and demonstrate that the black keywords targeted by the SEOers can be discovered through a fully automated approach. Our insights are two-fold: first, the pages indexed under black keywords are more likely to contain malicious or fraudulent content (e.g., SEO pages) and to be flagged by off-the-shelf detectors; second, people tend to query multiple similar black keywords to find the merchandise. Therefore, we could infer whether a search keyword is “black” by inspecting the associated search results and then use the related search queries to extend our findings. To this end, we built a system called KDES (Keywords Detection and Expansion System), and applied it to the search results of Baidu, China’s top search engine. So far, we have already identified 478,879 black keywords which were clustered under 1,522 core words based on text similarity. We further extracted information such as emails, mobile phone numbers, and instant messenger IDs from the pages and domains relevant to the underground business. Such information helps us gain a better understanding of the underground economy of China in particular. In addition, our work could help search engine vendors purify the search results and disrupt the channel of the underground market. Our co-authors from Baidu compared our results with their blacklist, found that many of them (e.g., long-tail and obfuscated keywords) were not in it, and then added them to Baidu’s internal blacklist.
To Catch a Ratter: Monitoring the Behavior of Amateur DarkComet RAT Operators in the Wild
Brown Farinholt (UC San Diego), Mohammad Rezaeirad (GMU), Paul Pearce (UC Berkeley), Hitesh Dharamdasani (Informant Networks), Haikuo Yin (UC San Diego), Stevens LeBlond (MPI-SWS), Damon McCoy (NYU), Kirill Levchenko (UC San Diego)
Remote Access Trojans (RATs) give remote attackers interactive control over a compromised machine. Unlike large-scale malware such as botnets, a RAT is controlled individually by a human operator interacting with the compromised machine remotely. The versatility of RATs makes them attractive to actors of all levels of sophistication: they have been used for espionage, information theft, voyeurism, and extortion. Despite their increasing use, there are still major gaps in our understanding of RATs and their operators, including motives, intentions, procedures, and weak points where defenses might be most effective. In this work we study the use of DarkComet, a popular commercial RAT. We collected 19,109 samples of DarkComet malware found in the wild, and in the course of two several-week-long experiments, ran as many samples as possible in our honeypot environment. By monitoring a sample’s behavior in our system, we are able to reconstruct the sequence of operator actions, giving us a unique view into operator behavior. We report on the results of 2,747 interactive sessions captured in the course of the experiment. During these sessions operators frequently attempted to interact with victims via remote desktop, to capture video, audio, and keystrokes, and to exfiltrate files and credentials. To our knowledge, this is the first large-scale systematic study of RAT use.
A Lustrum of Malware Network Communication: Evolution and Insights
Chaz Lever (Georgia Tech), Platon Kotzias (IMDEA), Davide Balzarotti (Eurecom), Juan Caballero (IMDEA), Manos Antonakakis (Georgia Tech)
Both the operational and academic security communities have used dynamic analysis sandboxes to execute malware samples for roughly a decade. Network information derived from dynamic analysis is frequently used for threat detection, network policy, and incident response. Despite these common and important use cases, the efficacy of the network detection signal derived from such analysis has yet to be studied in depth. This paper seeks to address this gap by analyzing the network communications of 26.8 million samples that were collected over a period of five years. Using several malware and network datasets, our large scale study makes three core contributions. (1) We show that dynamic analysis traces should be carefully curated and provide a rigorous methodology that analysts can use to remove potential noise from such traces. (2) We show that Internet miscreants are increasingly using potentially unwanted programs (PUPs) that rely on a surprisingly stable DNS and IP infrastructure. This indicates that the security community is in need of better protections against such threats, and network policies may provide a solid foundation for such protections. (3) Finally, we see that, for the vast majority of malware samples, network traffic provides the earliest indicator of infection—several weeks and often months before the malware sample is discovered. Therefore, network defenders should rely on automated malware analysis to extract indicators of compromise and not to build early detection systems.
Under the Shadow of Sunshine: Understanding and Detecting Bulletproof Hosting on Legitimate Service Provider Networks
Sumayah Alrwais (Indiana University at Bloomington), Xiaojing Liao (Georgia Institute of Technology), Xianghang Mi (Indiana University at Bloomington), Peng Wang (Indiana University at Bloomington), XiaoFeng Wang (Indiana University at Bloomington), Feng Qian (Indiana University at Bloomington), Raheem Beyah (Georgia Institute of Technology), Damon McCoy (New York University)
BulletProof Hosting (BPH) services provide criminal actors with technical infrastructure that is resilient to complaints of illicit activities, which serves as a basic building block for streamlining numerous types of attacks. Anecdotal reports have highlighted an emerging trend of these BPH services reselling infrastructure from lower-end service providers (hosting ISPs, cloud hosting, and CDNs) instead of from monolithic BPH providers. This has rendered many of the prior methods of detecting BPH less effective, since instead of the infrastructure being highly concentrated within a few malicious Autonomous Systems (ASes) it is now agile and dispersed across a larger set of providers that have a mixture of benign and malicious clients. In this paper, we present the first systematic study on this new trend of BPH services. By collecting and analyzing a large amount of data (25 Whois snapshots of the entire IPv4 address space, 1.5 TB of passive DNS data, and longitudinal data from several blacklist feeds), we are able to identify a set of new features that uniquely characterizes BPH on sub-allocations and are costly to evade. Based upon these features, we train a classifier for detecting malicious sub-allocated network blocks, achieving a 98% recall and a 1.5% false discovery rate according to our evaluation. Using a conservatively trained version of our classifier, we scan the whole IPv4 address space and detect 39K malicious network blocks. This allows us to perform a large-scale study of the BPH service ecosystem, which sheds light on this underground business strategy, including patterns of network blocks being recycled and malicious clients migrating to different network blocks, in an effort to evade IP address based blacklisting. Our study highlights the trend of agile BPH services and points to potential methods of detecting and mitigating this emerging threat.
Your Exploit is Mine: Automatic Shellcode Transplant for Remote Exploits
Tiffany Bao (CMU), Ruoyu Wang (UCSB), Yan Shoshitaishvili (UCSB), David Brumley (CMU)
Developing a remote exploit is not easy. It requires a comprehensive understanding of a vulnerability and delicate techniques to bypass defense mechanisms. As a result, attackers may prefer to reuse an existing exploit and make necessary changes over developing a new exploit from scratch. One such adaptation is the replacement of the original shellcode (i.e., the attacker-injected code that is executed as the final step of the exploit) in the original exploit with a replacement shellcode, resulting in a modified exploit that carries out the actions desired by the attacker as opposed to the original exploit author. We call this a shellcode transplant. Current automated shellcode placement methods are insufficient because they over-constrain the replacement shellcode, and so cannot be used to achieve shellcode transplant. For example, these systems consider the shellcode as an integrated memory chunk, and require that the execution path of the modified exploit must be the same as the original one. To resolve these issues, we present ShellSwap, a system that uses symbolic tracing, with a combination of shellcode layout remediation and path kneading to achieve shellcode transplant. We evaluated the ShellSwap system on a combination of 20 exploits and 5 pieces of shellcode that are independently developed and different from the original exploit. Among the 100 test cases, our system successfully generated 88% of the exploits.

Break (30 Minutes)

10:20AM - 10:50AM

Session #10: Cryptography

10:50AM - 12:30PM

Optimized Honest-Majority MPC for Malicious Adversaries - Breaking the 1 Billion-Gate Per Second Barrier
Toshinori Araki (NEC), Assi Barak (Bar-Ilan University), Jun Furukawa (NEC), Tamar Lichter (Queens College - CUNY), Yehuda Lindell (Bar-Ilan University), Ariel Nof (Bar-Ilan University), Kazuma Ohara (NEC), Adi Watzman (The Weizmann Institute of Science), Or Weinstein (Bar-Ilan University)
Secure multiparty computation enables a set of parties to securely carry out a joint computation of their private inputs without revealing anything but the output. In the past few years, the efficiency of secure computation protocols has increased in leaps and bounds. However, when considering the case of security in the presence of malicious adversaries (who may arbitrarily deviate from the protocol specification), we are still very far from achieving high efficiency. In this paper, we consider the specific case of three parties and an honest majority. We provide general techniques for improving efficiency of cut-and-choose protocols on multiplication triples and utilize them to significantly improve the recently published protocol of Furukawa et al. (ePrint 2016/944). We reduce the bandwidth of their protocol down from 10 bits per AND gate to 7 bits per AND gate, and show how to improve some computationally expensive parts of their protocol. Most notably, we design cache-efficient shuffling techniques for implementing cut-and-choose without randomly permuting large arrays (which is very slow due to continual cache misses). We provide a combinatorial analysis of our techniques, bounding the cheating probability of the adversary. Our implementation achieves a rate of approximately 1.15 billion AND gates per second on a cluster of three 20-core machines with a 10Gbps network. Thus, we can securely compute 212,000 AES encryptions per second (which is hundreds of times faster than previous work for this setting). Our results demonstrate that high-throughput secure computation for malicious adversaries is possible.
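For orientation, the sketch below shows the three-party replicated secret sharing that underlies this style of honest-majority protocol: a bit x is split as x = x1 ^ x2 ^ x3 and party i holds the pair (x_i, x_{i+1}), so no single party learns anything. XOR gates are local; AND gates, the focus of the paper’s optimizations, additionally need one round of communication and correlated randomness, which this sketch omits along with the cut-and-choose machinery for malicious security.

    import secrets

    def share(x):
        x1, x2 = secrets.randbits(1), secrets.randbits(1)
        x3 = x ^ x1 ^ x2
        return [(x1, x2), (x2, x3), (x3, x1)]     # one pair per party

    def xor_gate(a, b):
        # XOR gates are free: each party combines its own pairs locally.
        return [(a0 ^ b0, a1 ^ b1) for (a0, a1), (b0, b1) in zip(a, b)]

    def reconstruct(shares):
        return shares[0][0] ^ shares[1][0] ^ shares[2][0]

    print(reconstruct(xor_gate(share(1), share(0))))   # prints 1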
vSQL: Verifying Arbitrary SQL Queries over Dynamic Outsourced Databases
Yupeng Zhang (University of Maryland), Daniel Genkin (University of Maryland & University of Pennsylvania), Jonathan Katz (University of Maryland), Dimitrios Papadopoulos (University of Maryland), Charalampos Papamanthou (University of Maryland)
Cloud database systems such as Amazon RDS or Google Cloud SQL enable the outsourcing of a large database to a server, which then responds to SQL queries. A natural problem here is to efficiently verify the correctness of responses returned by the (untrusted) server. In this paper we present vSQL, a novel cryptographic protocol for publicly verifiable SQL queries on dynamic databases. At a high level, our construction relies on two extensions of the CMT interactive-proof protocol [Cormode et al., 2012]: (i) supporting outsourced input via the use of a polynomial-delegation protocol with succinct proofs, and (ii) supporting auxiliary input (i.e., non-deterministic computation) efficiently. Compared to previous verifiable-computation systems based on interactive proofs, our construction has verification cost polylogarithmic in the auxiliary input (which for SQL queries can be as large as the database) rather than linear. In order to evaluate the performance and expressiveness of our scheme, we tested it on SQL queries based on the TPC-H benchmark on a database with 6 × 10^6 rows and 13 columns. The server overhead in our scheme (which is typically the main bottleneck) is up to 120× lower than previous approaches based on succinct arguments of knowledge (SNARKs), and moreover we avoid the need for query-dependent pre-processing which is required by optimized SNARK-based schemes. In our construction, the server/client time and the communication cost are comparable to, and sometimes smaller than, those of existing customized solutions which only support specific queries.
A Framework for Universally Composable Diffie-Hellman Key Exchange
Ralf Küsters (University of Stuttgart), Daniel Rausch (University of Stuttgart)
The analysis of real-world protocols, in particular key exchange protocols and protocols building on these protocols, is a very complex, error-prone, and tedious task. Besides the complexity of the protocols themselves, one important reason for this is that the security of the protocols has to be reduced to the security of the underlying cryptographic primitives for every protocol time and again. We would therefore like to get rid of reduction proofs for real-world key exchange protocols as much as possible and in many cases altogether, also for higher-level protocols which use the exchanged keys. So far some first steps have been taken in this direction. But existing work is still quite limited, and, for example, does not support Diffie-Hellman (DH) key exchange, a prevalent cryptographic primitive for real-world protocols. In this paper, building on work by Küsters and Tuengerthal, we provide an ideal functionality in the universal composability setting which supports several common cryptographic primitives, including DH key exchange. This functionality helps to avoid reduction proofs in the analysis of real-world protocols and often eliminates them completely. We also propose a new general ideal key exchange functionality which allows higher-level protocols to use exchanged keys in an ideal way. As a proof of concept, we apply our framework to three practical DH key exchange protocols, namely ISO 9798-3, SIGMA, and OPTLS.
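For context, the core DH step that these protocols wrap is tiny. The sketch below is unauthenticated textbook DH with a toy prime (p = 2^255 - 19) and is not secure as written; authenticating these messages is precisely what ISO 9798-3, SIGMA, and OPTLS add, and what the paper’s ideal functionalities abstract.

    import secrets

    p, g = 2**255 - 19, 2                # toy parameters for illustration only

    a = 2 + secrets.randbelow(p - 3)     # Alice's ephemeral secret
    b = 2 + secrets.randbelow(p - 3)     # Bob's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)    # exchanged in the clear

    assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same shared secret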
One TPM to Bind Them All: Fixing TPM2.0 for Provably Secure Anonymous Attestation
Jan Camenisch (IBM Research - Zurich), Liqun Chen (University of Surrey), Manu Drijvers (IBM Research - Zurich and ETH Zurich), Anja Lehmann (IBM Research - Zurich), David Novick (Intel), Rainer Urian (Infineon)
The Trusted Platform Module (TPM) is an international standard for a security chip that can be used for the management of cryptographic keys and for remote attestation. The specification of the most recent TPM 2.0 interfaces for direct anonymous attestation unfortunately has a number of severe shortcomings. First of all, they do not allow for security proofs (indeed, the published proofs are incorrect). Second, they provide a Diffie-Hellman oracle w.r.t. the secret key of the TPM, weakening the security and preventing forward anonymity of attestations. Fixes to these problems have been proposed, but they create new issues: they enable a fraudulent TPM to encode information into an attestation signature, which could be used to break anonymity or to leak the secret key. Furthermore, all proposed ways to remove the Diffie-Hellman oracle either strongly limit the functionality of the TPM or would require significant changes to the TPM 2.0 interfaces. In this paper we provide a better specification of the TPM 2.0 interfaces that addresses these problems and requires only minimal changes to the current TPM 2.0 commands. We then show how to use the revised interfaces to build q-SDH- and LRSW-based anonymous attestation schemes, and prove their security. We finally discuss how to obtain other schemes addressing different use cases such as key-binding for U-Prove and e-cash.
Cryptographic Function Detection in Obfuscated Binaries via Bit-precise Symbolic Loop Mapping
Dongpeng Xu (The Pennsylvania State University), Jiang Ming (University of Texas at Arlington), Dinghao Wu (The Pennsylvania State University)
Cryptographic functions have been commonly abused by malware developers to hide malicious behaviors, disguise destructive payloads, and bypass network-based firewalls. Now-infamous crypto-ransomware even encrypts victims’ documents until a ransom is paid. Therefore, detecting cryptographic functions in binary code is an appealing approach to complement existing malware defense and forensics. However, pervasive control and data obfuscation schemes make cryptographic function identification a challenging task. Existing detection methods are either too brittle to work on obfuscated binaries or ad hoc in that they can only identify specific cryptographic functions. In this paper, we propose a novel technique called bit-precise symbolic loop mapping to identify cryptographic functions in obfuscated binary code. Our trace-based approach captures the semantics of possible cryptographic algorithms with bit-precise symbolic execution in a loop. Then we perform guided fuzzing to efficiently match boolean formulas with known reference implementations. We have developed a prototype called CryptoHunt and evaluated it with a set of obfuscated synthetic examples, well-known cryptographic libraries, and malware. Compared with the existing tools, CryptoHunt is a general approach to detecting commonly used cryptographic functions such as TEA, AES, RC4, MD5, and RSA under different control and data obfuscation scheme combinations.
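The matching step in miniature: test a candidate loop body (here pretending to have come out of symbolic execution of an obfuscated trace) against a reference TEA round on random inputs. Random input/output testing is a simplified stand-in for the paper’s bit-precise symbolic formulas and guided fuzzing.

    import random

    MASK = 0xFFFFFFFF

    def tea_round(v0, v1, k, s):
        # Reference 32-bit TEA round, the "known implementation".
        v0 = (v0 + ((((v1 << 4) + k[0]) & MASK) ^ ((v1 + s) & MASK) ^ (((v1 >> 5) + k[1]) & MASK))) & MASK
        v1 = (v1 + ((((v0 << 4) + k[2]) & MASK) ^ ((v0 + s) & MASK) ^ (((v0 >> 5) + k[3]) & MASK))) & MASK
        return v0, v1

    def extracted_loop(v0, v1, k, s):
        # Hypothetical loop body recovered from an obfuscated binary,
        # written differently but computing the same function.
        t = ((v1 * 16 + k[0]) & MASK) ^ ((v1 + s) & MASK) ^ (((v1 >> 5) + k[1]) & MASK)
        v0 = (v0 + t) & MASK
        t = ((v0 * 16 + k[2]) & MASK) ^ ((v0 + s) & MASK) ^ (((v0 >> 5) + k[3]) & MASK)
        v1 = (v1 + t) & MASK
        return v0, v1

    def matches_reference(candidate, trials=64):
        for _ in range(trials):
            v = [random.getrandbits(32) for _ in range(2)]
            k = [random.getrandbits(32) for _ in range(4)]
            s = random.getrandbits(32)
            if candidate(v[0], v[1], k, s) != tea_round(v[0], v[1], k, s):
                return False
        return True

    print(matches_reference(extracted_loop))   # True: the loop body implements TEA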

Lunch

12:30PM - 01:30PM

Session #11: Privacy

01:30PM - 03:10PM

XHOUND: Quantifying the Fingerprintability of Browser Extensions
Oleksii Starov (Stony Brook University), Nick Nikiforakis (Stony Brook University)
In recent years, researchers have shown that unwanted web tracking is on the rise, as advertisers are trying to capitalize on users’ online activity, using increasingly intrusive and sophisticated techniques. Among these, browser fingerprinting has received the most attention since it allows trackers to uniquely identify users despite the clearing of cookies and the use of a browser’s private mode. In this paper, we investigate and quantify the fingerprintability of browser extensions, such as AdBlock and Ghostery. We show that an extension’s organic activity in a page’s DOM can be used to infer its presence, and develop XHound, the first fully automated system for fingerprinting browser extensions. By applying XHound to the 10,000 most popular Google Chrome extensions, we find that a significant fraction of popular browser extensions are fingerprintable and could thus be used to supplement existing fingerprinting methods. Moreover, by surveying the installed extensions of 854 users, we discover that many users tend to install different sets of fingerprintable browser extensions and could thus be uniquely, or near-uniquely, identifiable by extension-based fingerprinting. We use XHound’s results to build a proof-of-concept extension-fingerprinting script and show that trackers can fingerprint tens of extensions in just a few seconds. Finally, we describe why the fingerprinting of extensions is more intrusive than the fingerprinting of other browser and system properties, and sketch two different approaches towards defending against extension-based fingerprinting.
Identifying Personal DNA Methylation Profiles by Genotype Inference
Michael Backes (CISPA, Saarland University & MPI-SWS), Pascal Berrang (CISPA, Saarland University), Matthias Bieg (German Cancer Research Center (DKFZ)), Roland Eils (German Cancer Research Center (DKFZ) & University of Heidelberg), Carl Herrmann (German Cancer Research Center (DKFZ) & University of Heidelberg), Mathias Humbert (CISPA, Saarland University), Irina Lehmann (Helmholtz Centre for Environmental Research Leipzig - UFZ, Leipzig)
Since the first whole-genome sequencing, the biomedical research community has made significant steps towards a more precise, predictive and personalized medicine. Genomic data is nowadays widely considered privacy-sensitive and consequently protected by strict regulations and released only after careful consideration. Various additional types of biomedical data, however, are not shielded by any dedicated legal means and consequently disseminated much less thoughtfully. This in particular holds true for DNA methylation data, one of the most important and well-understood epigenetic elements influencing human health. In this paper, we show that, in contrast to the aforementioned belief, releasing one’s DNA methylation data causes privacy issues akin to releasing one’s actual genome. We show that already a small subset of methylation regions influenced by genomic variants is sufficient to infer parts of someone’s genome, and to further map this DNA methylation profile to the corresponding genome. Notably, we show that such re-identification is possible with 97.5% accuracy, relying on a dataset of more than 2500 genomes, and that we can reject all wrongly matched genomes using an appropriate statistical test. We provide means for countering this threat by proposing a novel cryptographic scheme for privately classifying tumors that enables a privacy-respecting medical diagnosis in a common clinical setting. The scheme relies on a combination of random forests and homomorphic encryption, and it is proven secure in the honest-but-curious model. We evaluate this scheme on real DNA methylation data, and show that we can keep the computational overhead to acceptable values for our application scenario.
Counter-RAPTOR: Safeguarding Tor Against Active Routing Attacks
Yixin Sun (Princeton University), Anne Edmundson (Princeton University), Nick Feamster (Princeton University), Mung Chiang (Princeton University), Prateek Mittal (Princeton University)
Tor is vulnerable to network-level adversaries who can observe both ends of the communication to deanonymize users. Recent work has shown that Tor is susceptible to the previously unknown active BGP routing attacks, called RAPTOR attacks, which expose Tor users to more network-level adversaries. In this paper, we aim to mitigate and detect such active routing attacks against Tor. First, we present a new measurement study on the resilience of the Tor network to active BGP prefix attacks. We show that ASes with high Tor bandwidth can be less resilient to attacks than other ASes. Second, we present a new Tor guard relay selection algorithm that incorporates resilience of relays into consideration to proactively mitigate such attacks. We show that the algorithm successfully improves the security for Tor clients by up to 36% on average (up to 166% for certain clients). Finally, we build a live BGP monitoring system that can detect routing anomalies on the Tor network in real time by performing an AS origin check and novel detection analytics. Our monitoring system successfully detects simulated attacks that are modeled after multiple known attack types as well as a real-world hijack attack (performed by us), while having low false positive rates.
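A rough sketch of the resilience-aware selection idea follows; the exact weighting in the paper differs, and the relay data, field names, and the alpha knob here are illustrative only. Each candidate guard’s probability blends its resilience to BGP prefix hijacks with its bandwidth share.

    import random

    def pick_guard(relays, alpha=0.5):
        # alpha trades security (resilience) against performance (bandwidth).
        total_bw = sum(r["bandwidth"] for r in relays)
        weights = [alpha * r["resilience"] + (1 - alpha) * r["bandwidth"] / total_bw
                   for r in relays]
        return random.choices(relays, weights=weights)[0]

    relays = [
        {"name": "fast-but-exposed", "bandwidth": 9000, "resilience": 0.35},
        {"name": "slower-but-resilient", "bandwidth": 4000, "resilience": 0.90},
    ]
    print(pick_guard(relays)["name"])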
Machine-Checked Proofs of Privacy for Electronic Voting Protocols
Véronique Cortier (LORIA, CNRS & Inria & Université de Lorraine), Constantin Cătalin Drăgan (LORIA, CNRS & Inria), François Dupressoir (University of Surrey), Benedikt Schmidt (IMDEA Software Institute), Pierre-Yves Strub (École Polytechnique), Bogdan Warinschi (University of Bristol)
We provide the first machine-checked proof of privacy-related properties (including ballot privacy) for an electronic voting protocol in the computational model. We target the popular Helios family of voting protocols, for which we identify appropriate levels of abstractions to allow the simplification and convenient reuse of proof steps across many variations of the voting scheme. The resulting framework enables machine-checked security proofs for several hundred variants of Helios and should serve as a stepping stone for the analysis of further variations of the scheme. In addition, we highlight some of the lessons learned regarding the gap between pen-and-paper and machine-checked proofs, and report on the experience with formalizing the security of protocols at this scale.
Spotless Sandboxes: Evading Malware Analysis Systems using Wear-and-Tear Artifacts
Najmeh Miramirkhani (Stony Brook University), Mahathi Priya Appini (Stony Brook University), Nick Nikiforakis (Stony Brook University), Michalis Polychronakis (Stony Brook University)
Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed in an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improves, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the “wear and tear” that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks, to how realistic its past use looks, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system’s age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.

Break (20 Minutes)

03:10PM - 03:40PM

Session #12: Authorization

03:40PM - 05:20PM

Hardening Java's Access Control by Abolishing Implicit Privilege Elevation
Philipp Holzinger (Fraunhofer SIT), Ben Hermann (Technische Universität Darmstadt), Johannes Lerch (Technische Universität Darmstadt), Eric Bodden (Paderborn University & Fraunhofer IEM), Mira Mezini (Technische Universität Darmstadt)
While the Java runtime is installed on billions of devices and servers worldwide, it remains a primary attack vector for online criminals. As recent studies show, the majority of all exploited Java vulnerabilities comprise incorrect or insufficient implementations of access-control checks. This paper for the first time studies the problem in depth. As we find, attacks are enabled by shortcuts that short-circuit Java’s general principle of stack-based access control. These shortcuts, originally introduced for ease of use and to improve performance, cause Java to elevate the privileges of code implicitly. As we show, this creates many pitfalls for software maintenance, making it all too easy for maintainers of the runtime to introduce blatant confused-deputy vulnerabilities even by just applying normally semantics-preserving refactorings. How can this problem be solved? Can one implement Java’s access control without shortcuts, and if so, does this implementation remain usable and efficient? To answer those questions, we conducted a tool-assisted adaptation of the Java Class Library (JCL), avoiding (most) shortcuts and therefore moving to a fully explicit model of privilege elevation. As we show, the proposed changes significantly harden the JCL against attacks: they effectively hinder the introduction of new confused-deputy vulnerabilities in future library versions, and successfully restrict the capabilities of attackers when exploiting certain existing vulnerabilities. We discuss usability considerations, and through a set of large-scale experiments show that with current JVM technology such a faithful implementation of stack-based access control induces no observable performance loss.
Cloak and Dagger: From Two Permissions to Complete Control of the UI Feedback Loop
Yanick Fratantonio (UC Santa Barbara), Chenxiong Qian (Georgia Tech), Simon Chung (Georgia Tech), Wenke Lee (Georgia Tech)
The effectiveness of the Android permission system fundamentally hinges on the user’s correct understanding of the capabilities of the permissions being granted. In this paper, we show that both the end-users and the security community have significantly underestimated the dangerous capabilities granted by the SYSTEM_ALERT_WINDOW and the BIND_ACCESSIBILITY_SERVICE permissions: while it is known that these are security-sensitive permissions and they have been abused individually (e.g., in UI redressing attacks, accessibility attacks), previous attacks based on these permissions rely on vanishing side-channels to time the appearance of overlay UI, cannot respond properly to user input, or make the attacks literally visible. This work, instead, uncovers several design shortcomings of the Android platform and shows how an app with these two permissions can completely control the UI feedback loop and create devastating attacks. In particular, we demonstrate how such an app can launch a variety of stealthy, powerful attacks, ranging from stealing user’s login credentials and security PIN, to the silent installation of a God-mode app with all permissions enabled, leaving the victim completely unsuspecting. To make things even worse, we note that when installing an app targeting a recent Android SDK, the list of its required permissions is not shown to the user and that these attacks can be carried out without needing to lure the user to knowingly enable any permission. In fact, the SYSTEM_ALERT_WINDOW permission is automatically granted for apps installed from the Play Store and our experiment shows that it is practical to lure users to unknowingly grant the BIND_ACCESSIBILITY_SERVICE permission by abusing capabilities from the SYSTEM_ALERT_WINDOW permission. We evaluated the practicality of these attacks by performing a user study: none of the 20 human subjects who took part in the experiment even suspected they had been attacked. We also found that it is straightforward to get a proof-of-concept app requiring both permissions accepted on the official store. We responsibly disclosed our findings to Google. Unfortunately, since these problems are related to design issues, these vulnerabilities are still unaddressed. We conclude the paper by proposing a novel defense mechanism, implemented as an extension to the current Android API, which would protect Android users and developers from the threats we uncovered.
SmarPer: Context-Aware and Automatic Runtime-Permissions for Mobile Devices
Katarzyna Olejnik (Raytheon BBN Technologies), Italo Dacosta (EPFL), Joana Machado (EPFL), Kévin Huguenin (UNIL), Mohammad Emtiyaz Khan (Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo), Jean-Pierre Hubaux (EPFL)
Permission systems are the main defense that mobile platforms, such as Android and iOS, offer to users to protect their private data from prying apps. However, due to the tension between usability and control, such systems have several limitations that often force users to overshare sensitive data. We address some of these limitations with SmarPer, an advanced permission mechanism for Android. To address the rigidity of current permission systems and their poor matching of users’ privacy preferences, SmarPer relies on contextual information and machine learning methods to predict permission decisions at runtime. Note that the goal of SmarPer is to mimic the users’ decisions, not to make privacy-preserving decisions per se. Using our SmarPer implementation, we collected 8,521 runtime permission decisions from 41 participants in real conditions. With this unique data set, we show that using an efficient Bayesian linear regression model results in a mean correct classification rate of 80% (±3%). This represents a mean relative reduction of approximately 50% in the number of incorrect decisions when compared with a user-defined static permission policy, i.e., the model used in current permission systems. SmarPer also focuses on the suboptimal trade-off between privacy and utility; instead of only “allow” or “deny” type of decisions, SmarPer also offers an “obfuscate” option where users can still obtain utility by revealing partial information to apps. We implemented obfuscation techniques in SmarPer for different data types and evaluated them during our data collection campaign. Our results show that 73% of the participants found obfuscation useful and it accounted for almost a third of the total number of decisions. In short, we are the first to show, using a large dataset of real in situ permission decisions, that it is possible to learn users’ unique decision patterns at runtime using contextual information while supporting data obfuscation; this is an important step towards automating the management of permissions in smartphones.
The Feasibility of Dynamically Granted Permissions: Aligning Mobile Privacy with User Preferences
Primal Wijesekera (University of British Columbia), Arjun Baokar (University of California, Berkeley), Lynn Tsai (University of California, Berkeley), Joel Reardon (University of California, Berkeley), Serge Egelman (University of California, Berkeley), David Wagner (University of California, Berkeley), Konstantin Beznosov (University of British Columbia)
Current smartphone operating systems regulate application permissions by prompting users on an ask-on-first-use basis. Prior research has shown that this method is ineffective because it fails to account for context: the circumstances under which an application first requests access to data may be vastly different than the circumstances under which it subsequently requests access. We performed a longitudinal 131-person field study to analyze the contextuality behind user privacy decisions to regulate access to sensitive resources. We built a classifier to make privacy decisions on the user’s behalf by detecting when context has changed and, when necessary, inferring privacy preferences based on the user’s past decisions and behavior. Our goal is to automatically grant appropriate resource requests without further user intervention, deny inappropriate requests, and only prompt the user when the system is uncertain of the user’s preferences. We show that our approach can accurately predict users’ privacy decisions 96.8% of the time, which is a four-fold reduction in error rate compared to current systems.
IVD: Automatic Learning and Enforcement of Authorization Rules in Online Social Networks
Paul Dan Marinescu (Facebook), Chad Parry (Facebook), Marjori Pomarole (Facebook), Yuan Tian (CMU), Patrick Tague (CMU), Ioannis Papagiannis (Facebook)
Authorization bugs, when present in online social networks, are usually caused by missing or incorrect authorization checks and can allow attackers to bypass the online social network’s protections. Unfortunately, there is no practical way to fully guarantee that an authorization bug will never be introduced—even with good engineering practices—as a web application and its data model become more complex. Unlike other web application vulnerabilities such as XSS and CSRF, there is no practical general solution to prevent missing or incorrect authorization checks. In this paper we propose Invariant Detector (IVD), a defense-in-depth system that automatically learns authorization rules from normal data manipulation patterns and distills them into likely invariants. These invariants, usually learned during the testing or pre-release stages of new features, are then used to block any requests that may attempt to exploit bugs in the social network’s authorization logic. IVD acts as an additional layer of defense, working behind the scenes, complementary to privacy frameworks and testing. We have designed and implemented IVD to handle the unique challenges posed by modern online social networks. IVD is currently running at Facebook, where it infers and evaluates daily more than 200,000 invariants from a sample of roughly 500 million client requests, and checks the resulting invariants every second against millions of writes made to a graph database containing trillions of entities. Thus far IVD has detected several high impact authorization bugs and has successfully blocked attempts to exploit them before code fixes were deployed.
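The sketch below shows the invariant-mining idea in miniature under stated assumptions: the endpoint names, predicate fields, and log format are hypothetical. Per endpoint, only predicates that held on every observed write survive as likely invariants; later writes that violate a surviving invariant are blocked.

    from collections import defaultdict

    def learn_invariants(observed_writes):
        candidates = defaultdict(lambda: {"viewer_is_owner": True,
                                          "viewer_is_friend": True})
        for w in observed_writes:
            inv = candidates[w["endpoint"]]
            for pred in list(inv):
                if not w[pred]:
                    inv[pred] = False      # predicate falsified once: drop it
        return {e: {p for p, ok in inv.items() if ok}
                for e, inv in candidates.items()}

    def allow(write, invariants):
        return all(write[p] for p in invariants.get(write["endpoint"], ()))

    log = [
        {"endpoint": "delete_post", "viewer_is_owner": True, "viewer_is_friend": False},
        {"endpoint": "delete_post", "viewer_is_owner": True, "viewer_is_friend": True},
    ]
    inv = learn_invariants(log)            # {'delete_post': {'viewer_is_owner'}}
    attack = {"endpoint": "delete_post", "viewer_is_owner": False, "viewer_is_friend": True}
    print(allow(attack, inv))              # False: the request is blocked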

BoF: Research directions for Federal privacy R&D

05:45PM - 07:45PM

In July 2016, the Federal Government published the National Privacy Research Strategy, identifying priorities for Federally-funded privacy research. Join representatives from DHS, FTC, NSA, NSF to help identify what research problems should be tackled to pave the way toward achieving objectives outlined in the NPRS.

Speed Mentoring, BoFs, Symposium/Workshops Bridging Reception

05:45PM - 07:45PM

May 25


Workshops Registration

07:00AM - 11:00AM

Workshops Breakfast

07:30AM - 08:30AM

Workshops Opening Remarks

08:45AM - 09:00AM

Workshops Session #1

09:00AM - 10:00AM

Workshops Break (30 minutes)

10:15AM - 10:45AM

Workshops Session #2

10:45AM - 12:30PM

Workshops Lunch

12:30PM - 01:30PM

Workshops Session #3

01:30PM - 03:15PM

Workshops Break (30 minutes)

03:15PM - 03:45PM

Workshops Session #4

03:45PM - 05:30PM

Closing Remarks

05:30PM - 05:45PM