MAY 23-25, 2016 AT THE FAIRMONT, SAN JOSE, CA

37th IEEE Symposium on
Security and Privacy


Agenda


May 22


Registration and Reception

04:00PM - 07:00PM

May 23


Registration

07:00AM - 05:00PM

Breakfast

07:30AM - 08:30AM

Opening Remarks

08:30AM - 08:40AM

Session #1: Hardware and private execution

08:40AM - 10:20AM

HDFI: Hardware-Assisted Data-Flow Isolation
Chengyu Song (Georgia Institute of Technology), Hyungon Moon (Seoul National University), Monjur Alam, Insu Yun, Byoungyoung Lee, Taesoo Kim, and Wenke Lee (Georgia Institute of Technology), and Yunheung Paek (Seoul National University)
Memory corruption vulnerabilities are the root cause of many modern attacks. Existing defense mechanisms are inadequate; in general, the software-based approaches are not efficient and the hardware-based approaches are not flexible. In this paper, we present hardware-assisted data-flow isolation, or HDFI, a new fine-grained data isolation mechanism that is broadly applicable and very efficient. HDFI enforces isolation at the machine word granularity by virtually extending each memory unit with an additional tag that is defined by dataflow. This capability allows HDFI to enforce a variety of security models such as the Biba Integrity Model and the Bell-LaPadula Model. We implemented HDFI by extending the RISC-V instruction set architecture (ISA) and instantiating it on the Xilinx Zynq ZC706 evaluation board. We ran several benchmarks including the SPEC CINT 2000 benchmark suite. Evaluation results show that the performance overhead caused by our modification to the hardware is low.
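
The tag-based enforcement is easy to picture in miniature. The Python sketch below models word-granularity tagging with one "sensitive store" that sets a tag and one "checked load" that requires it; the method names only loosely echo the paper's new RISC-V instructions, and everything about the hardware implementation is omitted.

```python
class TaggedMemory:
    """Each machine word carries a one-bit tag set by the store that wrote it."""

    def __init__(self, size):
        self.words = [0] * size
        self.tags = [0] * size

    def sdset1(self, addr, value):
        # "sensitive" store: writes the word and sets its tag to 1
        self.words[addr], self.tags[addr] = value, 1

    def store(self, addr, value):
        # ordinary store: writes the word and clears its tag
        self.words[addr], self.tags[addr] = value, 0

    def ldchk1(self, addr):
        # checked load: fails unless the word was written by a sensitive store
        if self.tags[addr] != 1:
            raise MemoryError("data-flow isolation violation")
        return self.words[addr]


mem = TaggedMemory(16)
mem.sdset1(0, 0xDEADBEEF)       # e.g. saving a return address
print(hex(mem.ldchk1(0)))       # fine: the data flow is intact
mem.store(0, 0x41414141)        # attacker overwrite via an ordinary store
mem.ldchk1(0)                   # raises MemoryError: the overwrite cleared the tag
```
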
A2: Analog Malicious Hardware
Kaiyuan Yang, Matthew Hicks, Qing Dong, Todd Austin, and Dennis Sylvester (University of Michigan)
While the move to smaller transistors has been a boon for performance, it has dramatically increased the cost to fabricate chips using those smaller transistors. This forces the vast majority of chip design companies to trust a third party -- often overseas -- to fabricate their design. To guard against shipping chips with errors (intentional or otherwise), chip design companies rely on post-fabrication testing. Unfortunately, this type of testing leaves the door open to malicious modifications, since attackers can craft attack triggers requiring a sequence of unlikely events that will never be encountered by even the most diligent tester. In this paper, we show how a fabrication-time attacker can leverage analog circuits to create a hardware attack that is small (i.e., requires as little as one gate) and stealthy (i.e., requires an unlikely trigger sequence before affecting a chip's functionality). In the open spaces of an already placed and routed design, we construct a circuit that uses capacitors to siphon charge from nearby wires as they transition between digital values. When the capacitors fully charge, they deploy an attack that forces a victim flip-flop to a desired value. We weaponize this attack into a remotely controllable privilege escalation by attaching the capacitor to a wire controllable from software and by selecting a victim flip-flop that holds the privilege bit for our processor. We implement this attack in an OR1200 processor and fabricate a chip. Experimental results show that our attacks work, show that our attacks elude activation by a diverse set of benchmarks, and suggest that our attacks evade known defenses.
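
The trigger's charge-accumulation behavior can be simulated in a few lines. All constants in this Python sketch are invented; it only illustrates why sporadic (benign) toggling never fires the trigger while deliberate rapid toggling does.

```python
CHARGE_PER_TOGGLE = 0.08   # fraction of full charge added per wire transition
LEAK_PER_CYCLE = 0.01      # fraction lost in any cycle without a transition
TRIGGER_LEVEL = 1.0        # fully charged: flip the victim flip-flop

def simulate(wire_activity):
    """Return the cycle at which the trigger fires, or None."""
    charge = 0.0
    for cycle, toggled in enumerate(wire_activity):
        if toggled:
            charge = min(TRIGGER_LEVEL, charge + CHARGE_PER_TOGGLE)
        else:
            charge = max(0.0, charge - LEAK_PER_CYCLE)
        if charge >= TRIGGER_LEVEL:
            return cycle            # attack deploys: privilege bit forced
    return None

# Benign workloads touch the wire only sporadically, so charge leaks away;
# the attacker's trigger program toggles it on every cycle.
benign = [cycle % 37 == 0 for cycle in range(10_000)]
trigger = [True] * 20
print(simulate(benign))    # None: the trigger never fires during testing
print(simulate(trigger))   # 12: fires after 13 consecutive toggles
```
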
Cache Storage Channels: Alias-Driven Attacks and Verified Countermeasures
Roberto Guanciale, Hamed Nemati, Christoph Baumann, and Mads Dam (KTH)
Caches pose a significant challenge to formal proofs of security for code executing on application processors, as the cache access pattern of security-critical services may leak secret information. This paper reveals a novel attack vector, exposing a low-noise cache storage channel that can be exploited by adapting well-known timing channel analysis techniques. The vector can also be used to attack various types of security-critical software such as hypervisors and application security monitors. The attack vector uses virtual aliases with mismatched memory attributes and self-modifying code to misconfigure the memory system, allowing an attacker to place incoherent copies of the same physical address into the caches and observe which addresses are stored in different levels of cache. We design and implement three different attacks using the new vector on trusted services and report on the discovery of a 128-bit key from an AES encryption service running in TrustZone on Raspberry Pi 2. Moreover, we subvert the integrity properties of an ARMv7 hypervisor that was formally verified against a cache-less model. We evaluate well-known countermeasures against the new attack vector and propose a verification methodology that allows one to formally prove the effectiveness of defence mechanisms on the binary code of the trusted software.
Shreds: Fine-grained Execution Units with Private Memory
Yaohui Chen, Sebassujeen Reymondjohnson, Zhichuang Sun, and Long Lu (Stony Brook University)
Once attackers have injected code into a victim program's address space, or found a memory disclosure vulnerability, all sensitive data and code inside that address space are subject to theft or manipulation. Unfortunately, this broad type of attack is hard to prevent, even if software developers wish to cooperate, mostly because conventional memory protection only works at the process level and previously proposed in-process memory isolation methods are not practical for wide adoption. We propose shreds, a set of OS-backed programming primitives that addresses developers' currently unmet needs for fine-grained, convenient, and efficient protection of sensitive memory content against in-process adversaries. A shred can be viewed as a flexibly defined segment of a thread execution (hence the name). Each shred is associated with a protected memory pool, which is accessible only to code running in the shred. Unlike previous works, shreds offer in-process private memory without relying on separate page tables, nested paging, or even modified hardware. Plus, shreds provide the essential data flow and control flow guarantees for running sensitive code. We have built the compiler toolchain and the OS module that together enable shreds on Linux. We demonstrated the usage of shreds and evaluated their performance using five non-trivial open-source programs, including OpenSSH and Lighttpd. The results show that shreds are fairly easy to use and incur low runtime overhead (4.67%).
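
The programming model is the interesting part: a bounded code segment with a private pool that is unreachable outside it. The real primitives are C-level and OS-backed; the Python context manager below is only an analogy for that enter/exit discipline, with every name invented.

```python
from contextlib import contextmanager

@contextmanager
def shred():
    pool = {}          # stands in for the shred's private memory pool
    try:
        yield pool     # only code inside the block holds the pool
    finally:
        pool.clear()   # leaving the shred wipes the pool

with shred() as private:
    private["signing_key"] = b"\x13" * 32
    # ... compute a signature with the key, entirely inside the shred ...

# After the block the key has been wiped: code that runs later in the same
# process (the "in-process adversary") can no longer reach it.
```
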
CaSE: Cache-Assisted Secure Execution on ARM Processors
Ning Zhang (Virginia Polytechnic Institute and State University), Kun Sun (College of William and Mary), and Wenjing Lou and Y. Thomas Hou (Virginia Polytechnic Institute and State University)
Recognizing the pressing demands to secure embedded applications, ARM TrustZone has been adopted in both academic research and commercial products to protect sensitive code and data in a privileged, isolated execution environment. However, the design of TrustZone cannot prevent physical memory disclosure attacks such as the cold boot attack from gaining unrestricted read access to the sensitive contents in dynamic random access memory (DRAM). A number of system-on-chip (SoC) bound execution solutions have been proposed to thaw the cold boot attack by storing sensitive data only in CPU registers, CPU cache, or internal RAM. However, when the operating system, which is responsible for creating and maintaining the SoC-bound execution environment, is compromised, all the sensitive data is leaked. In this paper, we present the design and development of a cache-assisted secure execution framework, called CaSE, on ARM processors to defend against sophisticated attackers who can launch multi-vector attacks including software attacks and hardware memory disclosure attacks. CaSE utilizes TrustZone and the Cache-as-RAM technique to create a cache-based isolated execution environment, which can protect both code and data of security-sensitive applications against a compromised OS and the cold boot attack. To protect the sensitive code and data against the cold boot attack, applications are encrypted in memory and decrypted only within the processor for execution. The memory separation and the cache separation provided by TrustZone are used to protect the cached applications against a compromised OS. We implement a prototype of CaSE on the i.MX53, which runs an ARM Cortex-A8 processor. The experimental results show that CaSE incurs only a small performance impact when executing cryptographic algorithms including AES, RSA, and SHA1.

Break (40 Minutes)

10:20AM - 11:00AM

Session #2: Analyze me

11:00AM - 12:40PM

Back in Black: Towards Formal, Black Box Analysis of Sanitizers and Filters
George Argyros (Columbia University), Ioannis Stais (University of Athens), Angelos Keromytis (Columbia University), and Aggelos Kiayias (University Of Athens)
We tackle the problem of analyzing filter and sanitizer programs remotely, i.e. given only the ability to query the targeted program and observe the output. We focus on two important and widely used program classes: regular expression (RE) filters and string sanitizers. We demonstrate that existing tools from machine learning that are available for analyzing RE filters, namely automata learning algorithms, require a very large number of queries in order to infer real-life RE filters. Motivated by this, we develop the first algorithm that infers symbolic representations of automata in the standard membership/equivalence query model. We show that our algorithm provides a 15x improvement in the number of queries required to learn real-life XSS and SQL filters of popular web application firewall systems such as mod-security and PHPIDS. Active learning algorithms require the usage of an equivalence oracle, i.e. an oracle that tests the equivalence of a hypothesis with the target machine. We show that when the goal is to audit a target filter with respect to a set of attack strings from a context-free grammar, i.e. find an attack or infer that none exists, we can use the attack grammar to implement the equivalence oracle with a single query to the filter. Our construction finds on average 90% of the target filter states when no attack exists and is very effective in finding attacks when they are present. For the case of string sanitizers, we show that existing algorithms for inferring sanitizers modelled as Mealy Machines are not only inefficient, but lack the expressive power to be able to infer real-life sanitizers. We design two novel extensions to existing algorithms that allow one to infer sanitizers represented as single-valued transducers. Our algorithms are able to infer many common sanitizer functions such as HTML encoders and decoders. Furthermore, we design an algorithm to convert the inferred models into BEK programs, which allows for further applications such as cross-checking different sanitizer implementations and cross-compiling sanitizers into different languages supported by the BEK backend. We showcase the power of our techniques by utilizing our black-box inference algorithms to perform equivalence checking between different HTML encoders, including the encoders from Twitter, Facebook, and Microsoft Outlook email, for which no implementation is publicly available.
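
The attack-grammar-as-equivalence-oracle idea can be shown with a toy example. In this hedged Python sketch, the "remote filter", the learned hypothesis, and the attack grammar are all fabricated stand-ins; the point is only that real queries are spent just on payloads the model predicts will pass, and that failed queries double as counterexamples for refining the model.

```python
import re

def target_filter(s):
    """Remote black box under audit (toy filter: blocks anything script-ish)."""
    return re.search(r"script", s, re.I) is None    # True = input allowed

def hypothesis(s):
    """Learned model of the filter so far (imperfect: case-sensitive)."""
    return "script" not in s

attack_grammar = [                       # toy stand-in for a context-free
    "<script>alert(1)</script>",         # grammar of XSS payloads
    "<ScRiPt>alert(1)</ScRiPt>",
    "<img src=x onerror=alert(1)>",
]

def audit(filter_query, model, attacks):
    queries = 0
    for s in attacks:
        if model(s):                     # model predicts this payload passes,
            queries += 1                 # so spend one real query on it
            if filter_query(s):
                return s, queries        # confirmed bypass
            # otherwise s is a counterexample: refine the model, keep going
    return None, queries

print(audit(target_filter, hypothesis, attack_grammar))
# -> ('<img src=x onerror=alert(1)>', 2): a bypass found with only 2 queries
```
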
LAVA: Large-scale Automated Vulnerability Addition
Brendan Dolan-Gavitt (NYU), Patrick Hulin (MIT Lincoln Laboratory), Engin Kirda (Northeastern University), Tim Leek (MIT Lincoln Laboratory), Andrea Mambretti and Wil Robertson (Northeastern University), and Frederick Ulrich and Ryan Whelan (MIT Lincoln Laboratory)
Work on automating vulnerability discovery has long been hampered by a shortage of ground-truth corpora with which to evaluate tools and techniques. This lack of ground truth prevents authors and users of tools alike from being able to measure such fundamental quantities as miss and false alarm rates. In this paper, we present LAVA, a novel dynamic taint analysis-based technique for producing ground-truth corpora by quickly and automatically injecting large numbers of realistic bugs into program source code. Every LAVA bug is accompanied by an input that triggers it, whereas normal inputs are extremely unlikely to do so. These vulnerabilities are synthetic but, we argue, still realistic, in the sense that they are embedded deep within programs and are triggered by real inputs. Using LAVA, we have injected thousands of bugs into eight real-world programs, including bash, tshark, and the GNU coreutils. In a preliminary evaluation, we found that a prominent fuzzer and a symbolic execution-based bug finder were able to locate some but not all LAVA-injected bugs, and that interesting patterns and pathologies were already apparent in their performance. Our work forms the basis of an approach for generating large ground-truth vulnerability corpora on demand, enabling rigorous tool evaluation and providing a high-quality target for tool developers.
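
The shape of an injected bug is simple to show. The sketch below fakes the essence of the injection step on a fabricated C fragment: a tainted, attacker-controlled value (what the paper calls a DUA) guards a corruption at an attack point, so exactly one input fires the bug. The real system locates DUAs and attack points with whole-program dynamic taint analysis rather than string matching.

```python
MAGIC = 0x6C617661     # spells 'lava'; the one input value that fires the bug

source = """\
int n = read_int(f);      // DUA: attacker-controlled, dead, available
...
memcpy(dst, src, len);    // potential attack point
"""

injected = source.replace(
    "memcpy(dst, src, len);",
    f"if (n == 0x{MAGIC:x}) len += n;   // injected, input-triggerable bug\n"
    "memcpy(dst, src, len);")
print(injected)
```

Each such bug ships with its triggering input (here, any file where read_int yields the magic value), which is what makes the corpus usable as ground truth.
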
Prepose: Privacy, Security, and Reliability for Gesture-Based Programming
Lucas Silva Figueiredo (Federal University of Pernambuco) and Benjamin Livshits, David Molnar, and Margus Veanes (Microsoft Research)
With the rise of sensors such as the Microsoft Kinect, Leap Motion, and hand motion sensors in phones (e.g., the Samsung Galaxy S6), gesture-based interfaces have become practical. Unfortunately, today, to recognize such gestures, applications must have access to depth and video of the user, exposing sensitive data about the user and her environment. Besides these privacy concerns, there are also security threats in sensor-based applications, such as multiple applications registering the same gesture, leading to a conflict (akin to Clickjacking on the web). We address these security and privacy threats with Prepose, a novel domain-specific language (DSL) for easily building gesture recognizers, combined with a system architecture that protects privacy, security, and reliability even with untrusted applications. We run Prepose code in a trusted core, and only return specific gesture events to applications. Prepose is specifically designed to enable precise and sound static analysis using SMT solvers, allowing the system to check security and reliability properties before running a gesture recognizer. We demonstrate that Prepose is expressive by creating gestures in three representative domains: physical therapy, tai-chi, and ballet. We further show that runtime gesture matching in Prepose is fast, creating no noticeable lag, as measured on traces from Microsoft Kinect runs. To show that gesture checking at the time of submission to a gesture store is fast, we developed a total of four Z3-based static analyses to test for basic gesture safety and internal validity, to make sure the so-called protected gestures are not overridden, and to check inter-gesture conflicts. Our static analysis scales well in practice: safety checking takes under 0.5 seconds per gesture; average validity checking time is only 188 ms; and for 97% of the cases, conflict detection time is below 5 seconds, with only one query taking longer than 15 seconds.
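
Inter-gesture conflict checking reduces to a satisfiability question: can one pose satisfy two gestures' constraints at once? Prepose asks Z3; the hedged Python sketch below shows the same check in miniature using plain interval arithmetic over invented joint-angle constraints.

```python
def conflicts(g1, g2):
    """Two gestures conflict if some pose satisfies both constraint sets."""
    for joint in set(g1) | set(g2):
        lo1, hi1 = g1.get(joint, (0, 360))
        lo2, hi2 = g2.get(joint, (0, 360))
        if max(lo1, lo2) > min(hi1, hi2):   # empty intersection: unsatisfiable
            return False
    return True

raise_arm = {"left_elbow": (160, 180), "left_shoulder": (80, 100)}
wave      = {"left_elbow": (165, 180), "left_shoulder": (85, 95)}
arm_down  = {"left_elbow": (0, 20)}

print(conflicts(raise_arm, wave))      # True: overlapping poses may clash
print(conflicts(raise_arm, arm_down))  # False: mutually exclusive poses
```
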
SoK: (State of) The Art of War: Offensive Techniques in Binary Analysis
Yan Shoshitaishvili, Ruoyu Wang, Christopher Salls, Nick Stephens, Mario Polino, Andrew Dutcher, John Grosen, Siji Feng, Christophe Hauser, Christopher Kruegel, and Giovanni Vigna (UC Santa Barbara)
Finding and exploiting vulnerabilities in binary code is a challenging task. The lack of high-level, semantically rich information about data structures and control constructs makes the analysis of program properties harder to scale. However, the importance of binary analysis is on the rise. In many situations binary analysis is the only possible way to prove (or disprove) properties about the code that is actually executed. In this paper, we present a binary analysis framework that implements a number of analysis techniques that have been proposed in the past. We present a systematized implementation of these techniques, which allows other researchers to compose them and develop new approaches. In addition, the implementation of these techniques in a unifying framework allows for the direct comparison of these approaches and the identification of their advantages and disadvantages. The evaluation included in this paper is performed using a recent dataset created by DARPA for evaluating the effectiveness of binary vulnerability analysis techniques. Our framework has been open-sourced and is available to the security community.
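
The open-sourced framework described here was released as the angr project. As a hedged usage sketch against a recent angr release (the binary name, addresses, and stdin size are invented, and the API has evolved since publication), symbolic exploration to a target block looks roughly like this:

```python
import angr

proj = angr.Project("./challenge_bin", auto_load_libs=False)
state = proj.factory.entry_state(stdin=angr.SimFile("stdin", size=32))
simgr = proj.factory.simulation_manager(state)

# Symbolically explore paths until one reaches the "success" block,
# discarding paths that hit the failure block.
simgr.explore(find=0x400ABC, avoid=0x400DEF)

if simgr.found:
    # Ask the constraint solver for a concrete stdin driving that path.
    print("input reaching target:", simgr.found[0].posix.dumps(0))
```
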
Helping Johnny to Analyze Malware: A Usability-Optimized Decompiler and Malware Analysis User Study
Khaled Yakdan and Sergej Dechand (University of Bonn), Elmar Gerhards-Padilla (Fraunhofer FKIE), and Matthew Smith (University of Bonn)
Analysis of malicious software is an essential task in computer security; it provides the necessary understanding to devise effective countermeasures and mitigation strategies. The level of sophistication and complexity of current malware continues to evolve significantly, as the recently discovered "Regin" malware family strikingly illustrates. This complexity makes the already tedious and time-consuming task of manual malware reverse engineering even more difficult and challenging. Decompilation can accelerate this process by enabling analysts to reason about a high-level, more abstract form of binary code. While significant advances have been made, state-of-the-art decompilers still produce very complex and unreadable code, and malware analysts still frequently go back to analyzing the assembly code. In this paper, we present several semantics-preserving code transformations to make the decompiled code more readable, thus helping malware analysts understand and combat malware. We have implemented our optimizations as extensions to the academic decompiler DREAM. To evaluate our approach, we conducted the first user study to measure the quality of decompilers for malware analysis. Our study includes 6 analysis tasks based on real malware samples we obtained from independent malware experts. We evaluate three decompilers: the leading industry decompiler Hex-Rays, the state-of-the-art academic decompiler DREAM, and our usability-optimized decompiler DREAM++. The results show that our readability improvements had a significant effect on how well our participants could analyze the malware samples. DREAM++ outperforms both Hex-Rays and DREAM significantly. Using DREAM++, participants solved 3x more tasks than when using Hex-Rays and 2x more tasks than when using DREAM.

Lunch

12:40PM - 01:40PM

Session #3: Oblivious and snarky

01:40PM - 03:20PM

A Practical Oblivious Map Data Structure with Secure Deletion and History Independence
Daniel S. Roche, Adam J. Aviv, and Seung Geol Choi (United States Naval Academy)
We present a new oblivious RAM that supports variable-sized storage blocks (vORAM), which is the first ORAM to allow varying block sizes without trivial padding. We also present a new history-independent data structure (a HIRB tree) that can be stored within a vORAM. Together, this construction provides an efficient and practical oblivious data structure (ODS) for a key/value map, and goes further to provide an additional privacy guarantee as compared to prior ODS maps: even upon client compromise, deleted data and the history of old operations remain hidden to the attacker. We implement and measure the performance of our system using Amazon Web Services, and the single-operation time for a realistic database (up to 256K entries) is less than 1 second. This represents a 100x speed-up compared to the current best oblivious map data structure (which provides neither secure deletion nor history independence) by Wang et al. (CCS 14).
TaoStore: Overcoming Asynchronicity in Oblivious Data Storage
Cetin Sahin, Victor Zakhary, Amr El Abbadi, Huijia (Rachel) Lin, and Stefano Tessaro (University of California, Santa Barbara)
We consider oblivious storage systems hiding both the contents of the data as well as access patterns from an untrusted cloud provider. We target a scenario where multiple users from a trusted group (e.g., corporate employees) asynchronously access and edit potentially overlapping data sets through a trusted proxy mediating client-cloud communication. The main contribution of our paper is twofold. Foremost, we initiate the first formal study of asynchronicity in oblivious storage systems. We provide security definitions for scenarios where both client requests and network communication are asynchronous (and in fact, even adversarially scheduled). While security issues in ObliviStore (Stefanov and Shi, S&P 2013) have recently been surfaced, our treatment shows that CURIOUS (Bindschaedler et al., CCS 2015), proposed with the exact goal of preventing these attacks, is also insecure under asynchronous scheduling of network communication. Second, we develop and evaluate a new oblivious storage system, called Tree-based Asynchronous Oblivious Store, or TaoStore for short, which we prove secure in asynchronous environments. TaoStore is built on top of a new tree-based ORAM scheme that processes client requests concurrently and asynchronously in a non-blocking fashion. This results in a substantial gain in throughput, simplicity, and flexibility over previous systems.
Revisiting Square Root ORAM: Efficient Random Access in Multi-Party Computation
Samee Zahur (University of Virginia), Xiao Wang (University of Maryland), Mariana Raykova (SRI International), Adria Gascon (University of Edinburgh), Jack Doerner and David Evans (University of Virginia), and Jonathan Katz (University of Maryland)
Hiding memory access patterns is required for secure computation, but remains prohibitively expensive for many interesting applications. Prior work has either developed custom algorithms that minimize the need for data-dependent memory access, or proposed the use of Oblivious RAM (ORAM) to provide a general-purpose solution. However, most ORAMs are designed for client-server scenarios, and provide only asymptotic benefits in secure computation. Even the best prior schemes show concrete benefits over naïve linear scan only for array sizes greater than 100. This immediately implies each ORAM access is 100 times slower than a single access at a known location. Even then, prior evaluations ignore the substantial initialization cost of existing schemes. We show how the classical square-root ORAM of Goldreich and Ostrovsky can be modified to overcome these problems, even though it is asymptotically worse than the best known schemes. Specifically, we show a design that has over 100x lower initialization cost, and provides benefits over linear scan for just 8 blocks of data. For all benchmark applications we tried, including Gale-Shapley stable matching and the scrypt key derivation function, our scheme outperforms alternate approaches across a wide range of parameters, often by several orders of magnitude.
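
For readers unfamiliar with the baseline being revisited, here is a client-side caricature of the classical square-root ORAM access pattern in Python. The oblivious permutation and all secure-computation machinery from the paper are deliberately abstracted away; only the real-versus-dummy touch pattern and the sqrt(n)-sized shelter are shown.

```python
import math

class SqrtORAM:
    """Client-side access pattern of square-root ORAM (storage abstracted)."""

    def __init__(self, data):
        self.n = len(data)
        self.t = max(1, math.isqrt(self.n))     # shelter size = sqrt(n)
        self.store = dict(enumerate(data))      # permuted main store (abstracted)
        self.shelter = {}                       # scanned in full on every access
        self.dummies = iter(range(self.n, self.n + self.t))
        self.count = 0

    def access(self, i):
        self.count += 1
        if i in self.shelter:                   # found while scanning shelter:
            next(self.dummies)                  # touch a fresh dummy position
        else:
            self.shelter[i] = self.store[i]     # touch the real position once
        value = self.shelter[i]
        if self.count == self.t:                # epoch over: write back and
            self.store.update(self.shelter)     # re-permute (a real scheme does
            self.shelter.clear()                # this obliviously)
            self.dummies = iter(range(self.n, self.n + self.t))
            self.count = 0
        return value

oram = SqrtORAM(list(range(16)))
print([oram.access(i) for i in [3, 3, 3, 7]])   # [3, 3, 3, 7]
```

Every access touches exactly one store position, real or dummy, so the server-visible sequence is independent of which logical indices were requested within an epoch.
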
Cinderella: Turning Shabby X.509 Certificates into Elegant Anonymous Credentials with the Magic of Verifiable Computation
Antoine Delignat-Lavaud, Cedric Fournet, Markulf Kohlweiss, and Bryan Parno (Microsoft Research)
Despite advances in security engineering, authentication in applications such as email and the Web still primarily relies on the X.509 public key infrastructure introduced in 1988. This PKI has many issues but is nearly impossible to replace. Leveraging recent progress in verifiable computation, we propose a novel use of existing X.509 certificates and infrastructure. Instead of receiving and validating chains of certificates, our applications receive and verify proofs of their knowledge, their validity, and their compliance with application policies. This yields smaller messages (by omitting certificates), stronger privacy (by hiding certificate contents), and stronger integrity (by embedding additional checks, e.g. for revocation). X.509 certificate validation is famously complex and error-prone, as it involves parsing ASN.1 data structures and interpreting them against diverse application policies. To manage this diversity, we propose a new format for writing application policies by composing X.509 templates, and we provide a template compiler that generates C code for validating certificates within a given policy. We then use the Geppetto cryptographic compiler to produce a zero-knowledge verifiable computation scheme for that policy. To optimize the resulting scheme, we develop new C libraries for RSA-PKCS#1 signatures and ASN.1 parsing, carefully tailored for cryptographic verifiability. We evaluate our approach by providing two real-world applications of verifiable computation: a drop-in replacement for certificates within TLS, and access control for the Helios voting protocol. For TLS, we support fine-grained validation policies, with revocation checking and selective disclosure of certificate contents, effectively turning X.509 certificates into anonymous credentials. For Helios, we obtain additional privacy and verifiability guarantees for voters equipped with X.509 certificates, such as those readily available from some national ID cards.
PhotoProof: Cryptographic Image Authentication for Any Set of Permissible Transformations
Assa Naveh and Eran Tromer (Tel Aviv University)
Since the invention of the camera, photos have been used to document reality and to supply proof of events. Yet today it is easy to fabricate realistic images depicting events that never happened. Thus, dozens of papers strive to develop methods for authenticating images. While some commercial cameras already attach digital signatures to photographs, the images often undergo subsequent transformations (cropping, rotation, compression, and so forth), which do not detract from their authenticity, but do change the image data and thus invalidate the signature. Existing methods address this by signing derived image properties that are invariant to some set of transformations. However, these are limited in the supported transformations, and often offer weak security guarantees. We present PhotoProof, a novel approach to image authentication based on cryptographic proofs. It can be configured, according to application requirements, to allow any permissible set of (efficiently computable) transformations. Starting with a signed image, our scheme attaches, to each legitimately derived image, a succinct proof of computational integrity attesting that the transformation was permissible. Anyone can verify these proofs, and generate updated proofs when applying further permissible transformations. Moreover, the proofs are zero-knowledge so that, for example, an authenticated cropped image reveals nothing about the cropped-out regions. PhotoProof is based on Proof-Carrying Data (PCD), a cryptographic primitive for secure execution of distributed computations. We describe the new construction, prove its security, and demonstrate a working prototype supporting a variety of permissible transformations.

Break (30 Minutes)

03:20PM - 03:50PM

Session #4: Call me on usable security

03:50PM - 05:30PM

I Think They're Trying to Tell Me Something: Advice Sources and Selection for Digital Security
Elissa M. Redmiles, Amelia R. Malone, and Michelle L. Mazurek (University of Maryland)
Users receive a multitude of digital- and physical-security advice every day. Indeed, if we implemented all the security advice we received, we would never leave our houses or use the Internet. Instead, users selectively choose some advice to accept and some (most) to reject; however, it is unclear whether they are effectively prioritizing what is most important or most useful. If we can understand from where and why users take security advice, we can develop more effective security interventions. As a first step, we conducted 25 semi-structured interviews of a demographically broad pool of users. These interviews resulted in several interesting findings: (1) participants evaluated digital-security advice based on the trustworthiness of the advice source, but evaluated physical-security advice based on their intuitive assessment of the advice content; (2) negative security events portrayed in well-crafted fictional narratives with relatable characters (such as those shown in TV or movies) may be effective teaching tools for both digital- and physical-security behaviors; and (3) participants rejected advice for many reasons, including finding that the advice contains too much marketing material or threatens their privacy.
You Get Where You're Looking For: The Impact Of Information Sources On Code Security
Yasemin Acar, Michael Backes, and Sascha Fahl (CISPA, Saarland University), Doowon Kim, Michelle L. Mazurek (University of Maryland, College Park), and Christian Stransky (CISPA, Saarland University)
Vulnerabilities in Android code - including but not limited to insecure data storage, unprotected inter-component communication, broken TLS implementations, and violations of least privilege - have enabled real-world privacy leaks and motivated research cataloguing their prevalence and impact. Researchers have speculated that appification promotes security problems, as it increasingly allows inexperienced laymen to develop complex and sensitive apps. Anecdotally, Internet resources such as Stack Overflow are blamed for promoting insecure solutions that are naively copy-pasted by inexperienced developers. In this paper, we systematically analyze for the first time how the use of information resources impacts code security. We first surveyed 295 app developers who have published in the Google Play market concerning how they use resources to solve security-related problems. Based on the survey results, we conducted a lab study with 54 Android developers (students and professionals), in which participants wrote security- and privacy-relevant code under time constraints. The participants were assigned to one of four conditions: free choice of resources, Stack Overflow only, official Android documentation only, or books only. Those participants who were allowed to use only Stack Overflow produced significantly less secure code than those using the official Android documentation or books, while participants using the official Android documentation produced significantly less functional code than those using Stack Overflow. To assess the quality of Stack Overflow as a resource, we surveyed the 139 threads our participants accessed during the study, finding that only 25% of them were helpful in solving the assigned tasks and only 17% of them contained secure code snippets. In order to obtain ground truth concerning the prevalence of the secure and insecure code our participants wrote in the lab study, we statically analyzed a random sample of 200,000 apps from Google Play, finding that 93.6% of the apps used at least one of the API calls our participants used during our study. We also found that many of the security errors made by our participants also appear in the wild, possibly also originating in the use of Stack Overflow to solve programming problems. Taken together, our results confirm that API documentation is secure but hard to use, while informal documentation such as Stack Overflow is more accessible but often leads to insecurity. Given time constraints and economic pressures, we can expect that Android developers will continue to choose those resources that are easiest to use; therefore, our results firmly establish the need for secure-but-usable documentation.
Users Really Do Plug in USB Drives They Find
Matthew Tischer (University of Illinois, Urbana-Champaign), Zakir Durumeric (University of Michigan), Sam Foster, Sunny Duan, and Alec Mori (University of Illinois, Urbana-Champaign), Elie Bursztein (Google), and Michael Bailey (University of Illinois, Urbana-Champaign)
We investigate the anecdotal belief that end users will pick up and plug in USB flash drives they find by completing a controlled experiment in which we drop 297 flash drives on a large university campus. We find that the attack is effective with an estimated success rate of 45-98% and expeditious, with the first drive connected in less than six minutes. We analyze the types of drives users connected and survey those users to understand their motivation and security profile. We find that a drive's appearance does not increase attack success. Instead, users connect the drive with the altruistic intention of finding the owner. These individuals are not technically incompetent, but are rather typical community members who appear to take more recreational risks than their peers. We conclude with lessons learned and discussion of how social engineering attacks -- while less technical -- continue to be an effective attack vector that our community has yet to successfully address.
SoK: Everyone Hates Robocalls: A Survey of Techniques against Telephone Spam
Huahong Tu, Adam Doupé, Ziming Zhao, and Gail-Joon Ahn (Arizona State University)
Telephone spam costs United States consumers $8.6 billion annually. In 2014, the Federal Trade Commission received over 22 million complaints about illegal and unwanted calls. Telephone spammers today are leveraging recent technical advances in the telephony ecosystem to distribute massive numbers of automated spam calls known as robocalls. Given that anti-spam techniques and approaches are effective in the email domain, the question we address is: what are the effective defenses against spam calls? In this paper, we first describe the telephone spam ecosystem, specifically focusing on the differences between email and telephone spam. Then, we survey the existing telephone spam solutions and, by analyzing the failings of the current techniques, derive evaluation criteria that are critical to an acceptable solution. We believe that this work will help guide the development of effective telephone spam defenses, as well as provide a framework to evaluate future defenses.
Sending out an SMS: Characterizing the Security of the SMS Ecosystem with Public Gateways
Bradley Reaves, Nolen Scaife, Dave Tian, Logan Blue, Patrick Traynor, and Kevin Butler (University of Florida)
Text messages sent via the Short Message Service (SMS) have revolutionized interpersonal communication. Recent years have also seen this service become a critical component of the security infrastructure, assisting with tasks including identity verification and second-factor authentication. At the same time, this messaging infrastructure has become dramatically more open and connected to public networks than ever before. However, the implications of this openness, the security practices of benign services, and the malicious misuse of this ecosystem are not well understood. In this paper, we provide the first longitudinal study to answer these questions, analyzing nearly 400,000 text messages sent to public online SMS gateways over the course of 14 months. From this data, we are able to identify not only a range of services sending extremely sensitive plaintext data and implementing low entropy solutions for one-use codes, but also offer insights into the prevalence of SMS spam and behaviors indicating that public gateways are primarily used for evading account creation policies that require verified phone numbers. This latter finding has significant implications for research combatting phone-verified account fraud and demonstrates that such evasion will continue to be difficult to detect and prevent.
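
One of the measurements is easy to approximate: given a corpus of one-time codes scraped from gateway traffic, per-character Shannon entropy flags low-entropy code generators. The codes in this Python sketch are fabricated stand-ins.

```python
import math
from collections import Counter

def shannon_entropy_per_char(codes):
    """Empirical per-character entropy of a corpus of codes, in bits."""
    counts = Counter("".join(codes))
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

strong = ["482019", "730164", "295873", "016492"]   # roughly uniform digits
weak   = ["111213", "111214", "111215", "111216"]   # counter-like codes

print(f"{shannon_entropy_per_char(strong):.2f} bits/char")  # close to log2(10)
print(f"{shannon_entropy_per_char(weak):.2f} bits/char")    # far below it
```
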

Poster Reception

05:30PM - 07:30PM

May 24


Registration

07:00AM - 05:00PM

Breakfast

07:30AM - 08:30AM

Awards

08:30AM - 08:40AM

Session #5: Phoning it in

08:40AM - 10:20AM

Following Devil's Footprints: Cross-Platform Analysis of Potentially Harmful Libraries on Android and iOS
Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences), Xueqiang Wang (Indiana University), Yi Chen (Institute of Information Engineering, Chinese Academy of Sciences), Peng Wang, Yeonjoon Lee, and XiaoFeng Wang (Indiana University), Bin Ma and Aohui Wang (Institute of Information Engineering, Chinese Academy of Sciences), and Yingjun Zhang and Wei Zou (Institute of Software, Chinese Academy of Sciences)
It has recently been reported that legitimate libraries are repackaged for propagating malware. An in-depth analysis of such potentially-harmful libraries (PhaLibs), however, has never been done before, due to the challenges in identifying those libraries whose code can be unavailable online (e.g., removed from public repositories or spreading underground). Particularly, for an iOS app, the library it integrates cannot be trivially recovered from its binary code and cannot be analyzed by any publicly available anti-virus (AV) systems. In this paper, we report the first systematic study on PhaLibs across Android and iOS, based upon a key observation that many iOS libraries have Android versions that can potentially be used to understand their behaviors and the relations between the libraries on both sides. To this end, we utilize a methodology that first clusters similar packages from a large number of popular Android apps to identify libraries, and strategically analyze them using AV systems to find PhaLibs. Those libraries are then used to search for their iOS counterparts within Apple apps based upon invariant features shared across platforms. On each discovered iOS PhaLib, our approach further identifies its suspicious behaviors that also appear on its Android version and uses the AV system on the Android side to confirm that it is indeed potentially harmful. Running our methodology on 1.3 million Android apps and 140,000 popular iOS apps downloaded from 8 markets, we discovered 117 PhaLibs with 1008 variations on Android and 23 PhaLibs with 706 variations on iOS. Altogether, the Android PhaLibs are found to infect 6.84% of Google Play apps, and the iOS libraries are embedded within thousands of iOS apps, 2.94% among those from the official Apple App Store. Looking into the behaviors of the PhaLibs, not only do we discover the recently reported suspicious iOS libraries such as mobiSage, but also their Android counterparts and 6 other back-door libraries never known before. Those libraries are found to contain risky behaviors such as reading from their host apps' keychain, stealthily recording audio and video, and even attempting to make phone calls. Our research shows that most Android-side harmful behaviors have been preserved on their corresponding iOS libraries, and further identifies new evidence about library repackaging for harmful code propagation on both sides.
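
The clustering step rests on a simple observation: a package that recurs across many unrelated apps with a near-identical code signature is almost certainly a shared library. The Python sketch below fakes that step using Jaccard similarity over invented method-signature sets and an invented threshold; real feature extraction from APKs is abstracted away.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

# (app, package) -> set of extracted method signatures (fabricated)
packages = {
    ("app1", "com/ads/sdk"): {"init()", "load()", "track()"},
    ("app2", "com/ads/sdk"): {"init()", "load()", "track()", "show()"},
    ("app3", "com/x/utils"): {"parse()", "log()"},
}

clusters = []
for key, sig in packages.items():
    for cluster in clusters:
        if jaccard(sig, cluster["sig"]) > 0.7:   # threshold is invented
            cluster["members"].append(key)
            break
    else:
        clusters.append({"sig": sig, "members": [key]})

# packages recurring across apps are library candidates for AV analysis
libraries = [c for c in clusters if len(c["members"]) >= 2]
print(libraries[0]["members"])   # com/ads/sdk appears in two distinct apps
```
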
TriggerScope: Towards Detecting Logic Bombs in Android Applications
Yanick Fratantonio and Antonio Bianchi (UC Santa Barbara), William Robertson (Northeastern University), Engin Kirda (Northeastern University), Christopher Kruegel (UC Santa Barbara), and Giovanni Vigna (UC Santa Barbara)
Android is the most popular mobile platform today, and it is also the mobile operating system that is most heavily targeted by malware. Existing static analyses are effective in detecting the presence of most malicious code and unwanted information flows. However, certain types of malice are very difficult to capture explicitly by modeling permission sets, suspicious API calls, or unwanted information flows. One important type of such malice is malicious application logic, where a program (often subtly) modifies its outputs or performs actions that violate the expectations of the user. Malicious application logic is very hard to identify without a specification of the "normal," expected functionality of the application. We refer to malicious application logic that is executed, or triggered, only under certain (often narrow) circumstances as a logic bomb. This is a powerful mechanism that is commonly employed by targeted malware, often used as part of APTs and state-sponsored attacks: in fact, in this scenario, the malware is designed to target specific victims and to only activate under certain circumstances. In this paper, we make a first step towards detecting logic bombs. In particular, we propose trigger analysis, a new static analysis technique that seeks to automatically identify triggers in Android applications. Our analysis combines symbolic execution, path predicate reconstruction and minimization, and inter-procedural control-dependency analysis to enable the precise detection and characterization of triggers, and it overcomes several limitations of existing approaches. We implemented a prototype of our analysis, called TriggerScope, and we evaluated it over a large corpus of 9,582 benign apps from the Google Play Store and a set of trigger-based malware, including the recently discovered HackingTeam RCSAndroid advanced malware. Our system is capable of automatically identifying several interesting time-, location-, and SMS-related triggers, is affected by a low false positive rate (0.38%), and achieves a 100% detection rate on the malware set. We also show how existing approaches, specifically when tasked to detect logic bombs, are affected by either a very high false positive rate or false negative rate. Finally, we discuss the logic bombs identified by our analysis, including two previously unknown backdoors in benign apps.
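
A toy version of the predicate-hunting idea fits in a page. The sketch below scans Python-syntax pseudocode of app logic for branch conditions that test sensitive sources (every function name here is invented, and a real analysis would also require a comparison against a hard-coded constant); TriggerScope itself works on Dalvik bytecode with symbolic execution. Requires Python 3.9+ for ast.unparse.

```python
import ast

SENSITIVE_SOURCES = {"get_time", "get_gps", "get_sms_body"}

code = """
if get_gps() == (40.7128, -74.0060):
    wipe_device()
if user_clicked():
    show_menu()
"""

class TriggerFinder(ast.NodeVisitor):
    def visit_If(self, node):
        # walk the branch predicate looking for calls to sensitive sources
        for sub in ast.walk(node.test):
            if (isinstance(sub, ast.Call)
                    and isinstance(sub.func, ast.Name)
                    and sub.func.id in SENSITIVE_SOURCES):
                print("suspicious trigger:", ast.unparse(node.test))
        self.generic_visit(node)

TriggerFinder().visit(ast.parse(code))
# -> suspicious trigger: get_gps() == (40.7128, -74.006)
```
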
Inferring User Routes and Locations using Zero-Permission Mobile Sensors
Sashank Narain, Triet Vo Huu, Kenneth Block, and Guevara Noubir (Northeastern University)
Leakage of user location and traffic patterns is a serious security threat with significant implications on privacy, as reported by recent surveys and identified by the US Congress Location Privacy Protection Act of 2014. While mobile phones can restrict explicit access to location information to applications authorized by the user, they are ill-equipped to protect against side-channel attacks. In this paper, we show that a zero-permissions Android app can infer vehicular users' location and traveled routes, with high accuracy and without the users' knowledge, using gyroscope, accelerometer, and magnetometer information. We modeled this problem as maximum likelihood route identification on a graph. The graph is generated from the OpenStreetMap publicly available database of roads. Our route identification algorithms output both a ranked list of potential routes as well as a ranked list of route clusters. Through extensive simulations over 11 cities, we show that, for most cities, it is possible with probability higher than 50% to output a short list of 10 routes containing the traveled route. In real driving experiments (over 980 km) in the cities of Boston (resp. Waltham), Massachusetts, we report a probability of 30% (resp. 60%) of inferring a list of 10 routes containing the true route.
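
The core idea in miniature: integrated gyroscope readings yield a sequence of turn angles, each candidate route from the map implies its own turn-angle sequence, and routes are ranked by likelihood. The Python sketch below uses a crude Gaussian-style score with invented angles and noise parameter; the paper's actual model is considerably richer (curvature, timing, magnetometer headings).

```python
import math

def route_score(sensed_turns, route_turns, sigma=15.0):
    """Likelihood-style score: product of per-turn Gaussian factors."""
    if len(sensed_turns) != len(route_turns):
        return 0.0
    return math.prod(
        math.exp(-((s - r) ** 2) / (2 * sigma ** 2))
        for s, r in zip(sensed_turns, route_turns))

sensed = [88, -85, 92]                  # degrees, from gyroscope integration
candidates = {                          # turn sequences derived from the map
    "route A": [90, -90, 90],
    "route B": [90, 90, -90],
    "route C": [45, -45, 90],
}
ranking = sorted(candidates, key=lambda r: -route_score(sensed, candidates[r]))
print(ranking)    # ['route A', ...]: the traveled route scores highest
```
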
No Pardon for the Interruption: New Inference Attacks on Android Through Interrupt Timing Analysis
Wenrui Diao and Xiangyu Liu (The Chinese University of Hong Kong), Zhou Li (IEEE Member), and Kehuan Zhang (The Chinese University of Hong Kong)
Many new specialized hardware components have been integrated into Android smartphones to improve mobility and usability, such as the touchscreen, Bluetooth module, and NFC controller. At the system level, the kernel of Android is built on Linux and inherits its device management mechanisms. However, the security implications surfaced from the integration of new hardware components and the tailored Linux kernel are not fully understood. In this paper, we make the first attempt to evaluate such implications. As a result, we identify a critical information leakage channel from the interrupt handling mechanism, which can be exploited to launch inference attacks without any permission. On Android, all reported interrupts are counted by the Linux kernel, and the statistical information is logged in a system file, /proc/interrupts, which is public to any process. Such statistical information reveals the running status of all integrated devices, and could be exploited by attackers to infer sensitive information passing through them. To assess this new threat, we propose a general attack approach, interrupt timing analysis, and apply it to interrupt logs. As showcases, we present two concrete inference attacks against a user's unlock pattern and foreground app status, respectively. By analyzing the interrupt time series produced by the touchscreen controller, an attacker's chance of cracking a user's unlock pattern is increased substantially. The interrupt time series produced by the Display Sub-System reveals unique UI refreshing patterns and could be leveraged as fingerprints to identify the app running in the foreground. Such information can serve as the stepping stone for subsequent phishing attacks. The experiment results suggest our inference attacks are highly effective, and the risks should be mitigated immediately.
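
The raw side channel is reproducible on any stock Linux system in a few lines of Python; which IRQ lines are interesting (e.g., the touchscreen controller) and a useful sampling rate are platform-specific, and the values below are arbitrary.

```python
import time

def read_counts():
    """Per-IRQ interrupt totals from the world-readable /proc/interrupts."""
    counts = {}
    with open("/proc/interrupts") as f:
        for line in f:
            parts = line.split()
            if parts and parts[0].rstrip(":").isdigit():   # numeric IRQs only
                irq = parts[0].rstrip(":")
                # sum per-CPU columns; trailing fields are the device name
                counts[irq] = sum(int(p) for p in parts[1:] if p.isdigit())
    return counts

prev = read_counts()
while True:
    time.sleep(0.05)                    # ~20 Hz sampling, chosen arbitrarily
    cur = read_counts()
    deltas = {k: cur[k] - prev.get(k, 0)
              for k in cur if cur[k] != prev.get(k, 0)}
    if deltas:
        print(time.time(), deltas)      # the time series fed to the analysis
    prev = cur
```
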
SoK: Lessons Learned From Android Security Research For Appified Software Platforms
Yasemin Acar (CISPA, Saarland University), Michael Backes (CISPA, Saarland University & MPI-SWS), Sven Bugiel and Sascha Fahl (CISPA, Saarland University), Patrick McDaniel (Pennsylvania State University), and Matthew Smith (University of Bonn)
Android security and privacy research has boomed in recent years, far outstripping investigations of other appified platforms. However, despite this attention, research efforts are fragmented and lack any coherent evaluation framework. We present a systematization of Android security and privacy research with a focus on the appification of software systems. To put Android security and privacy research into context, we compare the concept of appification with conventional operating system and software ecosystems. While appification has improved some issues (e.g., market access and usability), it has also introduced a whole range of new problems and aggravated some problems of the old ecosystems (e.g., coarse and unclear policy, poor software development practices). Some of our key findings are that contemporary research frequently stays on the beaten path instead of following unconventional and often promising new routes. Many security and privacy proposals focus entirely on the Android OS and do not take advantage of the unique features and actors of an appified ecosystem, which could be used to roll out new security mechanisms less disruptively. Our work highlights the areas that have received the largest shares of attention, which attacker models were addressed, who is the target, and who has the capabilities and incentives to implement the countermeasures. We conclude with lessons learned from comparing the appified with the old world, shedding light on missed opportunities and proposing directions for future research.

Break (40 Minutes)

10:20AM - 11:00AM

Session #6: Key exchange and certificates

11:00AM - 12:40PM

Key Confirmation in Key Exchange: A Formal Treatment and Implications for TLS 1.3
Marc Fischlin and Felix Günther (Technische Universität Darmstadt), Benedikt Schmidt (IMDEA Software Institute), and Bogdan Warinschi (University of Bristol)
Key exchange protocols allow two parties at remote locations to compute a shared secret key. The common security notions for such protocols are secrecy and authenticity, but many widely deployed protocols and standards name another property, called key confirmation, as a major design goal. This property should guarantee that a party in the key exchange protocol is assured that another party also holds the shared key. Remarkably, while secrecy and authenticity definitions have been studied extensively, key confirmation has been treated rather informally so far. In this work, we provide the first rigorous formalization of key confirmation, leveraging the game-based security framework well-established for secrecy and authentication notions for key exchange. We define two flavors of key confirmation, full and almost-full key confirmation, taking into account the inevitable asymmetry of the roles of the parties with respect to the transmission of the final protocol message. These notions capture the strongest level of key confirmation reasonably expectable for the two communication partners of the key exchange. We demonstrate the benefits of having precise security definitions for key confirmation by applying them to the next version of the Transport Layer Security (TLS) protocol, version 1.3, currently developed by the Internet Engineering Task Force (IETF). Our analysis shows that the full handshake as specified in the TLS 1.3 draft draft-ietf-tls-tls13-10 achieves desirable notions of key confirmation for both clients and servers. While key confirmation is generally understood, and in the TLS 1.3 draft described, as being obtained from the Finished messages exchanged, interestingly we can show that the full TLS 1.3 handshake provides key confirmation even without those messages, shedding formal light on the security properties different handshake messages entail. We further demonstrate the usefulness of rigorous definitions by revisiting a folklore approach to establish key confirmation (as discussed for example in SP 800-56A of NIST). We provide a formalization as a generic protocol transformation and show that the resulting protocols enjoy strong key confirmation guarantees, thus confirming its beneficial use in both theoretical and practical protocol designs.
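
Concretely, the mechanism being formalized is a MAC over the handshake transcript under (a key derived from) the session key, which is what TLS Finished messages compute. A minimal Python sketch with fabricated keys and transcript:

```python
import hashlib, hmac

def finished(key, transcript, role):
    """MAC over the transcript proves possession of the session key."""
    return hmac.new(key, role + transcript, hashlib.sha256).digest()

k_client = k_server = b"\x07" * 32          # session key both sides derived
transcript = b"ClientHello||ServerHello||..."

client_fin = finished(k_client, transcript, b"client")

# The server recomputes with its own copy of the key and compares:
assert hmac.compare_digest(
    client_fin, finished(k_server, transcript, b"client"))
# The verifier now knows its peer holds the key; the party that sends the
# final message can only know its peer is *able* to compute the key -- the
# full versus almost-full asymmetry the paper formalizes.
```
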
Automated Analysis and Verification of TLS 1.3: 0-RTT, Resumption and Delayed Authentication
Cas Cremers and Marko Horvat (University of Oxford) and Sam Scott and Thyla van der Merwe (Royal Holloway, University of London)
After a development process of many months, the TLS 1.3 specification is nearly complete. To avoid repeating past mistakes, this crucial security protocol must be thoroughly scrutinised prior to deployment. In this work we model and analyse revision 10 of the TLS 1.3 specification using the Tamarin prover, a tool for the automated analysis of security protocols. We specify and analyse the interaction of various handshake modes for an unbounded number of concurrent TLS sessions. We show that revision 10 meets the goals of authenticated key exchange in both the unilateral and mutual authentication cases. We extend our model to incorporate the desired delayed client authentication mechanism, a feature that is likely to be included in the next revision of the specification, and uncover a potential attack in which an adversary is able to successfully impersonate a client during a PSK-resumption handshake. This observation was reported to, and confirmed by, the IETF TLS Working Group. Our work not only provides the first supporting evidence for the security of several complex protocol mode interactions in TLS 1.3, but also shows the strict necessity of recent suggestions to include more information in the protocol's signature contents.
Multiple Handshakes Security of TLS 1.3 Candidates
Xinyu Li (Trusted Computing and Information Assurance Laboratory, SKLCS, Institute of Software, Chinese Academy of Sciences, Beijing, China and CAS Key Laboratory of Electromagnetic Space Information, University of Science and Technology of China, Hefei, China), Jing Xu, Zhenfeng Zhang, and Dengguo Feng (Trusted Computing and Information Assurance Laboratory, Institute of Software, Chinese Academy of Sciences), and Honggang Hu (Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, University of Science and Technology of China)
The Transport Layer Security (TLS) protocol is by far the most widely deployed protocol for securing communications, and the Internet Engineering Task Force (IETF) is currently developing TLS 1.3 as the next-generation TLS protocol. The TLS standard features multiple modes of handshake protocols and supports combinations of successive TLS handshakes running over multiple connections. Although each handshake mode is now well understood in isolation, their composition in TLS 1.2 remains problematic, and yet it is critical to obtain practical security guarantees for TLS. In this paper, we present the first formal treatment of the multiple handshakes protocols of TLS 1.3 candidates. First, we introduce a multi-level & stage security model, an adaptation of the Bellare-Rogaway authenticated key exchange model, covering all kinds of compositional interactions between different TLS handshake modes and providing reasonably strong security guarantees. Next, we prove that candidate handshakes of the TLS 1.3 draft meet our strong notion of multiple handshakes security. Our results confirm the soundness of the TLS 1.3 security protection design. Such a multi-level & stage approach is convenient for analyzing the compositional design of the candidates with different session modes, as they establish dependencies of multiple sessions. We also identify the triple handshake attack of Bhargavan et al. on TLS 1.2 within our multiple handshakes security model. We show generically that the proposed fixes (RFC 7627) for TLS 1.2 offer good protection against multiple handshakes attacks.
Downgrade Resilience in Key-Exchange Protocols
Karthikeyan Bhargavan (Inria Paris-Rocquencourt), Christina Brzuska (Hamburg University of Technology), Cédric Fournet (Microsoft Research), Matthew Green (Johns Hopkins University), and Markulf Kohlweiss and Santiago Zanella-Béguelin (Microsoft Research)
Key-exchange protocols such as TLS, SSH, IPsec, and ZRTP are highly configurable, with typical deployments supporting multiple protocol versions, cryptographic algorithms and parameters. In the first messages of the protocol, the peers negotiate one specific combination, the protocol mode, based on their local configurations. With few notable exceptions, most cryptographic analyses of configurable protocols consider a single mode at a time. In contrast, downgrade attacks, where a network adversary forces peers to use a mode weaker than the one they would normally negotiate, are a recurrent problem in practice. How can one support configurability while at the same time guaranteeing that the preferred mode is negotiated? We set out to answer this question by designing a formal framework to study downgrade resilience and its relation to other security properties of key-exchange protocols. First, we study the causes of downgrade attacks by dissecting and classifying known and novel attacks against widely used protocols. Second, we survey what is known about the downgrade resilience of existing standards. Third, we combine these findings to define downgrade security, and analyze the conditions under which several protocols achieve it. Finally, we discuss patterns that guarantee downgrade security by design, and explain how to use them to strengthen the security of existing protocols, including a newly proposed draft of TLS 1.3.
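
One recurring design pattern the paper discusses is authenticating the negotiation itself, so a man-in-the-middle who strips strong options from the offer is caught. A toy Python model (a shared MAC key stands in for the protocol's server signature; all names and mode strings are invented):

```python
import hashlib, hmac

KEY = b"k" * 32   # stands in for the server's authentication key

def negotiate(client_offer, server_pref, tamper=None):
    offer = tamper(client_offer) if tamper else client_offer
    chosen = next(m for m in server_pref if m in offer)
    # server authenticates the offer it saw plus the chosen mode
    tag = hmac.new(KEY, repr((offer, chosen)).encode(), hashlib.sha256).digest()
    # client verifies the tag against what it *actually sent*
    ok = hmac.compare_digest(
        tag, hmac.new(KEY, repr((client_offer, chosen)).encode(),
                      hashlib.sha256).digest())
    return chosen, ok

strong_first = ["AES256", "AES128", "EXPORT"]
print(negotiate(strong_first, strong_first))                  # ('AES256', True)
print(negotiate(strong_first, strong_first,
                tamper=lambda o: ["EXPORT"]))                 # ('EXPORT', False)
```

The second run shows the point: the downgrade still happens at the negotiation level, but the transcript check exposes it before any application data flows.
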
Keeping Authorities "Honest or Bust" with Decentralized Witness Cosigning
Ewa Syta, Iulia Tamas, Dylan Visher, and David Isaac Wolinsky (Yale University), and Philipp Jovanovic, Linus Gasser, Nicolas Gailly, Ismail Khoffi, and Bryan Ford (EPFL)
The secret keys of critical network authorities - such as time, name, certificate, and software update services - represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it. A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds.
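
The cryptographic core scales because Schnorr signatures aggregate. Below is a toy collective signature over a deliberately tiny group, for intuition only: CoSi itself uses Ed25519, tree-structured communication rounds, and protections (e.g., against rogue-key attacks) that this sketch omits.

```python
import hashlib, random

p, q, g = 23, 11, 4                      # g generates the order-11 subgroup

def H(*parts):
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts))
    return int.from_bytes(h.digest(), "big") % q

witnesses = [random.randrange(1, q) for _ in range(5)]   # secret keys x_i
pubkeys = [pow(g, x, p) for x in witnesses]              # y_i = g^x_i

statement = "timestamp root=deadbeef t=1464000000"

# round 1: each witness commits; commitments aggregate up the tree
r = [random.randrange(1, q) for _ in witnesses]
R = 1
for ri in r:
    R = R * pow(g, ri, p) % p
c = H(R, statement)                      # shared challenge
# round 2: responses aggregate the same way
s = sum(ri + c * xi for ri, xi in zip(r, witnesses)) % q

# a client verifies one equation against the aggregate public key
Y = 1
for y in pubkeys:
    Y = Y * y % p
assert pow(g, s, p) == R * pow(Y, c, p) % p
print("collective signature verifies")
```

Because verification needs only (R, s) and the aggregate key, its cost is independent of the number of witnesses, which is what lets clients check statements cosigned by thousands of observers without any communication.
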

Lunch

12:40PM - 01:40PM

Session #7: Learning about privacy

01:40PM - 03:00PM

Synthesizing Plausible Privacy-Preserving Location Traces
Vincent Bindschaedler (UIUC) and Reza Shokri (UT Austin)
Camouflaging a user's actual location with fakes is a prevalent obfuscation technique for protecting location privacy. We show that the protection mechanisms based on the existing (ad hoc) techniques for generating fake locations are easily broken by inference attacks. They are also detrimental to many utility functions, as they fail to credibly imitate the mobility of living people. This paper introduces a systematic approach to synthesizing plausible location traces. We propose metrics that capture both geographic and semantic features of real location traces. Based on these statistical metrics, we design a privacy-preserving generative model to synthesize location traces which could plausibly be the trajectories of individuals with consistent lifestyles and meaningful mobilities. Using a state-of-the-art quantitative framework, we show that our synthetic traces can significantly paralyze location inference attacks. We also show that these fake traces have many useful statistical features in common with real traces, and thus can be used in many geo-data analysis tasks. We guarantee that the process of generating synthetic traces itself is privacy preserving and ensures plausible deniability. Thus, although the crafted traces statistically resemble human mobility, they do not leak significant information about any particular individual whose data is used in the synthesis process.
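
A minimal sketch, assuming a first-order Markov mobility model; the paper's generative model is far richer (semantic features, privacy tests, plausible deniability), but the sample-from-learned-statistics skeleton looks like this. Locations and traces are illustrative.

    import random
    from collections import Counter, defaultdict

    real_traces = [
        ["home", "cafe", "office", "gym", "home"],
        ["home", "office", "cafe", "office", "home"],
    ]

    # learn location-to-location transition counts from the real traces
    transitions = defaultdict(Counter)
    for trace in real_traces:
        for here, there in zip(trace, trace[1:]):
            transitions[here][there] += 1

    def sample_trace(start="home", length=5, alpha=0.1):
        locations = list(transitions)
        trace = [start]
        for _ in range(length - 1):
            counts = transitions[trace[-1]]
            weights = [counts[loc] + alpha for loc in locations]  # Laplace smoothing
            trace.append(random.choices(locations, weights)[0])
        return trace

    print(sample_trace())   # a synthetic trace drawn from the learned statistics
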
A Method for Verifying Privacy-Type Properties: The Unbounded Case
Lucca Hirschi and David Baelde (LSV, ENS Cachan) and Stéphanie Delaune (LSV, ENS Cachan & CNRS)
In this paper, we consider the problem of verifying anonymity and unlinkability in the symbolic model, where protocols are represented as processes in a variant of the applied pi calculus notably used in the ProVerif tool. Existing tools and techniques do not allow one to directly verify these properties, expressed as behavioral equivalences. We propose a different approach: we design two conditions on protocols which are sufficient to ensure anonymity and unlinkability, and which can then be effectively checked automatically using ProVerif. Our two conditions correspond to two broad classes of attacks on unlinkability, covering data and control-flow leaks. This theoretical result is general enough to apply to a wide class of protocols. In particular, we apply our techniques to provide the first formal security proof of the BAC protocol (e-passport). Our work has also led to the discovery of new attacks, including one on the LAK protocol (RFID authentication), which was previously claimed to be unlinkable (in a weak sense), and one on the PACE protocol (e-passport).
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Nicolas Papernot and Patrick McDaniel (The Pennsylvania State University), Xi Wu and Somesh Jha (University of Wisconsin-Madison), and Ananthram Swami (United States Army Research Laboratory)
Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
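
A minimal sketch of the temperature softmax at the heart of distillation (the logits are illustrative; no real network is involved): training the distilled model on soft labels produced at high temperature, then deploying at temperature 1, is what flattens the gradients that adversarial-sample crafting relies on.

    import numpy as np

    def softmax(logits, T=1.0):
        z = logits / T
        z = z - z.max()             # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    logits = np.array([9.0, 1.0, 0.5])
    hard = softmax(logits, T=1.0)   # near one-hot: sharp, exploitable gradients
    soft = softmax(logits, T=20.0)  # soft labels used to train the distilled net
    print(hard.round(3), soft.round(3))
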
Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems
Anupam Datta, Shayak Sen, and Yair Zick (Carnegie Mellon University)
Algorithmic systems that employ machine learning play an increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque -- it is difficult to explain why a certain decision was made. We develop a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals (e.g., a loan decision) and groups (e.g., disparate impact based on gender). Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g. loan decisions) and the marginal influence of individual inputs within such a set (e.g., income). Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting. Further, since transparency reports could compromise privacy, we explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise. Our empirical validation with standard machine learning algorithms demonstrates that QII measures are a useful transparency mechanism when black box access to the learning system is available. In particular, they provide better explanations than standard associative measures for a host of scenarios that we consider. Further, we show that in the situations we consider, QII is efficiently approximable and can be made differentially private while preserving accuracy.
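
A minimal sketch of Shapley-style influence estimation by permutation sampling; the toy additive model and feature names are ours, standing in for a black-box classifier.

    import random

    INPUTS = ["age", "income", "zipcode"]

    def model(known):
        # toy scoring function standing in for a black-box decision system
        return 0.6 * ("income" in known) + 0.3 * ("age" in known) + 0.1 * ("zipcode" in known)

    def shapley(target, samples=10000):
        total = 0.0
        for _ in range(samples):
            order = random.sample(INPUTS, len(INPUTS))      # random reveal order
            before = set(order[:order.index(target)])
            total += model(before | {target}) - model(before)
        return total / samples      # average marginal contribution of `target`

    for feature in INPUTS:
        print(feature, round(shapley(feature), 3))
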

Break (40 Minutes)

03:00PM - 03:40PM

Session #8: Vulnerabilities

03:40PM - 04:40PM

Talos: Neutralizing Vulnerabilities with Security Workarounds for Rapid Response
Zhen Huang, Mariana Dangelo, Dhaval Miyani, and David Lie (University of Toronto)
There is often a considerable delay between the discovery of a vulnerability and the issue of a patch. One way to mitigate this window of vulnerability is to use a configuration workaround, which prevents the vulnerable code from being executed at the cost of some lost functionality - but only if one is available. Since application configurations are not specifically designed to mitigate software vulnerabilities, we find that they only cover 25.2% of vulnerabilities. To minimize the window of vulnerability caused by patch delays and to address the limitations of configuration workarounds, we propose Security Workarounds for Rapid Response (SWRRs), which are designed to neutralize security vulnerabilities in a timely, secure, and unobtrusive manner. Similar to configuration workarounds, SWRRs neutralize vulnerabilities by preventing vulnerable code from being executed at the cost of some lost functionality. However, the key difference is that SWRRs use existing error-handling code within applications, which enables them to be mechanically inserted with minimal knowledge of the application and minimal developer effort. This allows SWRRs to achieve high coverage while still being fast and easy to deploy. We have designed and implemented Talos, a system that mechanically instruments SWRRs into a given application, and evaluate it on five popular Linux server applications. We run exploits against 11 real-world software vulnerabilities and show that SWRRs neutralize the vulnerabilities in all cases. Quantitative measurements on 320 SWRRs indicate that SWRRs instrumented by Talos can neutralize 75.1% of all potential vulnerabilities and incur a loss of functionality similar to configuration workarounds in 71.3% of those cases. Our overall conclusion is that automatically generated SWRRs can safely mitigate 2.1x more vulnerabilities, while only incurring a loss of functionality comparable to that of traditional configuration workarounds.
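
A minimal sketch of the SWRR idea transposed to Python (Talos itself instruments C/C++ programs; all names here are hypothetical): callers are rerouted into an error path the application already handles, trading functionality for safety until a real patch ships.

    def swrr(error_result):
        """Replace a vulnerable function with its existing error path."""
        def wrap(vulnerable_fn):
            def disabled(*args, **kwargs):
                # return the same value callers already handle as an error
                return error_result
            return disabled
        return wrap

    @swrr(error_result=None)   # callers of this parser already check for None
    def parse_fancy_header(raw_bytes):
        ...  # vulnerable parser, never executed while the SWRR is active

    assert parse_fancy_header(b"\x00" * 64) is None
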
Security Analysis of Emerging Smart Home Applications
Earlence Fernandes (University of Michigan, Ann Arbor), Jaeyeon Jung (Microsoft Research), and Atul Prakash (University of Michigan, Ann Arbor)
Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes, (2) stole existing door lock codes, (3) disabled vacation mode of the home, and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.
Staying Secure and Unprepared: Understanding and Mitigating the Security Risks of Apple ZeroConf
Xiaolong Bai (TNList, Tsinghua University, Beijing), Luyi Xing, Nan Zhang, and XiaoFeng Wang (Indiana University Bloomington), Xiaojing Liao (Georgia Institute of Technology), Tongxin Li (Peking University), and Shi-Min Hu (Tsinghua University)
With the popularity of today's usability-oriented designs, dubbed Zero Configuration or ZeroConf, the security implications of these automatic service discovery, "plug-and-play" techniques remain unclear. In this paper, we report the first systematic study on this issue, focusing on the security features of the systems related to Apple, the major proponent of ZeroConf techniques. Our research brings to light a disturbing lack of security consideration in these systems' designs: major ZeroConf frameworks on the Apple platforms, including the Core Bluetooth Framework, Multipeer Connectivity and Bonjour, are mostly unprotected, and popular apps and system services, such as Tencent QQ, Apple Handoff, printer discovery and AirDrop, turn out to be completely vulnerable to an impersonation or Man-in-the-Middle (MitM) attack, even though attempts have been made to protect them against such threats. The consequences are serious, allowing a malicious device to steal the user's SMS messages, email notifications, and documents to be printed out or transferred to another device. Most importantly, our study highlights the fundamental security challenges underlying ZeroConf techniques: in the absence of any pre-configured secret across different devices, authentication has to rely on an Apple-issued public-key certificate, which however cannot be properly verified due to the difficulty in finding a unique, nonsensitive and widely known identity of a human user to bind her to her certificate. To address this issue, we developed a suite of new techniques, including a conflict detection approach and a biometric technique that enables the user to speak out her certificate through 6 distinct, rare but pronounceable words to let those who know her voice verify her certificate. We performed a security analysis on the new protection and evaluated its usability and effectiveness using two user studies involving 60 participants. Our research shows that the new protection fits well with existing ZeroConf systems such as AirDrop. It is well received by users and also provides an effective defense even against recently proposed speech synthesis attacks.
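
A minimal sketch of the spoken-certificate idea: derive a few pronounceable words deterministically from the certificate so listeners who know the owner's voice can check what they hear against the certificate they received. The wordlist, its size, and the word count are illustrative, not the paper's.

    import hashlib

    WORDLIST = ["abacus", "bravado", "cobalt", "dogma", "ember", "fjord",
                "gizmo", "hubris", "icicle", "jovial", "kudzu", "lagoon",
                "mosaic", "nectar", "oasis", "pylon"]    # 16 words: 4 bits each

    def cert_words(cert_der: bytes, count=6):
        # map the certificate fingerprint onto a short, speakable word sequence
        digest = hashlib.sha256(cert_der).digest()
        return [WORDLIST[b % len(WORDLIST)] for b in digest[:count]]

    print(cert_words(b"example-cert-bytes"))
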

Break (20 Minutes)

04:40PM - 05:00PM

Short Talks

05:00PM - 06:00PM

S&P TC Business Meeting

06:15PM - 07:15PM

May 25


Registration

07:00AM - 06:00PM

Breakfast

07:30AM - 08:30AM

Closing Remarks

08:30AM - 08:40AM

Session #9: Don't go on the Web

08:40AM - 10:20AM

MitM Attack by Name Collision: Cause Analysis and Vulnerability Assessment in the New gTLD Era
Qi Alfred Chen (University of Michigan), Eric Osterweil and Matthew Thomas (VeriSign Labs), and Z. Morley Mao (University of Michigan)
Recently, Man in the Middle (MitM) attacks on web browsing have become easier than ever before, because of a problem called "Name Collision" and a protocol called the Web Proxy Auto-Discovery (WPAD) protocol. This name collision attack can cause all web traffic of an Internet user to be redirected to a MitM proxy automatically right after the launch of a standard browser. The underlying problem of this attack is internal namespace WPAD query leakage, which has been a known problem for years. However, it remains understudied, since it was not easily exploitable before the recent delegation of new gTLDs (generic Top-Level Domains). In this paper, we focus on this newly-exposed MitM attack vector and perform the first systematic study of the underlying problem causes and its vulnerability status in the wild. First, we show the severity of the problem by characterizing leaked WPAD query traffic to the DNS root servers, and find that a major cause of the leakage problem is actually a result of settings on the end user devices. More specifically, we find that under common settings, devices can mistakenly generate internal queries when used outside an internal network (e.g., used at home). Second, we define and quantify a candidate measure of attack surface by defining "highly-vulnerable domains", which are domains routinely exposing a large number of potential victims, and use it to perform a systematic assessment of the vulnerability status. We find that almost all leaked queries are for new gTLD domains we define to be highly-vulnerable, indirectly validating our attack surface definition. We further find that 10% of these highly-vulnerable domains have already been registered, making the corresponding users immediately vulnerable to the exploit at any time. Our results provide a strong and urgent message to deploy proactive protection. We discuss promising directions for remediation at the new gTLD registry, Autonomous System (AS), and end user levels, and use empirical data analysis to estimate and compare their effectiveness and deployment difficulties.
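
A minimal sketch of why the queries leak (the suffix and names are illustrative): WPAD devolution walks up the configured DNS search suffix, so a device carried outside the corporate network emits candidate names whose tails become registrable once the matching new gTLD is delegated.

    def wpad_candidates(search_suffix):
        labels = search_suffix.split(".")
        # walk up the suffix: wpad.corp.example.network, wpad.example.network, ...
        return ["http://wpad." + ".".join(labels[i:]) + "/wpad.dat"
                for i in range(len(labels) - 1)]

    for url in wpad_candidates("corp.example.network"):
        print(url)   # if ".network" is a delegated gTLD, the tail names are registrable
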
Domain-Z: 28 Registrations Later
Chaz Lever (Georgia Tech), Robert Walls (Penn State), Yacin Nadji and David Dagon (Georgia Tech), Patrick McDaniel (Penn State), and Manos Antonakakis (Georgia Tech)
Any individual that re-registers an expired domain implicitly inherits the residual trust associated with the domain's prior use. We find that adversaries can, and do, use malicious re-registration to exploit domain ownership changes -- undermining the security of both users and systems. In fact, we find that many seemingly disparate security problems share a root cause in residual domain trust abuse. With this study we shed light on the seemingly unnoticed problem of residual domain trust by measuring the scope and growth of this abuse over the past six years. During this time, we identified 27,758 domains from public blacklists and 238,279 domains resolved by malware that expired and then were maliciously re-registered. To help address this problem, we propose a technical remedy and discuss several policy remedies. For the former, we develop Alembic, a lightweight algorithm that uses only passive observations from the Domain Name System (DNS) to flag potential domain ownership changes. We identify several instances of residual trust abuse using this algorithm, including an expired APT domain that could be used to revive existing infections.
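
Alembic's actual algorithm is not reproduced here, but a minimal sketch of one plausible passive-DNS signal, a long gap in resolutions followed by a wholesale change of name servers, conveys the flavor (dates, thresholds, and NS names are invented):

    from datetime import date

    observations = [  # (date seen, set of authoritative NS names)
        (date(2015, 1, 3), {"ns1.legit-dns.com", "ns2.legit-dns.com"}),
        (date(2015, 2, 9), {"ns1.legit-dns.com", "ns2.legit-dns.com"}),
        (date(2015, 9, 1), {"ns1.bulletproof.example", "ns2.bulletproof.example"}),
    ]

    def ownership_change(obs, min_gap_days=90):
        # flag a long resolution gap followed by a completely new NS set
        for (d1, ns1), (d2, ns2) in zip(obs, obs[1:]):
            if (d2 - d1).days >= min_gap_days and ns1.isdisjoint(ns2):
                yield d1, d2

    print(list(ownership_change(observations)))
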
Seeking Nonsense, Looking for Trouble: Efficient Promotional-Infection Detection through Semantic Inconsistency Search
Xiaojing Liao (Georgia Institute of Technology), Kan Yuan and XiaoFeng Wang (Indiana University at Bloomington), Zhongyu Pei, Hao Yang, Jianjun Chen, Haixin Duan, and Kun Du (Tsinghua University), Eihal Alowaisheq, Sumayah Alrwais, and Luyi Xing (Indiana University at Bloomington), and Raheem Beyah (Georgia Institute of Technology)
Promotional infection is an attack in which the adversary exploits a website's weakness to inject illicit advertising content. Detection of such an infection is challenging due to its similarity to legitimate advertising activities. An interesting observation we make in our research is that such an attack almost always incurs a great semantic gap between the infected domain (e.g., a university site) and the content it promotes (e.g., selling cheap viagra). Exploiting this gap, we developed a semantic-based technique, called Semantic Inconsistency Search (SEISE), for efficient and accurate detection of the promotional injections on sponsored top-level domains (sTLD) with explicit semantic meanings. Our approach utilizes Natural Language Processing (NLP) to identify the bad terms (those related to illicit activities like fake drug selling, etc.) most irrelevant to an sTLD's semantics. These terms, which we call irrelevant bad terms (IBTs), are used to query search engines under the sTLD for suspicious domains. Through a semantic analysis on the results page returned by the search engines, SEISE is able to detect those truly infected sites and automatically collect new IBTs from the titles/URLs/snippets of their search result items for finding new infections. Running on 403 sTLDs with an initial 30 seed IBTs, SEISE analyzed 100K fully qualified domain names (FQDN), and along the way automatically gathered nearly 600 IBTs. In the end, our approach detected 11K infected FQDN with a false detection rate of 1.5% and over 90% coverage. Our study shows that by effective detection of infected sTLDs, the bar for promotional infections can be substantially raised, since other non-sTLD vulnerable domains typically have much lower Alexa ranks and are therefore much less attractive for underground advertising. Our findings further bring to light the stunning impacts of such promotional attacks, which compromise FQDNs under 3% of .edu and .gov domains and over one thousand gov.cn domains, including those of leading universities such as stanford.edu, mit.edu, princeton.edu, harvard.edu and government institutes such as nsf.gov and nih.gov. We further demonstrate the potential to extend our current technique to protect generic domains such as .com and .org.
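
A minimal sketch of the query-generation step (terms and sTLDs are illustrative): each sTLD is paired with the bad terms most irrelevant to its semantics, and the pairs become search-engine queries whose hits are candidate infections.

    STLD_IBTS = {
        ".edu": ["cheap viagra", "replica watches"],
        ".gov": ["payday loans", "casino bonus"],
    }

    def seed_queries(stld_ibts):
        # a term that is both "bad" and semantically alien to the sTLD is a
        # strong hint that any hit is injected, not editorial, content
        return [f'site:{stld} "{term}"' for stld, terms in stld_ibts.items()
                for term in terms]

    for q in seed_queries(STLD_IBTS):
        print(q)
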
The Cracked Cookie Jar: HTTP Cookie Hijacking and the Exposure of Private Information
Suphannee Sivakorn, Iasonas Polakis, and Angelos D. Keromytis (Columbia University)
The widespread demand for online privacy, also fueled by widely-publicized demonstrations of session hijacking attacks against popular websites, has spearheaded the increasing deployment of HTTPS. However, many websites still avoid ubiquitous encryption due to performance or compatibility issues. The prevailing approach in these cases is to force critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. In this paper, we conduct an in-depth assessment of a diverse set of major websites and explore what functionality and information is exposed to attackers that have hijacked a user's HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS: service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-session cookies. Our cookie hijacking study reveals a number of severe flaws: attackers can obtain the user's home and work address and visited websites from Google; Bing and Baidu expose the user's complete search history; and Yahoo allows attackers to extract the contact list and send emails from the user's account. Furthermore, e-commerce vendors such as Amazon and Ebay expose the user's purchase history (partial and full respectively), and almost every website exposes the user's name and email address. Ad networks like Doubleclick can also reveal pages the user has visited. To fully evaluate the practicality and extent of cookie hijacking, we explore multiple aspects of the online ecosystem, including mobile apps, browser security mechanisms, extensions and search bars. To estimate the extent of the threat, we run IRB-approved measurements on a subset of our university's public wireless network for 30 days, and detect over 282K accounts exposing the cookies required for our hijacking attacks. We also explore how users can protect themselves and find that, while mechanisms such as the EFF's HTTPS Everywhere extension can reduce the attack surface, HTTP cookies are still regularly exposed. The privacy implications of these attacks become even more alarming when considering how they can be used to deanonymize Tor users. Our measurements suggest that a significant portion of Tor users may currently be vulnerable to cookie hijacking.
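
A minimal sketch of the property being measured, using Python's standard cookie parser on example Set-Cookie headers: any cookie set without the Secure attribute will also ride along on plain-HTTP requests, where a passive eavesdropper can copy it.

    from http.cookies import SimpleCookie

    set_cookie_headers = [
        "sid=9f8e7d6c; Path=/; HttpOnly; Secure",
        "prefs=lang%3Den; Path=/",            # no Secure flag: sent over HTTP too
    ]

    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)
        for name, morsel in cookie.items():
            exposed = not morsel["secure"]
            print(f"{name}: {'EXPOSED over HTTP' if exposed else 'HTTPS-only'}")
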
Cloak of Visibility: Detecting When Machines Browse a Different Web
Luca Invernizzi (Google), Kurt Thomas (Google), Alexandros Kapravelos (North Carolina State University), Oxana Comanescu (Google), Jean-Michel Picod (Google), and Elie Bursztein (Google)
The contentious battle between web services and miscreants involved in blackhat search engine optimization and malicious advertisements has driven the underground to develop increasingly sophisticated techniques that hide the true nature of malicious sites. These web cloaking techniques hinder the effectiveness of security crawlers and potentially expose Internet users to harmful content. In this work, we study the spectrum of blackhat cloaking techniques that target browser, network, or contextual cues to detect organic visitors. As a starting point, we investigate the capabilities of ten prominent cloaking services marketed within the underground. This includes a first look at multiple IP blacklists that contain over 50 million addresses tied to the top five search engines and tens of anti-virus and security crawlers. We use our findings to develop an anti-cloaking system that detects split-view content returned to two or more distinct browsing profiles with an accuracy of 95.5% and a false positive rate of 0.9% when tested on a labeled dataset of 94,946 URLs. We apply our system to an unlabeled set of 135,577 search and advertisement URLs keyed on high-risk terms (e.g., luxury products, weight loss supplements) to characterize the prevalence of threats in the wild and expose variations in cloaking techniques across traffic sources. Our study provides the first broad perspective of cloaking as it affects Google Search and Google Ads and underscores the minimum capabilities necessary of security crawlers to bypass the state of the art in mobile, rDNS, and IP cloaking.
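
A minimal sketch of split-view detection (the production system compares many more browsing profiles and richer features): fetch the same URL under a browser profile and a crawler profile and flag large divergence. The URL is a placeholder, and the script needs network access.

    import difflib
    import urllib.request

    def fetch(url, user_agent):
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def split_view_score(url):
        as_browser = fetch(url, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
        as_crawler = fetch(url, "Googlebot/2.1 (+http://www.google.com/bot.html)")
        return 1 - difflib.SequenceMatcher(None, as_browser, as_crawler).ratio()

    # score near 0: same page for everyone; near 1: radically different views
    print(split_view_score("http://example.com/"))
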

Break (30 Minutes)

10:20AM - 10:50AM

Session #10: Applied cryptography

10:50AM - 12:30PM

Verifiable ASICs
Riad S. Wahby (New York University, Stanford University), Max Howald (The Cooper Union, New York University), Siddharth Garg (New York University), abhi shelat (The University of Virginia), and Michael Walfish (New York University)
A manufacturer of custom hardware (ASICs) can undermine the intended execution of that hardware; high-assurance execution thus requires controlling the manufacturing chain. However, a trusted platform might be orders of magnitude worse in performance or price than an advanced, untrusted platform. This paper initiates exploration of an alternative: using verifiable computation (VC), an untrusted ASIC computes proofs of correct execution, which are verified by a trusted processor or ASIC. In contrast to the usual VC setup, here the prover and verifier together must impose less overhead than the alternative of executing directly on the trusted platform. We instantiate this approach by designing and implementing physically realizable, area-efficient, high throughput ASICs (for a prover and verifier), in fully synthesizable Verilog. The system, called Zebra, is based on the CMT and Allspice interactive proof protocols, and required new observations about CMT, careful hardware design, and attention to architectural challenges. For a class of real computations, Zebra meets or exceeds the performance of executing directly on the trusted platform.
SoK: Verifiability Notions for E-Voting Protocols
Veronique Cortier (INRIA/Loria), David Galindo (University of Birmingham), Ralf Kuesters and Johannes Mueller (University of Trier), and Tomasz Truderung (Polyas)
There have been intensive research efforts in the last two decades or so to design and deploy electronic voting (e-voting) protocols/systems which allow voters and/or external auditors to check that the votes were counted correctly. This security property, motivated not least by numerous problems in even national elections, is called verifiability. It is meant to defend against voting devices and servers that have programming errors or are outright malicious. In order to properly evaluate and analyze e-voting protocols w.r.t. verifiability, one fundamental challenge has been to formally capture the meaning of this security property. While the first formal definitions of verifiability were already devised in the late 1980s, new verifiability definitions are still being proposed. The definitions differ in various aspects, including the classes of protocols they capture and even their formulations of the very core of the meaning of verifiability. This is an unsatisfying state of affairs, leaving the research on the verifiability of e-voting protocols in a fuzzy state. In this paper, we review all formal definitions of verifiability proposed in the literature and cast them in a framework proposed by Kuesters, Truderung, and Vogt (the KTV framework), yielding a uniform treatment of verifiability. This enables us to provide a detailed comparison of the various definitions of verifiability from the literature. We thoroughly discuss advantages and disadvantages, and point to limitations and problems. Finally, from these discussions and based on the KTV framework, we distill a general definition of verifiability, which can be instantiated in various ways, and provide precise guidelines for its instantiation. The concepts for verifiability we develop should be widely applicable also beyond the framework used here. Altogether, our work offers a well-founded reference point for future research on the verifiability of e-voting systems.
pASSWORD tYPOS and How to Correct Them Securely
Rahul Chatterjee (Cornell University), Anish Athalye (MIT), Devdatta Akhawe (Dropbox), and Ari Juels and Thomas Ristenpart (Cornell Tech)
We provide the first treatment of typo-tolerant password authentication for arbitrary user-selected passwords. Such a system, rather than simply rejecting a login attempt with an incorrect password, tries to correct common typographical errors on behalf of the user. Limited forms of typo-tolerance have been used in some industry settings, but to date there has been no analysis of the utility and security of such schemes. We quantify the kinds and rates of typos made by users via studies conducted on Amazon Mechanical Turk and via instrumentation of the production login infrastructure at Dropbox. The instrumentation at Dropbox did not record user passwords or otherwise change authentication policy, but recorded only the frequency of observed typos. Our experiments reveal that almost 10% of login attempts fail due to a handful of simple, easily correctable typos, such as capitalization errors. We show that correcting just a few of these typos would reduce login delays for a significant fraction of users as well as enable an additional 3% of users to achieve successful login. We introduce a framework for reasoning about typo-tolerance, and investigate the seemingly inherent tension here between security and usability of passwords. We use our framework to show that there exist typo-tolerant authentication schemes that can get corrections for "free": we prove they are as secure as schemes that always reject mistyped passwords. Building off this theory, we detail a variety of practical strategies for securely implementing typo-tolerance.
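
A minimal sketch of a relaxed checker in this style (the corrector set and parameters are illustrative, and a real deployment would rate-limit and tune them): on failure, a handful of corrections of the submitted string are tried against the stored hash, so the plaintext password is never needed.

    import hashlib, hmac, os

    def pwhash(pw: str, salt: bytes) -> bytes:
        # stand-in slow hash; a real deployment would use argon2/scrypt/bcrypt
        return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000)

    def correctors(typed: str):
        yield typed                               # exact match first
        yield typed.swapcase()                    # caps lock was on
        yield typed[:1].swapcase() + typed[1:]    # accidentally shifted first char
        yield typed[:-1]                          # one extra trailing character

    def check(typed, salt, stored):
        return any(hmac.compare_digest(pwhash(c, salt), stored)
                   for c in correctors(typed))

    salt = os.urandom(16)
    stored = pwhash("correct Horse", salt)
    assert check("CORRECT hORSE", salt, stored)   # caps-lock typo still logs in
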
On the Practicality of Cryptographically Enforcing Dynamic Access Control Policies in the Cloud
William C. Garrison III (University of Pittsburgh), Adam Shull and Steven Myers (Indiana University), and Adam J. Lee (University of Pittsburgh)
The ability to enforce robust and dynamic access controls on cloud-hosted data while simultaneously ensuring confidentiality with respect to the cloud itself is a clear goal for many users and organizations. To this end, there has been much cryptographic research proposing the use of (hierarchical) identity-based encryption, attribute-based encryption, predicate encryption, functional encryption, and related technologies to perform robust and private access control on untrusted cloud providers. However, the vast majority of this work studies static models in which the access control policies being enforced do not change over time. This is contrary to the needs of most practical applications, which leverage dynamic data and/or policies. In this paper, we show that the cryptographic enforcement of dynamic access controls on untrusted platforms incurs computational costs that are likely prohibitive in practice. Specifically, we develop lightweight constructions for enforcing role-based access controls (i.e., RBAC0) over cloud-hosted files using identity-based and traditional public-key cryptography. This is done under a threat model as close as possible to the one assumed in the cryptographic literature. We prove the correctness of these constructions, and leverage real-world RBAC datasets and recent techniques developed by the access control community to experimentally analyze, via simulation, their associated computational costs. This analysis shows that supporting revocation, file updates, and other state change functionality is likely to incur prohibitive overheads in even minimally-dynamic, realistic scenarios. We identify a number of bottlenecks in such systems, and fruitful areas for future work that will lead to more natural and efficient constructions for the cryptographic enforcement of dynamic access controls. Our findings naturally extend to the use of more expressive cryptographic primitives (e.g., HIBE or ABE) and richer access control models (e.g., RBAC1 or ABAC).
Hawk: The Blockchain Model of Cryptography and Privacy-Preserving Smart Contracts
Ahmed Kosba and Andrew Miller (University of Maryland), Elaine Shi and Zikai Wen (Cornell University), and Charalampos Papamanthou (University of Maryland)
Emerging smart contract systems over decentralized cryptocurrencies allow mutually distrustful parties to transact safely without trusted third parties. In the event of contractual breaches or aborts, the decentralized blockchain ensures that honest parties obtain commensurate compensation. Existing systems, however, lack transactional privacy. All transactions, including flow of money between pseudonyms and amount transacted, are exposed on the blockchain. We present Hawk, a decentralized smart contract system that does not store financial transactions in the clear on the blockchain, thus retaining transactional privacy from the public's view. A Hawk programmer can write a private smart contract in an intuitive manner without having to implement cryptography, and our compiler automatically generates an efficient cryptographic protocol where contractual parties interact with the blockchain, using cryptographic primitives such as zero-knowledge proofs. To formally define and reason about the security of our protocols, we are the first to formalize the blockchain model of cryptography. The formal modeling is of independent interest. We advocate that the community adopt such a formal model when designing applications atop decentralized blockchains.

Lunch

12:30PM - 01:30PM

Session #11: What? You want more?

01:30PM - 02:50PM

High-Speed Inter-domain Fault Localization
Cristina Basescu (ETH Zurich), Yue-Hsun Lin (Samsung Research America), Haoming Zhang (Carnegie Mellon University), and Adrian Perrig (ETH Zurich)
Data-plane fault localization enhances network availability and reliability by enabling localization and circumvention of malicious entities on a network path. Algorithms for data-plane fault localization exist for intra-domain settings; however, the per-flow or per-source state required at intermediate routers makes them prohibitively expensive in inter-domain settings. We present Faultprints, the first secure data-plane fault localization protocol that is practical for inter-domain settings. Faultprints enables a source to precisely localize malicious network links that drop, delay, or modify packets. We implemented an efficient version of Faultprints on a software router by taking advantage of the parallelism in the AES-NI module of Intel CPUs. Our evaluation on real-world traffic shows fast forwarding on a commodity server at 116.95 Gbps out of 120 Gbps capacity, and a goodput of 94 Gbps. Additionally, Faultprints achieves a high failure localization rate, while incurring a low communication overhead.
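
Faultprints' sampling and key machinery are not reproduced here; the following toy sketch only conveys the localization principle: each on-path AS shares a key with the source and stamps probe packets, and the first missing or invalid stamp brackets the faulty link (topology and keys are invented).

    import hashlib, hmac, os

    path = ["AS1", "AS2", "AS3", "AS4"]
    keys = {r: os.urandom(16) for r in path}      # pre-shared with the source

    def stamp(router, packet):
        return hmac.new(keys[router], packet, hashlib.sha256).digest()

    def forward(packet, drop_at=None):
        stamps = {}
        for router in path:
            if router == drop_at:
                break                              # malicious router drops the packet
            stamps[router] = stamp(router, packet)
        return stamps

    def localize(packet, stamps):
        for i, router in enumerate(path):
            ok = (router in stamps and
                  hmac.compare_digest(stamps[router], stamp(router, packet)))
            if not ok:
                prev = path[i - 1] if i else "source"
                return f"fault on link {prev} -> {router}"
        return "path clean"

    pkt = b"probe-0001"
    print(localize(pkt, forward(pkt, drop_at="AS3")))  # fault on link AS2 -> AS3
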
Beauty and the Beast: Diverting modern web browsers to build unique browser fingerprints
Pierre Laperdrix (INSA - INRIA), Walter Rudametkin (University of Lille - Inria), and Benoit Baudry (INRIA)
Worldwide, the number of people browsing the web and the time they spend doing so keep increasing. Accordingly, the technologies to enrich the user experience are evolving at an amazing pace. Many of these evolutions provide for a more interactive web (e.g., boom of JavaScript libraries, weekly innovations in HTML5), a more available web (e.g., explosion of mobile devices), a more secure web (e.g., Flash is disappearing, NPAPI plugins are being deprecated), and a more private web (e.g., increased legislation against cookies, huge success of extensions such as Ghostery and AdBlock). Nevertheless, modern browser technologies, which provide the beauty and power of the web, also have a darker side: a rich ecosystem of exploitable data that can be used to build unique browser fingerprints. Our work explores the validity of browser fingerprinting in today's environment. Over the past year, we have collected 118,934 fingerprints composed of 17 attributes gathered thanks to the most recent web technologies. We show that innovations in HTML5 provide access to highly discriminating attributes, notably with the use of the Canvas API which relies on multiple layers of the user's system. In addition, we show that browser fingerprinting is as effective on mobile devices as it is on desktops and laptops, albeit for radically different reasons due to their more constrained hardware and software environments. We also evaluate how browser fingerprinting could stop being a threat to user privacy if some technological evolutions continue (e.g., disappearance of plugins) or are embraced by browser vendors (e.g., standard HTTP headers).
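
A minimal sketch of how such attributes combine into an identifier (attribute names and values are illustrative; a browser would supply them via JavaScript):

    import hashlib

    attributes = {
        "userAgent": "Mozilla/5.0 ...",
        "timezone": "-120",
        "screen": "1920x1080x24",
        "fonts": "Arial,Calibri,Consolas",
        "canvasHash": "d1f0a9...",     # hash of a rendered test canvas
    }

    # canonicalize and hash: stable across visits while the attributes hold
    fingerprint = hashlib.sha256(
        "|".join(f"{k}={v}" for k, v in sorted(attributes.items())).encode()
    ).hexdigest()
    print(fingerprint[:16])
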
Verena: End-to-End Integrity Protection for Web Applications
Nikolaos Karapanos and Alexandros Filios (ETH Zurich), Raluca Ada Popa (ETH Zurich and University of California, Berkeley), and Srdjan Capkun (ETH Zurich)
Web applications rely on web servers to protect the integrity of sensitive information. However, an attacker gaining access to web servers can tamper with the data and query computation results, and thus serve corrupted web pages to the user. Violating the integrity of the web page can have serious consequences, affecting application functionality and decision-making processes. Worse yet, data integrity violation may affect physical safety, as in the case of medical web applications which enable physicians to assign treatment to patients based on diagnostic information stored at the web server. This paper presents Verena, a web application platform that provides end-to-end integrity guarantees against attackers that have full access to the web and database servers. In Verena, a client's browser can verify the integrity of a web page by verifying the results of queries on data stored at the server. Verena provides strong integrity properties such as freshness, completeness, and correctness for a common set of database queries, by relying on a small trusted computing base. In a setting where there can be many users with different write permissions, Verena allows a developer to specify an integrity policy for query results based on our notion of trust contexts, and then enforces this policy efficiently. We implemented and evaluated Verena on top of the Meteor framework. Our results show that Verena can support real applications with modest overhead.
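
A minimal sketch of the hash-tree machinery that integrity-verified query results typically rest on, assuming a power-of-two number of records (Verena's trust contexts and freshness/completeness guarantees go well beyond this): the client keeps only the root, and the server must return each record with an authentication path that recomputes it.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_tree(leaves):
        level = [h(x) for x in leaves]
        tree = [level]
        while len(level) > 1:
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            tree.append(level)
        return tree

    def prove(tree, index):
        path = []
        for level in tree[:-1]:
            sibling = index ^ 1
            path.append((level[sibling], sibling < index))  # (hash, is_left)
            index //= 2
        return path

    def verify(root, leaf, path):
        node = h(leaf)
        for sibling, sibling_is_left in path:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root

    records = [b"alice:rx=10mg", b"bob:rx=20mg", b"carol:rx=5mg", b"dave:rx=0mg"]
    tree = build_tree(records)
    proof = prove(tree, 2)
    assert verify(tree[-1][0], records[2], proof)   # client checks against the root
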
SoK: Towards Grounding Censorship Circumvention in Empiricism
Michael C. Tschantz and Sadia Afroz (ICSI), Anonymous (unaffiliated), and Vern Paxson (UC Berkeley and ICSI)
Effective evaluations of approaches to circumventing government Internet censorship require incorporating perspectives of how censors operate in practice. We undertake an extensive examination of real censors by surveying prior measurement studies and analyzing field reports and bug tickets from practitioners. We assess both deployed circumvention approaches and research proposals to consider the criteria employed in their evaluations and compare these to the observed behaviors of real censors, identifying areas where evaluations could more faithfully and effectively incorporate the practices of modern censors. These observations lead to an agenda realigning research with the predominant problems of today.

Break (20 Minutes)

02:50PM - 03:10PM

Session #12: Low-level attacks and defenses

03:10PM - 04:30PM

A Tough Call: Mitigating Advanced Code-Reuse Attacks at the Binary Level
Victor van der Veen and Enes Goktas (Vrije Universiteit Amsterdam), Moritz Contag and Andre Pawlowski (Ruhr-Universität Bochum), Xi Chen, Sanjay Rawat, and Herbert Bos (Vrije Universiteit Amsterdam), Thorsten Holz (Ruhr-Universität Bochum), and Elias Athanasopoulos and Cristiano Giuffrida (Vrije Universiteit Amsterdam)
Current binary-level Control-Flow Integrity (CFI) techniques are weak in determining the set of valid targets for indirect control flow transfers on the forward edge. In particular, the lack of source code forces existing techniques to resort to a conservative address-taken policy that overapproximates this set. In contrast, source-level solutions can accurately infer the targets of indirect calls and thus detect malicious control-flow transfers more precisely. Given that source code is not always available, however, offering similar quality of protection at the binary level is important, but, unquestionably, more challenging than ever: recent work demonstrates powerful attacks such as Counterfeit Object-oriented Programming (COOP), which made the community believe that protecting software against control-flow diversion attacks at the binary level is all but impossible. In this paper, we propose binary-level analysis techniques to significantly reduce the number of possible targets for indirect branches. More specifically, we reconstruct a conservative approximation of target function prototypes by means of use-def analysis at possible callees. We then couple this with liveness analysis at each indirect callsite to derive a many-to-many relationship between callsites and target callees with a much higher precision compared to prior binary-level solutions. Experimental results on popular server programs and on SPEC CPU2006 show that TypeArmor, a prototype implementation of our approach, is efficient - with a runtime overhead of less than 3%. Furthermore, we evaluate to what extent TypeArmor can mitigate COOP and other advanced attacks and show that our approach can significantly reduce the number of targets on the forward edge. Moreover, we show that TypeArmor breaks published COOP exploits, providing concrete evidence that strict binary-level CFI can still mitigate advanced attacks, despite the absence of source information or C++ semantics.
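
A minimal sketch of the arity-matching policy, with the binary analysis replaced by a hand-written table: a callsite that demonstrably prepares k argument registers may only target functions whose recovered prototype consumes at most k. The function names and counts are invented.

    callees = {            # function -> argument registers its prototype consumes
        "log_message": 2,
        "spawn_shell": 0,
        "handle_request": 3,
    }

    def allowed_targets(args_prepared_at_callsite):
        # the real analysis derives both sides from use-def and liveness
        # analysis on the binary, not from a table like this one
        return {f for f, arity in callees.items()
                if arity <= args_prepared_at_callsite}

    # a callsite observed to set up exactly 2 argument registers:
    print(allowed_targets(2))   # log_message and spawn_shell, not handle_request
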
Return to the Zombie Gadgets: Undermining Destructive Code Reads via Code-Inference Attacks
Kevin Z. Snow, Roman Rogowski, and Fabian Monrose (UNC Chapel Hill), Jan Werner (Renaissance Computing Institute), and Hyungjoon Koo and Michalis Polychronakis (Stony Brook University)
The concept of destructive code reads is a new defensive strategy that prevents code reuse attacks by coupling fine-grained address space layout randomization with a mitigation for online knowledge gathering that destroys potentially useful gadgets as they are disclosed by an adversary. The intuition is that by destroying code as it is read, an adversary is left with no usable gadgets to reuse in a control-flow hijacking attack. In this paper, we examine the security of this new mitigation. We show that while the concept initially appeared promising, there are several unforeseen attack tactics that render destructive code reads ineffective in practice. Specifically, we introduce techniques for leveraging constructive reloads, wherein multiple copies of native code are loaded into a process' address space (either side-by-side or one-after-another). Constructive reloads allow the adversary to disclose one code copy, destroying it in the process, then use another code copy for their code reuse payload. For situations where constructive reloads are not viable, we show that an alternative, and equally powerful, strategy exists: leveraging code association via implicit reads, which allows an adversary to undo in-place code randomization by inferring the layout of code that follows already disclosed bytes. As a result, the implicitly learned code is not destroyed, and can be used in the adversary's code reuse attack. We demonstrate the effectiveness of our techniques with concrete instantiations of these attacks against popular applications. In light of our successes, we argue that the code inference strategies presented herein paint a cautionary tale for defensive approaches whose security blindly rests on the perceived inability to undo the application of in-place randomization.
Data-Oriented Programming: On the Expressiveness of Non-Control Data Attacks
Hong Hu, Shweta Shinde, Adrian Sendroiu, Zheng Leong Chua, Prateek Saxena, and Zhenkai Liang (National University of Singapore)
As control-flow hijacking defenses gain adoption, it is important to understand the remaining capabilities of adversaries via memory exploits. Non-control data exploits are used to mount information leakage attacks or privilege escalation attacks by corrupting program memory. Compared to control-flow hijacking attacks, such non-control data exploits have limited expressiveness; the question, however, is: what is the real expressive power of non-control data attacks? In this paper, we show that such attacks are Turing-complete. We present a systematic technique called data-oriented programming (DOP) to construct expressive non-control data exploits for arbitrary x86 programs. In the experimental evaluation using 9 programs, we identified 7518 data-oriented x86 gadgets and 5052 gadget dispatchers, which are the building blocks for DOP. 8 out of 9 real-world programs have gadgets to simulate arbitrary computations and 2 of them are confirmed to be able to build Turing-complete attacks. We build 3 end-to-end attacks to bypass randomization defenses without leaking addresses, to run a network bot which takes commands from the attacker, and to alter the memory permissions. All the attacks work in the presence of ASLR and DEP, demonstrating how the expressiveness offered by DOP significantly empowers the attacker.
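
A minimal sketch of the pattern in Python terms (the real attacks chain x86 gadgets): a benign dispatcher loop whose control flow never changes, yet whose attacker-corrupted data selects and chains operations into an arbitrary computation.

    memory = {"x": 7, "y": 0, "out": 0}

    def gadget_add(dst, src):  memory[dst] += memory[src]
    def gadget_mov(dst, src):  memory[dst] = memory[src]

    GADGETS = {"add": gadget_add, "mov": gadget_mov}

    # worklist the attacker controls via a non-control-data memory corruption:
    payload = [("mov", "y", "x"), ("add", "y", "x"), ("mov", "out", "y")]

    for op, dst, src in payload:    # the program's own, unmodified dispatcher
        GADGETS[op](dst, src)

    assert memory["out"] == 14      # computed x*2 purely by choosing data
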
Dedup Est Machina: Memory Deduplication as an Advanced Exploitation Vector
Erik Bosman, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida (Vrije Universiteit Amsterdam)
Memory deduplication, a well-known technique to reduce the memory footprint across virtual machines, is now also a default-on feature inside the Windows 8.1 and Windows 10 operating systems. Deduplication maps multiple identical copies of a physical page onto a single shared copy with copy-on-write semantics. As a result, a write to such a shared page triggers a page fault and is thus measurably slower than a write to a normal page. Prior work has shown that an attacker able to craft pages on the target system can use this timing difference as a simple single-bit side channel to discover that certain pages exist in the system. In this paper, we demonstrate that the deduplication side channel is much more powerful than previously assumed, potentially providing an attacker with a weird machine to read arbitrary data in the system. We first show that an attacker controlling the alignment and reuse of data in memory is able to perform byte-by-byte disclosure of sensitive data (such as randomized 64 bit pointers). Next, even without control over data alignment or reuse, we show that an attacker can still disclose high-entropy randomized pointers using a birthday attack. To show these primitives are practical, we present an end-to-end JavaScript-based attack against the new Microsoft Edge browser, in absence of software bugs and with all defenses turned on. Our attack combines our deduplication-based primitives with a reliable Rowhammer exploit to gain arbitrary memory read and write access in the browser. We conclude by extending our JavaScript-based attack to cross-process system-wide exploitation (using the popular nginx web server as an example) and discussing mitigation strategies.

NITRD Panel: Federal Cybersecurity and Privacy R&D Strategic Plans

04:45PM - 05:45PM

Senior Federal Government representatives will discuss the new Federal cybersecurity R&D strategic plan and the upcoming privacy R&D strategic plan, the research priorities envisioned by the plans, and how the plans will shape the activities of Federal agencies.

Panelists:
Jim Kurose, Assistant Director of NSF for the CISE Directorate
Douglas Maughan, Director, Cyber Security Division, DHS S&T
Bill Newhouse, Deputy Director, National Initiative for Cybersecurity Education (NICE), NIST
Greg Shannon, Assistant Director for Cybersecurity Strategy, Office of Science and Technology Policy (OSTP)
Tomas Vagoun, Cybersecurity and Privacy R&D Coordinator, NITRD

Speed Mentoring and Symposium/Workshops Bridging Reception

05:45PM - 07:45PM

May 26


Workshops Registration

07:00AM - 11:00AM

Workshops Breakfast

07:30AM - 08:30AM

Workshops Opening Remarks

08:45AM - 09:00AM

Workshops Session #1

09:00AM - 10:00AM

Workshops Break (30 minutes)

10:15AM - 10:45AM

Workshops Session #2

10:45AM - 12:30PM

Workshops Lunch

12:30PM - 01:30PM

Workshops Session #3

01:30PM - 03:15PM

Workshops Break (30 minutes)

03:15PM - 03:45PM

Workshops Session #4

03:45PM - 05:30PM

Closing Remarks

05:30PM - 05:45PM