MAY 20-22, 2019 AT THE HYATT REGENCY, SAN FRANCISCO, CA

40th IEEE Symposium on
Security and Privacy

Agenda


Sunday, May 19


Registration

Market Street Foyer

04:00PM - 07:00PM

Monday, May 20


Registration

Market Street Foyer

07:00AM - 05:00PM

Continental Breakfast

Atrium

07:30AM - 08:30AM

Opening Remarks

Grand Ballroom A/B

08:30AM - 08:50AM

Previews

Grand Ballroom A/B

08:50AM - 09:20AM

Break (40 Minutes)

09:20AM - 10:00AM

Session #1: Hardware Security

Grand Ballroom A

10:00AM - 12:10PM

Spectre Attacks: Exploiting Speculative Execution
Paul Kocher (Independent (www.paulkocher.com)), Jann Horn (Google Project Zero), Anders Fogh (G DATA Advanced Analytics), Daniel Genkin (University of Pennsylvania and University of Maryland), Daniel Gruss (Graz University of Technology), Werner Haas (Cyberus Technology), Mike Hamburg (Rambus, Cryptography Research Division), Moritz Lipp (Graz University of Technology), Stefan Mangard (Graz University of Technology), Thomas Prescher (Cyberus Technology), Michael Schwarz (Graz University of Technology), Yuval Yarom (University of Adelaide and Data61)
Modern processors use branch prediction and speculative execution to maximize performance. For example, if the destination of a branch depends on a memory value that is in the process of being read, CPUs will try to guess the destination and attempt to execute ahead. When the memory value finally arrives, the CPU either discards or commits the speculative computation. Speculative logic is unfaithful in how it executes, can access the victim's memory and registers, and can perform operations with measurable side effects.

Spectre attacks involve inducing a victim to speculatively perform operations that would not occur during correct program execution and which leak the victim's confidential information via a side channel to the adversary. This paper describes practical attacks that combine methodology from side channel attacks, fault attacks, and return-oriented programming that can read arbitrary memory from the victim's process. More broadly, the paper shows that speculative execution implementations violate the security assumptions underpinning numerous software security mechanisms, including operating system process separation, containerization, just-in-time (JIT) compilation, and countermeasures to cache timing and side-channel attacks. These attacks represent a serious threat to actual systems since vulnerable speculative execution capabilities are found in microprocessors from Intel, AMD, and ARM that are used in billions of devices.

While makeshift processor-specific countermeasures are possible in some cases, sound solutions will require fixes to processor designs as well as updates to instruction set architectures (ISAs) to give hardware architects and software developers a common understanding as to what computation state CPU implementations are (and are not) permitted to leak.
SoK: The Challenges, Pitfalls, and Perils of Using Hardware Performance Counters for Security
Sanjeev Das (University of North Carolina at Chapel Hill), Jan Werner (University of North Carolina at Chapel Hill), Manos Antonakakis (Georgia Institute of Technology), Michalis Polychronakis (Stony Brook University), Fabian Monrose (University of North Carolina at Chapel Hill)
Hardware Performance Counters (HPCs) have been available in processors for more than a decade. These counters can be used to monitor and measure events that occur at the CPU level. Modern processors provide hundreds of hardware events that can be monitored, and with each new processor architecture more are added. Yet, there has been little in the way of systematic studies on how performance counters can best be utilized to accurately monitor events in real-world settings. Especially when it comes to the use of HPCs for security applications, measurement imprecisions or incorrect assumptions regarding the measured values can undermine the offered protection.
To shed light on this issue, we embarked on a year-long effort to (i) study the best practices for obtaining accurate measurement of events using performance counters, (ii) understand the challenges and pitfalls of using HPCs in various settings, and (iii) explore ways to obtain consistent and accurate measurements across different settings and architectures. We then empirically evaluated the way HPCs have been used throughout a wide variety of papers, and examined whether these widely used techniques in fact obtain performance counter data correctly. As part of that assessment, we (iv) extended the seminal work of Weaver and McKee from almost 10 years ago on non-determinism in HPCs, and applied our findings to 56 papers across various application domains.
In that follow-up study, we found the acceptance of HPCs in security applications is in stark contrast to other application areas, especially in the last five years. Given that, we studied an additional representative set of 41 works from the security literature that rely on HPCs, to better elucidate how the intricacies we discovered can impact the soundness and correctness of their approaches and conclusions. Toward that goal, we (i) empirically evaluated how failure to account for various subtleties in the use of HPCs can undermine the effectiveness of security applications, specifically in the case of exploit prevention and malware detection. Lastly, we showed how (ii) an adversary can manipulate HPCs to bypass certain security defenses.
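The non-determinism question is easy to reproduce in miniature: count a fixed workload repeatedly under Linux perf and compare events one might expect to be deterministic. The sketch below is illustrative only; it assumes a Linux system with perf installed, and the CSV column layout of `perf stat -x` can differ across perf versions.

```python
# Sketch: expose run-to-run variation in "deterministic" HPC events by
# repeatedly counting a fixed workload with Linux perf.
# `perf stat -x,` emits CSV on stderr; exact columns vary by perf version.
import statistics
import subprocess

EVENTS = ["instructions", "branches"]
WORKLOAD = ["/bin/ls", "/"]          # any fixed, repeatable command

def sample_counts(runs=20):
    samples = {e: [] for e in EVENTS}
    for _ in range(runs):
        proc = subprocess.run(
            ["perf", "stat", "-x,", "-e", ",".join(EVENTS)] + WORKLOAD,
            capture_output=True, text=True)
        for line in proc.stderr.strip().splitlines():
            fields = line.split(",")
            if len(fields) < 3:
                continue
            value, event = fields[0], fields[2].split(":")[0]
            if event in samples and value.isdigit():
                samples[event].append(int(value))
    return samples

if __name__ == "__main__":
    for event, vals in sample_counts().items():
        mean = statistics.mean(vals)
        print(f"{event}: mean={mean:.0f} "
              f"spread={(max(vals) - min(vals)) / mean:.2%}")
```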
Theory and Practice of Finding Eviction Sets
Pepe Vila (IMDEA Software Institute/Technical University of Madrid (UPM)), Boris Köpf (Microsoft Research), José F. Morales (IMDEA Software Institute)
Many micro-architectural attacks rely on the capability of an attacker to efficiently find small eviction sets: groups of virtual addresses that map to the same cache set. This capability has become a decisive primitive for cache side-channel, rowhammer, and speculative execution attacks. Despite their importance, algorithms for finding small eviction sets have not been systematically studied in the literature.
In this paper, we perform such a systematic study. We begin by formalizing the problem and analyzing the probability that a set of random virtual addresses is an eviction set. We then present novel algorithms, based on ideas from threshold group testing, that reduce random eviction sets to their minimal core in linear time, improving over the quadratic state-of-the-art.
We complement the theoretical analysis of our algorithms with a rigorous empirical evaluation in which we identify and isolate factors that affect their reliability in practice, such as adaptive cache replacement strategies and TLB thrashing. Our results indicate that our algorithms enable finding small eviction sets much faster than before, and under conditions where this was previously deemed impractical.
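The group-testing reduction can be illustrated with a toy model. In the sketch below, a simulated cache-set oracle stands in for the real hardware eviction test, and ASSOC and the address pool are made-up parameters; the key step is splitting the current set into associativity+1 groups, at least one of which can always be discarded.

```python
# Toy model of linear-time eviction-set reduction via threshold group
# testing. The oracle is a simulated stand-in for real hardware tests.
import random

ASSOC = 8          # cache associativity (ways)

def make_oracle(congruent):
    """Pretend eviction test: succeeds iff at least ASSOC candidates map
    to the victim's cache set (the `congruent` addresses)."""
    def evicts(candidates):
        return len(set(candidates) & congruent) >= ASSOC
    return evicts

def reduce_eviction_set(candidates, evicts):
    """Shrink a working eviction set to ASSOC elements. Splitting into
    ASSOC+1 groups guarantees (by pigeonhole) that some group can be
    discarded while the rest still evicts, so total work stays linear."""
    s = list(candidates)
    while len(s) > ASSOC:
        groups = [set(s[i::ASSOC + 1]) for i in range(ASSOC + 1)]
        for g in groups:
            rest = [x for x in s if x not in g]
            if evicts(rest):      # this group was not needed: drop it
                s = rest
                break
    return s

random.seed(0)
congruent = set(random.sample(range(100_000), 50))
noise = [a for a in random.sample(range(100_000), 3000) if a not in congruent]
initial = list(congruent) + noise
evicts = make_oracle(congruent)
assert evicts(initial)
core = reduce_eviction_set(initial, evicts)
print(len(core), evicts(core))    # -> 8 True
```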
Exploiting Correcting Codes: On the Effectiveness of ECC Memory Against Rowhammer Attacks
Lucian Cojocar (Vrije Universiteit Amsterdam), Kaveh Razavi (Vrije Universiteit Amsterdam), Cristiano Giuffrida (Vrije Universiteit Amsterdam), Herbert Bos (Vrije Universiteit Amsterdam)
Given the increasing impact of Rowhammer, and the dearth of adequate other hardware defenses, many in the security community have pinned their hopes on error-correcting code (ECC) memory as one of the few practical defenses against Rowhammer attacks. Specifically, the expectation is that the ECC algorithm will correct or detect any bits an attacker manages to flip in memory in real-world settings. However, the extent to which ECC really protects against Rowhammer is an open research question, due to two key challenges. First, the details of the ECC implementations in commodity systems are not known. Second, existing Rowhammer exploitation techniques cannot yield reliable attacks in the presence of ECC memory.
In this paper, we address both challenges and provide concrete evidence of the susceptibility of ECC memory to Rowhammer attacks. To address the first challenge, we describe a novel approach that combines a custom-made hardware probe, Rowhammer bit flips, and a cold boot attack to reverse engineer ECC functions on commodity AMD and Intel processors. To address the second challenge, we present ECCploit, a new Rowhammer attack based on composable, data-controlled bit flips and a novel side channel in the ECC memory controller. We show that, while ECC memory does reduce the attack surface for Rowhammer, ECCploit still allows an attacker to mount reliable Rowhammer attacks against vulnerable ECC memory on a variety of systems and configurations. In addition, we show that, despite the non-trivial constraints imposed by ECC, ECCploit can still be powerful in practice and mimic the behavior of prior Rowhammer exploits.
Self-Encrypting Deception: Weaknesses in the Encryption of Solid State Drives
Carlo Meijer (Radboud University, the Netherlands), Bernard van Gastel (Open University of the Netherlands)
We have analyzed the hardware full-disk encryption of several solid state drives (SSDs) by reverse engineering their firmware. These drives were produced by three manufacturers between 2014 and 2018, and include both internal models using the SATA and NVMe interfaces (in an M.2 or traditional 2.5" form factor) and external models using the USB interface.

In theory, the security guarantees offered by hardware encryption are similar to or better than software implementations. In reality, we found that many models using hardware encryption have critical security weaknesses due to specification, design, and implementation issues. For many models, these security weaknesses allow for complete recovery of the data without knowledge of any secret (such as the password).

BitLocker, the encryption software built into Microsoft Windows, relies exclusively on hardware full-disk encryption if the SSD advertises support for it. Thus, for these drives, data protected by BitLocker is also compromised.

We conclude that, given the state of affairs affecting roughly 60% of the market, one currently should not rely solely on hardware encryption offered by SSDs, and users should take additional measures to protect their data.
RIDL: Rogue In-Flight Data Load
Stephan van Schaik (Vrije Universiteit Amsterdam), Alyssa Milburn (Vrije Universiteit Amsterdam), Sebastian Österlund (Vrije Universiteit Amsterdam), Pietro Frigo (Vrije Universiteit Amsterdam), Giorgi Maisuradze (Saarland University), Kaveh Razavi (Vrije Universiteit Amsterdam), Herbert Bos (Vrije Universiteit Amsterdam), Cristiano Giuffrida (Vrije Universiteit Amsterdam)
We present Rogue In-flight Data Load (RIDL), a new class of speculative unprivileged and constrained attacks to leak arbitrary data across address spaces and privilege boundaries (e.g., process, kernel, SGX, and even CPU-internal operations). Our reverse engineering efforts show such vulnerabilities originate from a variety of micro-optimizations pervasive in commodity (Intel) processors, which cause the CPU to speculatively serve loads using extraneous CPU-internal in-flight data (e.g., in the line fill buffers). Contrary to other state-of-the-art speculative execution attacks, such as Spectre, Meltdown and Foreshadow, RIDL can leak this arbitrary in-flight data with no assumptions on the state of the caches or translation data structures controlled by privileged software.

The implications are worrisome. First, RIDL attacks can be implemented even from linear execution with no invalid page faults, eliminating the need for exception suppression mechanisms and enabling system-wide attacks from arbitrary unprivileged code (including JavaScript in the browser). To exemplify such attacks, we build a number of practical exploits that leak sensitive information from victim processes, virtual machines, kernel, SGX and CPU-internal components. Second, and perhaps more importantly, RIDL bypasses all existing “spot” mitigations in software (e.g., KPTI, PTE inversion) and hardware (e.g., speculative store bypass disable) and cannot easily be mitigated even by more heavyweight defenses (e.g., L1D flushing or disabling SMT). RIDL questions the sustainability of a per-variant, spot mitigation strategy and suggests more fundamental mitigations are needed to contain ever-emerging speculative execution attacks.

Session #2: Blockchain & Cryptocurrency

Grand Ballroom B

10:00AM - 12:10PM

Perun: Virtual Payment Hubs over Cryptocurrencies
Stefan Dziembowski (University of Warsaw), Lisa Eckey (TU Darmstadt), Sebastian Faust (TU Darmstadt), Daniel Malinowski (University of Warsaw)
Payment channels emerged recently as an efficient method for performing cheap micropayments in cryptocurrencies. In contrast to traditional on-chain transactions, payment channels have the advantage that they allow for nearly unlimited number of transactions between parties without involving the blockchain. In this work, we introduce Perun, an off-chain channel system that offers a new method for connecting channels that is more efficient than the existing technique of routing transactions'' over multiple channels. To this end, Perun introduces a technique called virtual payment channels'' that avoids involvement of the intermediary for each individual payment. In this paper we formally model and prove security of this technique in the case of one intermediary, who can be viewed as a payment hub'' that has direct channels with several parties. Our scheme works over any cryptocurrency that provides Turing-complete smart contracts. As a proof of concept, we implemented Perun's smart contracts in Ethereum.
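A toy data-structure sketch of the virtual-channel idea (all names and numbers are hypothetical, and all cryptography, dispute handling, and on-chain logic are omitted): collateral is locked once when the virtual channel is opened, after which each payment updates only state shared by the two endpoints.

```python
# Toy sketch of a virtual payment channel over two ledger channels.
# "Signatures" are just recorded names; real Perun uses smart contracts.
from dataclasses import dataclass, field

@dataclass
class Channel:
    left: str
    right: str
    balance: dict                          # party -> coins
    version: int = 0
    signers: set = field(default_factory=set)

    def pay(self, sender, receiver, amount):
        """Off-chain update: needs only the two endpoints."""
        assert self.balance[sender] >= amount
        self.balance[sender] -= amount
        self.balance[receiver] += amount
        self.version += 1                       # newer state wins on dispute
        self.signers = {self.left, self.right}  # both endpoints sign

def open_virtual(ledger_ai, ledger_ib, a, i, b, collateral):
    """Lock collateral in both ledger channels. The intermediary i is
    involved only here and when the virtual channel closes."""
    for ch, party in ((ledger_ai, a), (ledger_ib, b)):
        assert ch.balance[party] >= collateral and ch.balance[i] >= collateral
        ch.balance[party] -= collateral
        ch.balance[i] -= collateral
    return Channel(a, b, {a: collateral, b: collateral})

# Alice pays Bob through hub Ingrid without per-payment involvement.
ai = Channel("Alice", "Ingrid", {"Alice": 10, "Ingrid": 10})
ib = Channel("Ingrid", "Bob", {"Ingrid": 10, "Bob": 10})
virt = open_virtual(ai, ib, "Alice", "Ingrid", "Bob", collateral=5)
virt.pay("Alice", "Bob", 3)   # Ingrid is not contacted
print(virt.balance, virt.version)
```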
Redactable Blockchain in the Permissionless Setting
Dominic Deuber (Friedrich-Alexander-University Erlangen-Nurnberg), Bernardo Magri (Aarhus University), Sri Aravinda Krishnan Thyagarajan (Friedrich-Alexander-University Erlangen-Nurnberg)
Bitcoin is an immutable permissionless blockchain system that has been extensively used as a public bulletin board by many different applications that heavily rely on its immutability. However, Bitcoin's immutability is not without its fair share of demerits. Interpol exposed the existence of harmful and potentially illegal documents, images, and links in the Bitcoin blockchain, and since then there have been several qualitative and quantitative analyses of the types of data currently residing in the Bitcoin blockchain.
Although blockchains have received a lot of attention, the solutions previously proposed for data redaction in the permissionless setting are surprisingly far from feasible and require additional trust assumptions. Hence, the problem of harmful data still poses a huge challenge for law enforcement agencies like Interpol (Tziakouris, IEEE S&P'18).

We propose the first efficient redactable blockchain for the permissionless setting that is easily integrable into Bitcoin, and that does not rely on heavy cryptographic tools or trust assumptions. Our protocol uses consensus-based voting and is parameterised by a policy that dictates the requirements and constraints for redactions; if a redaction gathers enough votes, the operation is performed on the chain. As an extra feature, our protocol offers public verifiability and accountability for the redacted chain. Moreover, we provide formal security definitions and proofs showing that our protocol is secure against redactions that were not agreed by consensus. Additionally, we show the viability of our approach with a proof-of-concept implementation that exhibits only a tiny overhead in chain validation when compared to an immutable chain.
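A simplified sketch of the validation logic (field names and the one-vote toy policy are hypothetical; the paper's policy is richer, with voting windows and vote thresholds): a redacted block keeps the hash of its original version so the next block's back-link still verifies, and validators accept that substitution only if the redaction gathered enough votes.

```python
# Sketch of chain validation with policy-approved redactions.
import hashlib
from dataclasses import dataclass
from typing import Optional

POLICY_MIN_VOTES = 1          # toy policy threshold

def sha(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class Block:
    prev: str
    payload: bytes
    votes: frozenset = frozenset()     # original hashes this block approves
    old_hash: Optional[str] = None     # set iff this block was redacted

    def hash(self) -> str:
        return sha(self.prev.encode() + self.payload)

def approved(chain, old_hash) -> bool:
    return sum(old_hash in b.votes for b in chain) >= POLICY_MIN_VOTES

def validate(chain) -> bool:
    for parent, child in zip(chain, chain[1:]):
        if child.prev == parent.hash():
            continue                                   # ordinary link
        if parent.old_hash == child.prev and approved(chain, parent.old_hash):
            continue                                   # approved redaction
        return False
    return True

genesis = Block(prev="00" * 32, payload=b"genesis")
b1 = Block(prev=genesis.hash(), payload=b"harmful data")
b2 = Block(prev=b1.hash(), payload=b"tx", votes=frozenset({b1.hash()}))
b1_red = Block(prev=genesis.hash(), payload=b"[redacted]", old_hash=b1.hash())
print(validate([genesis, b1_red, b2]))    # True: the edit gathered a vote
print(validate([genesis,
                Block(prev=genesis.hash(), payload=b"forged"), b2]))  # False
```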
Proof-of-Stake Sidechains
Peter Gaži (IOHK), Aggelos Kiayias (University of Edinburgh, IOHK), Dionysis Zindros (University of Athens, IOHK)
Sidechains have long been heralded as the key enabler of blockchain scalability and interoperability. However, no modeling of the concept or a provably secure construction has so far been attempted.
We provide the first formal definition of what a sidechain system is and how assets can be moved between sidechains securely. We put forth a security definition that augments the known transaction ledger properties of liveness and safety to hold across multiple ledgers and enhance them with a new “firewall” security property which safeguards each blockchain from its sidechains, limiting the impact of an otherwise catastrophic sidechain failure.
We then provide a sidechain construction that is suitable for proof-of-stake (PoS) sidechain systems. As an exemplary concrete instantiation we present our construction for an epoch-based PoS system consistent with Ouroboros (Crypto 2017), the PoS blockchain protocol used in Cardano which is one of the largest pure PoS systems by market capitalisation, and we also comment on how the construction can be adapted for other protocols such as Ouroboros Praos (Eurocrypt 2018), Ouroboros Genesis (CCS 2018), Snow White and Algorand. An important feature of our construction is merged-staking that prevents “goldfinger” attacks against a sidechain that is only carrying a small amount of stake. An important technique for pegging chains that we use in our construction is cross-chain certification which is facilitated by a novel cryptographic primitive we introduce called ad-hoc threshold multisignatures (ATMS) which may be of independent interest. We show how ATMS can be securely instantiated by regular and aggregate digital signatures as well as succinct arguments of knowledge such as STARKs and bulletproofs with varying degrees of storage efficiency.
Ouroboros Crypsinous: Privacy-Preserving Proof-of-Stake
Thomas Kerber (The University of Edinburgh & IOHK), Aggelos Kiayias (The University of Edinburgh & IOHK), Markulf Kohlweiss (The University of Edinburgh & IOHK), Vassilis Zikas (The University of Edinburgh & IOHK)
We present Ouroboros Crypsinous, the first formally analyzed privacy-preserving proof-of-stake blockchain protocol. To model its security we give a thorough treatment of private ledgers in the (G)UC setting that might be of independent interest.

To prove our protocol secure against adaptive attacks, we introduce a new coin evolution technique relying on SNARKs and key-private forward secure encryption. The latter primitive, and the associated construction, can be of independent interest. We stress that existing approaches to private blockchains, such as the proof-of-work-based Zerocash, are analyzed only against static corruptions.
Lay Down the Common Metrics: Evaluating Proof-of-Work Consensus Protocols' Security
Ren Zhang (Nervos and imec-COSIC, KU Leuven), Bart Preneel (imec-COSIC, KU Leuven)
Following Bitcoin's Nakamoto Consensus protocol (NC), hundreds of cryptocurrencies utilize proofs of work (PoW) to maintain their ledgers. However, research shows that NC fails to achieve perfect chain quality, allowing malicious miners to alter the public ledger in order to launch several attacks, i.e., selfish mining, double-spending and feather-forking. Some later designs, represented by Ethereum, Bitcoin-NG, DECOR+, Byzcoin and Publish or Perish, aim to solve the problem by raising the chain quality; other designs, represented by Fruitchains, DECOR+ and Subchains, claim to successfully defend against the attacks in the absence of perfect chain quality. As their effectiveness remains self-claimed, the community is divided on whether a secure PoW protocol is possible. In order to resolve this ambiguity and to lay down the foundation of a common body of knowledge, this paper introduces a multi-metric evaluation framework to quantitatively analyze PoW protocols' chain quality and attack resistance. Subsequently we use this framework to evaluate the security of these improved designs through Markov decision processes. We conclude that to date, no PoW protocol achieves ideal chain quality or is resistant against all three attacks. We attribute existing PoW protocols' imperfect chain quality to their unrealistic security assumptions, and their unsatisfactory attack resistance to a dilemma between "rewarding the bad" and "punishing the good". Moreover, our analysis reveals various new protocol-specific attack strategies. Based on our analysis, we propose future directions toward more secure PoW protocols and indicate several common pitfalls in PoW security analyses.
XCLAIM: Trustless, Interoperable, Cryptocurrency-Backed Assets
Alexei Zamyatin (Imperial College London), Dominik Harz (Imperial College London), Joshua Lind (Imperial College London), Panayiotis Panayiotou (Imperial College London), Arthur Gervais (Imperial College London), William J. Knottenbelt (Imperial College London)
Building trustless cross-blockchain trading protocols is challenging. Centralized exchanges thus remain the preferred route to execute transfers across blockchains. However, these services require trust and therefore undermine the very nature of the blockchains on which they operate. To overcome this, several decentralized exchanges have recently emerged which offer support for atomic cross-chain swaps (ACCS). ACCS enable the trustless exchange of cryptocurrencies across blockchains, and are the only known mechanism to do so. However, ACCS suffer significant limitations; they are slow, inefficient and costly, meaning that they are rarely used in practice. We present XCLAIM: the first generic framework for achieving trustless and efficient cross-chain exchanges using cryptocurrency-backed assets (CbAs). XCLAIM offers protocols for issuing, transferring, swapping and redeeming CbAs securely in a non-interactive manner on existing blockchains. We instantiate XCLAIM between Bitcoin and Ethereum and evaluate our implementation; it costs less than USD 0.50 to issue an arbitrary amount of Bitcoin-backed tokens on Ethereum. We show XCLAIM is not only faster, but also significantly cheaper than atomic cross-chain swaps. Finally, XCLAIM is compatible with the majority of existing blockchains without modification, and enables several novel cryptocurrency applications, such as cross-chain payment channels and efficient multi-party swaps.

Lunch

Atrium

12:10PM - 01:10PM

Session #3: Web Security

Grand Ballroom A

01:10PM - 03:00PM

Does Certificate Transparency Break the Web? Measuring Adoption and Error Rate
Emily Stark (Google), Ryan Sleevi (Google), Rijad Muminovic (University of Sarajevo), Devon O'Brien (Google), Eran Messeri (Google), Adrienne Porter Felt (Google), Brendan McMillion (Cloudflare), Parisa Tabriz (Google)
Certificate Transparency (CT) is an emerging system for enabling the rapid discovery of malicious or misissued certificates. Initially standardized in 2013, CT is now finally beginning to see widespread support. Although CT provides desirable security benefits, web browsers cannot begin requiring all websites to support CT at once, due to the risk of breaking large numbers of websites. We discuss challenges for deployment, analyze the adoption of CT on the web, and measure the error rates experienced by users of the Google Chrome web browser. We find that CT has so far been widely adopted with minimal breakage and warnings.

Security researchers often struggle with the tradeoff between security and user frustration: rolling out new security requirements often causes breakage. We view CT as a case study for deploying ecosystem-wide change while trying to minimize end user impact. We discuss the design properties of CT that made its success possible, as well as draw lessons from its risks and pitfalls that could be avoided in future large-scale security deployments.
EmPoWeb: Empowering Web Applications with Browser Extensions
Dolière Francis Somé (Université Côte d'Azur/Inria, France)
Browser extensions are third party programs, tightly integrated into browsers, where they execute with elevated privileges in order to provide users with additional functionalities. Unlike web applications, extensions are not subject to the Same Origin Policy (SOP) and therefore can read and write user data on any web application. They also have access to sensitive user information including browsing history, bookmarks, credentials (cookies) and the list of installed extensions. They have access to a permanent storage in which they can store data as long as they are installed in the user's browser. They can trigger the download of arbitrary files and save them on the user's device.

For security reasons, browser extensions and web applications are executed in separate contexts. Nonetheless, in all major browsers, extensions and web applications can interact by exchanging messages. Through these communication channels, a web application can exploit extension privileged capabilities and thereby access and exfiltrate sensitive user information.

In this work, we analyzed the communication interfaces exposed to web applications by Chrome, Firefox and Opera browser extensions. As a result, we identified many extensions that web applications can exploit to access privileged capabilities. Through extensions' APIs, web applications can bypass the SOP and access user data on any other web application, access user credentials (cookies), browsing history, bookmarks, the list of installed extensions, and extensions' storage, and download arbitrary files and save them on the user's device.

Our results demonstrate that the communications between browser extensions and web applications pose serious security and privacy threats to browsers, web applications and, more importantly, to users. We discuss countermeasures and proposals, and believe that our study, and in particular the tool we used to detect and exploit these threats, can be used as part of the extension review process by browser vendors to help them identify and fix the aforementioned problems in extensions.
"If HTTPS Were Secure, I Wouldn't Need 2FA" - End User and Administrator Mental Models of HTTPS
Katharina Krombholz (CISPA Helmholtz Center for Information Security), Karoline Busse (Bonn University), Katharina Pfeffer (SBA Research), Matthew Smith (Bonn University / FhG FKIE), Emanuel von Zezschwitz (Bonn University / FhG FKIE)
HTTPS is one of the most important protocols used to secure communication and is, fortunately, becoming more pervasive. However, the long tail of websites, in particular, is still not sufficiently secured.
HTTPS involves different types of users, e.g., end users, who are forced to make critical security decisions when faced with warnings, and administrators, who are required to deal with cryptographic fundamentals and complex decisions concerning compatibility.

In this work, we present the first qualitative study of both end user and administrator mental models of HTTPS. We interviewed 18 end users and 12 administrators; our findings reveal misconceptions about security benefits and threat models from both groups. We identify protocol components that interfere with secure configurations and usage behavior and reveal differences between administrator and end user mental models.

Our results suggest that end user mental models are more conceptual while administrator models are more protocol-based. We also found that end users often confuse encryption with authentication, significantly underestimate the security benefits of HTTPS, and ignore and distrust security indicators while administrators often do not understand the interplay of functional protocol components. Based on the different mental models, we discuss implications and provide actionable recommendations for future designs of user interfaces and protocols.
Fidelius: Protecting User Secrets from Compromised Browsers
Saba Eskandarian (Stanford University), Jonathan Cogan (Stanford University), Sawyer Birnbaum (Stanford University), Peh Chang Wei Brandon (Stanford University), Dillon Franke (Stanford University), Forest Fraser (Stanford University), Gaspar Garcia (Stanford University), Eric Gong (Stanford University), Hung T. Nguyen (Stanford University), Taresh K. Sethi (Stanford University), Vishal Subbiah (Stanford University), Michael Backes (CISPA Helmholtz Center for Information Security), Giancarlo Pellegrino (Stanford University/CISPA Helmholtz Center for Information Security), Dan Boneh (Stanford University)
Users regularly enter sensitive data, such as passwords, credit card numbers, or tax information, into the browser window. While modern browsers provide powerful client-side privacy measures to protect this data, none of these defenses prevent a browser compromised by malware from stealing it. In this work, we present Fidelius, a new architecture that uses trusted hardware enclaves integrated into the browser to enable protection of user secrets during web browsing sessions, even if the entire underlying browser and OS are fully controlled by a malicious attacker.

Fidelius solves many challenges involved in providing protection for browsers in a fully malicious environment, offering support for integrity and privacy for form data, JavaScript execution, XMLHttpRequests, and protected web storage, while minimizing the TCB. Moreover, interactions between the enclave and the browser, the keyboard, and the display all require new protocols, each with their own security considerations. Finally, Fidelius takes into account UI considerations to ensure a consistent and simple interface for both developers and users.

As part of this project, we develop the first open source system that provides a trusted path from input and output peripherals to a hardware enclave with no reliance on additional hypervisor security assumptions. These components may be of independent interest and useful to future projects.

We implement and evaluate Fidelius to measure its performance overhead, finding that Fidelius imposes acceptable overhead on page load and user interaction for secured pages and has no impact on pages and page components that do not use its enhanced security features.
Postcards from the Post-HTTP World: Amplification of HTTPS Vulnerabilities in the Web Ecosystem
Stefano Calzavara (Università Ca' Foscari), Riccardo Focardi (Università Ca' Foscari, Cryptosense), Matus Nemec (Università Ca' Foscari, Masaryk University), Alvise Rabitti (Università Ca' Foscari), Marco Squarcina (TU Wien)
HTTPS aims at securing communication over the Web by providing a cryptographic protection layer that ensures the confidentiality and integrity of communication and enables client/server authentication. However, HTTPS is based on the SSL/TLS protocol suites, which have been shown to be vulnerable to various attacks over the years. This has required fixes and mitigations both in servers and in browsers, producing a complicated mixture of protocol versions and implementations in the wild, which makes it unclear which attacks are still effective on the modern Web and what their impact is on web application security. In this paper, we present the first systematic quantitative evaluation of web application insecurity due to cryptographic vulnerabilities. We specify attack conditions against TLS using attack trees and we crawl the Alexa Top 10k to assess the impact of these issues on page integrity, authentication credentials and web tracking. Our results show that the security of a considerable number of websites is severely harmed by cryptographic weaknesses that, in many cases, are due to external or related-domain hosts. This empirically, yet systematically, demonstrates how a relatively limited number of exploitable HTTPS vulnerabilities is amplified by the complexity of the web ecosystem.

Session #4: Privacy

Grand Ballroom B

01:10PM - 03:00PM

Towards Practical Differentially Private Convex Optimization
Roger Iyengar (Carnegie Mellon University), Joseph P. Near (University of California, Berkeley), Dawn Song (University of California, Berkeley), Om Thakkar (Boston University), Abhradeep Thakurta (University of California, Santa Cruz), Lun Wang (Peking University)
Building useful predictive models often involves learning from sensitive data. Training models with differential privacy can guarantee the privacy of such sensitive data. For convex optimization tasks, several differentially private algorithms are known, but none has yet been deployed in practice.

In this work, we make two major contributions towards practical differentially private convex optimization. First, we present Approximate Minima Perturbation, a novel algorithm that can leverage any off-the-shelf optimizer. We show that it can be employed without any hyperparameter tuning, thus making it an attractive technique for practical deployment. Second, we perform an extensive empirical evaluation of the state-of-the-art algorithms for differentially private convex optimization, on a range of publicly available benchmark datasets, and real-world datasets obtained through an industrial collaboration. We release open-source implementations of all the differentially private convex optimization algorithms considered, and benchmarks on as many as nine public datasets, four of which are high-dimensional.
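For concreteness, here is a minimal output-perturbation sketch in the same spirit, and explicitly not the paper's Approximate Minima Perturbation algorithm: train regularized logistic regression with any off-the-shelf optimizer, then add Gaussian noise calibrated to the L2 sensitivity of the exact minimizer, which is 2L/(n*lambda) for L-Lipschitz losses under lambda-strong convexity.

```python
# Output-perturbation sketch for (eps, delta)-DP logistic regression.
# NOT the paper's AMP algorithm: AMP perturbs the objective and tolerates
# approximate minima; this is the classical output-perturbation baseline.
import numpy as np

def train_logreg(X, y, lam, steps=2000, lr=0.1):
    """Any off-the-shelf optimizer works; plain gradient descent here."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / n + lam * w)
    return w

def private_logreg(X, y, lam, eps, delta):
    n, _ = X.shape
    assert np.all(np.linalg.norm(X, axis=1) <= 1), "rows must have norm <= 1"
    w = train_logreg(X, y, lam)
    # Logistic loss is 1-Lipschitz when ||x|| <= 1, so the exact minimizer
    # has L2 sensitivity 2/(n*lam); calibrate Gaussian noise to it.
    sensitivity = 2.0 / (n * lam)
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return w + np.random.normal(0, sigma, size=w.shape)

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5))
X /= np.maximum(1, np.linalg.norm(X, axis=1))[:, None]
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) > 0).astype(float)
print(private_logreg(X, y, lam=0.01, eps=1.0, delta=1e-5).round(2))
```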
PrivKV: Key-Value Data Collection with Local Differential Privacy
Qingqing Ye (Renmin University of China), Haibo Hu (Hong Kong Polytechnic University), Xiaofeng Meng (Renmin University of China), Huadi Zheng (Hong Kong Polytechnic University)
Local differential privacy (LDP), where each user perturbs her data locally before sending it to an untrusted data collector, is a new and promising technique for privacy-preserving distributed data collection. The advantage of LDP is that it enables the collector to obtain accurate statistical estimates over sensitive user data (e.g., location and app usage) without accessing the data itself. However, existing work on LDP is limited to simple data types, such as categorical, numerical, and set-valued data. To the best of our knowledge, there is no existing LDP work on key-value data, which is an extremely popular NoSQL data model and the generalized form of set-valued and numerical data. In this paper, we study the problem of frequency and mean estimation on key-value data by first designing a baseline approach PrivKV within the same "perturbation-calibration" paradigm as existing LDP techniques. To address the poor estimation accuracy due to the clueless perturbation of users, we then propose two iterative solutions PrivKVM and PrivKVM+ that can gradually improve the estimation results through a series of iterations. An optimization strategy is also presented to reduce network latency and increase estimation accuracy by introducing virtual iterations on the collector side without user involvement. We verify the correctness and effectiveness of these solutions through theoretical analysis and extensive experimental results.
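A much-simplified, single-key sketch of the perturb-then-calibrate idea (the actual PrivKV samples over the key domain, handles invented values more carefully, and iterates):

```python
# Single-key, one-round sketch of LDP key-value collection:
# randomized response on key presence, plus a discretized value bit.
import math
import random

def perturb(has_key, value, eps1, eps2):
    """Each user locally randomizes (presence, value in [-1, 1])."""
    p1 = math.exp(eps1) / (1 + math.exp(eps1))   # keep presence bit
    p2 = math.exp(eps2) / (1 + math.exp(eps2))   # keep discretized value
    present = has_key if random.random() < p1 else not has_key
    if not present:
        return (0, 0)
    if has_key:  # discretize v to +/-1 with mean v, then randomize
        s = 1 if random.random() < (1 + value) / 2 else -1
        s = s if random.random() < p2 else -s
    else:        # user invents a value for a key they do not have
        s = random.choice([1, -1])
    return (1, s)

def estimate(reports, eps1, eps2):
    p1 = math.exp(eps1) / (1 + math.exp(eps1))
    p2 = math.exp(eps2) / (1 + math.exp(eps2))
    n = len(reports)
    n_present = sum(flag for flag, _ in reports)
    freq = (n_present / n + p1 - 1) / (2 * p1 - 1)   # debias presence RR
    n_pos = sum(1 for flag, s in reports if flag and s == 1)
    true_pos = (n_pos - (1 - p2) * n_present) / (2 * p2 - 1)
    # Mean is attenuated toward 0 by fake "present" users; PrivKVM's
    # iterations are what correct this in the paper.
    mean = (2 * true_pos - n_present) / max(n_present, 1)
    return freq, mean

random.seed(1)
users = [(random.random() < 0.3, 0.4) for _ in range(200_000)]
reports = [perturb(h, v, eps1=1.0, eps2=1.0) for h, v in users]
print(estimate(reports, 1.0, 1.0))   # freq near 0.3; mean biased toward 0
```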
Differentially Private Model Publishing For Deep Learning
Lei Yu (Georgia Institute of Technology), Ling Liu (Georgia Institute of Technology), Calton Pu (Georgia Institute of Technology), Mehmet Emre Gursoy (Georgia Institute of Technology), Stacey Truex (Georgia Institute of Technology)
Deep learning techniques based on neural networks have shown significant success in a wide range of AI tasks. Large-scale training datasets are one of the critical factors for their success. However, when the training datasets are crowdsourced from individuals and contain sensitive information, the model parameters may encode private information and bear the risks of privacy leakage. The recent growing trend of the sharing and publishing of pre-trained models further aggravates such privacy risks. To tackle this problem, we propose a differentially private approach for training neural networks. Our approach includes several new techniques for optimizing both privacy loss and model accuracy. We employ a generalization of differential privacy called concentrated differential privacy (CDP), with both a formal and refined privacy loss analysis on two different data batching methods. We implement a dynamic privacy budget allocator over the course of training to improve model accuracy. Extensive experiments demonstrate that our approach effectively improves privacy loss accounting, training efficiency and model quality under a given privacy budget.
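The underlying training step is in the DP-SGD family: clip each example's gradient and add Gaussian noise before applying the update. A minimal sketch for logistic regression follows; the paper's actual contributions, CDP-based accounting and dynamic budget allocation, are not reproduced here.

```python
# Minimal DP-SGD-style step: per-example gradient clipping plus Gaussian
# noise. Privacy accounting (the paper uses CDP) is omitted.
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip=1.0, sigma=1.0,
                rng=np.random.default_rng()):
    grads = []
    for x, yi in zip(X_batch, y_batch):
        p = 1 / (1 + np.exp(-x @ w))
        g = (p - yi) * x                                    # per-example grad
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip L2 norm
        grads.append(g)
    noise = rng.normal(0, sigma * clip, size=w.shape)
    return w - lr * (np.sum(grads, axis=0) + noise) / len(X_batch)
```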
KHyperLogLog: Estimating Reidentifiability and Joinability of Large Data at Scale
Pern Hui Chia (Google), Damien Desfontaines (ETH Zurich / Google), Irippuge Milinda Perera (Google), Daniel Simmons-Marengo (Google), Chao Li (Google), Wei-Yen Day (Google), Qiushi Wang (Google), Miguel Guevara (Google)
Understanding the privacy relevant characteristics of data sets, such as reidentifiability and joinability, is crucial for data governance, yet can be difficult for large data sets. While computing the data characteristics by brute force is straightforward, the scale of systems and data collected by large organizations demands an efficient approach. We present KHyperLogLog (KHLL), an algorithm based on approximate counting techniques that can estimate the reidentifiability and joinability risks of very large databases using linear runtime and minimal memory. KHLL enables one to measure reidentifiability of data quantitatively, rather than based on expert judgement or manual reviews. Meanwhile, joinability analysis using KHLL helps ensure the separation of pseudonymous and identified data sets. We describe how organizations can use KHLL to improve protection of user privacy. The efficiency of KHLL allows one to schedule periodic analyses that detect any deviations from the expected risks over time as a regression test for privacy. We validate the performance and accuracy of KHLL through experiments using proprietary and publicly available data sets.
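A toy version of the data structure conveys the idea: keep the K smallest hashes of a field's values, and attach to each a sketch of the user IDs seen with that value. Plain sets below stand in for the HyperLogLogs, and K and the estimators are simplified.

```python
# Toy KHyperLogLog: K minimum hashes of field values, each paired with a
# set of user IDs (a real implementation uses HyperLogLogs, not sets).
import hashlib

K = 256

def h64(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

class KHLL:
    def __init__(self):
        self.sketch = {}                      # hash(value) -> set of user ids

    def add(self, value, user_id):
        hv = h64(value)
        if hv in self.sketch:
            self.sketch[hv].add(user_id)
        elif len(self.sketch) < K:
            self.sketch[hv] = {user_id}
        elif hv < max(self.sketch):
            del self.sketch[max(self.sketch)]   # keep only the K smallest
            self.sketch[hv] = {user_id}

    def uniqueness(self):
        """Reidentifiability signal: fraction of sampled field values
        associated with a single user."""
        sizes = [len(ids) for ids in self.sketch.values()]
        return sum(s == 1 for s in sizes) / max(1, len(sizes))

def containment(a: KHLL, b: KHLL) -> float:
    """Rough joinability signal between two fields via K-min-value overlap."""
    ka, kb = set(a.sketch), set(b.sketch)
    return len(ka & kb) / max(1, len(ka))
```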
Characterizing Pixel Tracking through the Lens of Disposable Email Services
Hang Hu (Virginia Tech), Peng Peng (Virginia Tech), Gang Wang (Virginia Tech)
Disposable email services provide temporary email addresses, allowing people to register online accounts without exposing their real email addresses. In this paper, we perform the first measurement study on disposable email services with two main goals. First, we aim to understand what disposable email services are used for, and what risks (if any) are involved in the common use cases. Second, we use disposable email services as a public gateway to collect a large-scale email dataset for measuring email tracking. Over three months, we collected a dataset from 7 popular disposable email services containing 2.3 million emails sent by 210K domains. We show that online accounts registered through disposable email addresses can be easily hijacked, leading to potential information leakage and financial loss. By empirically analyzing email tracking, we find that third-party tracking is highly prevalent, especially in the emails sent by popular services. We observe that trackers use various methods to hide their tracking behavior, such as falsely claiming the size of tracking images or hiding real trackers behind redirections. A few top trackers stand out in the tracking ecosystem but are not yet dominating the market.
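A naive detector captures the flavor of the measurement; the heuristics and threshold below are made up, and the paper's analysis is far more careful.

```python
# Naive tracking-pixel detector for an email's HTML body: flags remote
# images that are declared 1x1 or carry long, likely-identifying queries.
from html.parser import HTMLParser
from urllib.parse import urlparse

class PixelFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        u = urlparse(a.get("src", ""))
        if not u.scheme.startswith("http"):
            return
        tiny = a.get("width") == "1" and a.get("height") == "1"
        long_query = len(u.query) > 50      # heuristic threshold
        if tiny or long_query:
            self.suspects.append((u.netloc, tiny, long_query))

finder = PixelFinder()
finder.feed('<img src="https://tracker.example/p.gif?uid=abc123" '
            'width="1" height="1">')
print(finder.suspects)   # [('tracker.example', True, False)]
```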

Break (30 Minutes)

03:00PM - 03:30PM

Session #5: Program Analysis

Grand Ballroom A

03:30PM - 05:20PM

Asm2Vec: Boosting Static Representation Robustness for Binary Clone Search against Code Obfuscation and Compiler Optimization
Steven H. H. Ding (McGill University), Benjamin C. M. Fung (McGill University), Philippe Charland (Defence R&D Canada - Valcartier, Canada)
Reverse engineering is a manually intensive but necessary technique for understanding the inner workings of new malware, finding vulnerabilities in existing systems, and detecting patent infringements in released software. An assembly clone search engine facilitates the work of reverse engineers by identifying those duplicated or known parts. However, it is challenging to design a robust clone search engine, since there exist various compiler optimization options and code obfuscation techniques that make logically similar assembly functions appear to be very different.


A practical clone search engine relies on a robust vector representation of assembly code. However, the existing clone search approaches, which rely on a manual feature engineering process to form a feature vector for an assembly function, fail to consider the relationships between features and identify those unique patterns that can statistically distinguish assembly functions. To address this problem, we propose to jointly learn the lexical semantic relationships and the vector representation of assembly functions based on assembly code. We have developed an assembly code representation learning model, Asm2Vec. It only needs assembly code as input and does not require any prior knowledge such as the correct mapping between assembly functions. It can find and incorporate rich semantic relationships among tokens appearing in assembly code. We conduct extensive experiments and benchmark the learning model with state-of-the-art static and dynamic clone search approaches. We show that the learned representation is more robust and significantly outperforms existing methods against changes introduced by obfuscation and optimizations.
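Asm2Vec itself is a custom PV-DM variant that also models control flow, but the flavor of function-to-vector clone search can be approximated with off-the-shelf tooling. The sketch below uses gensim's Doc2Vec (gensim 4 API) over linearized instruction tokens; the assembly snippets are hypothetical placeholders.

```python
# Rough stand-in for the Asm2Vec idea using off-the-shelf gensim Doc2Vec:
# learn one vector per assembly function from its instruction tokens,
# then compare functions by cosine similarity.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
import numpy as np

functions = {                                   # hypothetical tokenized asm
    "memcpy_O0": "push rbp mov rbp rsp mov rax rdi mov rcx rdx rep movsb",
    "memcpy_O2": "mov rax rdi mov rcx rdx rep movsb ret",
    "strlen_O2": "xor eax eax cmp byte ptr rdi 0 je done inc rax",
}
docs = [TaggedDocument(words=toks.split(), tags=[name])
        for name, toks in functions.items()]
model = Doc2Vec(docs, vector_size=64, window=4, min_count=1, epochs=100)

def similarity(name_a, name_b):
    a, b = model.dv[name_a], model.dv[name_b]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity("memcpy_O0", "memcpy_O2"))   # clones should score higher
print(similarity("memcpy_O2", "strlen_O2"))
```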
Iodine: Fast Dynamic Taint Tracking Using Rollback-free Optimistic Hybrid Analysis
Subarno Banerjee (University of Michigan), David Devecsery (Georgia Institute of Technology), Peter M. Chen (University of Michigan), Satish Narayanasamy (University of Michigan)
Dynamic information-flow tracking (DIFT) is useful for enforcing security policies, but rarely used in practice, as it can slow down a program by an order of magnitude. Static program analyses can be used to prove safe execution states and elide unnecessary DIFT monitors, but the performance improvement from these analyses is limited by their need to maintain soundness.

In this paper, we present a novel optimistic hybrid analysis (OHA) to significantly reduce DIFT overhead while still guaranteeing sound results. It consists of a predicated whole-program static taint analysis, which assumes likely invariants gathered from profiles to dramatically improve precision. The optimized DIFT is sound for executions in which those invariants hold true, and recovers to a conservative DIFT for executions in which those invariants are false. We show how to overcome the main problem with using OHA to optimize live executions, which is the possibility of unbounded rollbacks. We eliminate the need for any rollback during recovery by tailoring our predicated static analysis to eliminate only safe elisions of noop monitors. Our tool, Iodine, reduces the overhead of DIFT for enforcing security policies to 9%, which is 4.4x lower than that with traditional hybrid analysis, while still being able to be run on live systems.
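For readers unfamiliar with DIFT, a minimal taint-propagation monitor looks like the following; each such per-operation check is exactly the kind of monitor whose cost Iodine's predicated static analysis tries to prove unnecessary and elide.

```python
# Minimal dynamic taint tracking: values carry taint labels that propagate
# through operations; a sink check raises if tainted data reaches it.
class Tainted:
    def __init__(self, value, labels=frozenset()):
        self.value, self.labels = value, frozenset(labels)

    def _lift(self, other):
        return other if isinstance(other, Tainted) else Tainted(other)

    def __add__(self, other):
        o = self._lift(other)
        # Propagation monitor: result carries the union of input labels.
        return Tainted(self.value + o.value, self.labels | o.labels)

    __radd__ = __add__

def sink(x, policy=frozenset({"secret"})):
    if isinstance(x, Tainted) and x.labels & policy:
        raise RuntimeError(f"policy violation: {set(x.labels & policy)}")
    return x

secret = Tainted(42, {"secret"})
public = Tainted(1)
sink(public + 1)          # fine
sink(secret + public)     # raises: secret data flows to the sink
```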
CaSym: Cache Aware Symbolic Execution for Side Channel Detection and Mitigation
Robert Brotzman (Pennsylvania State University), Shen Liu (Pennsylvania State University), Danfeng Zhang (Pennsylvania State University), Gang Tan (Pennsylvania State University), Mahmut Kandemir (Pennsylvania State University)
Cache-based side channels are becoming an important attack vector through which secret information can be leaked to malicious parties. Previous work on cache-based side channel detection, however, suffers from the code coverage problem or does not provide diagnostic information that is crucial for applying mitigation techniques to vulnerable software. We propose CaSym, a cache-aware symbolic execution approach to identify and report precise information about where side channels occur in an input program. Compared with existing work, CaSym provides several unique features: (1) CaSym enables verification against various attack models and cache models, (2) unlike many symbolic-execution systems for bug finding, CaSym verifies all program execution paths in a sound way, (3) CaSym uses two novel abstract cache models that provide a good balance between analysis scalability and precision, and (4) CaSym provides sufficient information on where and how to mitigate the identified side channels through techniques including preloading and pinning. Evaluation on a set of crypto and database benchmarks shows that CaSym is effective at identifying and mitigating side channels, with reasonable efficiency.
Towards Automated Safety Vetting of PLC Code in Real-World Plants
Mu Zhang (Cornell University), Chien-Ying Chen (University of Illinois at Urbana-Champaign), Bin-Chou Kao (University of Illinois at Urbana-Champaign), Yassine Qamsane (University of Michigan), Yuru Shao (University of Michigan), Yikai Lin (University of Michigan), Elaine Shi (Cornell University), Sibin Mohan (University of Illinois at Urbana-Champaign), Kira Barton (University of Michigan), James Moyne (University of Michigan), Z. Morley Mao (University of Michigan)
Safety violations in programmable logic controllers (PLCs), caused either by faults or attacks, have recently garnered significant attention. However, prior efforts at PLC code vetting suffer from many drawbacks. Static analyses and verification cause significant false positives and cannot reveal specific runtime contexts. Dynamic analyses and symbolic execution, on the other hand, fail due to their inability to handle real-world PLC programs that are event-driven and timing sensitive. In this paper, we propose VetPLC, a temporal context-aware, program analysis-based approach to produce timed event sequences that can be used for automatic safety vetting. To this end, we (a) perform static program analysis to create timed event causality graphs in order to understand causal relations among events in PLC code and (b) mine temporal invariants from data traces collected in Industrial Control System (ICS) testbeds to quantitatively gauge temporal dependencies that are constrained by machine operations. Our VetPLC prototype has been implemented in 15K lines of code. We evaluate it on 10 real-world scenarios from two different ICS settings. Our experiments show that VetPLC outperforms state-of-the-art techniques and can generate event sequences that can be used to automatically detect hidden safety violations.
Using Safety Properties to Generate Vulnerability Patches
Zhen Huang (Pennsylvania State University and University of Toronto), David Lie (University of Toronto), Gang Tan (Pennsylvania State University), Trent Jaeger (Pennsylvania State University)
Security vulnerabilities are among the most critical software defects in existence. When identified, programmers aim to produce patches that prevent the vulnerability as quickly as possible, motivating the need for automatic program repair (APR) methods to generate patches automatically. Unfortunately, most current APR methods fall short because they approximate the properties necessary to prevent the vulnerability using examples. Approximations result in patches that either do not fix the vulnerability comprehensively, or may even introduce new bugs. Instead, we propose property-based APR, which uses human-specified, program-independent and vulnerability-specific safety properties to derive source code patches for security vulnerabilities. Unlike properties that are approximated by observing the execution of test cases, such safety properties are precise and complete. The primary challenge lies in mapping such safety properties into source code patches that can be instantiated into an existing program.

To address these challenges, we propose Senx, which, given a set of safety properties and a single input that triggers the vulnerability, detects the safety property violated by the vulnerability input and generates a corresponding patch that enforces the safety property and thus, removes the vulnerability. Senx solves several challenges with property-based APR: it identifies the program expressions and variables that must be evaluated to check safety properties and identifies the program scopes where they can be evaluated, it generates new code to selectively compute the values it needs if calling existing program code would cause unwanted side effects, and it uses a novel access range analysis technique to avoid placing patches inside loops where it could incur performance overhead. Our evaluation shows that the patches generated by Senx successfully fix 32 of 42 real-world vulnerabilities from 11 applications including various tools or libraries for manipulating graphics/media files, a programming language interpreter, a relational database engine, a collection of programming tools for creating and managing binary programs, and a collection of basic file, shell, and text manipulation tools.

Session #6: Protocols and Authentication

Grand Ballroom B

03:30PM - 05:20PM

Reasoning Analytically About Password-Cracking Software
Alex Liu (University of Chicago), Amanda Nakanishi (University of Chicago), Maximilian Golla (Ruhr-University Bochum), David Cash (University of Chicago), Blase Ur (University of Chicago)
A rich literature has presented efficient techniques for estimating password strength by modeling password-cracking algorithms. Unfortunately, these previous techniques only apply to probabilistic password models, which real attackers seldom use. In this paper, we introduce techniques to reason analytically and efficiently about transformation-based password cracking in software tools like John the Ripper and Hashcat. We define two new operations, rule inversion and guess counting, with which we analyze these tools without needing to enumerate guesses. We implement these techniques and find orders-of-magnitude reductions in the time it takes to estimate password strength. We also present four applications showing how our techniques enable increased scientific rigor in optimizing these attacks' configurations. In particular, we show how our techniques can leverage revealed password data to improve orderings of transformation rules and to identify rules and words potentially missing from an attack configuration. Our work thus introduces some of the first principled mechanisms for reasoning scientifically about the types of password-guessing attacks that occur in practice.
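The core trick can be shown on a few rules that happen to be bijective (the real JtR/Hashcat rule languages include many non-invertible rules, which the paper handles): invert each rule on the target password and look the preimage up in the wordlist, instead of enumerating every guess.

```python
# Sketch of rule inversion for a few invertible Hashcat/JtR-style rules:
# ':' identity, 'r' reverse, '$X' append X, '^X' prepend X.
RULES = [":", "r", "$1", "$!", "^1"]

def invert(rule, guess):
    """Return the unique preimage of `guess` under `rule`, or None."""
    if rule == ":":
        return guess
    if rule == "r":
        return guess[::-1]
    if rule.startswith("$"):           # append char: strip it if present
        return guess[:-1] if guess.endswith(rule[1]) else None
    if rule.startswith("^"):           # prepend char: strip it if present
        return guess[1:] if guess.startswith(rule[1]) else None
    raise ValueError(f"no inverse implemented for rule {rule!r}")

def would_be_guessed(password, wordlist):
    """O(#rules) set lookups instead of O(#rules * #words) guesses."""
    return any((pre := invert(r, password)) is not None and pre in wordlist
               for r in RULES)

wordlist = {"monkey", "dragon", "letmein"}
print(would_be_guessed("monkey1", wordlist))   # True  (monkey + '$1')
print(would_be_guessed("nogard", wordlist))    # True  (dragon + 'r')
print(would_be_guessed("hunter2", wordlist))   # False
```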
True2F: Backdoor-Resistant Authentication Tokens
Emma Dauterman (Stanford University and Google), Henry Corrigan-Gibbs (Stanford University), David Mazières (Stanford University), Dan Boneh (Stanford University), Dominic Rizzo (Google)
We present True2F, a system for second-factor authentication that provides the benefits of conventional authentication tokens in the face of phishing and software compromise, while also providing strong protection against token faults and backdoors. To do so, we develop new lightweight two-party protocols for generating cryptographic keys and ECDSA signatures, and we implement new privacy defenses to prevent cross-origin token-fingerprinting attacks. To facilitate real-world deployment, our system is backwards-compatible with today’s U2F-enabled web services and runs on commodity hardware tokens after a firmware modification. A True2F-protected authentication takes just 57ms to complete on the token, compared with 23ms for unprotected U2F.
Beyond Credential Stuffing: Password Similarity Models using Neural Networks
Bijeeta Pal (Cornell University), Tal Daniel (Technion), Rahul Chatterjee (Cornell University), Thomas Ristenpart (Cornell Tech)
Attackers increasingly use passwords leaked from one website to compromise associated accounts on other websites. Such targeted attacks work because users reuse, or pick similar, passwords for different websites. We recast one of the core technical challenges underlying targeted attacks as the task of modeling similarity of human-chosen passwords. We show how to learn good password similarity models using a compilation of 1.4 billion leaked email/password pairs. Using our trained models of password similarity, we exhibit the most damaging targeted attack to date. Simulations indicate that our attack compromises more than 16% of user accounts in less than a thousand guesses, should one of their other passwords be known to the attacker and despite the use of state-of-the-art countermeasures. We show via a case study involving a large university authentication service that the attacks are also effective in practice. We go on to propose the first-ever defense against such targeted attacks, by way of personalized password strength meters (PPSMs). These are password strength meters that can warn users when they are picking passwords that are vulnerable to attacks, including targeted ones that take advantage of the user's previously compromised passwords. We design and build a PPSM that can be compressed to less than 3 MB, making it easy to deploy in order to accurately estimate the strength of a password against all known guessing attacks.
The 9 Lives of Bleichenbacher's CAT: New Cache ATtacks on TLS Implementations
Eyal Ronen (Tel Aviv University), Robert Gillham (University of Adelaide), Daniel Genkin (University of Michigan), Adi Shamir (Weizmann Institute), David Wong (NCC Group), Yuval Yarom (University of Adelaide / Data61)
At CRYPTO'98, Bleichenbacher published his seminal paper which described a padding oracle attack against RSA implementations that follow the PKCS #1 v1.5 standard. Over the last twenty years, researchers and implementors have spent a huge amount of effort developing and deploying numerous mitigation techniques which were supposed to plug all the possible sources of Bleichenbacher-like leakages. However, as we show in this paper, most implementations are still vulnerable to several novel types of attack based on leakage from various microarchitectural side channels: Out of nine popular implementations of TLS that we tested, we were able to break the security of seven implementations with practical proof-of-concept attacks. We demonstrate the feasibility of using those Cache-like ATacks (CATs) to perform a downgrade attack against any TLS connection to a vulnerable server, using a BEAST-like Man in the Browser attack. The main difficulty we face is how to perform the thousands of oracle queries required before the browser's imposed timeout (which is 30 seconds for almost all browsers, with the exception of Firefox which can be tricked into extending this period). Due to its use of adaptive chosen ciphertext queries, the attack seems to be inherently sequential, but we describe a new way to parallelize Bleichenbacher-like padding attacks by exploiting any available number of TLS servers that share the same public key certificate. With this improvement, we can demonstrate the feasibility of a downgrade attack which could recover all the 2048 bits of the RSA plaintext (including the premaster secret value, which suffices to establish a secure connection) from five available TLS servers in under 30 seconds. This sequential-to-parallel transformation of such attacks can be of independent interest, speeding up and facilitating other side channel attacks on RSA implementations.
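The oracle at the heart of all these attacks is the PKCS #1 v1.5 padding check performed after raw RSA decryption. A sketch of that check follows (byte layout per the standard); any observable difference between its outcomes, whether an error message, timing, or, as in this paper, a cache access pattern, hands the attacker the oracle.

```python
# PKCS #1 v1.5 unpadding: expected layout is 0x00 0x02 || >=8 nonzero
# padding bytes || 0x00 || message. Distinguishable failure = the oracle.
def pkcs1_v15_unpad(em: bytes):
    """em = the k-byte block obtained after raw RSA decryption."""
    if len(em) < 11 or em[0] != 0x00 or em[1] != 0x02:
        return None                       # malformed header
    try:
        sep = em.index(0x00, 2)           # first zero after the header
    except ValueError:
        return None                       # no separator byte
    if sep < 10:
        return None                       # padding must be >= 8 bytes
    return em[sep + 1:]                   # the plaintext message

good = bytes([0, 2]) + b"\xaa" * 8 + b"\x00" + b"premaster secret"
bad  = bytes([0, 1]) + b"\xaa" * 8 + b"\x00" + b"premaster secret"
print(pkcs1_v15_unpad(good))   # b'premaster secret'
print(pkcs1_v15_unpad(bad))    # None -> behavior differs: the oracle
```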
An Extensive Formal Security Analysis of the OpenID Financial-grade API
Daniel Fett (YES.com AG), Pedram Hosseyni (University of Stuttgart), Ralf Kuesters (University of Stuttgart)
Forced by regulations and industry demand, banks worldwide are working to open their customers' online banking accounts to third-party services via web-based APIs. By using these so-called Open Banking APIs, third-party companies, such as FinTechs, are able to read information about and initiate payments from their users' bank accounts. Such access to financial data and resources needs to meet particularly high security requirements to protect customers. One of the most promising standards in this segment is the OpenID Financial-grade API (FAPI), currently under development in an open process by the OpenID Foundation and backed by large industry partners. The FAPI is a profile of OAuth 2.0 designed for high-risk scenarios and aiming to be secure against very strong attackers. To achieve this level of security, the FAPI employs a range of mechanisms that have been developed to harden OAuth 2.0, such as Code and Token Binding (including mTLS and OAUTB), JWS Client Assertions, and Proof Key for Code Exchange. In this paper, we perform a rigorous, systematic formal analysis of the security of the FAPI, based on an existing comprehensive model of the web infrastructure - the Web Infrastructure Model (WIM) proposed by Fett, Küsters, and Schmitz. To this end, we first develop a precise model of the FAPI in the WIM, including different profiles for read-only and read-write access, different flows, different types of clients, and different combinations of security features, capturing the complex interactions in a web-based environment. We then use our model of the FAPI to precisely define central security properties. In an attempt to prove these properties, we uncover partly severe attacks, breaking authentication, authorization, and session integrity properties. We develop mitigations against these attacks and finally are able to formally prove the security of a fixed version of the FAPI. Although financial applications are high-stakes environments, this work is the first to formally analyze and, importantly, verify an Open Banking security profile. By itself, this analysis is an important contribution to the development of the FAPI since it helps to define exact security properties and attacker models, and to avoid severe security risks before the first implementations of the standard go live. Of independent interest, we also uncover weaknesses in the aforementioned security mechanisms for hardening OAuth 2.0. We illustrate that these mechanisms do not necessarily achieve the security properties they have been designed for.

Poster Reception

Atrium

05:30PM - 07:30PM

Short Talks

Waterfront C/D/E

05:45PM - 06:45PM

NSF Secure and Trustworthy Cyberspace (SaTC) Overview
Dr. Shannon Beck (National Science Foundation)
Fallout: Reading Kernel Writes From User Space
Marina Minkin (University of Michigan)
A Framework for Designing Usable and Secure Protocols
Jay Prakash (Singapore University of Technology and Design (SUTD) and University of Illinois at Urbana-Champaign (UIUC))
Machine Learning with Membership Privacy using Adversarial Regularization
Milad Nasr (University of Massachusetts Amherst)
Verifiably Robust Machine Learning for Security
Yizheng Chen (Columbia University)
Network Traffic Fingerprinting using Machine Learning and Evolutionary Computing
Ahmet Aksoy (University of Nevada, Reno)
IoT Inspector: Analyzing Smart Home Traffic in the Wild
Danny Yuxing Huang (Princeton University)
Can we reach full decentralization in permissionless blockchains?
Yujin Kwon (Korea Advanced Institute of Science and Technology (KAIST))
Pili: Simplest-Ever Distributed Consensus Protocols
Elaine Shi (Thunder Research and Cornell University)
4th Interdisciplinary Summerschool on Privacy (ISP 2019)
Jaap-Henk Hoepman (Radboud University)

Speed Mentoring

Waterfront C/D/E

07:00PM - 09:00PM

Tuesday, May 21


Continental Breakfast

Atrium

07:30AM - 08:30AM

Registration

Market Street Foyer

07:30AM - 05:00PM

Awards

Grand Ballroom A/B

08:30AM - 08:50AM

Previews

Grand Ballroom A/B

08:50AM - 09:20AM

Break (40 Minutes)

09:20AM - 10:00AM

Session #7: Mobile and Location Security

Grand Ballroom A

10:00AM - 12:10PM

Short Text, Large Effect: Measuring the Impact of User Reviews on Android App Security & Privacy
Duc Cuong Nguyen (CISPA, Saarland University), Erik Derr (CISPA, Saarland University), Michael Backes (CISPA Helmholtz Center i.G.), Sven Bugiel (CISPA Helmholtz Center i.G.)
Application markets streamline the end-users' task of finding and installing applications. They also form an immediate communication channel between app developers and their end-users in the form of app reviews, which allow users to provide developers feedback on their apps. However, it is unclear to what extent users employ this channel to point out their security and privacy concerns about apps, which aspects of apps users express concerns about, and how developers react to such security- and privacy-related reviews.
In this paper, we present the first study of the relationship between end-user reviews and security- & privacy-related changes in apps. Using natural language processing on 4.5M user reviews for the top 2,583 apps in Google Play, we identified 5,527 security and privacy relevant reviews (SPR). For each app version mentioned in the SPR, we use static code analysis to extract permission-protected features mentioned in the reviews. We successfully mapped SPR to privacy-related changes in app updates in 60.77% of all cases. Using exploratory data analysis and regression analysis, we show that preceding SPR are a significant factor for predicting privacy-related app updates, indicating that user reviews in fact lead to privacy improvements of apps. Our results further show that apps that adopt runtime permissions receive a significantly higher number of SPR, suggesting that runtime permissions put privacy-jeopardizing actions better into users' minds. Further, we can attribute about half of all privacy-relevant app changes exclusively to third-party library code, which hints at larger problems for app developers in adhering to users' privacy expectations and markets' privacy regulations.
Our results are a call to action to make app behavior more transparent to users, in order to leverage their reviews in creating incentives for developers to adhere to security and privacy best practices, and at the same time a call for better tools to support app developers in this endeavor.
Demystifying Hidden Privacy Settings in Mobile Apps
Yi Chen (Indiana University Bloomington, University of Chinese Academy of Sciences), Mingming Zha (Institute of Information Engineering, Chinese Academy of Sciences), Nan Zhang (Indiana University Bloomington), Dandan Xu (Institute of Information Engineering, Chinese Academy of Sciences), Qianqian Zhao (Institute of Information Engineering, Chinese Academy of Sciences), Xuan Feng (Institute of Information Engineering, Chinese Academy of Sciences), Kan Yuan (Indiana University Bloomington), Fnu Suya (The University of Virginia), Yuan Tian (The University of Virginia), Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences), XiaoFeng Wang (Indiana University Bloomington), Wei Zou (Institute of Information Engineering, Chinese Academy of Sciences)
Mobile apps include privacy settings that allow their users to configure how their data should be shared. These settings, however, are often hard to locate and hard to understand, even in popular apps such as Facebook. More seriously, they are often set to share user data by default, exposing users' privacy without proper consent. In this paper, we report the first systematic study of the problem, made possible through an in-depth analysis of user perception of privacy settings. More specifically, we first conduct two user studies (involving nearly one thousand users) to understand privacy settings from the user's perspective and identify these hard-to-find settings. Then we select 14 features that uniquely characterize such hidden privacy settings and utilize a novel technique called semantics-based UI tracing to extract them from a given app. On top of these features, a classifier is trained to automatically discover the hidden privacy settings, which, together with other innovations, has been implemented into a tool called Hound. Over our labeled data set, the tool achieves an accuracy of 93.54%. Further running it on 100,000 latest apps from both Google Play and third-party markets, we find that over a third (36.29%) of the privacy settings identified in these apps are "hidden". Looking into these settings, we observe that they become hard to discover and hard to understand primarily due to problematic categorization in the apps' user interfaces and/or confusing descriptions. More importantly, though more privacy options have been offered to users over time, we also discover that their usability issues persist and even grow more serious, e.g., settings that were originally easy to find are now harder to locate. Among all such hidden privacy settings, 82.16% are set to leak user privacy by default. We provide suggestions for improving the usability of these privacy settings at the end of our study.
Security of GPS/INS based On-road Location Tracking Systems
Sashank Narain (Northeastern University), Aanjhan Ranganathan (Northeastern University), Guevara Noubir (Northeastern University)
Location information is critical to a wide variety of navigation and tracking applications. GPS, today's de facto outdoor localization system, has been shown to be vulnerable to signal spoofing attacks. Inertial Navigation Systems (INS) are emerging as a popular complementary system, especially in road transportation systems, as they enable improved navigation and tracking and offer resilience to wireless signal spoofing and jamming attacks. In this paper, we evaluate the security guarantees of INS-aided GPS tracking and navigation for road transportation systems. We consider an adversary required to travel from a source location to a destination while monitored by an INS-aided GPS system. The goal of the adversary is to travel to alternate locations without being detected. We develop and evaluate algorithms that achieve this goal, providing the adversary significant latitude. Our algorithms build a graph model for a given road network and enable us to derive potential destinations an attacker can reach without raising alarms, even with the INS-aided GPS tracking and navigation system in place. The algorithms render the gyroscope and accelerometer sensors useless, as they generate road trajectories indistinguishable from plausible paths (both in terms of turn angles and road curvature). We also design, build, and demonstrate that the magnetometer can be actively spoofed using a combination of carefully controlled coils. To experimentally demonstrate and evaluate the feasibility of the attack in the real world, we implement the first real-time integrated GPS/INS spoofer that accounts for traffic fluidity, congestion, and lights, and dynamically generates the corresponding spoofing signals. Furthermore, we evaluate our attack on ten different cities using driving traces and publicly available city plans. Our evaluations show that it is possible for an attacker to reach destinations that are as far as 30 km away from the actual destination without being detected. We also show that it is possible for the adversary to reach almost 60-80% of possible points within the target region in some cities. Such results are only a lower bound, as an adversary can adjust our parameters to spend more resources (e.g., time) on the target source/destination than we did for our performance evaluations of thousands of paths. We propose countermeasures that limit an attacker's ability without the need for any hardware modifications. Our system can be used as the foundation for countering such attacks, both detecting and recommending paths that are difficult to spoof.
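A toy version of the path-search idea can be sketched directly: look for alternate paths on the road graph whose sequence of turn angles matches the legitimate route within sensor noise, so spoofed GPS plus honest gyroscope readings stay consistent. The graph, coordinates, and tolerance below are invented for illustration; the paper's algorithms operate on real road networks with richer constraints.

```python
import math

def heading(a, b):
    return math.atan2(b[1] - a[1], b[0] - a[0])

def turn_angles(path, pos):
    # Signed turn (radians) at each interior node of the path.
    hs = [heading(pos[u], pos[v]) for u, v in zip(path, path[1:])]
    return [math.remainder(h2 - h1, math.tau) for h1, h2 in zip(hs, hs[1:])]

def matching_paths(graph, pos, path, target_turns, tol=0.15):
    # DFS for simple paths whose turn sequence matches target_turns within tol.
    turns = turn_angles(path, pos)
    if len(turns) > len(target_turns) or \
       any(abs(t - g) > tol for t, g in zip(turns, target_turns)):
        return
    if len(turns) == len(target_turns):
        yield list(path)
        return
    for nxt in graph[path[-1]]:
        if nxt not in path:
            yield from matching_paths(graph, pos, path + [nxt], target_turns, tol)

# Tiny made-up road network: one left turn at node 1 matches the victim route.
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (2, 0), 4: (2, 1)}
graph = {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: []}
victim = [0, 1, 2]
for p in matching_paths(graph, pos, [0], turn_angles(victim, pos)):
    print(p)   # prints the victim route and any turn-indistinguishable look-alikes
```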
Understanding the Security of ARM Debugging Features
Zhenyu Ning (Wayne State University), Fengwei Zhang (Wayne State University)
Modern processors are commonly equipped with debugging features to facilitate program analysis. Specifically, the ARM debugging architecture involves a series of CoreSight components and debug registers to aid system debugging, and a group of debug authentication signals is designed to restrict the usage of these components and registers. Meanwhile, the security of these debugging features is under-examined, since they normally require physical access in the traditional debugging model. However, since ARMv7, ARM has introduced a new debugging model that requires no physical access, which heightens our concern about the security of the debugging features. In this paper, we perform a comprehensive security analysis of the ARM debugging features and summarize the security and vulnerability implications. To understand the impact of these implications, we also investigate a series of ARM-based platforms in different product domains (i.e., development boards, IoT devices, cloud servers, and mobile devices). We find that the analysis and investigation expose a new attack surface that universally exists in ARM-based platforms. To verify our concern, we further craft the Nailgun attack, which obtains sensitive information (e.g., AES encryption keys and fingerprint images) and achieves arbitrary payload execution in a high-privilege mode from a low-privilege mode by misusing the debugging features. This attack does not rely on software bugs, and our experiments show that almost all the platforms we investigated are vulnerable to it. Potential mitigations are discussed from different perspectives in the ARM ecosystem.
Tap 'n Ghost: A Compilation of Novel Attack Techniques against Smartphone Touchscreens
Seita Maruyama (Waseda University), Satohiro Wakabayashi (Waseda University), Tatsuya Mori (Waseda University / RIKEN AIP)
We present a novel attack named "Tap 'n Ghost", which targets the touchscreens of NFC-enabled mobile devices such as smartphones. Tap 'n Ghost consists of two striking attack techniques: "Tag-based Adaptive Ploy (TAP)" and "Ghost Touch Generator." First, using an NFC card emulator embedded in a common object such as a table, a TAP system performs tailored attacks on the victim's smartphone by employing device fingerprinting; e.g., popping up a customized dialogue box asking whether or not to connect to an attacker's Bluetooth mouse. Further, Ghost Touch Generator forces the victim to connect to the mouse even if she or he tried to cancel the dialogue by touching the "cancel" button; i.e., it alters the selection of a button on the screen. After the connection is established, the attacker can remotely take control of the smartphone, using the knowledge of the screen layout derived from the device fingerprinting. To evaluate the practicality of the attack, we performed an online survey with 300 respondents and a user study involving 16 participants. The results demonstrate that the attack is realistic. We additionally discuss possible countermeasures against the threats posed by Tap 'n Ghost.
SensorID: Sensor Calibration Fingerprinting for Smartphones
Jiexin Zhang (University of Cambridge), Ian Sheret (Polymath Insight Limited), Alastair R. Beresford (University of Cambridge)
Sensors are an essential component of many computer systems today. Mobile devices are a good example, containing a vast array of sensors from accelerometers and GPS units, to cameras and microphones. Data from these sensors are accessible to application programmers who can use this data to build context-aware applications. Good sensor accuracy is often crucial, and therefore manufacturers often use per-device factory calibration to compensate for systematic errors introduced during manufacture. In this paper we explore a new type of fingerprinting attack on sensor data: calibration fingerprinting. A calibration fingerprinting attack infers the per-device factory calibration data from a device by careful analysis of the sensor output alone. Such an attack does not require direct access to any calibration parameters since these are often embedded inside the firmware of the device and are not directly accessible by application developers. We demonstrate the potential of this new class of attack by performing calibration fingerprinting attacks on the inertial measurement unit sensors found in iOS and Android devices. These sensors are good candidates because access to these sensors does not require any special permissions, and the data can be accessed via both a native app installed on a device and also by JavaScript when visiting a website on an iOS and Android device. We find we are able to perform a very effective calibration fingerprinting attack: our approach requires fewer than 100 samples of sensor data and takes less than one second to collect and process into a device fingerprint that does not change over time or after factory reset. We demonstrate that our approach is very likely to produce globally unique fingerprints for iOS devices, with an estimated 67 bits of entropy in the fingerprint for iPhone 6S devices. In addition, we find that the accelerometer of Google Pixel 2 and Pixel 3 devices can also be fingerprinted by our approach.
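The core observation admits a small numeric sketch: calibrated readings of an integer-valued sensor lie on a lattice whose spacing is the per-device gain, so the gain can be recovered from outputs alone. Everything below is synthetic and simplified; real devices add noise and per-axis offsets, and the paper's estimator is more careful.

```python
import numpy as np

def estimate_gain(samples, decimals=9):
    # Distinct calibrated readings are multiples of the gain apart; the
    # smallest positive gap between sorted unique readings approximates it.
    vals = np.unique(np.round(np.asarray(samples), decimals))
    gaps = np.diff(vals)
    return gaps[gaps > 0].min()

rng = np.random.default_rng(0)
true_gain = 0.0023956                      # hypothetical factory calibration
counts = rng.integers(-2000, 2000, 5000)   # raw integer ADC counts
readings = true_gain * counts              # what an app or web page observes
print(estimate_gain(readings))             # ~= true_gain: a stable fingerprint
```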

Session #8: Machine Learning

Grand Ballroom B

10:00AM - 12:10PM

Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lecuyer (Columbia University), Vaggelis Atlidakis (Columbia University), Roxana Geambasu (Columbia University), Daniel Hsu (Columbia University), Suman Jana (Columbia University)
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently, a set of certified defenses has been introduced, which provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense.
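At a high level the defense can be sketched as follows, assuming a generic `model` scoring function: add calibrated noise inside the model so predictions are an (eps, delta)-DP function of the input, estimate expected scores by Monte Carlo, and certify only when the top score dominates by the DP-derived margin. The noise placement and sensitivity analysis in the paper are more involved; the margin below follows the paper's robustness condition.

```python
import numpy as np

def certified_prediction(model, x, sigma, eps, delta, n=1000):
    # Monte Carlo estimate of the expected (softmax) scores under the noise
    # layer; `model` is an assumed function from input array to score vector.
    noisy = [model(x + np.random.normal(0, sigma, x.shape)) for _ in range(n)]
    expected = np.mean(noisy, axis=0)
    k = int(np.argmax(expected))
    runner_up = np.max(np.delete(expected, k))
    # Certify label k only if it beats the runner-up by the DP margin.
    robust = expected[k] > np.exp(2 * eps) * runner_up + (1 + np.exp(eps)) * delta
    return k, robust
```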
DeepSec: A Uniform Platform for Security Analysis of Deep Learning Models
Xiang Ling (Zhejiang University), Shouling Ji (Zhejiang University), Jiaxu Zou (Zhejiang University), Jiannan Wang (Zhejiang University), Chunming Wu (Zhejiang University), Bo Li (UC Berkeley), Ting Wang (Lehigh University)
Deep learning (DL) models are inherently vulnerable to adversarial examples – maliciously crafted inputs to trigger target DL models to misbehave – which significantly hinders the application of DL in security-sensitive domains. Intensive research on adversarial learning has led to an arms race between adversaries and defenders. This plethora of emerging attacks and defenses raises many questions: Which attacks are more evasive, preprocessing-proof, or transferable? Which defenses are more effective, utility-preserving, or general? Are ensembles of multiple defenses more robust than individual defenses? Yet, due to the lack of platforms for comprehensive evaluation of adversarial attacks and defenses, these critical questions remain largely unanswered. In this paper, we present the design, implementation, and evaluation of DEEPSEC, a uniform platform that aims to bridge this gap. In its current implementation, DEEPSEC incorporates 16 state-of-the-art attacks with 10 attack utility metrics, and 13 state-of-the-art defenses with 5 defensive utility metrics. To the best of our knowledge, DEEPSEC is the first platform that enables researchers and practitioners to (i) measure the vulnerability of DL models, (ii) evaluate the effectiveness of various attacks/defenses, and (iii) conduct comparative studies on attacks/defenses in a comprehensive and informative manner. Leveraging DEEPSEC, we systematically evaluate the existing adversarial attack and defense methods and draw a set of key findings, which demonstrate DEEPSEC's rich functionality, such as: (1) the trade-off between misclassification and imperceptibility is empirically confirmed; (2) most defenses that claim to be universally applicable can only defend against limited types of attacks under restricted settings; (3) adversarial examples with higher perturbation magnitude are not necessarily easier to detect; (4) an ensemble of multiple defenses cannot improve the overall defense capability, but can improve the lower bound of the defense effectiveness of individuals. Extensive analysis of DEEPSEC demonstrates its capabilities and advantages as a benchmark platform which can benefit future adversarial learning research.
Exploiting Unintended Feature Leakage in Collaborative Learning
Luca Melis (University College London), Congzheng Song (Cornell University), Emiliano De Cristofaro (University College London), Vitaly Shmatikov (Cornell Tech)
Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with its own training dataset, to build a joint model by training locally and periodically exchanging model updates.
We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier.
We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
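A much-simplified sketch of the passive attack surface: if an observed model update aligns unusually well with the gradient a candidate record would produce, that is evidence of membership. Logistic regression keeps the sketch self-contained; the paper's attacks use richer features and trained classifiers, and the threshold here is illustrative.

```python
import numpy as np

def lr_grad(w, x, y):
    # Gradient of the logistic log-loss for one record (x, y).
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def membership_score(observed_update, w, x, y):
    # Cosine alignment between the observed participant update and the
    # candidate record's own gradient; higher suggests membership.
    g = lr_grad(w, x, y)
    return (g @ observed_update) / (np.linalg.norm(g) *
                                    np.linalg.norm(observed_update))

# Usage (synthetic): flag candidates whose score exceeds a tuned threshold.
rng = np.random.default_rng(2)
w = rng.normal(size=10)
x, y = rng.normal(size=10), 1
update = -0.1 * lr_grad(w, x, y)           # pretend x was in the batch
print(membership_score(update, w, x, y))   # strongly negative alignment here
```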
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
Bolun Wang (UC Santa Barbara), Yuanshun Yao (University of Chicago), Shawn Shan (University of Chicago), Huiying Li (University of Chicago), Bimal Viswanath (Virginia Tech), Haitao Zheng (University of Chicago), Ben Y. Zhao (University of Chicago)
Lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor attacks, where hidden associations or triggers override normal classification to produce unexpected results. For example, a model with a backdoor always identifies a face as Bill Gates if a specific symbol is present in the input. Backdoors can stay hidden indefinitely until activated by an input, and present a serious security risk to many security- or safety-related applications, e.g., biometric authentication systems or self-driving cars.

We present the first robust and generalizable detection and mitigation system for DNN backdoor attacks. Our techniques identify backdoors and reconstruct possible triggers. We identify multiple mitigation techniques via input filters, neuron pruning and unlearning. We demonstrate their efficacy via extensive experiments on a variety of DNNs, against two types of backdoor injection methods identified by prior work. Our techniques also prove robust against a number of variants of the backdoor attack.
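The trigger-reconstruction step can be sketched as an optimization problem, here in PyTorch with assumed `model` and `loader` objects and illustrative hyperparameters: find the smallest mask-and-pattern pair that flips any input to a candidate target label. The paper additionally runs outlier detection (via median absolute deviation) over the recovered masks' L1 norms to flag infected labels.

```python
import torch
import torch.nn.functional as F

def reverse_trigger(model, loader, target, steps=500, lam=0.01, lr=0.1):
    # Learn a mask m and pattern p so that (1-m)*x + m*p is classified as
    # `target`, with an L1 penalty keeping the mask small. 28x28 grayscale
    # shapes are assumed for illustration.
    m_raw = torch.zeros(1, 28, 28, requires_grad=True)   # mask logits
    p_raw = torch.zeros(1, 28, 28, requires_grad=True)   # pattern logits
    opt = torch.optim.Adam([m_raw, p_raw], lr=lr)
    # Stops early if the loader runs out of batches before `steps`.
    for _, (x, _) in zip(range(steps), loader):
        m, p = torch.sigmoid(m_raw), torch.sigmoid(p_raw)
        x_adv = (1 - m) * x + m * p
        y = torch.full((x.size(0),), target, dtype=torch.long)
        loss = F.cross_entropy(model(x_adv), y) + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # An anomalously small recovered mask for some label suggests a backdoor.
    return torch.sigmoid(m_raw).detach()
```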
Helen: Maliciously Secure Coopetitive Learning for Linear Models
Wenting Zheng (UC Berkeley), Raluca Ada Popa (UC Berkeley), Joseph E. Gonzalez (UC Berkeley), Ion Stoica (UC Berkeley)
Many organizations wish to collaboratively train machine learning models on their combined datasets for a common benefit (e.g., better medical research, or fraud detection). However, they often cannot share their plaintext datasets due to privacy concerns and/or business competition. In this paper, we design and build Helen, a system that allows multiple parties to train a linear model without revealing their data, a setting we call coopetitive learning. Compared to prior secure training systems, Helen protects against a much stronger adversary who is malicious and can compromise m−1 out of m parties. Our evaluation shows that Helen can achieve up to five orders of magnitude of performance improvement when compared to training using an existing state-of-the-art secure multi-party computation framework.
Comprehensive Privacy Analysis of Deep Learning
Milad Nasr (University of Massachusetts Amherst), Reza Shokri (National University of Singapore (NUS)), Amir Houmansadr (University of Massachusetts Amherst)
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.

Lunch

Atrium

12:10PM - 01:10PM

Session #9: Fuzzing

Grand Ballroom A

01:10PM - 03:00PM

Razzer: Finding Kernel Race Bugs through Fuzzing
Dae R. Jeong (KAIST), Kyungtae Kim (Purdue University), Basavesh Shivakumar (Purdue University), Byoungyoung Lee (Seoul National University, Purdue University), Insik Shin (KAIST)
A data race in a kernel is an important class of bugs, critically impacting the reliability and security of the associated system. As a result of a race, the kernel may become unresponsive. Even worse, an attacker may launch a privilege escalation attack to acquire root privileges.
In this paper, we propose Razzer, a tool to find race bugs in kernels. The core of Razzer is in guiding fuzz testing towards potential data race spots in the kernel. Razzer employs two techniques to find races efficiently: a static analysis and a deterministic thread interleaving technique. Using the static analysis, Razzer identifies over-approximated potential data race spots, guiding the fuzzer to search for data races in the kernel more efficiently. Using the deterministic thread interleaving technique implemented at the hypervisor, Razzer tames the non-deterministic behavior of the kernel such that it can deterministically trigger a race. We implemented a prototype of Razzer and ran the latest Linux kernel (from v4.16-rc3 to v4.18-rc3) using Razzer. As a result, Razzer discovered 30 new races in the kernel, with 16 subsequently confirmed and accordingly patched by kernel developers after they were reported.
ProFuzzer: On-the-fly Input Type Probing for Better Zero-day Vulnerability Discovery
Wei You (Purdue University), Xueqiang Wang (Indiana University Bloomington), Shiqing Ma (Purdue University), Jianjun Huang (Renmin University of China), Xiangyu Zhang (Purdue University), XiaoFeng Wang (Indiana University Bloomington), Bin Liang (Renmin University of China)
Existing mutation-based fuzzers tend to randomly mutate the input of a program without understanding its underlying syntax and semantics. In this paper, we propose a novel on-the-fly probing technique (called ProFuzzer) that automatically recovers and understands input fields of critical importance to vulnerability discovery during a fuzzing process and intelligently adapts the mutation strategy to enhance the chance of hitting zero-day targets. Since such probing is transparently piggybacked onto the regular fuzzing, no prior knowledge of the input specification is needed. During fuzzing, individual bytes are first mutated and their fuzzing results are automatically analyzed to link related bytes together and identify the type of the field connecting them; these bytes are further mutated together following type-specific strategies, which substantially prunes the search space. We define the probe types generically across all applications, thereby making our technique application-agnostic. Our experiments on standard benchmarks and real-world applications show that ProFuzzer substantially outperforms AFL and its optimized version AFLFast, as well as other state-of-the-art fuzzers including VUzzer, Driller, and QSYM. Within two months, it exposed 42 zero-days in 10 intensively tested programs, generating 30 CVEs.
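The probing phase reduces to a simple loop in toy form: mutate each byte through all 256 values, fingerprint the execution each time, and group adjacent bytes whose profiles match into one field that is then mutated as a unit. `run_target` is a hypothetical harness; the paper's field and type inference is considerably richer.

```python
def run_target(data: bytes) -> int:
    """Hypothetical harness: execute the target, return a coverage hash."""
    raise NotImplementedError

def byte_profile(seed: bytes, i: int):
    # Execution fingerprint of position i across all 256 byte values.
    return tuple(run_target(seed[:i] + bytes([v]) + seed[i+1:])
                 for v in range(256))

def infer_fields(seed: bytes):
    # Adjacent bytes with identical mutation profiles behave as one field.
    profiles = [byte_profile(seed, i) for i in range(len(seed))]
    fields, start = [], 0
    for i in range(1, len(seed) + 1):
        if i == len(seed) or profiles[i] != profiles[start]:
            fields.append((start, i))       # half-open range [start, i)
            start = i
    return fields
```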
Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing
Stefan Nagy (Virginia Tech), Matthew Hicks (Virginia Tech)
Coverage-guided fuzzing is one of the most successful approaches for discovering software bugs and security vulnerabilities. Of its three main components: (1) test case generation, (2) code coverage tracing, and (3) crash triage, code coverage tracing is a dominant source of overhead. Coverage-guided fuzzers trace every test case's code coverage through either static or dynamic binary instrumentation, or more recently, using hardware support. Unfortunately, tracing all test cases incurs significant performance penalties, even when the overwhelming majority of test cases and their coverage information are discarded because they do not increase code coverage. To eliminate needless tracing by coverage-guided fuzzers, we introduce the notion of coverage-guided tracing. Coverage-guided tracing leverages two observations: (1) only a fraction of generated test cases increase coverage, and thus require tracing; and (2) coverage-increasing test cases become less frequent over time. Coverage-guided tracing encodes the current frontier of coverage in the target binary so that it self-reports when a test case produces new coverage, without tracing. This acts as a filter for tracing, restricting the expense of tracing to only coverage-increasing test cases. Thus, coverage-guided tracing trades increased time handling coverage-increasing test cases for decreased time handling non-coverage-increasing test cases. To show the potential of coverage-guided tracing, we create an implementation based on the static binary instrumentor Dyninst called UnTracer. We evaluate UnTracer using eight real-world binaries commonly used by the fuzzing community. Experiments show that after only an hour of fuzzing, UnTracer's average overhead is below 1%, and after 24 hours of fuzzing, UnTracer approaches 0% overhead, while tracing every test case with popular white- and black-box binary tracers AFL-Clang, AFL-QEMU, and AFL-Dyninst incurs overheads of 36%, 612%, and 518%, respectively. We further integrate UnTracer with the state-of-the-art hybrid fuzzer QSYM and show that in 24 hours of fuzzing, QSYM-UnTracer executes 79% and 616% more test cases than QSYM-Clang and QSYM-QEMU, respectively.
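The idea fits in a few lines of sketch-like Python: run every test case on a cheap "interest oracle" that only self-reports whether anything new was touched, and pay for full tracing on the rare inputs that do. `run_oracle` and `run_tracer` are hypothetical harnesses.

```python
def run_oracle(data: bytes) -> bool:
    """Fast path: modified binary returns True iff new coverage was hit."""
    raise NotImplementedError

def run_tracer(data: bytes) -> set:
    """Slow path: fully instrumented run returning the covered-edge set."""
    raise NotImplementedError

def fuzz_loop(generate_input, frontier: set, rounds=100000):
    for _ in range(rounds):
        data = generate_input()
        if run_oracle(data):            # almost always False as fuzzing ages
            frontier |= run_tracer(data)  # trace only coverage-increasers,
                                          # then re-encode the new frontier
    return frontier
```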
NEUZZ: Efficient Fuzzing with Neural Program Smoothing
Dongdong She (Columbia University), Kexin Pei (Columbia University), Dave Epstein (Columbia University), Junfeng Yang (Columbia University), Baishakhi Ray (Columbia University), Suman Jana (Columbia University)
Fuzzing has become the de facto standard technique for finding software vulnerabilities. However, even state-of-the-art fuzzers are not very efficient at finding hard-to-trigger software bugs. Most popular fuzzers use evolutionary guidance to generate inputs that can trigger different bugs. Such evolutionary algorithms, while fast and simple to implement, often get stuck in fruitless sequences of random mutations. Gradient-guided optimization presents a promising alternative to evolutionary guidance. Gradient-guided techniques have been shown to significantly outperform evolutionary algorithms at solving high-dimensional structured optimization problems in domains like machine learning by efficiently utilizing gradients or higher-order derivatives of the underlying function.

However, gradient-guided approaches are not directly applicable to fuzzing as real-world program behaviors contain many discontinuities, plateaus, and ridges where the gradient-based methods often get stuck. We observe that this problem can be addressed by creating a smooth surrogate function approximating the target program’s discrete branching behavior. In this paper, we propose a novel program smoothing technique using surrogate neural network models that can incrementally learn smooth approximations of a complex, real-world program's branching behaviors. We further demonstrate that such neural network models can be used together with gradient-guided input generation schemes to significantly increase the efficiency of the fuzzing process.

Our extensive evaluations demonstrate that NEUZZ significantly outperforms 10 state-of-the-art graybox fuzzers on 10 popular real-world programs both at finding new bugs and achieving higher edge coverage. NEUZZ found 31 previously unknown bugs (including two CVEs) that other fuzzers failed to find in 10 real-world programs and achieved 3X more edge coverage than all of the tested graybox fuzzers over 24 hour runs. Furthermore, NEUZZ also outperformed existing fuzzers on both LAVA-M and DARPA CGC bug datasets.
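A minimal sketch of the smoothing-plus-gradient loop, with placeholder shapes and mutation constants: a small network learns to predict the edge-coverage bitmap from raw bytes, and the gradient of a still-unreached edge picks which bytes to mutate. NEUZZ itself trains on fuzzer-produced corpora and uses a more careful mutation schedule.

```python
import torch
import torch.nn as nn

def build_surrogate(n_bytes, n_edges):
    # Smooth surrogate: input bytes -> per-edge coverage probabilities.
    return nn.Sequential(nn.Linear(n_bytes, 512), nn.ReLU(),
                         nn.Linear(512, n_edges), nn.Sigmoid())

def gradient_guided_mutants(net, seed_bytes, edge_idx, k=16):
    x = torch.tensor([b / 255.0 for b in seed_bytes]).requires_grad_(True)
    net(x)[edge_idx].backward()          # d(edge probability) / d(input byte)
    hot = x.grad.abs().argsort(descending=True)[:k]   # most influential bytes
    mutants = []
    for sign in (1, -1):                 # nudge hot bytes in both directions
        m = bytearray(seed_bytes)
        for i in hot.tolist():
            m[i] = max(0, min(255, m[i] + sign * 32))
        mutants.append(bytes(m))
    return mutants
```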
Fuzzing File Systems via Two-Dimensional Input Space Exploration
Wen Xu (Georgia Institute of Technology), Hyungon Moon (Ulsan National Institute of Science and Technology), Sanidhya Kashyap (Georgia Institute of Technology), Po-Ning Tseng (Georgia Institute of Technology), Taesoo Kim (Georgia Institute of Technology)
File systems, a basic building block of an OS, are too big and too complex to be bug free. Nevertheless, file systems rely on regular stress-testing tools and formal checkers to find bugs, which are limited due to the ever-increasing complexity of both file systems and OSes. Thus, fuzzing, proven to be an effective and practical approach, becomes a preferable choice, as it does not need much knowledge about a target. However, three main challenges exist in fuzzing file systems: mutating a large image blob degrades overall performance, file operations must be generated in an image-dependent way, and found bugs are hard to reproduce with existing OS fuzzers.
Hence, we present JANUS, the first feedback-driven fuzzer that explores the two-dimensional input space of a file system, i.e., mutating metadata on a large image while emitting image-directed file operations. In addition, JANUS relies on a library OS rather than on traditional VMs for fuzzing, which enables JANUS to load a fresh copy of the OS, thereby leading to better reproducibility of bugs. We evaluated JANUS on eight file systems and found 90 bugs in the upstream Linux kernel, 62 of which have been acknowledged. Forty-three bugs have been fixed, with 32 CVEs assigned. In addition, JANUS achieves higher code coverage on all the file systems after 12 hours of fuzzing than the state-of-the-art fuzzer Syzkaller: JANUS visits 4.19x and 2.01x more code paths in Btrfs and ext4, respectively. Moreover, JANUS is able to reproduce 88–100% of the crashes, while Syzkaller fails on all of them.

Session #10: Side Channels and Data Leakage

Grand Ballroom B

01:10PM - 03:00PM

F-BLEAU: Fast Black-box Leakage Estimation
Giovanni Cherubin (Royal Holloway University of London), Kostas Chatzikokolakis (LIX, École Polytechnique), Catuscia Palamidessi (INRIA, École Polytechnique)
We consider the problem of measuring how much a system reveals about its secret inputs. We work in the black-box setting: we assume no prior knowledge of the system's internals, and we run the system for choices of secrets and measure its leakage from the respective outputs. Our goal is to estimate the Bayes risk, from which one can derive some of the most popular leakage measures (e.g., min-entropy leakage). The state-of-the-art method for estimating these leakage measures is the frequentist paradigm, which approximates the system's internals by looking at the frequencies of its inputs and outputs. Unfortunately, this does not scale for systems with large output spaces, where it would require too many input-output examples. Consequently, it also cannot be applied to systems with continuous outputs (e.g., time side channels, network traffic). In this paper, we exploit an analogy between Machine Learning (ML) and black-box leakage estimation to show that the Bayes risk of a system can be estimated by using a class of ML methods: the universally consistent learning rules; these rules can exploit patterns in the input-output examples to improve the estimates' convergence, while retaining formal optimality guarantees. We focus on a set of them, the nearest neighbor rules; we show that they significantly reduce the number of black-box queries required for a precise estimation whenever nearby outputs tend to be produced by the same secret; furthermore, some of them can tackle systems with continuous outputs. We illustrate the applicability of these techniques on both synthetic and real-world data, and we compare them with the state-of-the-art tool, leakiEst, which is based on the frequentist approach.
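The nearest-neighbor estimator is straightforward to sketch on synthetic data; scikit-learn's k-NN stands in for the universally consistent learning rules studied in the paper, and the channel below is invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic system: a uniform 4-valued secret observed through Gaussian noise.
rng = np.random.default_rng(1)
secrets = rng.integers(0, 4, 20000)
outputs = secrets[:, None] + rng.normal(0, 1.0, (20000, 1))

# Holdout error of a k-NN rule converges to the Bayes risk R*.
Xtr, Xte, ytr, yte = train_test_split(outputs, secrets, test_size=0.25)
knn = KNeighborsClassifier(n_neighbors=25).fit(Xtr, ytr)
bayes_risk_est = 1 - knn.score(Xte, yte)

# Leakage measures follow from R*: min-entropy leakage compares posterior
# and prior Bayes vulnerability.
prior_vuln = 1 / 4
post_vuln = 1 - bayes_risk_est
print("min-entropy leakage ~", np.log2(post_vuln / prior_vuln), "bits")
```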
Synesthesia: Detecting Screen Content via Remote Acoustic Side Channels
Daniel Genkin (University of Michigan), Mihir Pattani (University of Pennsylvania), Roei Schuster (Tel Aviv University and Cornell Tech), Eran Tromer (Tel Aviv University and Columbia University)
We show that subtle acoustic noises emanating from within computer screens can be used to detect the content displayed on them. This sound can be picked up by ordinary microphones built into webcams or screens, and is inadvertently transmitted to other parties, e.g., during a videoconference call or in archived recordings.
It can also be recorded by a smartphone or "smart speaker" placed on a desk next to the screen, or from as far as 10 meters away using a parabolic microphone.

Empirically demonstrating various attack scenarios, we show how this channel can be used for real-time detection of on-screen text, or of users' input into on-screen virtual keyboards. We also demonstrate how an attacker can analyze the audio received during a video call (e.g., on Google Hangouts) to infer whether the other side is browsing the web instead of watching the video call, and which web site is displayed on their screen.
Port Contention for Fun and Profit
Alejandro Cabrera Aldaya (Universidad Tecnológica de la Habana (CUJAE), Habana, Cuba), Billy Bob Brumley (Tampere University, Tampere, Finland), Sohaib ul Hassan (Tampere University, Tampere, Finland), Cesar Pereida García (Tampere University, Tampere, Finland), Nicola Tuveri (Tampere University, Tampere, Finland)
Simultaneous Multithreading (SMT) architectures are attractive targets for side-channel enabled attackers, with their inherently broader attack surface that exposes more per physical core microarchitecture components than cross-core attacks. In this work, we explore SMT execution engine sharing as a side-channel leakage source. We target ports to stacks of execution units to create a high-resolution timing side-channel due to port contention, inherently stealthy since it does not depend on the memory subsystem like other cache or TLB based attacks. Implementing our channel on Intel Skylake and Kaby Lake architectures featuring Hyper-Threading, we mount an end-to-end attack that recovers a P-384 private key from an OpenSSL-powered TLS server using a small number of repeated TLS handshake attempts. Furthermore, we show that traces targeting shared libraries, static builds, and SGX enclaves are essentially identical, hence our channel has wide target application.
Attack Directories, Not Caches: Side Channel Attacks in a Non-Inclusive World
Mengjia Yan (University of Illinois at Urbana Champaign), Read Sprabery (University of Illinois at Urbana Champaign), Bhargava Gopireddy (University of Illinois at Urbana Champaign), Christopher Fletcher (University of Illinois at Urbana Champaign), Roy Campbell (University of Illinois at Urbana Champaign), Josep Torrellas (University of Illinois at Urbana Champaign)
Although clouds have strong virtual memory isolation guarantees, cache attacks stemming from shared caches have proved to be a large security problem. However, despite the past effectiveness of cache attacks, their viability has recently been called into question on modern systems, due to trends in cache hierarchy design moving away from inclusive cache hierarchies.
In this paper, we reverse engineer the structure of the directory in a sliced, non-inclusive cache hierarchy, and prove that the directory can be used to bootstrap conflict-based cache attacks on the last-level cache. We design the first cross-core Prime+Probe attack on non-inclusive caches. This attack works with minimal assumptions: the adversary does not need to share any virtual memory with the victim, nor run on the same processor core. We also show the first high-bandwidth Evict+Reload attack on the same hardware. We demonstrate both attacks by extracting key bits during RSA operations in GnuPG on a state-of-the-art non-inclusive Intel Skylake-X server.
Hard Drive of Hearing: Disks that Eavesdrop with a Synthesized Microphone
Andrew Kwong (University of Michigan), Wenyuan Xu (Zhejiang University), Kevin Fu (University of Michigan)
Security conscious individuals may take considerable measures to disable sensors in order to protect their privacy. However, they often overlook the cyberphysical attack surface exposed by devices that were never designed to be sensors in the first place. Our research demonstrates that the mechanical components in magnetic hard disk drives behave as microphones with sufficient precision to extract and parse human speech. These unintentional microphones sense speech with high enough fidelity for the Shazam service to recognize a song recorded through the hard drive. This proof of concept attack sheds light on the possibility of invasion of privacy even in absence of traditional sensors. We also present defense mechanisms, such as the use of ultrasonic aliasing, that can mitigate acoustic eavesdropping by synthesized microphones in hard disk drives.

Break (40 Minutes)

03:00PM - 03:40PM

40th Celebration Plenary Session

Grand Ballroom A/B

03:40PM - 05:15PM

While we are celebrating the 40th IEEE Symposium on Security and Privacy throughout the conference, this session is dedicated to diving into particular aspects of the past and the future of the symposium. Mathias Payer (SP'19 40th Celebration Chair) introduces the session with a brief analysis of past symposia, the ebb and flow of technical topics, and how the conference has evolved over time. We then reflect on four decades of the symposium and glance into what the future may bring with five esteemed guests who helped shape our community: Peter Neumann (SP'82 program chair), Dorothy Denning (SP'84 program chair), Dick Kemmerer (SP'93 program chair), Deborah Frincke (SP'11 general chair), and Giovanni Vigna (SP'11 program chair).

Panelists:
Dorothy Denning, Naval Postgraduate School
Deborah Frincke, NSA/CSS Director of Research
Richard Kemmerer, UC Santa Barbara
Peter Neumann, SRI

Moderator: Giovanni Vigna, UC Santa Barbara

Test of Time Awards

Grand Ballroom A/B

05:15PM - 05:45PM

The IEEE Security and Privacy "Test of Time" award recognizes papers published at IEEE's flagship security conference that have made a lasting impact on the field. To qualify, a paper must have been published at the symposium between 10 and 12 years prior. We present our inaugural Test of Time awards, followed by a special set of Test of Time awards recognizing papers from the first fifteen years of the symposium.

40th Celebration Light Reception

05:45PM - 06:45PM

S&P TC Business Meeting

Grand Ballroom A/B

06:30PM - 07:30PM

Wednesday, May 22


Continental Breakfast

Atrium

07:30AM - 08:30AM

Registration

Market Street Foyer

07:30AM - 06:00PM

Closing Remarks

Grand Ballroom A/B

08:30AM - 08:50AM

Previews

Grand Ballroom A/B

08:50AM - 09:20AM

Break (40 Minutes)

09:20AM - 10:00AM

Session #11: Systems and Applied Security

Grand Ballroom A

10:00AM - 12:10PM

"Should I Worry?" A Cross-Cultural Examination of Account Security Incident Response
Elissa M. Redmiles (University of Maryland)
Digital security technology is able to identify and prevent many threats to users' accounts. However, some threats remain that, to provide reliable security, require human intervention: e.g., through users paying attention to warning messages or completing secondary authentication procedures. While prior work has broadly explored people's mental models of digital security threats, we know little about users' precise, in-the-moment response process to in-the-wild threats. In this work, we conduct a series of qualitative interviews (n=67) with users who had recently experienced suspicious login incidents on their real Facebook accounts in order to explore this process of account security incident response. We find a common process across participants from five countries -- with differing online and offline cultures -- allowing us to identify areas for future technical development to best support user security. We provide additional insights on the unique nature of incident-response information seeking, known-attacker threat models, and lessons learned from a large, cross-cultural qualitative study of digital security.
Bitcoin vs. Bitcoin Cash: Coexistence or Downfall of Bitcoin Cash?
Yujin Kwon (KAIST), Hyoungshick Kim (Sungkyunkwan University), Jinwoo Shin (KAIST), Yongdae Kim (KAIST)
Bitcoin has become the most popular cryptocurrency based on a peer-to-peer network. In Aug. 2017, Bitcoin was split into the original Bitcoin (BTC) and Bitcoin Cash (BCH). Since then, miners have had a choice between BTC and BCH mining because the two use compatible proof-of-work algorithms. Therefore, they can freely choose which coin to mine for higher profit, where profitability depends on both the coin price and the mining difficulty. Some miners switch coins as soon as the mining difficulty changes, because difficulty changes are more predictable than price changes; we call this behavior fickle mining.
In this paper, we study the effects of fickle mining by modeling a game between two coins. To do this, we consider both fickle miners and some factions (e.g., BITMAIN for BCH mining) that stick to mining one coin to maintain that chain. In this model, we show that fickle mining leads to a Nash equilibrium in which only a faction sticking to its coin mining remains as a loyal miner to the less valued coin (e.g., BCH), where loyal miners refer to those who conduct mining even after coin mining difficulty increases. This situation would cause severe centralization, weakening the security of the coin system.
To determine which equilibrium the competing coin systems (e.g., BTC vs. BCH) are moving toward, we traced the historical changes of mining power for BTC and BCH and found that BCH often lacked loyal miners until Nov. 13, 2017, when the difficulty adjustment algorithm of BCH mining was changed. That change led to a state close to the stable coexistence of BTC and BCH. We also demonstrate that BCH may again be left without loyal miners when a fraction of miners automatically and repeatedly switches to the most profitable coin to mine (i.e., automatic mining). According to our analysis, as of Dec. 2018, loyal miners would leave BCH if more than about 5% of the total mining capacity for BTC and BCH engaged in automatic mining. In addition, we analyze the recent "hash war" between Bitcoin ABC and SV, which confirms our theoretical analysis. Finally, we note that our results apply to any competing cryptocurrency systems in which the same hardware (e.g., ASICs or GPUs) can be used for mining. Our study therefore brings new and important angles to competitive coin markets: a coin can intentionally weaken the security and decentralization level of a rival coin when mining hardware is shared between them, allowing for automatic mining.
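The fickle-mining dynamic can be caricatured in a few lines. All constants below (prices folded together with block rewards, loyal fractions, and the difficulty retargeting rate) are invented for illustration and are not the paper's game-theoretic model.

```python
# Per-hash profitability of a chain ~ price * reward / difficulty, and
# difficulty drifts toward the hash rate a chain attracted last epoch.
# Fickle miners jump to whichever chain currently pays more; loyal
# factions stay put.
def simulate(epochs=20, loyal=(0.6, 0.05), fickle=0.35,
             price=(1.0, 0.12), difficulty=(0.65, 0.05)):
    d_btc, d_bch = difficulty
    for _ in range(epochs):
        profit_btc = price[0] / d_btc
        profit_bch = price[1] / d_bch
        h_btc = loyal[0] + (fickle if profit_btc >= profit_bch else 0)
        h_bch = loyal[1] + (fickle if profit_bch > profit_btc else 0)
        d_btc += 0.5 * (h_btc - d_btc)   # difficulty retargets toward hash rate
        d_bch += 0.5 * (h_bch - d_bch)
        print(f"BTC {h_btc:.2f} / BCH {h_bch:.2f}  (fickle on "
              f"{'BTC' if profit_btc >= profit_bch else 'BCH'})")

simulate()   # shows the minority chain's hash rate oscillating with the switch
```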
Stealthy Porn: Understanding Real-World Adversarial Images for Illicit Online Promotion
Kan Yuan (Indiana University Bloomington), Di Tang (Chinese University of Hong Kong), Xiaojing Liao (Indiana University Bloomington), XiaoFeng Wang (Indiana University Bloomington), Xuan Feng (Indiana University Bloomington/Chinese Academy of Sciences), Yi Chen (Indiana University Bloomington/Chinese Academy of Sciences), Menghan Sun (Chinese University of Hong Kong), Haoran Lu (Indiana University Bloomington), Kehuan Zhang (Chinese University of Hong Kong)
Recent years have witnessed rapid progress in deep learning (DL), which has also brought its potential weaknesses into the spotlight of security and machine learning studies. Despite important discoveries made by adversarial learning research, surprisingly little attention has been paid to the real-world adversarial techniques deployed by cybercriminals to evade image-based detection. Unlike adversarial examples, which induce misclassification using nearly imperceptible perturbations, real-world adversarial images tend to be less optimal yet equally effective. As a first step toward understanding the threat, we report in this paper a study on adversarial promotional porn images (APPIs) that are extensively used in underground advertising. We show that today's adversaries strategically construct APPIs to evade explicit content detection while still preserving their sexual appeal, even though the distortions and noise introduced are clearly observable to humans. To understand such real-world adversarial images and the underground business behind them, we develop a novel DL-based methodology called Malèna, which focuses on the regions of an image where sexual content is least obfuscated and therefore visible to the target audience of a promotion. Using this technique, we discovered over 4,000 APPIs among 4,042,690 images crawled from popular social media, and further brought to light the unique techniques they use to evade popular explicit content detectors (e.g., Google Cloud Vision API, Yahoo Open NSFW model), and the reasons these techniques work. Also studied are the ecosystem of such illicit promotions, including the obfuscated contacts advertised through those images, compromised accounts used to disseminate them, and large APPI campaigns involving thousands of images. Another interesting finding is the apparent attempt made by cybercriminals to steal others' images for their advertising. The study highlights the importance of research on real-world adversarial learning and makes a first step towards mitigating the threats it poses.
LBM: A Security Framework for Peripherals within the Linux Kernel
Dave (Jing) Tian (University of Florida), Grant Hernandez (University of Florida), Joseph Choi (University of Florida), Vanessa Frost (University of Florida), Peter Johnson (Middlebury College), Kevin Butler (University of Florida)
Modern computer peripherals are diverse in their capabilities and functionality, ranging from keyboards and printers to smartphones and external GPUs. In recent years, peripherals increasingly connect over a small number of standardized communication protocols, including USB, Bluetooth, and NFC. The host operating system is responsible for managing these devices; however, malicious peripherals can request additional functionality from the OS resulting in system compromise, or can craft data packets to exploit vulnerabilities within OS software stacks. Defenses against malicious peripherals to date only partially cover the peripheral attack surface and are limited to specific protocols (e.g., USB). In this paper, we propose Linux (e)BPF Modules (LBM), a general security framework that provides a unified API for enforcing protection against malicious peripherals within the Linux kernel. LBM leverages the eBPF packet filtering mechanism for performance and extensibility and we provide a high-level language to facilitate the development of powerful filtering functionality. We demonstrate how LBM can provide host protection against malicious USB, Bluetooth, and NFC devices; we also instantiate and unify existing defenses under the LBM framework. Our evaluation shows that the overhead introduced by LBM is within 1 μs per packet in most cases, application and system overhead is negligible, and LBM outperforms other state-of-the-art solutions. To our knowledge, LBM is the first security framework designed to provide comprehensive protection against malicious peripherals within the Linux kernel.
SoK: Shining Light on Shadow Stacks
Nathan Burow (Purdue University), Xinping Zhang (Purdue University), Mathias Payer (EPFL)
Control-Flow Hijacking attacks are the dominant attack vector against C/C++ programs. Control-Flow Integrity (CFI) solutions mitigate these attacks on the forward edge, i.e., indirect calls through function pointers and virtual calls. Protecting the backward edge is left to stack canaries, which are easily bypassed through information leaks. Shadow Stacks are a fully precise mechanism for protecting backwards edges, and should be deployed with CFI mitigations.
We present a comprehensive analysis of all possible shadow stack mechanisms along three axes: performance, compatibility, and security. For performance comparisons we use SPEC CPU2006, while security and compatibility are qualitatively analyzed. Based on our study, we renew calls for a shadow stack design that leverages a dedicated register, resulting in low performance overhead and minimal memory overhead, but sacrifices compatibility. We present case studies of our implementation of such a design, Shadesmar, on Phoronix and Apache to demonstrate the feasibility of dedicating a general purpose register to a security monitor on modern architectures, and Shadesmar's deployability. Our comprehensive analysis, including detailed case studies for our novel design, allows compiler designers and practitioners to select the correct shadow stack design for different usage scenarios.
Shadow stacks belong to the class of defense mechanisms that require metadata about the program's state to enforce their defense policies. Protecting this metadata for deployed mitigations requires in-process isolation of a segment of the virtual address space. Prior work on defenses in this class has relied on information hiding to protect metadata. We show that stronger guarantees are possible by repurposing two new Intel x86 extensions for memory protection (MPX) and page table control (MPK). Building on our isolation efforts with MPX and MPK, we present the design requirements for a dedicated hardware mechanism to support intra-process memory isolation, and discuss how such a mechanism can empower the next wave of highly precise software security mitigations that rely on partially isolated information in a process.
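The backward-edge check itself is simple enough for a toy sketch; the Python below stands in for compiler-inserted prologue/epilogue instrumentation, with made-up addresses.

```python
# Toy illustration of why a shadow stack is a fully precise backward-edge
# defense: every call pushes the return address to a second, protected stack,
# and every return checks the (possibly corrupted) address on the normal
# stack against it.
class ShadowStack:
    def __init__(self):
        self._stack = []

    def on_call(self, ret_addr):
        self._stack.append(ret_addr)       # function-prologue instrumentation

    def on_return(self, ret_addr):
        expected = self._stack.pop()       # function-epilogue instrumentation
        if ret_addr != expected:
            raise RuntimeError("backward-edge hijack detected")
        return ret_addr

ss = ShadowStack()
ss.on_call(0x401020)
ss.on_return(0x401020)                     # legitimate return: passes
try:
    ss.on_call(0x401040)
    ss.on_return(0xdeadbeef)               # overwritten return slot: detected
except RuntimeError as e:
    print(e)
```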
Kiss from a Rogue: Evaluating Detectability of Pay-at-the-Pump Card Skimmers
Nolen Scaife (University of Florida), Jasmine Bowers (University of Florida), Christian Peeters (University of Florida), Grant Hernandez (University of Florida), Imani N. Sherman (University of Florida), Patrick Traynor (University of Florida), Lisa Anthony (University of Florida)
Credit and debit cards enable financial transactions at unattended "pay-at-the-pump" gas station terminals across North America. Attackers discreetly open these pumps and install skimmers, which copy sensitive card data. While EMV (“chip-and-PIN”) has made substantial inroads in traditional retailers, such systems have virtually no deployment at pay-at-the-pump terminals due to dramatically higher costs and logistical/regulatory constraints, leaving consumers vulnerable in these contexts. In an effort to improve security, station owners have deployed security indicators such as low-cost tamper-evident seals, and technologists have developed skimmer detection apps for mobile phones. Not only do these solutions put the onus on consumers to notice and react to security concerns at the pump, but the efficacy of these solutions has not been measured. In this paper, we evaluate the indicators available to consumers to detect skimmers. We perform a comprehensive teardown of all known skimmer detection apps for iOS and Android devices, and then conduct a forensic analysis of real-world gas pump skimmer hardware recovered by multiple law enforcement agencies. Finally, we analyze anti-skimmer mechanisms deployed by pump owners/operators, and augment this investigation with an analysis of skimmer reports and accompanying security measures collected by the Florida Department of Agriculture and Consumer Services over four years, making this the most comprehensive long-term study of such devices. Our results show that common gas pump security indicators are not only ineffective at empowering consumers to detect tampering, but may be providing a false sense of security. Accordingly, stronger, reliable, inexpensive measures must be developed to protect consumers and merchants from fraud.

Session #12: Cryptography & Encrypted Data

Grand Ballroom B

10:00AM - 12:10PM

Blind Certificate Authorities
Liang Wang (UW Madison), Gilad Asharov (Cornell Tech), Rafael Pass (Cornell Tech), Thomas Ristenpart (Cornell Tech), Abhi Shelat (Northeastern University)
We explore how to build a blind certificate authority (CA). Unlike conventional CAs, which learn the exact identity of those registering a public key, a blind CA can simultaneously validate an identity and provide a certificate binding a public key to it, without ever learning the identity. Blind CAs would therefore allow bootstrapping truly anonymous systems in which no party ever learns who participates. In this work we focus on constructing blind CAs that can bind an email address to a public key.

To do so, we first introduce secure channel injection (SCI) protocols. These allow one party (in our setting, the blind CA) to insert a private message into another party's encrypted communications. We construct an efficient SCI protocol for communications delivered over TLS, and use it to realize anonymous proofs of account ownership for SMTP servers. Combined with a zero-knowledge certificate signing protocol, we build the first blind CA that allows Alice to obtain an X.509 certificate binding her email address alice@domain.com to a public key of her choosing without ever revealing "alice" to the CA. We show experimentally that our system works with standard email server implementations as well as Gmail.
Data Recovery on Encrypted Databases With k-Nearest Neighbor Query Leakage
Evgenios Kornaropoulos (Brown University), Charalampos Papamanthou (University of Maryland), Roberto Tamassia (Brown University)
Recent works by Kellaris et al. (CCS ’16) and Lacharité et al. (S&P ’18) demonstrated data recovery attacks on encrypted databases that support rich queries such as range queries. In this paper, we develop the first data recovery attacks on encrypted databases supporting one-dimensional k-nearest neighbor (k-NN) queries, which are widely used in spatial data management. Our attacks exploit a generic k-NN query leakage profile: the attacker observes the identifiers of matched records. We consider both unordered responses, where the leakage is a set, and ordered responses, where the leakage is a k-tuple ordered by distance from the query point. As a first step, we perform a theoretical feasibility study on exact reconstruction, i.e., recovery of the exact plaintext values of the encrypted database. For ordered responses, we show that exact reconstruction is feasible if the attacker has additional access to some auxiliary information that is normally not available in practice. For unordered responses, we prove that exact reconstruction is impossible due to the infinite number of valid reconstructions. As a next step, we propose practical and more realistic approximate reconstruction attacks that recover an approximation of the plaintext values. For ordered responses, we show that after observing enough query responses, the attacker can approximate the client's encrypted database with considerable accuracy. For unordered responses, we characterize the set of valid reconstructions as a convex polytope in a k-dimensional space and present a rigorous attack that reconstructs the plaintext database with bounded approximation error. As multidimensional spatial data can be efficiently processed by mapping it to one dimension via Hilbert curves, we demonstrate our approximate reconstruction attacks on privacy-sensitive geolocation data. Our experiments on real-world datasets show that our attacks reconstruct the plaintext values with relative error ranging from 2.9% down to 0.003%.
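To make the leakage profile concrete, the following toy simulation (our own, far cruder than the paper's attacks) shows approximate reconstruction for the simplest ordered case, k = 1, where each query leaks only the identifier of the single nearest record:

    import random

    # Toy 1-D encrypted database; the attacker sees only *which* record is
    # nearest to each query, never the values themselves.
    random.seed(1)
    db = sorted(random.random() for _ in range(10))   # secret values in [0,1]

    QUERIES = 200_000
    counts = [0] * len(db)
    for _ in range(QUERIES):
        q = random.random()
        nearest = min(range(len(db)), key=lambda i: abs(db[i] - q))
        counts[nearest] += 1                          # leaked identifier only

    # Record i answers queries in its Voronoi cell, whose right boundary is
    # the midpoint (v_i + v_{i+1}) / 2; cumulative response frequencies
    # estimate these boundaries, and cell centers approximate the values.
    boundaries, acc = [0.0], 0.0
    for c in counts:
        acc += c / QUERIES
        boundaries.append(acc)
    estimates = [(boundaries[i] + boundaries[i + 1]) / 2 for i in range(len(db))]

    for real, est in zip(db, estimates):
        print(f"true {real:.3f}  est {est:.3f}  err {abs(real - est):.3f}")

With enough uniform queries the estimated boundaries converge to the true midpoints, which is the intuition behind the bounded approximation error the paper proves for ordered responses.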
Threshold ECDSA from ECDSA Assumptions: The Multiparty Case
Jack Doerner (Northeastern University), Yashvanth Kondi (Northeastern University), Eysa Lee (Northeastern University), abhi shelat (Northeastern University)
Cryptocurrency applications have spurred a resurgence of interest in the computation of ECDSA signatures using threshold protocols, that is, protocols in which the signing key is secret-shared among n parties, of which any subset of size t must interact in order to compute a signature. Among the resulting works to date, that of Doerner et al. requires the most natural assumptions while also achieving the best practical signing speed. It is, however, limited to the setting in which the threshold is two. We propose an extension of their scheme to arbitrary thresholds, and prove it secure against a malicious adversary corrupting up to one party less than the threshold, under only the Computational Diffie-Hellman assumption in the Random Oracle model, an assumption strictly weaker than those under which ECDSA is proven. Whereas the best current schemes for threshold-two ECDSA signing use a Diffie-Hellman key exchange to calculate each signature's nonce, a direct adaptation of this technique to a larger threshold t would incur a round count linear in t; we thus abandon it in favor of a new mechanism that yields a protocol requiring log(t)+6 rounds in total. We design a new consistency check, similar in spirit to that of Doerner et al., but suitable for an arbitrary number of participants, and we optimize the underlying two-party multiplication protocol on which our scheme is based, reducing its concrete communication and computation costs. We implement our scheme and evaluate it among groups of up to 256 co-located and 128 geographically-distributed parties, and among small groups of embedded devices. We find that in the LAN setting, our scheme outperforms all prior works by orders of magnitude, and that it is efficient enough for use even on smartphones or hardware tokens. In the WAN setting we find that, despite its logarithmic round count, our protocol outperforms the best constant-round protocols in realistic scenarios.
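The t-of-n structure at the heart of any threshold scheme is easy to see in isolation. Below is a minimal Shamir-sharing sketch in Python over a toy field (names and parameters are ours); note that a real threshold ECDSA protocol, including the one above, never reconstructs the key in one place but instead computes the signature interactively on shares:

    import random

    P = 2**127 - 1        # toy prime field, standing in for the curve order

    def share(secret, t, n):
        # random degree-(t-1) polynomial with constant term = secret
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation of the polynomial at x = 0
        secret = 0
        for xj, yj in shares:
            num = den = 1
            for xm, _ in shares:
                if xm != xj:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, -1, P)) % P
        return secret

    key = random.randrange(P)              # the "signing key"
    shares = share(key, t=3, n=5)
    assert reconstruct(shares[:3]) == key  # any 3 of the 5 shares suffice
    assert reconstruct(shares[2:]) == key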
Learning to Reconstruct: Statistical Learning Theory and Encrypted Database Attacks
Paul Grubbs (Cornell University), Marie-Sarah Lacharité (Royal Holloway, University of London), Brice Minaud (Ecole Normale Supérieure, CNRS, PSL University and Inria), Kenneth G. Paterson (Royal Holloway, University of London)
We show that the problem of reconstructing encrypted databases from access pattern leakage is closely related to statistical learning theory. This new viewpoint enables us to develop broader attacks that are supported by streamlined performance analyses. First, we address the problem of ε-approximate database reconstruction (ε-ADR) from range query leakage, giving attacks whose query cost scales only with the relative error ε, and is independent of the size of the database, or the number N of possible values of data items. This already goes significantly beyond the state-of-the-art for such attacks, as represented by Kellaris et al. (ACM CCS 2016) and Lacharité et al. (IEEE S&P 2018). We also study the new problem of ε-approximate order reconstruction (ε-AOR), where the adversary is tasked with reconstructing the order of records, except for records whose values are approximately equal. We show that as few as O(ε^−1 log ε^−1) uniformly random range queries suffice. Our analysis relies on an application of learning theory to PQ-trees, special data structures tuned to compactly record certain ordering constraints. We then show that when an auxiliary distribution is available, ε-AOR can be enhanced to achieve ε-ADR; using real data, we show that devastatingly small numbers of queries are needed to attain very accurate database reconstruction. Finally, we generalize from ranges to consider what learning theory tells us about the impact of access pattern leakage for other classes of queries, focusing on prefix and suffix queries. We illustrate this with both concrete attacks for prefix queries and with a general lower bound for all query classes. We also show a very general reduction from reconstruction with known or chosen queries to PAC learning.
On the Security of Two-Round Multi-Signatures
Manu Drijvers (DFINITY, ETH Zurich), Kasra Edalatnejad (EPFL), Bryan Ford (EPFL), Eike Kiltz (Ruhr-Universität Bochum), Julian Loss (Ruhr-Universität Bochum), Gregory Neven (DFINITY), Igors Stepanovs (UCSD)
A multi-signature scheme allows a group of signers to collaboratively sign a message, creating a single signature that convinces a verifier that every individual signer approved the message. The increased interest in technologies to decentralize trust has triggered the proposal of highly efficient two-round Schnorr-based multi-signature schemes designed to scale up to thousands of signers, namely BCJ by Bagherzandi et al. (CCS 2008), MWLD by Ma et al. (DCC 2010), CoSi by Syta et al. (S&P 2016), and MuSig by Maxwell et al. (ePrint 2018). In this work, we point out serious security issues in all currently known two-round multi-signature schemes (without pairings). First, we prove that none of the schemes can be proved secure without radically departing from currently known techniques. Namely, we show that if the one-more discrete-logarithm problem is hard, then no algebraic reduction exists that proves any of these schemes secure under the discrete-logarithm or one-more discrete-logarithm problem. We point out subtle flaws in the published security proofs of the above schemes (except CoSi, which was never proved secure) to clarify the contradiction between our result and the existing proofs. Next, we describe practical sub-exponential attacks on all the schemes, providing further evidence of their insecurity. Left without a secure two-round multi-signature scheme, we present mBCJ, a variant of the BCJ scheme that we prove secure under the discrete-logarithm assumption in the random-oracle model. Our experiments show that mBCJ barely affects scalability compared to CoSi, allowing 16384 signers to collaboratively sign a message in about 2 seconds, making it a highly practical and provably secure alternative for large-scale deployments.
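For reference, here is the single-signer Schnorr primitive that these schemes distribute, in Python with deliberately toy parameters (insecure sizes, chosen only to make the algebra visible). Roughly speaking, the multi-signature schemes above let n parties jointly produce one such (R, s) pair, and it is the aggregation of the per-signer nonces R_i that the paper's sub-exponential attacks exploit:

    import hashlib, random

    p, q, g = 23, 11, 4    # toy group: g generates the order-q subgroup mod p

    def H(R, m):           # challenge hash, reduced into Z_q
        return int.from_bytes(hashlib.sha256(f"{R}|{m}".encode()).digest(),
                              "big") % q

    x = random.randrange(1, q)      # secret key
    X = pow(g, x, p)                # public key

    m = "transfer 10 coins"
    r = random.randrange(1, q)      # per-signature nonce
    R = pow(g, r, p)
    c = H(R, m)
    s = (r + c * x) % q             # the signature is the pair (R, s)

    assert pow(g, s, p) == (R * pow(X, c, p)) % p   # g^s == R * X^c (mod p)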
New Primitives for Actively-Secure MPC mod 2^k with Applications to Private Machine Learning
Ivan Damgård (Aarhus University), Daniel Escudero (Aarhus University), Tore Frederiksen (Alexandra Institute), Marcel Keller (Data61), Peter Scholl (Aarhus University), Nikolaj Volgushev (Alexandra Institute)
At CRYPTO 2018 Cramer et al. presented SPDZ2k, a new secret-sharing based protocol for actively secure multi-party computation against a dishonest majority, that works over rings instead of fields. Their protocol uses slightly more communication than competitive schemes working over fields. However, implementation-wise, their approach allows for arithmetic to be carried out using native 32 or 64-bit CPU operations rather than modulo a large prime. The authors thus conjectured that the increased communication would be more than made up for by the increased efficiency of implementations. In this work we answer their conjecture in the affirmative. We do so by implementing their scheme, and designing and implementing new efficient protocols for equality test, comparison, and truncation over rings. We further show that these operations find application in the machine learning domain, and indeed significantly outperform their field-based competitors. In particular, we implement and benchmark oblivious algorithms for decision tree and support vector machine (SVM) evaluation.
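A minimal flavor of secret-shared arithmetic mod 2^64 is sketched below (two semi-honest parties, our own toy code; the SPDZ2k approach additionally authenticates every share with MACs computed modulo 2^(k+s) to withstand active cheating). The point of working mod 2^64 is visible in the code: every operation is a plain native-word computation with a mask, with no big-integer modulus:

    import random

    MASK = (1 << 64) - 1          # arithmetic mod 2^64 == native 64-bit words

    def share(x, n=2):
        parts = [random.getrandbits(64) for _ in range(n - 1)]
        parts.append((x - sum(parts)) & MASK)
        return parts

    def reveal(parts):
        return sum(parts) & MASK

    def beaver_mul(xs, ys):
        # Multiply two shared values using a random triple c = a*b.
        a, b = random.getrandbits(64), random.getrandbits(64)
        c = (a * b) & MASK
        As, Bs, Cs = share(a), share(b), share(c)
        d = (reveal(xs) - a) & MASK   # in the real protocol, x - a and y - b
        e = (reveal(ys) - b) & MASK   # are opened jointly, share by share
        zs = [(Cs[i] + d * Bs[i] + e * As[i]) & MASK for i in range(len(xs))]
        zs[0] = (zs[0] + d * e) & MASK
        return zs

    x, y = 123_456_789, 987_654_321
    assert reveal(beaver_mul(share(x), share(y))) == (x * y) & MASK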

Lunch

Atrium

12:10PM - 01:10PM

Session #13: Network Security

Grand Ballroom A

01:10PM - 03:00PM

Breaking LTE on Layer Two
David Rupprecht (Ruhr-University Bochum), Katharina Kohls (Ruhr-University Bochum), Thorsten Holz (Ruhr-University Bochum), Christina Pöpper (New York University Abu Dhabi)
Long Term Evolution (LTE) is the latest mobile communication standard and plays a pivotal role in our information society: LTE combines performance goals with modern security mechanisms and serves casual use cases as well as critical infrastructure and public safety communications. Both scenarios demand a resilient and secure specification and implementation of LTE, as outages and open attack vectors potentially lead to severe risks. Previous work on LTE protocol security identified crucial attack vectors for both the physical layer (layer one) and the network layer (layer three). Data link layer (layer two) protocols, however, remain a blind spot in existing LTE security research.
In this paper, we present a comprehensive layer two security analysis and identify three attack vectors. These attacks impair the confidentiality and/or privacy of LTE communication. More specifically, we first present a passive identity mapping attack that matches volatile radio identities to longer-lasting network identities, enabling us to identify users within a cell and serving as a stepping stone for follow-up attacks. Second, we demonstrate how a passive attacker can abuse the resource allocation as a side channel to perform website fingerprinting, which lets the attacker learn the websites a user accessed. Finally, we present the aLTEr attack, which exploits the fact that LTE user data is encrypted in counter mode (AES-CTR) but not integrity protected, allowing us to modify the message payload. As a proof-of-concept demonstration, we show how an active attacker can redirect DNS requests and then perform a DNS spoofing attack, so that the user is redirected to a malicious website. Our experimental analysis demonstrates the real-world applicability of all three attacks and emphasizes the threat of open attack vectors on LTE layer two protocols.
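The DNS redirection step relies only on the malleability of counter-mode encryption. The sketch below substitutes a toy hash-based keystream for LTE's AES-CTR (field layout and addresses are invented), but the attacker's manipulation, XORing the ciphertext with the difference between the known and desired plaintext, is exactly the aLTEr operation:

    import hashlib

    def keystream(key, length):            # toy stand-in for AES-CTR
        out, ctr = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:length]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    key = b"session key"                   # known only to phone and network
    plain = b"dns-server=8.8.8.8"          # known field inside the packet
    cipher = xor(plain, keystream(key, len(plain)))

    # The attacker never learns the key: flipping ciphertext bits with
    # (known plaintext XOR target plaintext) flips the plaintext underneath.
    target = b"dns-server=6.6.6.6"
    tampered = xor(cipher, xor(plain, target))

    assert xor(tampered, keystream(key, len(plain))) == target

Because no integrity tag covers the user-plane payload, the network decrypts the tampered packet without complaint; adding user-plane integrity protection is precisely the fix this attack argues for.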
HOLMES: Real-time APT Detection through Correlation of Suspicious Information Flows
Sadegh M. Milajerdi (University of Illinois at Chicago), Rigel Gjomemo (University of Illinois at Chicago), Birhanu Eshete (University of Illinois at Chicago), R. Sekar (Stony Brook University), Venkat Venkatakrishnan (University of Illinois at Chicago)
In this paper, we present HOLMES, a system that implements a new approach to the detection of Advanced and Persistent Threats (APTs). HOLMES is inspired by several case studies of real-world APTs that highlight some common goals of APT actors. In a nutshell, HOLMES aims to produce a detection signal that indicates the presence of a coordinated set of activities that are part of an APT campaign. One of the main challenges addressed by our approach involves developing a suite of techniques that make the detection signal robust and reliable. At a high level, the techniques we develop effectively leverage the correlation between suspicious information flows that arise during an attacker campaign. In addition to its detection capability, HOLMES is also able to generate a high-level graph that summarizes the attacker's actions in real time. This graph can be used by an analyst for an effective cyber response. An evaluation of our approach against some real-world APTs indicates that HOLMES can detect APT campaigns with high precision and a low false alarm rate. The compact high-level graphs produced by HOLMES effectively summarize an ongoing attack campaign and can assist real-time cyber-response operations.
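A toy rendition of the correlation idea (our own simplification; HOLMES itself maps events to TTP specifications with severity scoring) shows why connectivity matters: individually suspicious events abound, but requiring matched kill-chain stages to be linked by information flow in the provenance graph suppresses false alarms:

    import collections

    edges = [  # (source entity, sink entity, matched APT stage or None)
        ("inet:1.2.3.4",     "proc:browser",     "initial_compromise"),
        ("proc:browser",     "file:/tmp/drop",   None),
        ("file:/tmp/drop",   "proc:sh",          "foothold"),
        ("proc:sh",          "file:/etc/passwd", "internal_recon"),
        ("file:/etc/passwd", "inet:6.6.6.6",     "exfiltration"),
    ]

    graph = collections.defaultdict(list)
    for src, dst, stage in edges:
        graph[src].append((dst, stage))

    def stages_reachable_from(start):
        # collect the APT stages on flows reachable from an untrusted source
        seen, stages, stack = {start}, set(), [start]
        while stack:
            for nxt, stage in graph[stack.pop()]:
                if stage:
                    stages.add(stage)
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return stages

    found = stages_reachable_from("inet:1.2.3.4")
    if {"initial_compromise", "foothold", "exfiltration"} <= found:
        print("APT alert; flow-connected stages:", sorted(found))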
Touching the Untouchables: Dynamic Security Analysis of the LTE Control Plane
Hongil Kim (Korea Advanced Institute of Science and Technology (KAIST)), Jiho Lee (Korea Advanced Institute of Science and Technology (KAIST)), Eunkyu Lee (Korea Advanced Institute of Science and Technology (KAIST)), Yongdae Kim (Korea Advanced Institute of Science and Technology (KAIST))
This paper presents our extensive investigation of the security aspects of control plane procedures, based on dynamic testing of the control components in operational Long Term Evolution (LTE) networks. For dynamic testing, we implemented a semi-automated testing tool named LTEFuzz, built on open-source LTE software over which the user has full control. We systematically generated test cases by defining three basic security properties drawn from close analysis of the standards. Based on these properties, LTEFuzz generates and sends test cases to a target network, and classifies problematic behavior by monitoring only the device-side logs. Accordingly, we uncovered 36 vulnerabilities that had not been disclosed previously. These findings fall into five types of improper handling: (1) unprotected initial procedures, (2) crafted plain requests, (3) messages with invalid integrity protection, (4) replayed messages, and (5) security procedure bypass. We confirmed these vulnerabilities by demonstrating proof-of-concept attacks against operational LTE networks. The impact of the attacks is to deny LTE services to legitimate users, spoof SMS messages, or eavesdrop on and manipulate user data traffic. Precise root cause analysis and potential countermeasures to address these problems are presented as well. Cellular carriers were partially involved, both to maintain ethical standards and to verify our findings in commercial LTE networks.
On the Feasibility of Rerouting-Based DDoS Defenses
Muoi Tran (National University of Singapore), Min Suk Kang (National University of Singapore), Hsu-Chun Hsiao (National Taiwan University), Wei-Hsuan Chiang (National Taiwan University), Shu-Po Tung (National Taiwan University), Yu-Su Wang (National Taiwan University)
Large botnet-based flooding attacks have recently demonstrated unprecedented damage. However, the best-known end-to-end availability guarantees against flooding attacks require costly global-scale coordination among autonomous systems (ASes). A recent proposal called routing around congestion (or RAC) attempts to offer strong end-to-end availability to a selected critical flow by dynamically rerouting it to an uncongested detour path without requiring any inter-AS coordination. This paper presents an in-depth analysis of the (in)feasibility of the RAC defense and points out that its rerouting approach, though intriguing, cannot possibly solve the challenging flooding problem. An effective RAC solution should find an inter-domain detour path for its critical flow with the two following desired properties: (1) it guarantees the establishment of an arbitrary detour path of its choice, and (2) it isolates the established detour path from non-critical flows so that the path is used exclusively for its critical flow. However, we show a fundamental trade-off between the two desired properties, and as a result, only one of them can be achieved but not both. Worse yet, we show that failing to achieve either of the two properties makes the RAC defense not just ineffective but nearly unusable. When the newly established detour path is not isolated, a new adaptive adversary can detect it in real time and immediately congest the path, defeating the goals of the RAC defense. Conversely, when the establishment of an arbitrary detour path is not guaranteed, more than 80% of the critical flows we test have only a small number (e.g., three or fewer) of detour paths that can actually be established and disjoint from each other, which significantly restricts the available options for reliable RAC operation. The first lesson of this study is that BGP-based rerouting solutions in the current inter-domain infrastructure seem to be impractical due to implicit assumptions (e.g., the invisibility of poisoning messages) that are unattainable in BGP's current practice. Second, we learn that the analysis of protocol specifications alone is insufficient for the feasibility study of any new defense proposal and, thus, additional rigorous security analysis and various network evaluations, including real-world testing, are required. Finally, our findings in this paper agree well with the conclusion of the major literature about end-to-end guarantees; that is, strong end-to-end availability should be a security feature of Internet routing by design, not an ad hoc feature obtained by exploiting current routing protocols.
Resident Evil: Understanding Residential IP Proxy as a Dark Service
Xianghang Mi (Indiana University Bloomington), Xuan Feng (Indiana University Bloomington), Xiaojing Liao (Indiana University Bloomington), Baojun Liu (Tsinghua University), XiaoFeng Wang (Indiana University Bloomington), Feng Qian (Indiana University Bloomington), Zhou Li (IEEE member), Sumayah Alrwais (King Saud University), Limin Sun (Institute of Information Engineering, CAS), Ying Liu (Tsinghua University)
An emerging Internet business is residential proxy (RESIP) as a service, in which a provider utilizes hosts within residential networks (in contrast to those running in datacenters) to relay customers' traffic, in an attempt to avoid server-side blocking and detection. Despite the prominent role these services could play in the underground business world, little has been done to understand whether they are indeed involved in cybercrimes and how they operate, due to the challenges in identifying their RESIPs, not to mention performing any in-depth analysis of them.
In this paper, we report the first study on RESIPs, which sheds light on the behaviors and the ecosystem of these elusive gray services. Our research employed an infiltration framework, including our clients for RESIP services and the servers they visited, to detect 6 million RESIP IPs across 230+ countries and 52K+ ISPs. The observed addresses were analyzed and the hosts behind them were further fingerprinted using a new profiling system. Our effort led to several findings about RESIP services that were previously unknown. Surprisingly, despite the providers' claim that the proxy hosts join willingly, many proxies run on likely compromised hosts, including IoT devices. Through cross-matching the hosts we discovered against labeled PUP (potentially unwanted program) logs provided by a leading IT company, we uncovered various illicit operations performed by RESIP hosts, including illegal promotion, fast fluxing, phishing, malware hosting, and others. We also reverse-engineered RESIP services' internal infrastructures and uncovered their potential rebranding and reselling behaviors. Our research takes the first step toward understanding this new Internet service, contributing to the effective control of its security risks.

Session #14: Programming Languages

Grand Ballroom B

01:10PM - 03:00PM

Simple High-Level Code For Cryptographic Arithmetic -- With Proofs, Without Compromises
Andres Erbsen (MIT CSAIL), Jade Philipoom (MIT CSAIL), Jason Gross (MIT CSAIL), Robert Sloan (MIT CSAIL), Adam Chlipala (MIT CSAIL)
We introduce a new approach for implementing cryptographic arithmetic in short high-level code with machine-checked proofs of functional correctness. We further demonstrate that simple partial evaluation is sufficient to transform this code into the fastest-known C code, breaking the decades-old pattern that the only fast implementations are those whose instruction-level steps were written out by hand. These techniques were used to build an elliptic-curve library that achieves competitive performance for 80 prime fields and multiple CPU architectures, showing that implementation and proof effort scales with the number and complexity of conceptually different algorithms, not their use cases. As one outcome, we present the first verified high-performance implementation of P-256, the most widely used elliptic curve. Implementations from our library were included in BoringSSL to replace existing specialized code, for inclusion in several large deployments such as Chrome, Android, and CloudFlare.
SoK: General Purpose Compilers for Secure Multi-Party Computation
Marcella Hastings (University of Pennsylvania), Brett Hemenway (University of Pennsylvania), Daniel Noble (University of Pennsylvania), Steve Zdancewic (University of Pennsylvania)
Secure multi-party computation (MPC) allows a group of mutually distrustful parties to compute a joint function on their inputs without revealing any information beyond the result of the computation. This type of computation is extremely powerful and has wide-ranging applications in academia, industry, and government. Protocols for secure computation have existed for decades, but only recently have general-purpose compilers for executing MPC on arbitrary functions been developed. These projects rapidly improved the state of the art and began to make MPC accessible to non-expert users. However, the field is changing so rapidly that it is difficult even for experts to keep track of the varied capabilities of modern frameworks.

In this work, we survey general-purpose compilers for secure multi-party computation. These tools provide high-level abstractions to describe arbitrary functions and execute secure computation protocols. We consider eleven systems: EMP-toolkit, Obliv-C, ObliVM, TinyGarble, SCALE-MAMBA (formerly SPDZ), Wysteria, Sharemind, PICCO, ABY, Frigate, and CBMC-GC. We evaluate these systems on a range of criteria, including language expressibility, capabilities of the cryptographic back-end, and accessibility to developers. We advocate for improved documentation of MPC frameworks and standardization within the community, and we make recommendations for future directions in compiler development. Installing and running these systems can be challenging, so for each system we also provide a complete virtual environment (Docker container) with all the necessary dependencies to run the compiler and our example programs.
The Code That Never Ran: Modeling Attacks on Speculative Evaluation
Craig Disselkoen (University of California San Diego, Mozilla Research Internship), Radha Jagadeesan (DePaul University), Alan Jeffrey (Mozilla Research), James Riely (DePaul University)
This paper studies information flow caused by speculation mechanisms in hardware and software. The Spectre attack shows that there are practical information flow attacks which use an interaction of dynamic security checks, speculative evaluation and cache timing. Previous formal models of program execution are designed to capture computer architecture, rather than micro-architecture, and so do not capture attacks such as Spectre. In this paper, we propose a model based on pomsets which is designed to model speculative evaluation. The model is abstract with respect to specific micro-architectural features, such as caches and pipelines, yet is powerful enough to express known attacks such as Spectre and Prime+Abort, and verify their countermeasures. The model also allows for the prediction of new information flow attacks. We derive two such attacks, which exploit compiler optimizations, and validate these experimentally against gcc and clang.
Formally Verified Cryptographic Web Applications in WebAssembly
Jonathan Protzenko (Microsoft Research), Benjamin Beurdouche (Inria), Denis Merigoux (Inria), Karthikeyan Bhargavan (Inria)
After suffering decades of high-profile attacks, the need for formal verification of security-critical software has never been clearer. Verification-oriented programming languages like F* are now being used to build high-assurance cryptographic libraries and implementations of standard protocols like TLS. In this paper, we seek to apply these verification techniques to modern Web applications, like WhatsApp, that embed sophisticated custom cryptographic components. The problem is that these components are often implemented in JavaScript, a language that is both hostile to cryptographic code and hard to reason about. So we instead target WebAssembly, a new instruction set that is supported by all major JavaScript runtimes.
We present a new toolchain that compiles Low*, a low-level subset of the F* programming language, into WebAssembly. Unlike other WebAssembly compilers like Emscripten, our compilation pipeline is focused on compactness and auditability: we formalize the full translation rules in the paper and implement it in a few thousand lines of OCaml. Using this toolchain, we present two case studies. First, we build WHACL*, a WebAssembly version of the existing, verified HACL* cryptographic library. Then, we present LibSignal*, a brand new, verified implementation of the Signal protocol in WebAssembly, that can be readily used by messaging applications like WhatsApp, Skype, and Signal.
SoK: Sanitizing for Security
Dokyung Song (University of California, Irvine), Julian Lettner (University of California, Irvine), Prabhu Rajasekaran (University of California, Irvine), Yeoul Na (University of California, Irvine), Stijn Volckaert (University of California, Irvine), Per Larsen (University of California, Irvine), Michael Franz (University of California, Irvine)
The C and C++ programming languages are notoriously insecure yet remain indispensable. Developers therefore resort to a multi-pronged approach to find security issues before adversaries do. These include manual, static, and dynamic program analysis. Dynamic bug finding tools—henceforth "sanitizers"—can find bugs that elude other types of analysis because they observe the actual execution of a program, and can therefore directly observe incorrect program behavior as it happens.
A vast number of sanitizers have been prototyped by academics and refined by practitioners. We provide a systematic overview of sanitizers with an emphasis on their role in finding security issues. Specifically, we taxonomize the available tools and the security vulnerabilities they cover, describe their performance and compatibility properties, and highlight various trade-offs.

Break (30 Minutes)

03:00PM - 03:30PM

Session #15: Web and Cloud Security

Grand Ballroom A

03:30PM - 05:00PM

Why Does Your Data Leak? Uncovering the Data Leakage in Cloud from Mobile Apps
Chaoshun Zuo (The Ohio State University), Zhiqiang Lin (The Ohio State University), Yinqian Zhang (The Ohio State University)
More and more mobile applications (apps for short) are using the cloud as the back-end, in particular cloud APIs, for data storage, data analytics, message notification, and monitoring. Unfortunately, we have recently witnessed massive data leaks from the cloud, ranging from personally identifiable information to corporate secrets. In this paper, we seek to understand why such significant leaks occur and design tools to automatically identify them. To our surprise, our study reveals that lack of authentication, misuse of various keys (e.g., normal user keys and superuser keys) in authentication, and misconfiguration of user permissions in authorization are the root causes. We then design a set of automated program analysis techniques, including obfuscation-resilient cloud API identification and string value analysis, and implement them in a tool called LeakScope to identify potential data leakage vulnerabilities in mobile apps based on how the cloud APIs are used. Our evaluation of over 1.6 million mobile apps from the Google Play Store has uncovered 15,098 app servers managed by mainstream cloud providers such as Amazon, Google, and Microsoft that are subject to data leakage attacks. We have made responsible disclosures to each of the cloud service providers, and they have all confirmed the vulnerabilities we identified and are actively working with the mobile app developers to patch their vulnerable services.
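As a rough illustration of the string-analysis side (far simpler than LeakScope's obfuscation-resilient pipeline; the regexes are widely used key-shape heuristics, not the authors' rules), embedded service keys can often be flagged directly in an app's string constants:

    import re

    PATTERNS = {                      # illustrative key-shape heuristics
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    }

    def scan_strings(strings):
        return [(name, s) for s in strings
                for name, pat in PATTERNS.items() if pat.search(s)]

    app_strings = [                   # stand-ins for constants from an APK
        "https://api.example.com/v1/",
        "AKIAIOSFODNN7EXAMPLE",       # AWS's published documentation key
        "Sign in to continue",
    ]
    for name, s in scan_strings(app_strings):
        print(f"possible {name}: {s}")

A hard-coded superuser key of this kind grants every installation of the app, and hence any attacker who extracts it, access to the backing cloud resources, which is one of the root causes the paper identifies.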
Measuring and Analyzing Search Engine Poisoning of Linguistic Collisions
Matthew Joslin (University of Texas at Dallas), Neng Li (Shanghai Jiao Tong University), Shuang Hao (University of Texas at Dallas), Minhui Xue (Macquarie University), Haojin Zhu (Shanghai Jiao Tong University)
Misspelled keywords have become an appealing target in search poisoning, since they are less competitive to promote than the correct queries and account for a considerable amount of search traffic. Search engines have adopted several countermeasure strategies, e.g., Google applies automated corrections to queried keywords and directly returns search results for the corrected versions. However, a sophisticated class of attack, which we term linguistic-collision misspelling, can evade auto-correction and poison search results. Cybercriminals target special queries where the misspelled terms are themselves real words, even in other languages (e.g., "idobe", a misspelling of the English word "adobe", is a legitimate word in the Nigerian language).

In this paper, we perform the first large-scale analysis of linguistic-collision search poisoning attacks. In particular, we check 1.77 million misspelled search terms on Google and Baidu and analyze both English and Chinese, the top two languages used by Internet users. We leverage edit distance operations and linguistic properties to generate misspelling candidates. To more efficiently identify linguistic-collision search terms, we design a deep learning model that improves the collection rate by 2.84x compared to random sampling. Our results show that the abuse is prevalent: around 1.19% of linguistic-collision search terms on Google and Baidu have first-page results directing to malicious websites. We also find that cybercriminals mainly target the categories of gambling, drugs, and adult content. Mobile-device users disproportionately search for misspelled keywords, presumably due to the small screens used for input. Our work highlights this new class of search engine poisoning and provides insights to help mitigate the threat.
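The candidate-generation step can be sketched in a few lines (the four-word lexicon is ours, purely for illustration; the paper uses full dictionaries across languages and a learned model to prioritize candidates):

    import string

    # Words one edit away from a target keyword that are themselves real
    # words survive auto-correction: these are the linguistic collisions.
    LEXICON = {"adobe", "abode", "adore", "idobe"}

    def edits1(word):
        letters = string.ascii_lowercase
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = {L + R[1:] for L, R in splits if R}
        replaces = {L + c + R[1:] for L, R in splits if R for c in letters}
        inserts = {L + c + R for L, R in splits for c in letters}
        swaps = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
        return (deletes | replaces | inserts | swaps) - {word}

    print(sorted(edits1("adobe") & LEXICON))
    # ['adore', 'idobe'] ("abode" is two edits away, so it is excluded)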
How Well Do My Results Generalize? Comparing Security and Privacy Survey Results from MTurk, Web, and Telephone Samples
Elissa M. Redmiles (University of Maryland), Sean Kross (University of California San Diego), Michelle L. Mazurek (University of Maryland)
Security and privacy researchers often rely on data collected from Amazon Mechanical Turk (MTurk) to evaluate security tools, to understand users' privacy preferences, and to measure online behavior. Yet, little is known about how well Turkers' survey responses and performance on security- and privacy-related tasks generalize to a broader population. This paper takes a first step toward understanding the generalizability of security and privacy user studies by comparing users' self-reports of their security and privacy knowledge, past experiences, advice sources, and behavior across samples collected using MTurk (n=480), a census-representative web panel (n=428), and a probabilistic telephone sample (n=3,000) statistically weighted to be accurate within 2.7% of the true prevalence in the U.S.

Surprisingly, the results suggest that: (1) MTurk responses regarding security and privacy experiences, advice sources, and knowledge are more representative of the U.S. population than are responses from the census-representative panel; (2) MTurk and general population reports of security and privacy experiences, knowledge, and advice sources are quite similar for respondents who are younger than 50 or who have some college education; and (3) respondents' answers to the survey questions we ask are stable over time and robust to relevant, broadly-reported news events. Further, differences in responses cannot be ameliorated with simple demographic weighting, possibly because MTurk and panel participants have more internet experience compared to their demographic peers. Together, these findings lend tempered support for the generalizability of prior crowdsourced security and privacy user studies; provide context to more accurately interpret the results of such studies; and suggest rich directions for future work to mitigate experience- rather than demographic-related sample biases.
PhishFarm: A Scalable Framework for Measuring the Effectiveness of Evasion Techniques Against Browser Phishing Blacklists
Adam Oest (Arizona State University), Yeganeh Safaei (Arizona State University), Adam Doupé (Arizona State University), Gail-Joon Ahn (Arizona State University, Samsung Research), Brad Wardman (PayPal), Kevin Tyers (PayPal)
Phishing attacks have reached record volumes in recent years. Simultaneously, modern phishing websites are growing in sophistication by employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously-unseen .com domains) each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and timeliness of native blacklisting in major web browsers to gauge the effectiveness of protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure, which allows some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks, including those based on geolocation, device type, or JavaScript, were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has also become more consistent between desktop and mobile platforms, but work remains to be done by anti-phishing entities to ensure users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.
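For intuition, a cloaking filter of the kind tested can be only a few lines (a toy sketch; the tokens and IP prefix are illustrative stand-ins, and real kits combine several such signals with JavaScript fingerprinting):

    CRAWLER_UA_TOKENS = ("bot", "crawler", "spider", "preview")
    SCANNER_PREFIXES = ("66.249.",)       # illustrative crawler IP range

    def serve(ip: str, user_agent: str, is_mobile: bool) -> str:
        ua = user_agent.lower()
        if any(tok in ua for tok in CRAWLER_UA_TOKENS):
            return "benign page"          # crawlers never see the phish
        if any(ip.startswith(p) for p in SCANNER_PREFIXES):
            return "benign page"
        if is_mobile:                     # mobile-only cloaking variant
            return "phishing page"
        return "benign page"

    print(serve("203.0.113.7", "Mozilla/5.0 (iPhone)", True))   # phishing page
    print(serve("66.249.1.2",  "Mozilla/5.0", False))           # benign page
    print(serve("203.0.113.7", "GoogleBot/2.1", False))         # benign page

Because blacklist crawlers rarely present as mobile victims, even a filter this crude delays blacklisting, which is the asymmetry PhishFarm quantifies.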

Session #16: IoT Security

Grand Ballroom B

03:30PM - 05:00PM

SoK: Security Evaluation of Home-Based IoT Deployments
Omar Alrawi (Georgia Institute of Technology), Chaz Lever (Georgia Institute of Technology), Manos Antonakakis (Georgia Institute of Technology), Fabian Monrose (University of North Carolina at Chapel Hill)
Home-based IoT devices have a bleak reputation regarding their security practices. On the surface, the insecurities of IoT devices seem to be caused by integration problems that may be addressed by simple measures, but this work finds that to be a naive assumption. The truth is, IoT deployments, at their core, utilize traditional compute systems, such as embedded, mobile, and network. These components have many unexplored challenges, such as the effect of over-privileged mobile applications on embedded devices. Our work proposes a methodology that researchers and practitioners could employ to analyze security properties for home-based IoT devices. We systematize the literature for home-based IoT using this methodology in order to understand attack techniques, mitigations, and stakeholders. Further, we evaluate 45 devices to augment the systematized literature in order to identify neglected research areas. To make this analysis transparent and easier for the community to adopt, we provide a public portal to share our evaluation data and invite the community to contribute their independent findings.
Dangerous Skills: Understanding and Mitigating Security Risks of Voice-Controlled Third-Party Functions on Virtual Personal Assistant Systems
Nan Zhang (Indiana University, Bloomington), Xianghang Mi (Indiana University, Bloomington), Xuan Feng (Indiana University, Bloomington; Beijing Key Laboratory of IOT Information Security Technology, Institute of Information Engineering, CAS, China), XiaoFeng Wang (Indiana University, Bloomington), Yuan Tian (University of Virginia), Feng Qian (Indiana University, Bloomington)
Virtual personal assistants (VPA) (e.g., Amazon Alexa and Google Assistant) today mostly rely on the voice channel to communicate with their users, which however is known to be vulnerable, lacking proper authentication (from the user to the VPA). A new authentication challenge, from the VPA service to the user, has emerged with the rapid growth of the VPA ecosystem, which allows a third party to publish a function (called a skill) for the service and therefore can be exploited to spread malicious skills to a large audience during their interactions with smart speakers like Amazon Echo and Google Home. In this paper, we report a study that concludes such remote, large-scale attacks are indeed realistic. We discovered two new attacks: voice squatting, in which the adversary exploits the way a skill is invoked (e.g., "open capital one"), using a malicious skill with a similarly pronounced name (e.g., "capital won") or a paraphrased name (e.g., "capital one please") to hijack the voice command meant for a legitimate skill (e.g., "capital one"); and voice masquerading, in which a malicious skill impersonates the VPA service or a legitimate skill during the user's conversation with the service to steal her personal information. These attacks exploit the way VPAs work and users' misconceptions about their functionality, and our experiments (including user studies and real-world deployments) on Amazon Echo and Google Home show that they pose a realistic threat. The significance of our findings has already been acknowledged by Amazon and Google, and is further evidenced by the risky skills found on the Alexa and Google markets by the new squatting detector we built. We further developed a technique that automatically captures an ongoing masquerading attack and demonstrated its efficacy.
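A crude spelling-level check already suggests why such names slip through review (sketch only; the authors' detector compares pronunciations, so the string metric and threshold here are stand-ins):

    import difflib

    def similarity(a: str, b: str) -> float:
        return difflib.SequenceMatcher(None, a, b).ratio()

    REGISTERED = ["capital one"]
    CANDIDATES = ["capital won", "capital one please", "cat facts"]

    for cand in CANDIDATES:
        for legit in REGISTERED:
            score = similarity(cand, legit)
            # flag near-homophones by similarity, paraphrases by containment
            flagged = score >= 0.8 or legit in cand
            print(f"{cand!r} vs {legit!r}: {score:.2f}  flagged={flagged}")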
Drones' Cryptanalysis - Smashing Cryptography with a Flicker
Ben Nassi (Ben-Gurion University of the Negev), Raz Ben-Netanel (Ben-Gurion University of the Negev), Adi Shamir (Weizmann Institute of Science), Yuval Elovici (Ben-Gurion University of the Negev)
In an "open skies" era in which drones fly among us, a new question arises: how can we tell whether a passing drone is being used by its operator for a legitimate purpose (e.g., delivering pizza) or an illegitimate purpose (e.g., taking a peek at a person showering in his/her own house)? Over the years, many methods have been suggested to detect the presence of a drone in a specific location, however since populated areas are no longer off limits for drone flights, the previouslysuggested methods for detecting a privacy invasion attack are irrelevant. In this paper, we present a new method that can detect whether a specific POI (point of interest) is being video streamed by a drone. We show that applying a periodic physical stimulus on a target/victim being video streamed by a drone causes a watermark to be added to the encrypted video traffic that is sent from the drone to its operator and how this watermark can be detected using interception. Based on this method, we present an algorithm for detecting a privacy invasion attack. We analyze the performance of our algorithm using four commercial drones (DJI Mavic Air, Parrot Bebop 2, DJI Spark, and DJI Mavic Pro). We show how our method can be used to (1) determine whether a detected FPV (first-person view) channel is being used to video stream a POI by a drone, and (2) locate a spying drone in space; we also demonstrate how the physical stimulus can be applied covertly. In addition, we present a classification algorithm that differentiates FPV transmissions from other suspicious radio transmissions. We implement this algorithm in a new invasion attack detection system which we evaluate in two use cases (when the victim is inside his/her house and when the victim is being tracked by a drone while driving his/her car); our evaluation shows that a privacy invasion attack can be detected by our system in about 2-3 seconds.
Dominance as a New Trusted Computing Primitive for the Internet of Things
Meng Xu (Georgia Institute of Technology), Manuel Huber (Fraunhofer AISEC), Zhichuang Sun (Northeastern University), Paul England (Microsoft Research), Marcus Peinado (Microsoft Research), Sangho Lee (Microsoft Research), Andrey Marochko (Microsoft Research), Dennis Mattoon (Microsoft Research), Rob Spiger (Microsoft), Stefan Thom (Microsoft)
The Internet of Things (IoT) is rapidly emerging as one of the dominant computing paradigms of this decade. Applications range from in-home entertainment to large-scale industrial deployments such as controlling assembly lines and monitoring traffic. While IoT devices are in many respects similar to traditional computers, user expectations and deployment scenarios as well as cost and hardware constraints are sufficiently different to create new security challenges as well as new opportunities. This is especially true for large-scale IoT deployments in which a central entity deploys and controls a large number of IoT devices with minimal human interaction.

Like traditional computers, IoT devices are subject to attack and compromise. Large IoT deployments consisting of many nearly identical devices are especially attractive targets. At the same time, recovery from root compromise by conventional means becomes costly and slow, even more so if the devices are dispersed over a large geographical area. In the worst case, technicians have to travel to all devices and manually recover them. Data center solutions such as the Intelligent Platform Management Interface (IPMI) which rely on separate service processors and network connections are not only not supported by existing IoT hardware, but are unlikely to be in the foreseeable future due to the cost constraints of mainstream IoT devices.

This paper presents Cider, a system that can recover IoT devices within a short amount of time, even if attackers have taken root control of every device in a large deployment. The recovery requires minimal manual intervention. After the administrator has identified the compromise and produced an updated firmware image, he/she can instruct Cider to force the devices to reset and to install the patched firmware on the devices. We demonstrate the universality and practicality of Cider by implementing it on three popular IoT platforms (HummingBoard Edge, Raspberry Pi Compute Module 3 and Nucleo-L476RG) spanning the range from high to low end. Our evaluation shows that the performance overhead of Cider is generally negligible.

Symposium/Workshops Bridging Reception

Atrium

05:30PM - 07:00PM

Thursday, May 23


Workshops Continental Breakfast

Waterfront C/D/E

07:30AM - 08:30AM

Registration

07:30AM - 11:00AM

Workshops Opening Remarks

08:45AM - 09:00AM

Workshops Session #1

09:00AM - 10:15AM

Workshops Break (30 Minutes)

Seacliff Foyer

10:15AM - 10:45AM

Workshops Session #2

10:45AM - 12:30PM

Workshops Lunch

Waterfront C/D/E

12:30PM - 01:30PM

Workshops Session #3

01:30PM - 03:15PM

Workshops Break (30 Minutes)

Seacliff Foyer

03:15PM - 03:45PM

Workshops Session #4

03:45PM - 05:30PM

Workshops Closing Remarks

05:30PM - 05:45PM