Review of the
Security and Privacy Symposium, technical sessions,
Berkeley/Oakland, California, USA
May 17-19, 2010
Ben Ransford, University of Massachusetts Amherst
Sruthi Bandhakavi, University of Illinois at Urbana-Champaign
Joana Trindade, University of Illinois at Urbana-Champaign
May 31, 2010
This year's Oakland (the 31st) was the best attended ever. The organizers had to cap registration very early in the process; the initial cap of 320 was reached a day after early registration closed. Working through a waiting list, the cap eventually settled at 350. The conference room was reconfigured to hold that many people, but it didn't get too crowded after all.
Attendees were the usual mix of academic, government, industry (and other) origin, leading to a fruitful gathering with many discussions in and outside the conference room. Complementing this year's symposium, in addition to the 30th-birthday celebration on Monday night, were a government funding workshop (NITRD) on Wednesday afternoon, which was open to all symposium attendees, and two workshops on Thursday: Web 2.0 Security and Privacy (W2SP) and Forensics (SADFE).
And now, without further ado...
The following writeups were performed by three students:
Ben Ransford, University of Massachusetts Amherst, ransford @ cs.umass.edu
Sruthi Bandhakavi, University of Illinois at Urbana-Champaign,
sbandha2 @ illinois.edu
Joana Trindade, University of Illinois at Urbana-Champaign, trindade @ illinois.edu
Each write-up is marked with the writer's initials: BR, SB, or JT.
Opening Remarks (BR):
Ulf Lindqvist started the program with details about the conference venue and the events. David Evans followed Ulf with an entertaining comparison between IEEE S&P in 1980, when it started, and in 2010. In 2010 there were 50 program committee members and several additional external reviewers. 26 research papers were accepted out of 237 submissions. On average there were 5 reviewers per paper, who evaluated the contributions and novelty of the research. 5 systematization-of-knowledge papers were also accepted; these needed to evaluate existing work and put it in context, and the criterion for accepting them was their value to the community.
SESSION 1: MALWARE ANALYSIS
Chair: Jon Giffin, Georgia Institute of Technology
Inspector Gadget: Automated Extraction of Proprietary Gadgets from Malware Binaries
Clemens Kolbitsch (Vienna University of Technology),
Thorsten Holz (Vienna University of Technology),
Christopher Kruegel (University of California, Santa Barbara),
Engin Kirda (Institute Eurecom)
WRITEUP BY: BR
Clemens Kolbitsch led off the Symposium's first session with a talk on his whimsically titled "Inspector Gadget" paper. The paper concerns "Inspector", a system that isolates and extracts interesting program behaviors one at a time from Windows malware executables. The authors built "Inspector" to address several problems with traditional malware analysis techniques. With tens of thousands of new malware samples appearing each day, researchers cannot practically examine the inner workings of each one. Compounding that problem, malware often exhibits divergent behavior at different times or under different circumstances. Existing tools for malware analysis produce cluttered results that make isolating program behaviors difficult. "Inspector" extracts single program behaviors into small standalone "gadgets" that can be run and analyzed separately, simplifying the task of analyzing a malware executable.
"Inspector" performs four steps on an input binary. First, it uses the Anubis malware analyzer to run the binary while recording instructions, OS interactions, and other information. Second, the user chooses a detected behavior that "Inspector" has classified as salient; "Inspector" grabs the corresponding code from the executable. Third, "Inspector" performs backward program slicing to discard code sequences that are irrelevant to the selected behavior; it also searches forward to find related behavior. Finally, "Inspector" writes the resulting minimized code to a standalone shared library and tidies up outstanding references as necessary; it also replaces system calls with special hooks. The resulting shared library can be loaded like any other Windows DLL. The authors load such DLLs into a "gadget player" that confines the program and mediates accesses to the operating system. The gadget player optionally ignores the system calls or manipulates their return values to educe interesting behavior from gadgets. To illustrate the system's operation, Kolbitsch demonstrated that "Inspector" could extract the Conficker worm's complicated, environment-dependent domain generation algorithm into a separately analyzable gadget.
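The backward slicing of step three can be illustrated on straight-line code. The toy sketch below is not the authors' implementation (the instruction encoding and variable names are invented for illustration); it walks backward from a target instruction and keeps only the instructions the target transitively depends on through data flow:

```python
def backward_slice(instructions, target):
    """instructions: list of (index, defs, uses) tuples in program order,
    where defs/uses are sets of variable names. Returns the indices of
    instructions the target transitively depends on through data flow
    (a classic backward slice over straight-line code; control flow,
    which real tools must also handle, is ignored here)."""
    needed = set(instructions[target][2])   # variables the target reads
    kept = {target}
    for idx, defs, uses in reversed(instructions[:target]):
        if defs & needed:        # defines something we still need
            kept.add(idx)
            needed -= defs       # the nearest definition satisfies the need
            needed |= uses       # ...but now we need this instruction's inputs
    return kept

# Toy program: only instructions 0 and 2 feed the output at index 4.
prog = [
    (0, {"a"}, set()),     # a = input()
    (1, {"b"}, set()),     # b = 5
    (2, {"c"}, {"a"}),     # c = f(a)
    (3, {"d"}, {"b"}),     # d = g(b)   -- irrelevant to the output
    (4, set(), {"c"}),     # output(c)
]
print(sorted(backward_slice(prog, 4)))   # [0, 2, 4]
```

Instructions 1 and 3 are dropped because nothing the output reads depends on them, which is exactly how "Inspector" sheds code irrelevant to the selected behavior.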
"Inspector" is implemented as an extension to Anubis. The authors are willing to share with interested researchers the gadgets they extracted from malware during their evaluation of "Inspector".
In the question and answer period, Juan Caballero asked how the authors' analysis ensured adequate coverage of conditional code in gadgets. Kolbitsch said that "Inspector" replaces conditional jumps with unconditional ones when doing so will not affect program output, and otherwise replaces branches not seen during dynamic analysis with triggers that pass exceptions to the gadget player. David Brumley speculated that malware authors might circumvent "Inspector" by using their own implementations of API functions, but Kolbitsch assured him that such attempts would be caught at the system call interface. Rob Cunningham asked whether "Inspector's" deconstruction would break malware's self-checks for integrity; Kolbitsch remarked that the authors had not yet noticed such self-checks with their system but he believed that such checks, orthogonal as they are to program behavior, would be correctly excluded from extracted gadgets.
Synthesizing Near-Optimal Malware Specifications from Suspicious Behaviors
Matt Fredrikson (University of Wisconsin),
Mihai Christodorescu (IBM Research),
Somesh Jha (University of Wisconsin),
Reiner Sailer (IBM Research),
Xifeng Yan (University of California, Santa Barbara)
Mihai presented. New malware is still spreading -- more than a billion virus signatures per year are detected. Antivirus programs use syntactic signatures, which have a one-to-one correspondence with viruses and achieve only a 50% detection rate. Ideally one would like to use behavior-based detection techniques, which detect malware based on certain high-level descriptions. Typically a behavior is a sequence of system-call or library-call operations showing that certain system resources have been accessed. A behavior graph is represented as a set of nodes linked by edges that represent data dependencies. Since behavior graphs are usually large and there is a lot of malware, the behavior extraction approach must be automated to be scalable. In the talk, Mihai described the main features of their technique for extracting significant malware behavior automatically. The authors used machine learning techniques to compare malware and benign samples and extract the behavior graphs of malware. Additionally, they use correlation between the graphs of different malware samples to mine for significant behavior. Significant behavior is obtained in the form of graphs of operations, where significance is measured with metrics like information gain and cross entropy. Finally, Mihai described their experimental results: their technique had a 0% false-positive rate and an 86.5% detection rate, better than several commercial antivirus products.
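The significance metrics Mihai mentioned can be made concrete. As a hedged sketch (the behavior counts below are invented for illustration, not taken from the paper), information gain scores how well the presence of a candidate behavior separates malicious from benign samples:

```python
import math

def entropy(pos, neg):
    """Shannon entropy of a two-class sample set."""
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p = pos / total
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def information_gain(mal_with, mal_without, ben_with, ben_without):
    """Information gain of a behavior: class entropy before splitting
    on the behavior, minus the weighted entropy after the split."""
    total = mal_with + mal_without + ben_with + ben_without
    h_before = entropy(mal_with + mal_without, ben_with + ben_without)
    n_with = mal_with + ben_with
    n_without = mal_without + ben_without
    h_after = ((n_with / total) * entropy(mal_with, ben_with)
               + (n_without / total) * entropy(mal_without, ben_without))
    return h_before - h_after

# A behavior seen in 9 of 10 malware samples but only 1 of 10 benign
# samples is highly significant; a uniformly distributed one is not.
print(information_gain(9, 1, 1, 9))   # high gain
print(information_gain(5, 5, 5, 5))   # zero gain
```

A behavior-mining pipeline of the kind described would rank candidate subgraphs by such a score and keep only the top-scoring ones.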
Discussion: Angelos Stavrou commented that the technique in the paper works best if the malware is a standalone binary, and asked what would happen if the malware manifested itself inside an application such as IE or a PDF reader. Mihai responded that he had not considered that carefully, but their approach should be able to differentiate between a standard IE and one carrying malware. Matt Van Gundy asked what the tool would do if the malware contained patterns designed to mislead the signature generation algorithms. Mihai was not sure how easy it is to generate such misleading patterns. He agreed that their learning could be misled, but they hedge against this by looking at different malware samples within a malware family to check whether the behaviors exist in a majority of them.
Identifying Dormant Functionality in Malware Programs
Paolo Milani Comparetti (Technical University Vienna),
Guido Salvaneschi (Politecnico di Milano),
Clemens Kolbitsch (Technical University Vienna),
Engin Kirda (Institut Eurecom),
Christopher Kruegel (University of California, Santa Barbara),
Stefano Zanero (Politecnico di Milano)
WRITEUP BY: JT
Paolo presented. Analysis of malware is complex due to the large number of samples. Existing solutions rely on automated dynamic analysis, so the question is: how can we learn more about these malware samples? Paolo's approach is to leverage code reuse between malware samples. He has developed a tool called "Reanimator", which runs malware in a monitored environment and detects malicious behavior, which the authors refer to as a "genotype". The idea is to automatically define genotype models and match these against all samples. Specifically, the authors run the malware in an instrumented sandbox (Anubis) and observe its behavior. Based on this, they extract genotype models and use these models on other samples. Genotype model extraction is a multi-step process consisting of (i) identifying the code responsible for a behavior and (ii) modeling the genotype as completely and precisely as possible (it has to contain all code that is needed for that behavior, and at the same time only the code that is specific to the genotype). To model the genotype, the authors use a number of techniques. The first is slicing, in which they start from relevant system calls and add backward and forward slices, disregarding control-flow dependencies. The second is filtering, where they filter out code that is not specific to a genotype (e.g., libc). Additionally, the authors limit how far back backward slicing can go; otherwise, it would include initialization and unpacking code. This effect is not desired, because a genotype that includes unpacking code would potentially match against all malware programs that use the same packer. Instructions that are specific to a genotype are those that manipulate tainted data every time they are executed. The authors also discard whitelisted code, which is obtained from other executions of the same binary. Next, germination is performed.
The slice is not complete, because it only uses data-flow dependencies and disregards control-flow dependencies. Hence, germination adds instructions that cannot be executed without executing at least one instruction in the slice set. This is based on reachability analysis on the control flow graph (CFG). For their evaluation, the authors investigated accuracy, robustness, and detection without ground truth. For accuracy, the ground truth is a dataset of 208 bots with source code. The authors used MOSS, a source-code plagiarism detection tool, to compare their solution against the ground truth. Their results show a small number of false negatives (1.5%), mostly in small genotypes, and zero false positives. For robustness, the results show that their approach is robust against changes of compiler options and version. It is not robust, however, to a change of compiler, in which case it generates more than 80% false negatives. Finally, in detection without ground truth, Reanimator detected more security-relevant behaviors than dynamic analysis.
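The germination step can be sketched as plain graph reachability. In the toy version below (the CFG encoding and node names are invented for illustration), any node that becomes unreachable from the entry once the slice nodes are removed can only execute through the slice, so it is added to the genotype:

```python
def germinate(cfg, entry, slice_nodes):
    """cfg: dict mapping node -> list of successors in the control
    flow graph. Returns the nodes that cannot execute without first
    executing some node in slice_nodes, i.e., the nodes unreachable
    from the entry once the slice nodes are removed."""
    seen, stack = set(), [entry]
    while stack:                      # DFS reachability avoiding the slice
        n = stack.pop()
        if n in seen or n in slice_nodes:
            continue
        seen.add(n)
        stack.extend(cfg.get(n, []))
    every = set(cfg) | {s for succs in cfg.values() for s in succs}
    return every - seen - slice_nodes

# Nodes b and c are only reachable through slice node s, so they
# belong with the genotype; node a does not.
cfg = {"entry": ["a", "s"], "a": [], "s": ["b"], "b": ["c"], "c": []}
print(sorted(germinate(cfg, "entry", {"s"})))   # ['b', 'c']
```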
Discussion: Ian Goldberg, from the University of Waterloo, was interested in the consequences of Reanimator not being robust to a change of compiler. He asked what happens if the bad guys start coming up with automated CFG modifications (e.g., another binary that has the same behavior but a different CFG, or CFG obfuscators). Paolo answered that the authors model the code in whatever way they can think of, including optimizations that could make the approach more robust against different CFGs. However, he pointed out, attackers could also write their malware to simply download malicious code, which can defeat Reanimator. Jinpeng Wei, from Florida International University, asked how much effort is involved in generating genotype models, and whether it is completely automated. Paolo answered that once they have the dynamic detection rules, everything is automated. Finally, Jon Giffin pointed out that all the presentations are about automatically detecting malware behavior, and asked whether it is a reasonable assumption that malware will always exhibit behavior present in all, or at least many, malware programs. Paolo replied that in their case they are not trying to do detection on an end-user system; instead, they do it in a controlled environment, which makes their life easier. In general, what characterizes malicious behavior versus valid user behavior is all heuristics as of now.
SESSION 2: INFORMATION FLOW
Chair: David Molnar, Microsoft Research
Reconciling Belief and Vulnerability in Information Flow
Sardaouna Hamadou (University of Southampton),
Vladimiro Sassone (University of Southampton),
Catuscia Palamidessi (École Polytechnique)
WRITEUP BY: SB
Discussion: Fred Cohen commented that there have been some past approaches where cognitive models were used to describe the belief system of the adversary, and asked whether the authors used those models or whether their characterization of belief is purely mathematical. Sardaouna answered that the goal is to determine the impact of the adversary's knowledge: if accuracy is higher than a certain value, then the program is vulnerable. Miro Enev asked whether differential privacy can be used to decrease the effectiveness of the vulnerability. Sardaouna answered that if one knows that the adversary can have access to extra knowledge, it is a good idea to introduce noise into the output to confuse the adversary.
Towards Static Flow-Based Declassification for Legacy and Untrusted Programs
Bruno P.S. Rocha (Eindhoven University of Technology),
Sruthi Bandhakavi (University of Illinois at Urbana-Champaign),
Jerry I. den Hartog (Eindhoven University of Technology),
William H. Winsborough (University of Texas at San Antonio),
Sandro Etalle (Eindhoven University of Technology)
WRITEUP BY: BR
Bruno P.S. Rocha spoke about the problem of declassifying information. He argued that correct declassification is too important to leave entirely in the hands of programmers. Rocha introduced declassification in the context of non-interference, a familiar concept in information flow: changes to a system's publicly readable inputs cause changes in publicly readable outputs, but changes to private inputs do not. Strict non-interference is not an appropriate goal when the system can make some information newly publicly available, such as ciphertext resulting from an encryption. Some existing security-typed languages, such as Jif, allow programmers to effect declassification, but such languages fail to separate code from policy. The authors' contribution is a new static flow-based analysis tool that supports declassification, separate policy specification, and unannotated input programs.
The authors' analysis tool identifies input/output operations from a program, builds expressions encoding the inputs and outputs of those operations, and compares those expressions against a policy database that is separate from the program. Rocha detailed the code analysis techniques the authors used, then formally described the system's mechanism for policy-controlled release (PCR), which replaces non-interference and adds support for declassification. He concluded by pointing out that certain aspects of the system remained unimplemented at the time of the talk, most crucially an implementation of the policy matching algorithm.
In the question and answer period, Venkata Pingali expressed concern that policies would eventually approach the complexity of programs to which they referred, and asked why one would not simply program in the policy language. Rocha answered that separating program from policy was a key goal of the work that programming in the policy language would contradict, and pointed out that a policy maker's job is to encode only the exceptions (which are few) to a default rule, which naturally keeps policy databases small.
Non-Interference Through Secure Multi-Execution
Dominique Devriese (K. U. Leuven),
Frank Piessens (K. U. Leuven)
WRITEUP BY: JT
Discussion: Helen Wang, from MSR, asked how the authors tag I/O inputs. Dominique answered that they intercept calls in the browser and analyze them. Helen followed up, saying that the API is very extensive. Dominique replied that every single call has to be intercepted and analyzed, but he thinks it can be done. Venkata Pingali pointed out that the idea of using multiple executions to prevent information leaks had already been presented in one of his papers in 2008. Dominique responded that the novelty in his work is that you use the outputs from both executions and select which ones you want. Fred Cohen commented that this is great if you execute one at a time (e.g., the low execution after the high execution), and asked what would happen if one executes both at the same time. Dominique replied that one just has to make sure that whatever one executes, it is executed completely. Finally, David Molnar asked what would change if interception were performed at a proxy instead of in the browser. Dominique answered that there would be a need for intercepting in both directions (low and high).
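The core of secure multi-execution can be sketched in a few lines: run the program once per security level, let a run see real input values only on channels at or below its level (a default value otherwise), and let only the run at a channel's level produce output on it. The sketch below is a minimal model, not the authors' browser implementation; the program/channel encoding is invented for illustration:

```python
def secure_multi_execution(program, inputs, default=None):
    """inputs: dict channel -> (level, value), with levels 'low'/'high'.
    program: callable taking a dict channel -> value and returning a
    list of (channel, level, value) output events."""
    outputs = {}
    for run_level in ("low", "high"):
        # The low run sees only low inputs; high inputs are defaulted.
        visible = {ch: (val if lvl == "low" or run_level == "high" else default)
                   for ch, (lvl, val) in inputs.items()}
        for ch, lvl, val in program(visible):
            if lvl == run_level:      # each run writes only its own level
                outputs.setdefault(ch, []).append(val)
    return outputs

# A leaky program that tries to send the secret over the network (low):
def leaky(env):
    return [("net", "low", env["secret"]), ("screen", "high", env["secret"])]

out = secure_multi_execution(leaky, {"secret": ("high", 42)})
print(out)   # {'net': [None], 'screen': [42]} -- the network never sees 42
```

The low-level network output comes from the run that never saw the secret, which is how the technique enforces non-interference even for a deliberately leaky program.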
Object Capabilities and Isolation of Untrusted Web Applications
Sergio Maffeis (Imperial College London),
John C. Mitchell (Stanford University),
Ankur Taly (Stanford University)
WRITEUP BY: SB
Mashups are formed by mixing content from different providers, where untrusted applications can share the same page. This leads to the isolation problem: the page and its components have to protect their resources from malicious components. In this talk, Ankur presented a static technique to prove inter-component isolation, where heap resources should be disjoint. They consider isolation in a programming-language context in which mashups consist of different components, each a sequential program with some operational semantics. The main idea is to treat the heap as a resource. Every program is then associated with capabilities, which are a means of designating and accessing heap resources. Ankur described their approach: formally define capability systems for programming languages, formally define capability safety, and then derive a sufficient check for inter-component isolation using capability safety. In the talk, Ankur described some features of their capability system and also defined the notions of capability safety and authority safety. He then gave an example of how inter-component isolation can be achieved in mashups by giving every component capabilities with disjoint authorities. To build safe mashups, one has to prove that the underlying language is capability-safe or authority-safe and derive an enforcement function that provides authority isolation. Their main contributions are the notions of capability safety and authority safety and a demonstration of how they lead to isolation. They also show that the language Jsafe is authority-safe and Cajita is capability-safe. In the future, they want to define the isolation problem for mashups with interacting components.
SESSION 3: ROOT OF TRUST
Chair: Radu Sion, Stony Brook University
TrustVisor: Efficient TCB Reduction and Attestation
Jonathan McCune (Carnegie Mellon University),
Yanlin Li (Carnegie Mellon University),
Ning Qu (Nvidia),
Zongwei Zhou (Carnegie Mellon University),
Anupam Datta (Carnegie Mellon University),
Virgil Gligor (Carnegie Mellon University),
Adrian Perrig (Carnegie Mellon University)
WRITEUP BY: JT
Jonathan presented. The motivation is to protect the most critical data (e.g., SSL private keys, password files, ACLs). However, OSes today have too much power, which makes it difficult to protect data; for example, the OS has unrestricted access to application data. In addition, the trusted computing base (TCB) for S (i.e., a sensitive code region that has to be protected) includes the majority of drivers and applications. S may be managing sensitive resources, similar to a Flicker session. TrustVisor is a tiny hypervisor for isolation of the code S. The hypervisor does not have to handle IPC or scheduling; the goal is to have efficient transitions between the OS and S. A secondary goal is to ensure a trusted verification property (i.e., external verification that output equals S(input)). S may have sensitive data, and thus requires protected storage. External verification consists of remote attestation using a TPM. The verifier asks "what code is this machine running?", and the answer is a digital signature over a hash of inputs and outputs, produced with a private key. External verification gives a client the ability to evaluate a server before it sends data. For protected storage, the application can register S, which then becomes attestable. Access is controlled by the identity of S, a hash computed over its binary code. The goal in this verification step is to keep the TCB small. Compared to other approaches (e.g., VMs, monolithic kernels, vTPM, Overshadow, microkernels, and Flicker), TrustVisor is small, can provide fine-grained protections, and performs well. In terms of implementation, TrustVisor virtualizes the memory space and processor, whilst device drivers are not virtualized (e.g., DMA). Because of that, TrustVisor implements rules for ensuring protection against DMA devices. In addition, it provides isolation using hardware virtualization support. Finally, to enable attestation and protected storage, TrustVisor provides a microTPM.
S is registered with TrustVisor using a hypercall. The timeline of an application's sensitive code in TrustVisor consists of multiple invocations during a single registration cycle; once the application finishes, it can be unregistered. Long-term secrets have to be protected with the micro-TPM, which provides a small subset of hardware TPM operations for protected storage and external verification. The hardware TPM's performance is poor, and it can become a bottleneck if put on the critical path (latency); a software implementation of the micro-TPM, however, reduces latency by orders of magnitude. Evaluation results on a prototype SSL webserver with protections for the private key show an overhead of 12%, due to context switching between the trusted system and the sensitive code.
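The sealed-storage idea behind the micro-TPM can be modeled in a few lines: a secret sealed under a measurement (hash) of S's code is released only to code with the same measurement. This is a toy model under invented names, not TrustVisor's actual interface:

```python
import hashlib
import hmac

def measure(code: bytes) -> bytes:
    """Identity of S: a hash over its binary code."""
    return hashlib.sha256(code).digest()

class MicroTPM:
    """Toy sealed storage: a secret sealed under a code measurement
    is released only to code that hashes to the same measurement."""
    def __init__(self, platform_key: bytes):
        self._key = platform_key
        self._store = {}

    def seal(self, code: bytes, secret: bytes) -> bytes:
        ident = measure(code)
        handle = hmac.new(self._key, ident, hashlib.sha256).digest()
        self._store[handle] = (ident, secret)
        return handle

    def unseal(self, code: bytes, handle: bytes) -> bytes:
        ident, secret = self._store[handle]
        if measure(code) != ident:    # different code: refuse to release
            raise PermissionError("code measurement mismatch")
        return secret

tpm = MicroTPM(b"platform-key")
h = tpm.seal(b"code of S", b"ssl-private-key")
assert tpm.unseal(b"code of S", h) == b"ssl-private-key"
```

Modified code hashes to a different identity, so an `unseal` attempt by tampered code fails, which is the property that lets long-term secrets survive an untrusted OS.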
Discussion: Ruby Lee from Princeton asked how the authors could scale TrustVisor to support multiple sessions, and how they would protect memory without hardware support. Jonathan answered that the current implementation does support multiple sessions on one processor (i.e., multiple S registered at the same time). The authors protect S by taking advantage of hardware virtualization, using nested paging with two full sets of page tables. Someone asked whether the authors use memory integrity trees. Jonathan replied that they do not hash memory, and that Overshadow uses a crypto mechanism to keep the pages intact; their implementation manages the nested tables such that the OS never has the opportunity to touch the pages. Prashant asked how the authors make sure someone has not modified the code. Jonathan answered that the code can be modified: when code is registered, memory protections are put in place, and then the TPM seals it. Cynthia Irvine from NPS asked how the authors control entries to the secure code once it has been verified, and whether applications could potentially bypass it. Jonathan said that this is part of the registration process; specifically, all the pages that are part of the sensitive code are marked in a nested table, and TrustVisor has to decide whether it will allow the application to continue. Someone from TU Berlin asked whether it is possible for a system to intercept the registration and claim that code has been registered when it has not. Jonathan answered that it is important to architect the application to determine whether the code pages have the correct hash value. In addition, he said that they are at the mercy of the OS: if the OS does not want to let a session run, then it will not run.
Overcoming an Untrusted Computing Base: Detecting and Removing Malicious Hardware Automatically
Matthew Hicks (University of Illinois),
Murph Finnicum (University of Illinois),
Samuel T. King (University of Illinois),
Milo M. K. Martin (University of Pennsylvania),
Jonathan M. Smith (University of Pennsylvania)
WRITEUP BY: BR
Matthew Hicks presented a photo-filled talk on BlueChip, a hardware and software system that detects and protects against the design-time insertion of malicious circuitry. Hicks argued that hardware, which is becoming increasingly complex, should no longer be assumed trustworthy by software. BlueChip, which is intended for use by managers or researchers with access to design descriptions, examines VHDL descriptions of hardware and identifies suspicious circuits via a technique the authors call unused circuit identification: circuits whose inputs always equal their outputs during testing are considered suspicious. BlueChip replaces VHDL descriptions of suspicious circuits with exception-generation hardware whose exceptions BlueChip's software component catches at run time.
Hicks described the run-time operation of BlueChip. When an instruction triggers a security exception, BlueChip uses alternative instruction sequences (e.g., XOR and NAND instructions replacing a faulting OR instruction) to emulate the instruction's behavior, enabling forward program progress without re-triggering the exception. Hicks demonstrated BlueChip's effectiveness by describing a prototype implementation and some of the attacks it prevented (e.g., a privilege escalation attack triggered by a special sequence of instructions). He asserted that BlueChip added less than 5% overhead to otherwise unmodified programs in the authors' testing.
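The unused-circuit-identification heuristic is easy to state concretely: over all design-verification test vectors, flag any circuit whose output always equalled its input, since it never influenced a computation during testing. The toy version below operates on recorded signal values rather than VHDL, and the circuit and signal names are invented for illustration:

```python
def unused_circuit_identification(pairs, test_vectors):
    """pairs: dict circuit -> (input_signal, output_signal).
    test_vectors: list of dicts mapping signal name -> observed value.
    A circuit whose output equalled its input on every test vector
    did nothing during testing, so it is flagged as suspicious."""
    return [name for name, (inp, out) in pairs.items()
            if all(vec[inp] == vec[out] for vec in test_vectors)]

pairs = {"mux1": ("a", "b"), "trigger": ("x", "y")}
vectors = [{"a": 0, "b": 0, "x": 1, "y": 1},
           {"a": 1, "b": 0, "x": 0, "y": 0}]
print(unused_circuit_identification(pairs, vectors))   # ['trigger']
```

Here "mux1" changed its output on the second vector, so it is trusted, while "trigger" passed its input through unchanged on every vector, matching the profile of dormant attack circuitry waiting for an unseen activation sequence.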
In the question and answer period, Peng Ning suggested that attacks using arbitrarily obscure activation sequences might not be detected by BlueChip; Hicks responded that sequences not observed during testing would not be allowed by BlueChip during run time. Aleksey Nogin suggested that BlueChip generate new test cases when it detects malicious circuitry, which Hicks agreed was a good idea for future work.
Tamper Evident Microprocessors
Adam Waksman, Simha Sethumadhavan (Columbia University)
WRITEUP BY: SB
The talk was about the design of microprocessors that are able to detect hardware backdoors. Hardware is trusted and is usually the root of trust; however, it is complex and designed by a large number of people. Adam gave an overview of chip design and said that most previous approaches assumed that the designs are benign and that malicious hardware can only be implanted during fabrication. In this talk, Adam described a threat model in which the designers themselves can easily implant backdoors in the hardware, and an approach for detecting such attacks. Adam then presented a taxonomy of attacks and said that their approach can detect all of them. The approach assumes that a large design team is involved in the chip design, with full knowledge of the design. They also make an equal-distrust assumption: they have no clue which designer is malicious, i.e., every unit is equally untrusted. The key idea is that the microprocessor pipeline trusts everything that is given to it; there is no way for the units at the end of the pipeline to check up on the previous units that send data to them. If the units at the end could communicate with the previous units to verify certain high-level properties about the information passed to them, then the attacks could be prevented. Their solution does exactly this. Adam then demonstrated the approach using a simple example. To evaluate their technique, they developed several attacks, and they had no false positives or negatives. They also presented an evaluation of the cost of their approach in terms of extra area on the chip, and of its coverage, analyzing how much of the chip area their checks cover. To cover areas for which they could not find easy properties to verify, they used duplication. They did not cover components like microcode, the ALU, etc. Adam ended the talk by discussing some extensions that could be made to their approach.
Discussion: Fred Cohen commented that their approach is still vulnerable to electrical side-channel attacks and microcode attacks. Adam agreed that they don't cover microcode; the explanation is given in the paper. He said that, although they are looking at logic-level attacks, attacks below the logic level that break the patterns could still be caught by their approach. Simon Ou asked what would happen if two designers colluded with each other. Adam acknowledged that they assume there is only one bad guy; their approach can be generalized to detect multiple attackers, but that would incur a penalty in area cost. Peter Neumann commented that he liked the research and the area looks promising, but he had reservations about building trustworthy systems on top of untrustworthy platforms. Adam agreed, but said that they are trying to build a golden model at the register-transfer level. Mohamed Aboutabl asked what metrics they use to measure performance, and whether they consider the fact that power and heat could reduce performance. Adam replied that they do not add any new gates or wires on critical paths and therefore do not affect performance much. He agreed that they could increase power, heat, etc., but this does not affect the clock frequency. Ross Anderson said that people have been interested in double-entry bookkeeping since the 12th century; eight years ago his team did a project for G3 cards with hardware redundancy to prevent attacks, which required three times as many gates as the original design. He had reservations about the proposed approach because implementing it end-to-end (including the ALUs) might need at least twice the number of gates on the CPU. Adam said that ALUs are a special case, with no simple property to verify correctness, and therefore the whole ALU has to be duplicated. However, in other cases simple high-level properties exist for verification, and there is no need for duplication.
SESSION 4: INFORMATION ABUSE
Chair: Patrick Traynor, Georgia Institute of Technology
Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow
Shuo Chen (Microsoft Research),
Rui Wang (Indiana University Bloomington),
XiaoFeng Wang (Indiana University Bloomington),
Kehuan Zhang (Indiana University Bloomington)
WRITEUP BY: BR
Shuo Chen spoke on a type of side-channel information leak that endangers privacy on a number of popular websites. The authors discovered that they could determine a user's activities on those websites simply by sniffing the user's browser's encrypted network traffic. The key idea is that highly interactive websites that incur round-trip traffic while users enter information can exhibit packet length and timing patterns that leak information about what the user is entering. To illustrate the leak, Chen described a leading tax preparation website that allows users to navigate through a large state space by answering questions and filling forms. After measuring network traffic while mapping the state space themselves, the authors found that they could determine a user's position in the state space -- which may be parameterized on private information -- by measuring the user's packets. Similarly, for an investing website, the authors found that they could identify the mutual funds in a user's portfolio by examining packets from a price chart image server.
Chen suggested that fixing the problem he described would be difficult. For the time being, he recommended content or packet padding to defend against eavesdropping, but he also pointed out some subtle difficulties that packet padding introduces. He suggested that padding would be applicable only on a case-by-case basis once vulnerabilities had been identified.
In the question and answer period, Patrick Traynor asked how the vendors responded to the authors' findings; Chen remarked that the investment website in particular was interested in reinforcing their users' privacy by mitigating the flaw. Emiliano De Cristofaro asked whether the traffic patterns were different for authenticated and unauthenticated users on search engines like Google that suggest completions for partially entered queries. Chen responded in the affirmative and remarked that logging into such sites might be a good mitigation factor for the leak because different users might induce different, harder-to-guess patterns depending on their query histories.
Investigation of Triangular Spamming: a Stealthy and Efficient Spamming Technique
Zhiyun Qian (University of Michigan),WRITEUP BY: JT
Z. Morley Mao (University of Michigan),
Yinglian Xie (Microsoft Research Silicon Valley),
Fang Yu (Microsoft Research Silicon Valley)
Zhiyun presented. Spam is an arms race: new techniques are proposed, and counter-techniques are invented that bypass them. At the network level, existing defenses focus on IP-based blacklisting and port 25 blocking. In IP-based blacklisting, IP addresses are important resources to spammers: a host that sends spam too fast is blacklisted quickly. Port 25 blocking, on the other hand, is a way for ISPs (e.g., AT&T) to reduce spam traffic originating from their own networks; this defense effectively eliminates IP addresses that spammers could use. Triangular spamming is a largely unknown but real attack, and the authors' goal in this work is to study how prevalent it is. The attack involves three entities: a high-bandwidth bot, a relay bot, and a legitimate mail server; it requires IP spoofing and network-level packet relaying. To establish a TCP connection, the high-bandwidth bot spoofs the relay bot's address as the source, so the legitimate mail server responds to the relay bot instead of the high-bandwidth bot. As a result, the attack can be very stealthy and efficient: it evades port 25 policies because the relay bot's ISP does not block port 25 traffic, so spam can be relayed through there. The goal of this study is to evaluate how stealthy triangular spamming can be by looking at two main factors. The first is how easily it evades IP-based blacklisting; the authors examine this by comparing two strategies: (i) all bots directly send spam at full speed, and (ii) triangular spamming, where only high-bandwidth bots send spam. The second is how easily it evades port 25 blocking; their experiments show that 97% of blocking networks are vulnerable. The authors used two main techniques: selectively relaying packets and aggressive pipelining.
In addition, for port 25 blocking, two settings were examined: blocking outgoing traffic with destination port 25, and not blocking incoming traffic with source port 25. Specifically, they instrumented multiple websites to check whether an ISP blocks port 25 by inserting Flash scripts that attempt a connection on that port. If the connection is unsuccessful, it could be either because the ISP is blocking or because a host firewall is blocking; to determine which, the authors use active probing. Only 9.8% of networks block port 25, which shows that ISPs are not doing a good job of spam blocking. For triangular spamming, results show that only 3.2% of prefixes perform inbound blocking, and the remaining hosts are vulnerable. For IP blacklisting, only 44% of prefixes are listed in the blacklist, and most of the rest are covered by neither port 25 blocking nor blacklists. To prevent and detect triangular spamming, the authors suggest that ISPs could prevent IP spoofing, block incoming traffic with source port 25, and deploy stateful firewalls to disable relay bots. Detection is achieved by looking at IP addresses that are blocked for port 25 and at various network characteristics (e.g., topology, delay). The authors collected network traces over a 7-day period at a mail server. Their results show that 1% of all IP addresses exhibit port 25 blocking behavior, and the spam ratio for these IP addresses is 99.9%. In conclusion, the authors have shown that today's ISP port 25 blocking policy is vulnerable and allows triangular spamming.
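The basic measurement primitive, testing whether outbound port 25 is reachable from the current network, can be sketched with a plain TCP connection attempt. This is a simplified stand-in for the authors' Flash-based instrumentation, not their actual code.

```python
# Minimal sketch of an outbound port-25 reachability check. A failed
# connection may mean ISP blocking, a host firewall, or an unreachable
# server, which is why the paper follows up with active probing to
# disambiguate; this sketch reports only the raw outcome.
import socket

def port25_blocked(mail_server, timeout=5.0):
    """Return True if a TCP connection to port 25 cannot be established."""
    try:
        with socket.create_connection((mail_server, 25), timeout=timeout):
            return False  # connection succeeded: no blocking on this path
    except OSError:
        return True       # refused, timed out, or otherwise unreachable
```

Running this from many vantage points against a server known to accept SMTP approximates the paper's per-network blocking measurement.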
Discussion: Patrick Traynor asked if the authors had considered ingress filtering. Zhiyun answered that spammers do not have to get a lot of high-bandwidth bots; they can use bots with a fair amount of speed that do not do blocking. Juan Caballero asked why high-bandwidth bots would not send as much spam as possible. Zhiyun replied that high-bandwidth bots would not be blocked for port 25 traffic. In addition, he said that they assume relay bot ISPs block port 25 traffic only in the outgoing direction; hence, the relay bot can relate incoming traffic to the appropriate packet. Jinpeng Wei asked how the authors differentiate inbound and outbound filtering. Specifically, he asked whether the IPID is a global or local property: if it is global, then in the meantime some other PC on the Internet could contact the mail server and generate a different IPID. Zhiyun answered that they get the IPID ahead of time without the probing packet and then check whether the IPID has increased. In addition, he said that they do not expect the bandwidth to increase significantly.
A Practical Attack to De-Anonymize Social Network Users
Gilbert Wondracek (Vienna University of Technology),WRITEUP BY: BR
Thorsten Holz (Vienna University of Technology),
Engin Kirda (Institute Eurecom),
Christopher Kruegel (University of California, Santa Barbara)
Gilbert Wondracek described a way in which website operators can de-anonymize visitors to their websites who are also members of social networking sites. The attack builds on a previous history-stealing attack. In the history-stealing attack, a website stealthily executes yes-or-no queries against users' browsing histories by generating links to URLs and testing the resulting links' colors to determine whether those URLs have been visited. The authors' new attack uses history stealing to find visited social network URLs; a website operator can refine her history queries until she discovers a pattern that is unique, in some cases, to a single person. The attack benefits from several properties of the web: first, most web browsers are configured with long history expiration times. Second, many social networks are constructed such that user and group identifiers appear in the URLs users visit. Third, many affinity groups on social networks have public user listings or are easy to join, meaning that some users can be identified by their group memberships. The authors tested their technique against several major websites including Xing, Facebook, and LinkedIn. In a public experiment with around 10,000 volunteers, the authors de-anonymized 12.1% of the volunteers.
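The narrowing step at the heart of the attack is just set intersection: once history stealing reveals which group URLs a visitor has loaded, intersecting the public member lists of those groups shrinks the candidate set, often to a single person. A toy sketch with fabricated names:

```python
# Public member lists of social-network groups, as an attacker might
# crawl them. Group paths and member names are invented for illustration.
group_members = {
    "/group/42":  {"alice", "bob", "carol"},
    "/group/77":  {"alice", "dave"},
    "/group/105": {"alice", "carol", "erin"},
}

def candidates(visited_groups, group_members):
    """Intersect member lists of all groups found in the victim's
    (stolen) browsing history; the survivors are the possible identities."""
    result = None
    for g in visited_groups:
        members = group_members.get(g, set())
        result = members if result is None else result & members
    return result or set()

# History stealing showed the visitor loaded pages of groups 42, 77, 105:
print(candidates(["/group/42", "/group/77", "/group/105"], group_members))
# {'alice'}
```

Each additional group membership discovered acts as another constraint, which is why easy-to-join groups with public member lists are so damaging.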
Wondracek suggested several mitigation techniques. If social networks included less identifying information in URLs (by, e.g., using POST instead of GET to transmit data), identifying users by their URL histories would be more difficult; however, such a change threatens to make public pages more difficult to find. Wondracek also suggested that browsers should adopt a "same-origin" policy for history queries similar to the policy browsers use to protect web documents from interacting unsafely.
In the question and answer session, Steven Murdoch pointed out that the default configuration of Tor disables browsing history and therefore stops the attack. Leo Meyerovich asked about the effect of history queries on a user's bandwidth; Wondracek remarked that there are many ways to generate links and that some combination of techniques might make the bandwidth hit small. Kosta Beznosov asked whether users care about being de-anonymized; Wondracek responded that the volunteers in the authors' user study were shocked. Collin Jackson asked whether a similar attack could exploit cache timing differences even in browsers that adopt a same-origin policy for history queries, and Wondracek said yes. Daphne Yao asked whether the authors considered using geographical cues to de-anonymize people further; Wondracek responded that such information is indeed helpful to the attack.
SCiFI - A System for Secure Face Identification
Margarita Osadchy, Benny Pinkas, Ayman Jarrous, Boaz Moskovich (University of Haifa).WRITEUP BY: SB
There are two types of applications for face recognition. One is identification of faces during login, which happens in a controlled environment; the other is surveillance to identify suspects as they pass cameras placed in public locations, which is harder. Traditionally, surveillance is done by an operator who scans faces looking for suspects. This has privacy implications, since the operator can track the activities of non-suspects. One idea to prevent this is to store the suspect list on the client side on the cameras and perform the face recognition there. But what if the suspect list is confidential? The solution then is to keep the suspect list at a server and let the camera do the surveillance and matching; the problem becomes how to define the matching. In this talk Benny presented their solution to this problem. He described their new face identification algorithm for secure matching, which employs state-of-the-art recognition techniques and uses only a single image for matching. He also described the design and implementation of the SCiFI protocol for the matching process. During the face-recognition and matching process it is important to minimize online latency, so they proposed methods to pre-process the face information and use efficient functions for the different computations. In their experiments, their algorithm was robust to illumination changes, small changes in pose, and partial occlusions. Benny also showed an experiment with a list of 15 faces in which the suspect's eyes were occluded by a glare from his glasses, and showed that the protocol still identified the suspect.
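The matching criterion itself is simple: SCiFI represents each face as a binary vector and declares a match when the Hamming distance to a suspect's vector falls below a threshold; the protocol's contribution is doing this comparison under encryption so neither side sees the other's vectors. The plaintext sketch below shows only the criterion, with made-up vectors and threshold.

```python
# Plaintext sketch of the Hamming-distance matching rule (the real
# protocol evaluates this comparison cryptographically). Vectors and
# the threshold are invented toy values.
def hamming(a, b):
    """Number of positions where two binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def is_match(probe, suspect, threshold):
    """A captured face matches a suspect if the vectors differ in at
    most `threshold` positions, tolerating occlusion and lighting."""
    return hamming(probe, suspect) <= threshold

suspect   = [1, 0, 1, 1, 0, 0, 1, 0]
probe_ok  = [1, 0, 1, 0, 0, 0, 1, 0]   # small occlusion: one bit flipped
probe_bad = [0, 1, 0, 0, 1, 1, 0, 1]   # different person entirely

print(is_match(probe_ok, suspect, threshold=2))   # True
print(is_match(probe_bad, suspect, threshold=2))  # False
```

The threshold is what buys robustness to glare and partial occlusion: a few flipped bits still match, while a different face differs in many positions.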
Discussion: Deb Downs asked how they handle profile pictures. Benny replied that they cannot handle profiles yet. Somebody asked if the protocol is secure in the honest-but-curious case and what the implications are for the malicious case. Benny replied that they have a version for the covert case, which assumes that if one of the involved parties tries to break the protocol, they can be caught with reasonable certainty. Lior Malka asked if something could be inferred if one of the parties is cheating. Benny replied that they have a proof that if someone does not follow the protocol, they can be caught with noticeable probability. Xuxian Jiang commented that there could be a simple attack where the attacker fools the camera using someone else's picture; would that attack work here? Benny said that that particular attack works for the login application but does not hold for surveillance tasks.
SPECIAL GALA DINNER CELEBRATING THE 30TH ANNIVERSARY OF SECURITY AND PRIVACY Master of Ceremonies: Peter G. Neumann WRITEUP BY: SB
The celebration of the 30th anniversary of the IEEE Symposium on Security and Privacy took place at the Pauley Ballroom, UC Berkeley on Monday, May 17, 2010. The event was sponsored by the IEEE Computer Society Technical Committee on Security and Privacy. It started with a reception, followed by dinner, with awards and recognitions after the dinner. Peter G. Neumann was the master of ceremonies. Jerome H. Saltzer was presented the National Computer Security Award; Jerome was introduced by Tim Grance. Carl Landwehr was awarded the IEEE Computer Security Distinguished Service award and the Computer Society TC-SP Outstanding Community Service Award. W. Douglas Maughan and John Markoff were also awarded the Computer Society TC-SP Outstanding Community Service Award. Terry Benzel was awarded the Computer Society Award for Continuing Service. Everybody in the audience, young and old members and attendees of the symposium, was commended and applauded. The awards ceremony ended with Hilarie Orman and Sven Dietrich presenting a specially made security blanket to Peter Neumann. The celebrations ended with a comedy sketch by the Galileo Players, a Chicago-based sketch comedy troupe.
SESSION 5: NETWORK SECURITY
Chair: Nikita Borisov, University of Illinois at Urbana-Champaign
Round-Efficient Broadcast Authentication Protocols for Fixed Topology Classes
Haowen Chan, Adrian Perrig (Carnegie Mellon University)WRITEUP BY: BR
Haowen Chan presented an efficient authentication protocol for sensor networks with fixed topologies. The authors considered the problem of authenticating messages in sensor networks that have multiple receivers and a designated sender all arranged in a specific predetermined (fixed) way. Traditional approaches for sender authentication involve either computationally intensive public key cryptography or a shared-key arrangement in which the sender transmits per-node message authentication codes (MACs) in a series whose length grows with the number of receivers. In search of a communication-efficient protocol that makes minimal assumptions about nodes' computational power, the authors fix their assumptions as follows: the sender knows the network's topology and shares a unique symmetric key with each receiver. If the nodes are arranged in a tree topology, each attempt at transmission induces a hash tree that allows every node to authenticate the message after three passes up and down the tree of nodes. The authors designed a scheme that requires only one pass if the tree is flattened into a simple path in which each node has at most two neighbors. Chan argued that a path topology is common as a subtopology of other topologies. He described several simple optimizations that reduced the constant factor associated with path traversal from 3n (with n nodes) to 2n to 1n, at a cost of increased communication overhead.
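The shared-key baseline that the authors improve upon, the sender attaching one MAC per receiver, can be sketched directly; the overhead of the per-receiver MAC list is exactly what grows with the number of nodes. This is a toy version of the naive scheme described above, not the authors' optimized one-pass protocol, and the keys and message are illustrative.

```python
# Toy per-node MAC baseline: the sender shares a unique symmetric key
# with each receiver and appends one MAC per receiver to the message.
import hashlib
import hmac

# Sender's key table (illustrative 16-byte keys, one per node).
keys = {f"node{i}": bytes([i]) * 16 for i in range(1, 5)}

def send(message, keys):
    """Sender attaches one MAC per receiver; list size grows with n."""
    return message, {
        node: hmac.new(k, message, hashlib.sha256).digest()
        for node, k in keys.items()
    }

def verify(node, key, message, macs):
    """Each receiver checks only its own MAC with its shared key."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(macs[node], expected)

msg, macs = send(b"set sampling rate to 10 Hz", keys)
print(all(verify(n, k, msg, macs) for n, k in keys.items()))  # True
```

The authors' contribution is avoiding this linear-size MAC list: along a fixed path, intermediate nodes can combine authentication material so a single pass suffices.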
In the question and answer session, Mohamed Aboutabl remarked that the authors' assumptions about the network seemed significant, to which Chan remarked on the amenability of path-structured networks with a fixed sender to be robust against certain failures. Peng Ning remarked that such networks might be vulnerable to attacks by misbehaving nodes; Chan replied that a denial-of-service attack would be possible with misbehaving nodes, but that the authors had optimized for the common case in which nodes do not misbehave.
Revocation Systems with Very Small Private Keys
Allison Lewko (University of Texas at Austin),WRITEUP BY: JT
Amit Sahai (University of California, Los Angeles),
Brent Waters (University of Texas at Austin)
Allison presented. She described a scenario where a sender would like to send an encrypted message to multiple receivers: the sender sends a ciphertext, and each user can decrypt it with its own key. The process of blocking a user from decrypting a message is called revocation. A simple solution for revocation is to encrypt the message to each user separately; however, this does not scale. The authors' goal is to devise a revocation system and an encryption algorithm that allow everyone except the revoked users to decrypt a message. Such a system should also be efficient for a large number of users and a small number of revoked users. The threat model for revocation systems includes collusion attacks (i.e., two revoked users whose private keys alone cannot decrypt a message, but combined they can), and the authors use an adaptive security definition. The authors' technique considers two equations: the revocation system provides users with two equations in two unknowns. If a user is revoked, the equations are dependent and she cannot isolate the two unknowns; otherwise, the user receives two independent equations, which can be solved. Nevertheless, this basic solution is still vulnerable to collusion, because two users together could potentially solve the system. To address that, the authors personalize the unknowns to each user; shares are assigned to each user, and shares cannot be combined because each share is personalized. With this, the authors obtain adaptive security. In addition, the public key is small because its size does not grow with the number of users, and adding new users does not require changing the public key. With respect to previous work, the authors state that it is only selectively secure, while theirs is both selectively and adaptively secure.
Finally, the authors claim that their work applies to attribute-based encryption (ABE) in the following manner: a ciphertext is associated with attributes, and secret keys are associated with access formulas. To achieve monotonic ABE with revocation, for each negated attribute the system parameter alpha is replaced by a secret share of alpha for that negated attribute.
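The two-equations, two-unknowns intuition can be illustrated with plain arithmetic (the actual construction works over bilinear groups, so this toy is an analogy, not the scheme). Each user holds one personalized linear equation in the secrets, and the broadcast supplies a second equation tied to the revoked identity; the system is solvable exactly when the two equations are independent.

```python
# Toy numeric illustration of the revocation idea: secrets a, b with
# message key a + b. A user at identity x holds the value a*x + b.
# The broadcast publishes a*x_r + b for the revoked identity x_r.
from fractions import Fraction

secret_a, secret_b = 7, 3          # the two unknowns; message key = a + b

x_r = 5                            # revoked user's identity
broadcast = (x_r, secret_a * x_r + secret_b)

def decrypt(x_u, personal_value, broadcast):
    """Solve the 2x2 linear system {a*x_u + b, a*x_r + b}. It is
    singular (unsolvable) exactly when x_u == x_r, i.e., revoked."""
    x_r, broadcast_value = broadcast
    if x_u == x_r:
        return None                # dependent equations: revoked
    a = Fraction(personal_value - broadcast_value, x_u - x_r)
    b = personal_value - a * x_u
    return a + b                   # reconstructed message key

alice = (2, secret_a * 2 + secret_b)                       # non-revoked
print(decrypt(*alice, broadcast))                          # 10 (= 7 + 3)
print(decrypt(x_r, secret_a * x_r + secret_b, broadcast))  # None
```

Personalizing the unknowns per user (here, evaluating at distinct identities x) is also what blocks the naive collusion: two revoked users both hold the same dependent equation and gain nothing by combining.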
Discussion: David Jacobsen from Qualcomm asked whether sending out the revocation list revealed the identities of the revoked parties; Lewko said it did. Nikita Borisov asked whether it was possible to define the security of a standard revocation scheme in the UC framework; Lewko said she did not know.
Authenticating Primary Users' Signals in Cognitive Radio Networks via Integrated Cryptographic and Wireless Link Signatures
Yao Liu,WRITEUP BY: JT
Huaiyu Dai (North Carolina State University)
Yao presented. There is increasing demand for wireless bandwidth, and as wireless devices become more popular, unlicensed channels become overcrowded. A naive solution is to allocate more channels; however, the spectrum has limited capacity and can only take so many channels. To control usage of channel capacity, the authors focus on primary user detection: a secondary user monitors for the presence of a primary user's signal on target channels. Two methods exist for this, energy detection and feature detection. Both are vulnerable to primary user emulation (PUE) attacks, however, mainly because of the lack of authentication. The threat model assumes primary users at fixed locations, under physical protection. The attacker's objective is to get an unfair share of the bandwidth by blocking other users' access to target channels, and the attacker's capabilities include forging packets, inserting packets, and high transmission power. The authors' approach is to assume the primary user and the attacker are at different locations and hence generate different location/link signatures. Based on this, acquisition of authentic link signatures is performed by authenticating users, and authentication is performed by acquisition of authentic link signatures (a chicken-and-egg problem). Their system requires two levels of authentication: authentication at a secondary user through link signatures, and authentication at a helper node. The authentication at a helper node can have both false negatives and false alarms; both decrease exponentially as a function of the minimum distance. The evaluation used a CDF of the width (amplitude) threshold versus the probability of a false alarm. The authors also conducted a feasibility study by implementing a prototype using the GNU Radio toolkit.
The results based on their prototype show that the distance between the helper node and the primary user is on average 0.5 m, and the distance between the helper node and the attacker is 15 m. Finally, the authors also measured a number of performance parameters for cryptographic operations. As future work, the authors envision a study on users in a moving vehicle.
Discussion: Someone from the University of Waterloo asked if the link signature varies over time. Yao answered that the link signature does change over time, and that their experiments have shown that the distance between attacker and receiver varies; as a result, they can detect the distance of a legitimate user with probability 1, though they would need a training process to do it correctly. The questioner followed up, saying that the authors assume the helper node's signature will be similar to the secondary user's, but there is a difference in transmit power compared to the primary user; as a result, the helper node would not be heard. He then asked if Yao was sure that the link signature would be the same even if transmit power differed among nodes. Yao replied that the helper node and the primary user operate on the same channel, and due to normalization the helper node would be heard, because it does not need a high transmit power. Someone asked if the attacker could add a helper node of its own; Yao replied that they assume public keys are known beforehand. Another person asked if the authors assume there is only one attacker. Yao said that they use one attacker and one helper node in their system and evaluation, but their scheme could be applied to multiple users. This person then asked if changing the assumption to multiple users would break their scheme. Yao answered that, as long as the distance between the attacker and the helper node is large enough, their scheme still works; the number of users is not a problem, and the distance between nodes is what matters. Finally, someone asked if the attacker could emulate the signature by sitting between the helper node and the receiver. Yao replied that even if an attacker sits in the middle, as long as the distance between the attacker and the helper node is large enough, their system still works. However, if the attacker gets closer to the primary user, their system might be compromised; even in that case, it is reasonable to assume that the distance between these nodes would be large enough.
SESSION 6: SYSTEMATIZATION OF KNOWLEDGE I
Chair: Z. Morley Mao
Outside the Closed World: On Using Machine Learning For Network Intrusion Detection
Robin Sommer (International Computer Science Institute / Lawrence Berkeley National Laboratory),WRITEUP BY: SB
Vern Paxson (International Computer Science Institute / University of California, Berkeley).
In this talk Robin described reasons why machine learning techniques have not seen much success in network intrusion detection systems (NIDS), even though they are very popular in many other domains. The classic machine learning setting is a classification problem, where the classifier is trained with many instances of all possible cases. NIDS, however, need to do outlier detection, for which the system can be trained only on normal activity; such an approach has a much smaller margin for error. This leads to a semantic gap between what the system reports and what the operator expects, resulting in too many false positives and reduced operator confidence in the IDS. Additionally, verifying false positives is not cheap compared to other domains where ML is successfully employed, such as OCR, translation, and image analysis. Robin went on to present some of their suggestions for building a good ML-based NIDS. He claimed that ML could be useful if enough care is taken to select the right features and the right approaches.
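The "train only on normal traffic" setting the talk contrasts with standard classification can be sketched with the simplest possible outlier detector: learn the mean and spread of one feature from benign traffic, then flag anything too many deviations away. The feature and numbers below are invented for illustration.

```python
# Minimal one-feature anomaly detector trained only on normal activity,
# e.g., bytes per connection. All sample values are made up.
from statistics import mean, stdev

normal_bytes = [480, 510, 495, 520, 505, 490, 515, 500]

def fit(samples):
    """'Training' on benign traffic only: record its mean and spread."""
    return mean(samples), stdev(samples)

def is_anomalous(x, mu, sigma, k=3.0):
    """Flag values outside mu +/- k*sigma. Choosing k is the whole
    game: too tight and false alarms flood the operator, which is
    the semantic-gap problem the talk emphasizes."""
    return abs(x - mu) > k * sigma

mu, sigma = fit(normal_bytes)
print(is_anomalous(502, mu, sigma))    # False: looks like normal traffic
print(is_anomalous(98000, mu, sigma))  # True: clear outlier
```

Note what is missing: there are no labeled attack examples at all, so the detector cannot say what an alarm means, only that the traffic is unusual.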
Discussion: Susan Landau commended the authors, saying that it's terrific to see a paper that says something doesn't work. She asked if the authors had tried talking to the machine learning community. Robin replied that they did; in fact, several of the ideas presented in the paper resulted from discussions with the ML community, and he learned that machine learning is better at finding commonalities. The other thing is that you just need to understand what you are doing. Vijay commented that in some cases, for example with neural networks, it is impossible to determine how the algorithm works, and asked how one could bridge this gap when one doesn't understand why a particular algorithm works. Robin agreed that there are some approaches where one can't understand how they come to their conclusions; if that is the case, his suggestion is to use the classifier as an exploratory tool but not as the final detector. Vijay also pointed out that a lot of ML techniques retrain themselves once they make a mistake, and asked if Robin knew of any work that uses such approaches. Robin said that although there is certainly such work, it will not solve the problem, because false alarms are so bad for the operator that he begins not to trust the system. Rob Cunningham said that he is from the ML community and knows of some ML infrastructures that could make the results better. Robin agreed and said that he is not dismissing ML, just saying that it should be pursued carefully in this domain.
All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution (but might have been afraid to ask)
Thanassis Avgerinos, Edward Schwartz, David Brumley (Carnegie Mellon University)WRITEUP BY: JT
Thanassis presented. The motivation is how to make computers automatically analyze programs at runtime. The most commonly used analysis techniques are dynamic taint analysis and forward symbolic execution. Given a running program, dynamic taint analysis aims to analyze the program along a flow of its execution (i.e., discover which values can be derived from user input). Forward symbolic execution, on the other hand, attempts to find what input would have to be provided to a program to make its execution reach a specific line. The authors' contribution is to take all these English descriptions, turn them into algorithms, and define their operational semantics. One of the challenges is that there is an extensive amount of work in these two areas; hence, there are common patterns that repeat themselves time and again. The authors systematically extracted a number of these common problems and solutions and summarized them. First, Thanassis described how dynamic taint analysis works, its desired properties, and the main issues in this area. He displayed example source code with a mapping of variables to values at runtime. Taint analysis introduces a new mapping from variables to a taint status. Whenever there is an input expression, the result of that expression is tainted if and only if the source of the input is untrusted. Following this definition, whenever a computation is based on tainted values, the result is also tainted. As an example policy, he showed a goto that is only allowed to execute if the target address is not tainted. For memory tainting, a new mapping is introduced from addresses to taint status; as another example policy, he showed one where if either the address or the memory cell is tainted, then the loaded value is also tainted. The main problems in taint analysis are undertainting (i.e., values that should be tainted are not) and overtainting (i.e., unaffected values are tainted).
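The taint-propagation rules just described fit in a tiny interpreter: a map from variables to values, a parallel map to taint status, propagation through operations, and a policy check on jump targets. The instruction set below is invented for illustration, not the paper's formal language.

```python
# Tiny taint-tracking interpreter. Instructions:
#   ("input", dst)        read untrusted input into dst (taint source)
#   ("const", dst, v)     load an untainted constant
#   ("add",   dst, a, b)  dst = a + b; taint propagates through the op
#   ("goto",  target)     policy: refuse to jump to a tainted target
def run(program, untrusted_input):
    values, taint = {}, {}
    for op, dst, *src in program:
        if op == "input":
            values[dst], taint[dst] = untrusted_input, True
        elif op == "const":
            values[dst], taint[dst] = src[0], False
        elif op == "add":
            a, b = src
            values[dst] = values[a] + values[b]
            taint[dst] = taint[a] or taint[b]   # taint propagation rule
        elif op == "goto":
            if taint[dst]:                      # example policy from the talk
                raise RuntimeError("policy violation: tainted jump target")
    return values, taint

prog = [("input", "x"), ("const", "y", 40), ("add", "z", "x", "y")]
values, taint = run(prog, untrusted_input=2)
print(values["z"], taint["z"])   # 42 True
```

Undertainting and overtainting are both visible here: everything hinges on the propagation rule in the `add` case, and a rule that is too lax or too eager produces exactly those two failure modes.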
Following these explanations, Thanassis described how forward symbolic execution works, its desired properties, and its main problems. The question here is what inputs will make the execution reach a particular line of code. A symbol x is defined (x is symbolic), and it can have any value. As the input traverses the control flow graph (CFG), constraints are added to the path (e.g., execution takes one path if x is positive and another if x is negative). Hence, all that needs to be done is to find an answer that satisfies all constraints of a particular path. A main problem in this context is state space explosion when exploring all paths (i.e., exponential size). One solution is to apply path selection heuristics (e.g., bounded DFS, random search on the symbolic execution tree); another is to use concolic testing. Nevertheless, all proposed approaches are heuristics. Two further issues are the exponential number of formulas, which are also exponentially sized; moreover, solving such formulas is NP-complete. In conclusion, even though these techniques are used in many areas (e.g., security, testing, fault tolerance), there are many open problems in taint analysis and forward symbolic execution that have not yet been efficiently addressed.
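The branch-by-branch accumulation of path constraints, and the path explosion it causes, can be sketched concretely. The toy below forks the path set at each branch and then "solves" a path condition by brute force over a small domain (a stand-in for the SMT solver a real engine would call); the target program is invented.

```python
# Sketch of forward symbolic execution over a straight-line sequence of
# branches on one symbolic input x. Each branch doubles the path set.
def explore(branches):
    """branches: list of predicates on the symbolic input x.
    Returns every path as (trace of (pred, taken), path_condition)."""
    paths = [([], lambda x: True)]
    for pred in branches:
        new_paths = []
        for trace, cond in paths:
            new_paths.append((trace + [(pred, True)],
                              lambda x, c=cond, p=pred: c(x) and p(x)))
            new_paths.append((trace + [(pred, False)],
                              lambda x, c=cond, p=pred: c(x) and not p(x)))
        paths = new_paths   # state explosion: path count doubles per branch
    return paths

def solve(cond, domain=range(-50, 51)):
    """Brute-force stand-in for a constraint solver: any satisfying input."""
    return next((x for x in domain if cond(x)), None)

branches = [lambda x: x > 0, lambda x: x % 2 == 0]
paths = explore(branches)
print(len(paths))           # 4 paths for 2 branches
print(solve(paths[0][1]))   # 2: the first input with x > 0 and x even
```

With n branches this enumerates 2^n path conditions, which is exactly the exponential blow-up that motivates the path-selection heuristics and concolic testing mentioned above.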
Discussion: Joana Trindade from the University of Illinois pointed out that the problem of overtainting has been shown to scale out of proportion in recent work called "Pointless Tainting". She asked if Thanassis believes that this problem will ever be effectively solved, and if he sees any future for taint analysis. Thanassis answered that defining a magic policy that addresses both undertainting and overtainting is an open problem. Devdatta Akhawe from UC Berkeley asked if Thanassis knew of any tools that could be used for analyzing browsers. Thanassis replied that current techniques do not scale to the millions of lines of code found in most existing browsers; right now they only scale to a few thousand lines. KLEE was done in 2008, and it managed to analyze programs of up to 6-7k lines. To use these tools for browsers, Thanassis said, there is still a long way to go.
State of the Art: Automated Black-Box Web Application Vulnerability Testing
Jason Bau, Elie Bursztein, Divij Gupta, John Mitchell (Stanford University)WRITEUP BY: BR
To close the first Systematization of Knowledge session, Jason Bau presented a comparative evaluation of vulnerability testing services. Some certifying organizations, such as the Payment Card Industry (PCI) body, require prospective members to subject their own web sites to a sanctioned third-party vulnerability test as part of the certification process. The authors tested eight such third-party testing services to determine what they test, how effectively they find known vulnerabilities, and how closely the set of tested vulnerabilities resembles those found in the wild.
Bau summarized the landscape of existing vulnerability testing techniques including black-box testing, static analysis, penetration tests, and manual analysis. The technique favored by the services the authors examined is automated black-box testing. The authors designed (and plan to release) a testbed bearing various known vulnerabilities, then pointed each scanning service at their testbed and tallied results in three categories: speed, coverage, and vulnerability detection rate. The slowest scanning service took roughly eight hours to scan their testbed, while the fastest took just over an hour. All of the services exchanged less than 1 GB of data with the testbed. The maximum detection rate for all scanners was 62%. No scanning service was a top-3 performer in all three categories.
Morley Mao asked whether the black-box testing approach imports fundamental limitations on the kinds of vulnerabilities the services can detect. Bau confirmed that certain vulnerabilities such as blind SQL injection were too obscure for the scanners to detect. Rob Cunningham asked whether the authors shared their results with the services they tested, and Bau said they had. Cunningham remarked that many scanners stipulate that they are not to be used for evaluations like the authors', to which the authors responded that they had eventually talked to the right people to get their testing approved. Carl Landwehr asked whether the authors' results suggest that we should not put much faith in compliance badges that depend on vulnerability testing services. Bau responded by suggesting (to uproarious audience laughter) that anyone with a little Photoshop knowledge could make an official-looking badge.
BUSINESS MEETING WRITEUP BY: BR
David Evans gave a lively presentation about the review process that led to the 2010 Symposium's 31 paper acceptances (26 of 237 research papers and 5 of 30 for the Systematization of Knowledge track). Of the 136 reviews of research papers that were eventually accepted, only 3 strongly recommended acceptance, while 32 recommended acceptance, 58 weakly recommended acceptance, 34 weakly recommended rejection, and 9 strongly recommended rejection. Researchers from the United States had the most submissions and acceptances. Luxembourg had the highest density of Oakland submissions per capita; India the lowest. Language-based security and web security had the highest per-topic acceptance rates, and distributed systems security the lowest.
Evans presented the Best Practical Paper award to "Chip and PIN is Broken" by Steven Murdoch, Saar Drimer, Ross Anderson, and Mike Bond from the University of Cambridge. The Best Student Paper award went to "TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection" by Tielei Wang and Tao Wei of Peking University, Guofei Gu of Texas A&M University, and Wei Zou of Peking University. The Best Paper award went to "SCiFI -- A System for Secure Face Identification" by Margarita Osadchy, Benny Pinkas, Ayman Jarrous, and Boaz Moskovich of the University of Haifa.
SESSION 7: SECURE SYSTEMS
Chair: Jonathan McCune
A Proof-Carrying File System
Deepak Garg, Frank Pfenning (Carnegie Mellon University). WRITEUP BY: BR
Deepak Garg presented work that applied the technique of proof-carrying authorization (PCA) to a filesystem; they call the result PCFS. In PCA, access requests carry logic-based credentials and a proof that combines the credentials with descriptive axioms or inference rules to authorize the access. Garg gave the example of the Grey system at CMU, where an office door asks a prospective entrant's mobile phone to prove that the door's owner permits them to enter. PCA offers high assurance and high accountability, but it had not been applied to a system interface before the present work because of concerns about its efficiency.
Garg described the efficiency problem the authors faced when implementing PCFS. They considered a reasonable workload of 2000 to 3000 filesystem accesses per second. If credentials are encoded in certificates, and checking a certificate requires 10 microseconds, the time needed to check all certificates that authorize an access quickly exceeds the system's ability to meet its workload goal. PCFS offloads the job of proof checking to a trusted verifier that runs locally but outside the filesystem. The verifier caches and signs proof verification results (which the authors call "procaps," for proven capabilities), allowing PCFS to avoid checking proofs during file access. With the caching modification, PCFS meets its goal of supporting several thousand accesses per second. The paper also contributes a new logic called BL that supports logical statements about time-based validity (for expressing, e.g., that a principal's security clearance expires after N years) and logical statements that depend on system state, and presents a formal solution to the problem of procap staleness in the face of policies whose consequences depend on time and state.
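The procap mechanism can be sketched as follows. This is a minimal illustration of the caching idea, not PCFS's actual implementation; the class and function names and the HMAC-based signing are hypothetical stand-ins. The trusted verifier checks the full logical proof once, then issues a signed capability that the filesystem can validate cheaply on every subsequent access.

```python
import hashlib
import hmac

class ProofVerifier:
    """Trusted verifier: checks a full logical proof once, then issues a
    signed 'procap' so the filesystem can skip proof checking later."""

    def __init__(self, key: bytes):
        self._key = key      # secret shared with the filesystem

    def check_proof(self, principal, path, op, proof) -> bool:
        # Stand-in for full logical proof checking (the expensive step).
        return proof == f"{principal} may {op} {path}"

    def issue_procap(self, principal, path, op, proof) -> bytes:
        if not self.check_proof(principal, path, op, proof):
            raise PermissionError("proof does not justify the access")
        msg = f"{principal}|{path}|{op}".encode()
        return hmac.new(self._key, msg, hashlib.sha256).digest()

def fs_access_allowed(key: bytes, principal, path, op, sig) -> bool:
    # Fast path inside the filesystem: verify only the MAC, not the proof.
    msg = f"{principal}|{path}|{op}".encode()
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)
```

The point of the design is that the expensive `check_proof` runs once per policy decision, while each file access costs only one MAC verification.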
In the question and answer session, Mohamed Aboutabl asked how PCFS differs from Kerberos. Garg clarified that credentials in PCFS encode the full logical justification for authorization decisions, whereas Kerberos credentials contain only authentication information. Simon Ou asked how PCFS handles revocation. Garg remarked that revocation is partially supported by BL's notion of time ranges and stateful policies and will be completely supported by certificate revocation lists in a future extension to PCFS. Aleksey Nogin asked whether it is possible to express a negative policy in BL (e.g., a certain principal should not be given access to a resource); Garg responded that negative policies are fundamentally incompatible with PCA but can be encoded through support for system state in BL policies. Cynthia Irvine noted that it is impossible to revoke previously obtained information even though it is possible to revoke future access, and Garg agreed, saying that PCFS is concerned with access control but not information flow.
Scalable Parametric Verification of Secure Systems: How to Verify Reference Monitors without Worrying about Data Structure Size
Jason Franklin (Carnegie Mellon University), WRITEUP BY: SB
Sagar Chaki (Carnegie Mellon University),
Anupam Datta (Carnegie Mellon University),
Arvind Seshadri (IBM Research).
This talk was about verifying, in a scalable way, reference monitors that operate over large data structures. Reference monitors observe the execution of processes and prevent bad actions, i.e., policy violations. The goal of the work is to verify reference monitors in the presence of adversaries. To do that, the paper presents a new verification technique based on small model theorems, which reduce verification of a reference monitor data structure with n entries to verification of a single entry. This approach can model any reference monitor that is row independent and row uniform, and safety properties over rows can serve as the policy to be verified. The talk illustrated the analysis technique with a case study of SecVisor. Jason also mentioned two limitations of the approach: the analysis was done at the design level and needs to be extended to the code level, and it can only be used to verify row-independent policies.
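As a rough illustration of the small-model idea (a toy sketch, not the authors' formal development; the W^X-style guard and all names below are hypothetical): if the monitor applies the same transition to every row independently, and the safety property mentions only one row at a time, then exhaustively checking the reachable states of representative single rows covers tables of arbitrary size.

```python
def monitor_step(row, request):
    """Row-uniform transition: the same guard is applied to every row
    independently (a hypothetical SecVisor-style W^X monitor)."""
    writable, executable = row
    if request == "make_writable":
        return (True, False)   # monitor drops execute to preserve W^X
    if request == "make_executable":
        return (False, True)   # monitor drops write to preserve W^X
    return row

def row_safe(row):
    # Row-independent safety property: no page both writable and executable.
    writable, executable = row
    return not (row[0] and row[1])

def verify_one_row(initial_rows, requests):
    """Small-model idea: because the monitor is row-uniform and the property
    row-independent, checking each representative row in isolation suffices,
    no matter how many rows the real table has."""
    for row in initial_rows:
        state = row
        for req in requests:
            state = monitor_step(state, req)
            if not row_safe(state):
                return False
    return True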
Discussion: Zhi Wang commented that it was very interesting that the authors used page tables as an example; however, page tables are not row-independent. For example, one page can be mapped at multiple addresses (one executable, one writable). Jason replied that this is exactly the issue that came up with SecVisor: in SecVisor's case it was a bug, not a feature, and it was precisely the sort of attack they targeted with their technique, where the vulnerability is that a page table allows such a mapping. Sruthi Bandhakavi asked for examples of row-dependent properties; verifying access control matrices was one example Jason gave. Helen Wang asked what their findings were with respect to security properties and bugs. Jason said that they found 2 bugs: they took the SecVisor paper and modeled what it did, and while the authors of SecVisor found 2 bugs through source code audits, Jason's model checker independently found the same 2 counterexamples. Petros Maniatis asked whether their system can be extended to properties with chaining, for example cryptographic integrity checking within a page table. Jason said that this kind of proof approach may not work on properties with chaining; the main bottleneck is that the chains could be of arbitrary length, so the small model theorems do not apply in that setting.
HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity
Zhi Wang, Xuxian Jiang (North Carolina State University). WRITEUP BY: JT
Zhi presented. Virtualization has been widely adopted in industry and academia; as evidence, Zhi stated that 16% of existing workloads run in VMs. In addition, virtualization technology is applied to many problems, such as guest integrity monitoring. All previous work on virtualized systems shares one common assumption: it requires a trustworthy hypervisor (e.g., one that provides isolation between VMs). This assumption is commonly based on hypervisor code size, the reasoning being that because the hypervisor is smaller than an OS, it provides higher security. According to Zhi, however, this is a flawed assumption, because CVE lists many vulnerabilities for hypervisors, and existing hypervisors reach code sizes of roughly 200K lines. Existing approaches to securing hypervisors include reducing the TCB (which cannot prove the absence of bugs and/or vulnerabilities) and formal verification (e.g., seL4). The authors' goal is to enable self-protection of commodity type-I (bare-metal) hypervisors. Their assumptions are (i) trusted x86 hardware (i.e., hardware that provides an IOMMU and SMM), and (ii) software bugs in the hypervisor that can be exploited to overwrite its data structures. Their approach provides code integrity through non-bypassable memory lockdown and control-data integrity through restricted pointer indexing. Page tables determine memory properties: a page can be either writable or executable, but NOT both. All memory accesses by software are translated and controlled by page tables (including reads/writes of the page tables themselves). The authors guarantee that no code can modify the write-protected hypervisor code and data by using read-only page tables. One side effect, however, is that the hypervisor itself cannot update its own page tables, and has to map them to guest memory to do so. Regarding implementation details, the write-protect (WP) bit in CR0 guards page table updates.
The WP bit is set to 1 by default to lock down memory. Given that, page table updates must be performed atomically (i.e., disable interrupts, set WP=0, verify the proposed change, update the read-only page table, set WP=1, re-enable interrupts). Another technique they use in their secure hypervisor is restricted pointer indexing. The idea is to provide control-flow integrity, or CFI (i.e., execution paths must follow the CFG, which prevents return-oriented programming attacks). The technique ensures CFI by enforcing that only legitimate control data can be used (e.g., extracted from source code). The authors demonstrate the feasibility of their approach with a prototype implemented as an extension of LLVM. Their prototype fully supports BitVisor and partially supports Xen. Their performance results show at most 6% overhead. Finally, related work includes program analysis and formal verification, guest integrity monitoring or protection, and trusted computing.
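The atomic update sequence Zhi described can be sketched as a toy model (a simulation of the WP-bit protocol as described in the talk, not HyperSafe's actual x86 code; all class and function names are hypothetical):

```python
class LockedDownMemory:
    """Toy model of memory lockdown: page tables are read-only by default,
    and a WP flag (modeling the CR0.WP bit) gates all writes to them."""

    def __init__(self):
        self.wp = True           # writes to page tables blocked by default
        self.interrupts = True
        self.page_table = {}     # vaddr -> (paddr, writable, executable)

    def write_entry(self, vaddr, entry):
        if self.wp:
            raise PermissionError("page table is write-protected")
        self.page_table[vaddr] = entry

    def atomic_update(self, vaddr, entry, verify):
        """The talk's sequence: disable interrupts, WP=0, verify the
        proposed change, update the read-only table, WP=1, re-enable."""
        self.interrupts = False
        self.wp = False
        try:
            if not verify(entry):
                raise ValueError("rejected: change violates lockdown policy")
            self.write_entry(vaddr, entry)
        finally:
            self.wp = True
            self.interrupts = True

def wxorx(entry):
    # Policy check: a page may be writable or executable, never both.
    _, writable, executable = entry
    return not (writable and executable)
```

The `finally` block mirrors why the real sequence is safe: whatever the verifier decides, the window with WP=0 closes before any other code can run.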
Discussion: Jinpeng asked if there are cases in which the authors would have to perform manual analysis, and if so, what percentage of pointers required it. Zhi answered that this number is hypervisor specific, and that it is 33% in their work. Jinpeng then asked if the authors do static or dynamic points-to analysis. Zhi replied that points-to analysis is performed in the LLVM compiler, so the authors also use manual analysis because they are not experts. Joana Trindade from the University of Illinois asked if moving trust from the read-only page bit to the write-protect bit in the CR0 register would still leave the hypervisor vulnerable to hardware attacks. Zhi answered that they assume hardware is trusted; they do not prevent physical attacks, and protecting against those would require more research. Finally, Helen Wang asked if their technique can be directly applied to an OS. Zhi said that an OS's page tables are more dynamic, because there are more states that are based in memory, and those would need to be protected too.
SESSION 8: SYSTEMATIZATION OF KNOWLEDGE II
Chair: Ed Suh
How Good are Humans at Solving CAPTCHAs? A Large Scale Evaluation
Elie Bursztein, Steven Bethard, John C. Mitchell, Dan Jurafsky (Stanford University). WRITEUP BY: BR
Elie Bursztein presented an evaluation of CAPTCHAs -- small puzzles designed to be easy for humans to solve but hard for computers to solve -- from the perspective of human solvers. Bursztein set up the audience's expectations by displaying several CAPTCHAs that varied widely in difficulty.
The authors compared 21 CAPTCHA schemes, 13 of which were image based and the rest audio based, by asking two sets of humans to solve over 300,000 CAPTCHAs. They measured the accuracy and speed of the humans' solutions. Bursztein summarized their results by noting that solution accuracy was far lower than the operators of the CAPTCHA-protected websites might hope: on average, image-based CAPTCHAs took humans 9.8 seconds to solve, but only 71% of the tested CAPTCHAs had three or more human testers agree on a correct solution; audio CAPTCHAs took an average of 28.4 seconds to solve and only 31.2% had three or more humans agree on a solution. Bursztein offered numerous other statistics on native speakers versus non-native speakers, the breakdown of Amazon Mechanical Turk workers, and the human testers' performance on many different CAPTCHA schemes.
Thorsten Holz asked whether the authors planned to publish their annotated CAPTCHAs; Bursztein said that they did, though not publicly so as not to create a training set for automated solvers. Louis Kruger asked whether the authors had developed any intuitions about the relative difficulty of programmatically solving different CAPTCHA schemes, and Bursztein pointed to future work. Peter Neumann suggested that CAPTCHAs might be the wrong solution to the wrong problem, and challenged the audience to find a better way to approach the problem of distinguishing humans from computers.
Bootstrapping Trust in Commodity Computers
Bryan Parno, Jonathan M. McCune, Adrian Perrig (Carnegie Mellon University). WRITEUP BY: SB
In today's networked world, it is important to be able to trust computers. However, this poses several challenges, as illustrated by the following questions: How does one trust the hardware? Even if the hardware is trusted, how does one trust the computer's software, which is changing all the time? Finally, how does one communicate the trustworthiness of a computer to a human? The paper condenses over a hundred references that address the problem of bootstrapping trust in a computer, i.e., that help the user determine what the computer will do with her data. Topics covered include bootstrapping foundations, transmitting bootstrap data, interpreting the information, validating the process, applications of bootstrapping, and the human factors involved in the process. The paper also discusses limitations and future directions for this research area. In the talk, Bryan covered some key points from the paper at a high level. He discussed how trust is currently established in hardware and software. He also explained that even if the software is proven to be correct with respect to properties such as type safety and control flow, establishing code identity via a chain of trust from the hardware is still important to show which particular piece of software is controlling the computer. Attesting to the run-time execution of the software is difficult, so current approaches typically perform only load-time checks. Run-time guarantees could be provided by static program transformations that enforce a given policy at run time or by implementing a run-time enforcement layer that enforces the policy. In conclusion, Bryan emphasized that code identity is a critical property when deciding whether to trust a computer. There is a wide variety of hardware roots of trust available, and there are many interesting questions in the area that still need to be answered.
Discussion: Ed Suh asked what the main challenges are for TPMs to be widely used. Bryan replied that some techniques for bootstrapping trust are used more widely than others. For example, cellphones have been able to adopt secure boot because a cellphone is a self-contained application. Another example is Microsoft's BitLocker, which has been widely adopted in part because it is a closed system. However, remote attestation has not been as widely adopted because it is an open system: it has a chicken-and-egg problem in that both the attestor and the verifier must make changes for the attestation to be useful.
SHORT TALKS AND POSITION PAPERS
Chair: Angelos Stavrou WRITEUP BY: SB
There were 9 short talks in all. The first was given by Terry Benzel, titled "Advancing Cyber Security Experimental Research", in which she talked about the DETECT testbed, the next generation of the DETER testbed that has been running for 6 years. She listed several features of DETECT that would allow one to create larger experiments to test systems for security. Carl Landwehr asked what "beyond reality" meant. Terry answered that today's testbeds have limits on realism, and they want to make bigger, more realistic experiments possible using the testbed, thereby going "beyond reality". The next talk was by Fred Cohen, titled "Progress on physics of digital systems (information physics)", about properties of digital information systems that can be understood without detailed understanding of the underlying mathematics, and for which there are calculation methods. He pointed the audience to the site infophys.com for further information about the project. The next talk was by Bryan Parno, titled "Non-interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers". Bryan talked about the challenges of outsourcing computations to the cloud and then trusting the results, along with a proposed solution. The next talk was by Dipankar Dasgupta, titled "Building Self-Monitoring Multi-Core Systems", in which he introduced the problem of partitioning multi-core systems so that some cores are allocated to security and can be used to monitor the other cores. Dipankar discussed the research issues that need to be addressed for this idea to work. The next talk was by Vivek Nigam, titled "Progressing Collaborative Systems", about collaborative systems in which people who don't trust each other want to collaborate on a common task. He formulated the problem as a planning problem and discussed the different actions in the system and the complexity of the planning algorithms.
The next talk was by Kui Xu, titled "Cognition and Memory in Computer Science", in which Kui talked about their goal of building a robust memory-based authentication system, where the questions would be based on the user's current activity and also past activity over a period of time. The challenge is to balance users' memories of their recent actions against security. Someone asked whether, if a previous transaction is itself compromised and the chain of trust is broken, that won't be a problem. Kui answered that the attacker only has temporary access to personal information, and that questions are generated over different, longer time periods. The next talk, titled "Oakland Citation Graph", was given by Hilarie Orman. She showed several interesting statistics about citation patterns in past Oakland papers. The next talk, titled "Healthcare Data Hemorrhages: Inadvertent disclosure and HITECH", was given by Eric Johnson. Eric discussed the unintentional data leaks that happen in the healthcare industry through lost laptops, paper records, etc. They studied what information people end up leaking and whether the HITECH act, which came into effect on September 23, 2009, has slowed the leakage of information; their study did not show much difference. The last talk, titled "Webseclab: Web Application Hacker Testbed", was given by Elie Bursztein, who talked about the testbed created at webseclab.com at Stanford to teach web security.
SESSION 9: ANALYZING DEPLOYED SYSTEMS
Chair: J. Alex Halderman
Chip and PIN is Broken
Steven J. Murdoch, Saar Drimer, Ross Anderson, Mike Bond (University of Cambridge). WRITEUP BY: BR
Steven Murdoch began Wednesday's program with a visually rich presentation on gaping flaws in the EMV, or "Chip and PIN," system used worldwide -- though not in the US -- for point-of-sale, credit, and debit transactions. (The paper won the Best Practical Paper award the previous day.) The authors' more-gently-titled previous work on Chip and PIN revealed that it was vulnerable to relay attacks. The present work focuses on a protocol flaw that renders the system easily exploitable via a man-in-the-middle attack: the PIN verification step is never authenticated. The technical consequence is that a PIN-entry terminal can be tricked into believing that the person presenting a card has entered the correct PIN, when in fact she has not. The consequences for consumers are more dire; because banks believe that PIN entry is entirely secure, they shift blame onto the consumers whose cards are stolen.
Murdoch described the man-in-the-middle attack in detail. A thief carries a false card bearing contacts identical to a real card's; the underlying circuit is connected to the contacts on a real, stolen card. The thief presents the false card to a PIN-entry terminal and relays a card authentication conversation (personal information, account number, expiration date, and a digital signature over these data) between the terminal and the real card. The terminal asks for a PIN and the thief enters an arbitrary one. The terminal sends the entered PIN to the false card. A real card would check the PIN against the PIN stored in its memory, then reply with a yes or no indicating whether the transaction is allowed, but the false card simply replies yes. The terminal relays the transaction information to the bank and receives the bank's response allowing or denying the transaction. Neither the bank nor the terminal authenticates the PIN verification step, although both believe that a correct PIN was entered. Murdoch presented evidence suggesting that such attacks had been carried out successfully at the expense of card-theft victims. He also described the banks' responses to the authors' findings (mostly denial) and presented evidence that their responses were nonsensical. He concluded his talk by challenging the community to figure out how thieves manage to commit ATM fraud without knowing cards' PINs; ATM transactions are different in that the PIN is sent to the bank rather than the card.
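The core of the attack is easy to model. The sketch below is a simplified illustration of the protocol flaw as described in the talk, with hypothetical class names rather than actual EMV message formats: the wedge relays card authentication untouched, so the issuer's signature still verifies, but it answers the unauthenticated PIN check itself.

```python
class RealCard:
    """Stand-in for a genuine chip card."""
    def __init__(self, pin):
        self._pin = pin

    def auth_data(self):
        # Account details plus an issuer signature over them.
        return {"account": "1234-5678", "signature": "issuer-signed-blob"}

    def verify_pin(self, pin):
        # Offline PIN verification happens on the card itself.
        return "yes" if pin == self._pin else "no"

class WedgeCard:
    """Man-in-the-middle device: relays authentication to the stolen
    card but always answers 'yes' to the PIN check itself."""
    def __init__(self, stolen_card):
        self._card = stolen_card

    def auth_data(self):
        return self._card.auth_data()   # relayed, so signatures check out

    def verify_pin(self, pin):
        return "yes"                    # never consults the real card

def terminal_transaction(card, entered_pin):
    data = card.auth_data()
    # The flaw: the card's yes/no PIN response is not authenticated, so
    # the terminal cannot tell a relayed 'yes' from a genuine one.
    return bool(data["signature"]) and card.verify_pin(entered_pin) == "yes"
```

With the real card the terminal rejects a wrong PIN, but wrapping the same card in a wedge makes any PIN succeed, which is exactly the attack the authors demonstrated.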
In the question-and-answer period, Paolo Comparetti asked whether the authors thought ATM fraud was happening because of protocol vulnerabilities or because cards and PINs are easy to steal; Murdoch replied that it was difficult to tell. Simon Ou remarked that relying on the card to verify the PIN is "just wrong," to which Murdoch responded that while the current design is indeed flawed, on-card verification is appropriate for offline transactions; he pointed the audience to the paper's section on possible solutions. Alex Halderman asked what lessons the authors thought designers should learn about designing financial systems; Murdoch replied that designers should keep specifications simple and short and that those specifications should state all security-critical information and clearly articulate the threat model.
Experimental Security Analysis of a Modern Automobile
Karl Koscher, Alexei Czeskis, Franziska Roesner, Shwetak Patel, Tadayoshi Kohno (University of Washington), WRITEUP BY: SB
Stephen Checkoway, Damon McCoy, Brian Kantor, Danny Anderson, Hovav Shacham, Stefan Savage (University of California, San Diego).
This work performed an extensive security analysis of a car. The motivation stems from the fact that modern automobiles depend extensively on networked software components. Most components in a modern automobile, including safety-critical systems, are computer controlled, and these components benefit from talking to each other, which is currently done over a shared network. Moreover, car networks can connect to the outside world through interfaces created for diagnostic and communication purposes, and there is also talk of allowing third-party applications to be downloaded into cars. Some access controls were implemented, but in many cases they were either weak or incorrectly implemented. This provides ample opportunity for an attacker to subvert the different components of the car, which could be fatal. The talk described the typical computing architecture of a car and gave examples of different possible attack scenarios. The speaker also showed video demonstrations of different attacks they could perform on the car. Karl concluded his talk by saying that their research team was surprised by the ease with which the attacks could be created. One challenge to having better security in cars is that the manufacturers are only integrators who get different components from different vendors and combine them to build the car; in such a scenario, adding security might be costly. Also, it is not advisable to lock down diagnostic tools.
Discussion: Fred Cohen asked how the authors got immunity from prosecution. Karl replied that they talked with the University of Washington's lawyers to make sure they were on the right side of the law. Susan Landau commented that it is easy to look at these problems and say that these guys screwed up, but what are we as a security community doing wrong? How do we communicate with industry properly so that they don't repeat the same mistakes over and over again? Karl agreed that it is a very interesting point; one thing he noticed is that the specification itself addresses what should and should not be done, but the third-party manufacturers are the ones who cut corners to get the product out. Peter Neumann, in response to Susan's question, mentioned that the RISKS forum was started specifically for this purpose, yet for the last 25 years the same mistakes have still been made over and over again. The problem is that minimizing cost is the fundamental driver, and a little money is saved by cutting safety costs; the lack of education is not the reason. Due to this cost-saving mentality, we never see proactive remediation, and later the manufacturers pretend that they did something they didn't intend to. Rob Cunningham asked whether, when the car logs the interactions between its various components, anything ensures that the logs are not tampered with. Karl answered that they didn't look at black boxes, but he mentioned that the log information is stored in flash and no authentication is done. Someone asked how they went through the IRB process. Karl answered that since the experiments were on cars, not people, they did not have to go through the IRB process.
On the Incoherencies in Web Browser Access Control Policies
Kapil Singh (Georgia Institute of Technology), WRITEUP BY: JT
Alexander Moshchuk (Microsoft Research),
Helen J. Wang (Microsoft Research),
Wenke Lee (Georgia Institute of Technology)
Kapil presented. Browser access control policies are developed in an ad hoc manner (e.g., web applications are given unrestricted access to the history object, which leads to user privacy leaks). This talk focused on identifying such incoherencies in today's browsers; once they are identified, another goal is to measure the cost of eliminating them. Kapil went on to explain how access control works in browsers. Essentially, a principal (subject) accesses a resource (object), and resources are labeled with principal IDs; the problem then is how access control decisions are made using these labels at runtime. The authors identified a number of incoherencies, which fit into 3 types: (i) the system identifies the website as a principal and ignores the role of the user, (ii) inconsistent principal labels, and (iii) browsers allow principals to change their labels at runtime. Next, Kapil explained their WebAnalyzer measurement framework. It performs network interposition to examine HTTP requests and user cookies, as well as DOM interposition and display interposition to analyze the visual layout of elements in a page. Their measurement methodology proceeded as follows: a user was emulated on the 100K most popular websites according to Alexa. In addition, the authors validated their emulation results against manual browsing and concluded that the two are roughly comparable. Regarding categories of inconsistencies with respect to principal labeling, most objects follow the same-origin policy; incoherencies occur when resources interplay (e.g., DOM and display, DOM and cookies). DOM policies were also inconsistent with the same-origin policy. Finally, in the relationship between DOM objects and display, the authors discovered that a malicious plugin can overlap fields with another plugin's form fields (i.e., overlapping iframes), allowing it to steal credentials. According to the authors, this vulnerability is present in 6% of the analyzed websites.
Changing a principal's label at runtime violates the principle of least privilege. In addition, it is possible to access cookies from other domains. Finally, some resources belong to the user of the browser, yet APIs allow websites to control them (e.g., browsing history, geolocation). In conclusion, the authors' main contributions are threefold: (i) they provide a systematic, principal-driven analysis of browser access control incoherencies, (ii) they developed a web-centric compatibility measurement framework, and (iii) theirs is the first large-scale measurement of the cost of fixing access control policies. As future work, they plan to use this approach as policy guidance for future browsers and to perform a security analysis of web applications.
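One of the label incoherencies discussed, that different browser resources key access control on different notions of "origin", can be illustrated with a toy sketch (the label functions below are simplified illustrations, not the paper's formalism): the DOM's same-origin label includes scheme, host, and port, while cookies are, roughly, keyed by host alone.

```python
from urllib.parse import urlsplit

def sop_label(url):
    """Classic same-origin label: (scheme, host, port)."""
    parts = urlsplit(url)
    default = {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, parts.port or default)

def cookie_label(url):
    """Cookies are keyed (roughly) by host alone, ignoring scheme and
    port -- an incoherency relative to the DOM's same-origin label."""
    return urlsplit(url).hostname

def incoherent(url_a, url_b):
    # True when the two labeling schemes disagree about whether the
    # URLs belong to the same principal.
    dom_same = sop_label(url_a) == sop_label(url_b)
    cookie_same = cookie_label(url_a) == cookie_label(url_b)
    return dom_same != cookie_same
```

For example, `http://a.com/` and `http://a.com:81/` are different DOM origins but share a cookie scope, so a decision made under one labeling can contradict the other.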
Discussion: Charlie asked where one can draw the line. Kapil answered that their purpose was to give browser vendors a view of security versus backward compatibility: if there is a security hole and this is the cost of fixing it, then when the cost is low, browser vendors could drop the feature; if not, vendors could consider moving to a safer alternative. Someone asked if Kapil had gotten a chance to see any trends for certain features. Kapil replied that they had measured access to user resources, and that resizeBy is not used by sites ranked below position 10,000. David Jacobson from Qualcomm asked if overlapping frames were intentional or accidental. Kapil responded that what they wanted to measure was whether there were genuine uses of some of these features; overlapping frames are mostly used for ads. In Kapil's opinion, since there are attacks, one should give up some features. Paolo Milani from TU Vienna asked how removing features would affect the sites. Kapil said that they analyzed only some results; their conclusion was that removing some features would change the look and feel of a site or leave it unaffected, but they have not analyzed this completely. Finally, Steve Hanna from UC Berkeley asked what the depth of the exploration was, and how the authors could be sure that the measurements accurately reflected the features of the sites. Kapil answered that they measured this by performing a manual analysis of the top 100 sites and then comparing those numbers to the ones obtained from user emulation, and they were comparable. Steve then asked if the manual analysis explored the sites to the same depth. Kapil replied that this is a limitation of their work and that they plan to address it in the future.
SESSION 10: LANGUAGE-BASED SECURITY
Chair: David Brumley
ConScript: Specifying and Enforcing Fine-Grained Security Policies for JavaScript in the Browser
Leo Meyerovich (University of California, Berkeley), WRITEUP BY: SB
Benjamin Livshits (Microsoft Research).
TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection
Tielei Wang (Peking University), WRITEUP BY: BR
Tao Wei (Peking University),
Guofei Gu (Texas A & M University),
Wei Zou (Peking University)
Tielei Wang spoke about a novel fuzzing technique the authors used to find numerous zero-day vulnerabilities in popular software. The paper received the Best Student Paper award the previous day. Fuzz testing involves passing malformed inputs to programs while monitoring the security properties of the system. The authors noticed that current fuzz-testing techniques fall short when dealing with checksum-based integrity checks: altering a well-formed input that is later subjected to such a check causes the check to fail, often causing the program to terminate before it processes the malformed input. The program's early termination leaves open the possibility of flaws in that input's processing that could have been uncovered if the check had succeeded. The authors considered the problem of making execution continue despite the fuzzing of these sensitive inputs. The solution Wang described is to find and bypass checksum checks in executable code.
To find checksum checks, the authors used the intuition that checksums are typically computed over large inputs. Their system, TaintScope, looks for conditional jumps that depend on large amounts of data. When TaintScope finds such a conditional jump, it observes its execution with well-formed and malformed inputs to see whether the condition is true on the well-formed inputs and false on the malformed ones. The authors also considered the problem of repairing checksums: when a malformed input is found to trigger a vulnerability after the bypassed checksum check, TaintScope uses a combination of concrete and symbolic execution (as opposed to the conventional approach of completely symbolic execution) to calculate a new checksum that can be tacked onto the input before it is handed to an un-fuzzed copy of the program. Wang described the authors' "directed" fuzzing strategy in more detail as well; TaintScope focuses on the "hot bytes" in an input that are accessed more than the others. Wang showed tables from the paper to provide evidence that TaintScope found checksum checks, identified hot bytes, and correctly repaired checksums. The authors used TaintScope to find 27 previously unknown vulnerabilities in software that was familiar to much of the audience.
In the question-and-answer period, Zhenkai Liang asked whether TaintScope found any false positives or false negatives. Wang acknowledged that those were possible and suggested that a future version of TaintScope would address the problem. Louis Kruger noted that the checksum algorithms the authors tested against were simple checksums (e.g., CRC) rather than cryptographic ones (e.g., MD5); Wang remarked that TaintScope could support cryptographic checksums if it were extended to recognize string comparisons. Juan Caballero asked why constraint solving was necessary at all; why not just re-run the checksum routine after fuzzing an input? Wang answered that they wanted TaintScope to be agnostic of the checksum function.
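The idea behind TaintScope's checksum handling can be illustrated with a toy sketch (not the authors' implementation: TaintScope solves for the checksum with combined concrete and symbolic execution so it can stay agnostic of the checksum function, whereas this hypothetical example simply recomputes CRC32 directly, along the lines Juan Caballero suggested):

```python
import zlib

def parse(packet: bytes) -> str:
    """Toy parser: the last 4 bytes are a CRC32 over the body.
    The early rejection on checksum failure is exactly the kind of
    exit that hides deeper bugs from a naive fuzzer."""
    body, stored = packet[:-4], int.from_bytes(packet[-4:], "big")
    if zlib.crc32(body) != stored:
        return "rejected"           # fuzzer never reaches deeper code
    if body.startswith(b"\xff\xff"):
        raise RuntimeError("bug!")  # the 'vulnerability' we want to hit
    return "ok"

def fuzz(body: bytes, repair: bool) -> str:
    mutated = b"\xff\xff" + body[2:]  # mutate the 'hot bytes'
    if repair:
        # Repair the checksum so the mutated input survives the check.
        packet = mutated + zlib.crc32(mutated).to_bytes(4, "big")
    else:
        # Naive fuzzing: keep the original (now stale) checksum.
        packet = mutated + zlib.crc32(body).to_bytes(4, "big")
    try:
        return parse(packet)
    except RuntimeError:
        return "crash"

print(fuzz(b"hello", repair=False))  # prints: rejected
print(fuzz(b"hello", repair=True))   # prints: crash
```

Without repair, the stale checksum causes the parser to reject the mutated input before the buggy branch runs; with repair, the fuzzer reaches and triggers the vulnerability, which is the effect TaintScope achieves by bypassing checks during fuzzing and repairing checksums afterward.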
Prateek Saxena, Devdatta Akhawe, Steve Hanna, Stephen McCamant, Dawn Song, Feng Mao (University of California, Berkeley), WRITEUP BY: JT
Discussion: Paolo Milani from TU Vienna asked how much work went into specifying the attack grammar, and how confident the authors were in it. Prateek responded that they used publicly available attacks, and that they err on the side of false negatives; they want to make sure that everything they detect is indeed a bug. If one could automatically come up with a grammar, that would be great. Someone from UIC asked what the decision procedure was, and whether the authors supported strings of arbitrary length. Prateek answered that they cannot work with arbitrary-length strings, because in that case it is not possible to handle string concatenation; therefore, they bound the string size to 300 characters to make the constraints solvable. Finally, David Brumley from CMU asked whether there were any cases in which the authors ran out of memory or hit some other limit. Prateek said that they could have run the analysis longer, but had not done so yet; he also mentioned several examples in which they ran out of memory, a good fraction of which were infinite loops.
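Prateek's point about bounding string length can be illustrated with a toy brute-force "solver" (hypothetical, not the authors' system, which uses a real string constraint solver): without a length bound, the space of candidate strings is infinite, so constraints involving concatenation cannot be enumerated; a bound makes the search finite.

```python
from itertools import product

ALPHABET = "ab<>/"  # toy alphabet; a real solver works over full character sets
MAX_LEN = 4         # the talk mentioned a 300-character bound in practice

def solve(predicate):
    """Brute-force string 'solver': enumerate all strings up to MAX_LEN
    and return the first one satisfying the predicate. Without the
    length bound this loop would never terminate on unsatisfiable
    constraints, which is why bounding makes solving feasible."""
    for n in range(MAX_LEN + 1):
        for chars in product(ALPHABET, repeat=n):
            s = "".join(chars)
            if predicate(s):
                return s
    return None

# Constraint involving concatenation: the string appended to itself
# must contain "abab". The shortest witness is "ab".
witness = solve(lambda s: "abab" in s + s)
print(witness)  # prints: ab
```

The exponential blowup in the bound (here |ALPHABET|^MAX_LEN candidates) also hints at why real systems hand such constraints to dedicated solvers rather than enumerating, and why memory limits of the kind David Brumley asked about come up.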
WRITEUP BY: BR
Ulf Lindqvist closed the conference by thanking all of the organizers, then passed the mantle to Deb Frincke, next year's general chair.