Review of the
IEEE Computer Society Security and Privacy Symposium 2006,
Oakland/Berkeley, California, USA
May 21-24, 2006

Review by Ganesha Bhaskara and Justin Zhan
June 13, 2006


Part 1, by Ganesha Bhaskara, Information Sciences Institute, University of Southern California.

Attending the IEEE Symposium on Security and Privacy for the first time, I was unsure what to expect at the reception, though I had high expectations for the days to follow. I walked into the reception room and met faculty, students, and researchers from different universities. As I would find out the next day, researchers from 48 institutions in 14 countries were presenting at the conference. I finally settled down with a group of people swapping stories about past projects, the current state of affairs at NSF and DARPA, the software industry, and computer security in general. As the evening wound down and enough glasses of wine had been consumed, somebody commented on "how boring security would be without Microsoft", indicating it was time to call it a day.

Day 1

The first day of the conference started with an address from the General Chair, Hilarie Orman, and Program Co-Chair, Vern Paxson. One of the highlights of their address was the introduction of short papers to the conference, intended to give better exposure to new and promising work in progress.

The first session was about "Signature Generation". The first presenter discussed a language-theoretic approach to automatically generating all and only those inputs that exploit a given vulnerability. He indicated that, in practice, they did not encounter cases where the inputs depended on the state of the program, and hence they could express the vulnerability language as a regular expression. Though I thought this was one of the best papers at the conference, many had fundamental objections to the approach of addressing specific problems and patching them, instead of looking at the problem from a global perspective to find its root cause. Other papers in this session dealt with misleading worm signature detection algorithms and with improving signature detection algorithms.

The second session was about "Detection". The first presenter discussed a technique for improving intrusion detection by learning the data flow behavior of programs. Unary and binary data flow relationships were used to show the empirical superiority of this technique over existing ones, with the acknowledgement that there is a tradeoff between accuracy and efficiency as the order of the learned data flow relationships increases. The next presentation dealt with a unified framework for evaluating intrusion detection systems. This paper modeled the problem as a general instance of a multi-criteria optimization problem, seeking the optimal tradeoff among the metrics used to evaluate an IDS. The last paper of this session was a short paper that dealt with disrupting the learning process of malware by injecting crafted, human-like inputs using a system called "Siren".

The next session of the day was about "Privacy". The first paper presented fundamental limits on the anonymity provided by the MIX technique. The second presentation discussed fast and cheap attacks on the Tor anonymous communication network that can reveal the location of hidden servers, leading to a compromise of anonymity. The short paper presented during this session discussed an interesting approach that "allows a client to retrieve documents matching some search criteria from a remote server while the server evaluating the request remains completely oblivious to the search criteria". One member of the audience did a quick back-of-the-envelope calculation and questioned its scalability, since the technique must be applied on a per-client basis to achieve its objective.

There was a poster session scheduled for the evening, but few posters materialized. The food and wine were good, thanks to the conference organizers.

Day 2

The first session on day two was "Formal Methods". Presentations included a new mechanized prover for security properties of cryptographic protocols, and a new logic for constraint-based protocol analysis that can also specify data freshness properties, along with its corresponding decision procedure. A distinct lack of questions during many of the presentations in this session indicated how few people actually understand the guts of formal methods.

The second session was on "Analyzing and Enforcing Policy". The first presentation discussed a conceptual framework for formalizing some aspects of privacy expectations and their implications, as developed in law and in public and political philosophy. The author pointed out that one of the main difficulties they faced in translating law into formal policies was that different people interpret law in different ways, and that modeling distributed enforcement policy is a hard problem. The second presentation discussed an efficient BDD-based static analysis toolkit for modeling and analyzing configurations of centralized and distributed firewalls. The last presentation in the session was on "Retrofitting legacy code for authorization policy enforcement". This work used trace analysis to determine the points in the code that require authorization. One of the main concerns of the audience was the lack of completeness of trace-based techniques, which the author hinted could be alleviated using static code analysis tools.

The last session I attended at the conference was on "Authentication". The first presentation discussed an interesting concept of "integrity codes" (inspired by ECC) for integrity protection of messages transmitted over wireless channels without symmetric or asymmetric keys. Using this, the authors introduced the concept of "authentication through presence", which can be used for several security applications. The short paper in this session revolved around a cognitive authentication scheme intended to be safe against spyware. There were questions from the audience about the attack space of this scheme. (For more on this, please contact the author or David Wagner.) Other presentations in this session included the use of cache cookies for browser authentication and secure device pairing over a visual channel.

Overall, this was a great conference, and I hope to attend future editions. One aspect that was apparent from the start was the disconnect between the community that works on MLS-type systems for the military and the community that works on IDS, signature generation, and the like, aiming to make existing systems secure. I believe both communities have much to learn from working with each other, and that doing so would result in, as I optimistically put it, usable secure systems.


Part 2, summaries for May 24 by Justin Zhan, University of Ottawa, Canada. Additional material by Sven Dietrich, Carnegie Mellon University

Session: Attacks (Chair: Kevin Fu)

Title: SubVirt: Implementing malware with virtual machines
Authors: Samuel T. King, Peter M. Chen, Yi-Min Wang, Chad Verbowski, Helen J. Wang, Jacob R. Lorch
University of Michigan, USA, and Microsoft Research, USA
Speaker: Samuel King

This work examines a new type of malicious software called a virtual-machine based rootkit (VMBR). VMBRs are difficult to detect because a VMBR installs a virtual-machine monitor (VMM) underneath an existing operating system, hoisting that operating system into a virtual machine. The VMM can then also launch an attack OS, which is hard to detect from within the target operating system. In particular, a VMBR supports general-purpose malicious services. The authors evaluated this new malware threat by implementing four example malicious services, including a keylogger and a phishing web server.

A restart can easily be handled by the VMBR, but a shutdown is more difficult to deal with; however, low-power modes allow the VMBR to simulate a shut-down machine. A secure way to circumvent this is an actual power-down (pulling the plug), which assures a real shutdown.

Paul Karger mentioned earlier related work by Marv Schaefer and himself from as early as 1976.

Title: Practical Attacks on Proximity Identification Systems (short)
Author and Speaker: Gerhard P. Hancke, University of Cambridge, UK

This talk presented some proof-of-concept attacks on RFID tokens following ISO 14443 A. In particular, two types of attacks were described: (1) Eavesdropping: an eavesdropper can intercept a two-way communication sequence between a legitimate reader and a token from up to 4 m away, and it is possible to scan a token's response from 1.5 m away after activating the token from a distance of 15 cm using a magnetic loop antenna. (2) Relay attacks: relay attacks can successfully spoof the location of authentication tokens. Furthermore, the permissible system delay may enable attacks on the system's integrity by allowing enough time for the modification of legitimate communication sequences.

The author compared his results to various claims made by the ACLU (reading an e-passport at 1 m), NIST (reading an e-passport at 9 m), and DEFCON (reading at 20 m). His results neither validate nor refute those claims, but he did point out that the testing conditions behind such claims are not always well documented.

Title: On the Secrecy of Timing-Based Active Watermarking Trace-Back Techniques
Authors: Pai Peng, Peng Ning and Douglas S. Reeves
North Carolina State University, USA
Speaker: Pai Peng

There are various approaches to watermarking packet flows. Timing-based active watermarking schemes have been shown to be effective in tracing attackers back through stepping-stone connections or anonymizing networks. Research on timing-based active watermarking has, however, overlooked an important issue: the secrecy of the parameters used in watermarking. This work studies two types of attacks against these watermarking schemes: (1) attackers may attempt to remove the watermark; (2) attackers may replicate the watermark in other network flows. These attacks may be conducted through the following steps: estimating the watermark parameters, identifying watermark-delayed packets, and recovering and duplicating the watermark.
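To make the idea concrete, here is a toy sketch of timing-based watermark embedding (my own simplification, not the specific schemes analyzed in the paper): a watermark bit is encoded by delaying selected packets so that an inter-packet delay (IPD) falls into a quantization interval of even or odd index. The step size T and the parity convention are assumed parameters.

    # Toy quantization-based timing watermark; T and the parity rule are
    # illustrative assumptions, not parameters from the paper.
    T = 0.05  # quantization step, in seconds

    def embed(timestamps, bits, T=T):
        """Delay packets (never advance them) so each IPD's interval
        index has the parity of the corresponding watermark bit."""
        ts = list(timestamps)
        for i, bit in enumerate(bits, start=1):
            q = int((ts[i] - ts[i - 1]) // T)
            if q % 2 != bit:                       # wrong parity: delay the packet
                ts[i] = ts[i - 1] + (q + 1.5) * T  # into the next interval's middle
            # (A real scheme must also preserve packet ordering.)
        return ts

    def extract(timestamps, nbits, T=T):
        """Read the watermark back from the IPD interval parities."""
        return [int((timestamps[i] - timestamps[i - 1]) // T) % 2
                for i in range(1, nbits + 1)]

    marked = embed([0.00, 0.12, 0.31, 0.55], [1, 0, 1])
    assert extract(marked, 3) == [1, 0, 1]

The attacks presented amount to estimating parameters like T and the interval boundaries from observed flows, which is exactly why keeping them secret matters.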

Four configurations of timing-based active watermarking are studied. The attacks are shown to be effective on the first two configurations and effective in some cases on the latter two. One author of the original timing-based active watermarking papers pointed out that the proposed attack may not target the full version of the watermarking schemes (against which the attacks might be more difficult). Regarding the attack details, he also pointed out that differentiating normal network delay from delay caused by watermarking may be harder than assumed in this work.

Session: Systems (Chair: Helen Wang)

Title: A Safety-Oriented Platform for Web Applications
Authors: Richard S. Cox, Jacob Gorm Hansen, Steven D. Gribble, and Henry M. Levy
University of Washington, USA, and University of Copenhagen, Denmark
Speaker: Steven D. Gribble (U. Washington)

Modern browsers are de facto operating systems that manage dynamic and potentially malicious applications. This work reports on the design of Tahoma, a browser operating system (BOS): a trusted software layer on which Web browsers execute. It has the following properties. First, it runs the client-side component of each Web application in its own virtual machine, thus providing storage isolation between Web services and the user's local resources. Second, it requires Web publishers to limit the scope of their Web applications by specifying which URLs and other resources their browsers are allowed to access, thus limiting the harm that can be caused by a compromised browser. Third, it treats Web applications as first-class objects that users explicitly install and manage, giving users explicit knowledge of, and control over, downloaded content and code.
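The second property can be pictured as a per-application policy enforced beneath the browser. The sketch below is hypothetical: the field names and the check are my invention, not Tahoma's actual manifest format.

    # Hypothetical per-application policy enforced by a BOS-like layer.
    manifest = {
        "name": "example-webmail",
        # URL prefixes the application's browser VM may contact; everything
        # else is blocked by the trusted layer beneath the browser.
        "allowed_urls": ["https://mail.example.com/",
                         "https://static.example.com/"],
    }

    def is_fetch_allowed(manifest, url):
        # Enforced below the browser, so a compromised browser cannot bypass it.
        return any(url.startswith(p) for p in manifest["allowed_urls"])

    assert is_fetch_allowed(manifest, "https://mail.example.com/inbox")
    assert not is_fetch_allowed(manifest, "https://evil.example.net/")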

Tahoma has been implemented using Linux and the Xen virtual machine monitor. The code will be released, but not as open source. It has the following characteristics: (1) Security: evaluation results show that it can prevent up to 87% of the vulnerabilities that have been identified in the widely used Mozilla browser. (2) Overhead: Tahoma exhibits higher latency than native browsers, but the difference is not large. Evaluation of overhead in terms of other resources (e.g., memory, storage) is still under investigation.

Title: Tamper-Evident, History-Independent, Subliminal-Free Data Structures on PROM Storage (or, How to store ballots on a voting machine) (short)
Authors: David Molnar, Tadayoshi Kohno, Naveen Sastry and David Wagner
University of California, Berkeley, USA, and University of California, San Diego, USA
Speaker: David Molnar (UC Berkeley)

The paper describes the requirements for ballot storage on a voting machine: (1) history independence - hiding the order in which the votes are cast, to protect the privacy of voters; (2) a subliminal-free representation - preventing an adversarial voting machine from marking ballots through the representation of the data structure; (3) tamper evidence - preventing the addition or deletion of votes after the election.

Their approaches are as follows. Tamper evidence can be achieved by combining Manchester codes with programmable read-only memory (PROM). In Manchester encoding, an n-bit string x is encoded as a 2n-bit codeword M(x) obtained by applying the mapping 0 -> 01 and 1 -> 10 to each bit; the useful property is that if any set of 1 bits in a Manchester codeword is flipped to 0s, the result becomes invalid. This, together with the write-once property of a PROM, guarantees that changes to any codewords written to the PROM can be detected. History independence and subliminal freeness can be achieved by any of the following approaches: a unary counter (space complexity O(n)), a copyover list (space complexity O(n^2)), a lexicographic table (space complexity O(n log^2 n)), or a random table (space complexity O(n)).
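As a quick illustration of the tamper-evidence property, here is a minimal sketch of my own, assuming a PROM in which bits can only ever be flipped from 1 to 0:

    # Manchester encoding for tamper evidence on write-once storage.
    def manchester_encode(bits):
        """Map each bit 0 -> 01 and 1 -> 10, doubling the length."""
        out = []
        for b in bits:
            out.extend((0, 1) if b == 0 else (1, 0))
        return out

    def manchester_decode(code):
        """Return the decoded bits, or None if any pair is invalid (tampering)."""
        if len(code) % 2:
            return None
        bits = []
        for i in range(0, len(code), 2):
            pair = (code[i], code[i + 1])
            if pair == (0, 1):
                bits.append(0)
            elif pair == (1, 0):
                bits.append(1)
            else:                  # 00 or 11 cannot occur in a valid codeword
                return None
        return bits

    codeword = manchester_encode([1, 0, 1])     # [1,0, 0,1, 1,0]
    codeword[0] = 0                             # a 1->0 flip, all a PROM permits
    assert manchester_decode(codeword) is None  # tampering detected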

The full version of the report: www.cs.berkeley.edu/~dmolnar/papers.pdf

Title: Analysis of the Linux Random Number Generator (LRNG)
Authors: Zvi Gutterman, Benny Pinkas and Tzachy Reinman
Hebrew University, Israel, Haifa University, Israel, and Safend, Israel
Speaker: Zvi Gutterman (Safend and The Hebrew University of Jerusalem)

In the LRNG, randomness is generated from the entropy of operating system events. The LRNG is part of an open-source project, but its source code is poorly documented and has accumulated hundreds of patches, making it very hard to understand the algorithm from the code alone. This work presents a clear description and analysis of the LRNG algorithm. An attack is found that breaks the forward security of the LRNG: given a state of the LRNG, it is possible to reconstruct previous states with a time complexity of 2^64 or 2^96 and a memory overhead of O(1). Some vulnerabilities in the current code are also identified: denial of service, guessable passwords, and susceptibility to noise created by an adversary. The analysis is based on the Linux kernel labeled version 2.6.10.
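For readers unfamiliar with forward security: if a generator's state update is invertible, an attacker who captures the current state can run the generator backwards and recover past outputs. The following toy illustration uses a deliberately weak generator of my own choosing; it is not the LRNG algorithm, whose reconstruction costs 2^64 or 2^96 precisely because its update is not trivially invertible.

    # Toy generator with an invertible state update -- NOT the LRNG algorithm.
    M = 2**32
    A = 1664525                    # odd, hence invertible modulo 2**32
    A_INV = pow(A, -1, M)          # requires Python 3.8+
    C = 1013904223

    def step(state):               # forward update: s' = A*s + C mod M
        return (A * state + C) % M

    def unstep(state):             # the inverse an attacker can compute
        return ((state - C) * A_INV) % M

    s = 123456789
    outputs = []
    for _ in range(5):
        s = step(s)
        outputs.append(s)          # here each output leaks the whole state

    # Compromising the final state lets the attacker recover all past outputs.
    t, recovered = s, []
    for _ in range(5):
        recovered.append(t)
        t = unstep(t)
    assert recovered == outputs[::-1]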

Title: The Final Nail in WEP's Coffin
Authors: Andrea Bittau, Mark Handley and Joshua Lackey
University College London, UK, and Microsoft, USA
Speaker: Andrea Bittau (Univ College London)

(WEP stands for Wired Equivalent Privacy, the 802.11 encryption standard.)

WEP is still widely used despite the availability of more sophisticated encryption protocols, because people believe that breaking the keystream in WEP takes a long time and is highly impractical. This work reports a novel vulnerability in WEP that allows an attacker to send arbitrary data packets after eavesdropping on a single packet.

The principle behind the attack is that each 802.11 data packet starts with two headers (an LLC header followed by SNAP) that occupy the first 8 bytes, and the content of these bytes can easily be derived. By intercepting a packet and knowing its first eight bytes of plaintext (as just mentioned), 8 bytes of the keystream can be recovered, and thus arbitrary 8-byte data can be sent using the known keystream. IP fragmentation can then be exploited to send larger IP packets. Moreover, once 1500 bytes of keystream are recovered by sending large broadcasts in smaller fragments, transmission of data of arbitrary length becomes possible. Furthermore, if Internet connectivity is available, it is possible to decrypt traffic in real time.
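Since WEP encryption is just a XOR of the plaintext with the RC4 keystream, the keystream-recovery step is simple to sketch. This is a minimal illustration of my own, not the authors' code; the exact header bytes vary with the encapsulated protocol.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # LLC/SNAP prefix of a typical 802.11 data frame carrying IP; the last
    # two bytes (the ethertype) are guessable even when they differ.
    KNOWN_PLAINTEXT = bytes([0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x08, 0x00])

    def recover_keystream(ciphertext):
        """First 8 keystream bytes, from one sniffed packet."""
        return xor_bytes(ciphertext[:8], KNOWN_PLAINTEXT)

    def forge_fragment(keystream, payload):
        """Encrypt up to 8 attacker-chosen bytes under the same IV; IP
        fragmentation then lets many such fragments form a large packet."""
        assert len(payload) <= 8
        return xor_bytes(payload, keystream)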

A fully automatic version of this attack has been implemented and was demonstrated near the end of the talk. The conclusion: WEP must be abandoned rather than patched yet again.