Review of the USENIX Security Symposium
Washington, DC, USA
August 4-8, 2003

Review by Jeremy Epstein and Sven Dietrich
September 15, 2003


Notes from Jeremy Epstein

USENIX Security was held in Washington, DC, August 4-8, 2003. USENIX Security is one of the best security research conferences. It gets really good, practical papers. They also usually have very good invited talks. Here are notes on some of the more interesting ones. They're all available for download by USENIX members from http://www.usenix.org/publications/library/proceedings/sec03/. Those that are labeled [Invited talk] or [Panel] don't have papers, though.

Remote Timing Attacks are Practical
-----------------------------------

They looked at two types of attacks: one where you have multiple web servers on the same machine that are mutually hostile (e.g., in a hosted environment), and the other where you have remote attackers. The idea was to determine a web site's private key in either environment, which they did. They basically found some timing differences in how SSL handshake implementations respond based on whether a particular bit in the private key is correct or incorrect. You get the same error (a handshake failure), but you can figure out one bit at a time by starting at the most significant bit and working your way down. Surprisingly, network latency doesn't really affect gathering the results.

It turns out you can "steal" a private key remotely in 2 hours and 1.4 million queries (i.e., failed SSL handshakes). The only way to detect it is if you log failed handshakes. Blinding (randomizing the ciphertext before the private-key operation so the timing no longer correlates with the key bits) is an effective countermeasure, and has been implemented in OpenSSL. They expect that all SSL implementations (even those in Java) are vulnerable unless specifically protected against this attack.
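To make the countermeasure concrete, here is a minimal sketch of RSA blinding with toy 64-bit numbers, purely for illustration. The parameters, function names, and structure are mine, not the paper's; real implementations such as OpenSSL do this with a bignum library.

    /* RSA blinding sketch: multiply the ciphertext by r^e for a fresh random r
       before the private-key operation, and strip the factor of r afterwards,
       so the timing of the exponentiation no longer depends on attacker input. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static uint64_t modpow(uint64_t b, uint64_t e, uint64_t m) {
        uint64_t r = 1;
        b %= m;
        while (e > 0) {
            if (e & 1) r = (r * b) % m;
            b = (b * b) % m;
            e >>= 1;
        }
        return r;
    }

    static uint64_t gcd(uint64_t a, uint64_t b) {
        while (b) { uint64_t t = a % b; a = b; b = t; }
        return a;
    }

    /* modular inverse via extended Euclid (assumes gcd(a, m) == 1) */
    static uint64_t modinv(int64_t a, int64_t m) {
        int64_t t = 0, newt = 1, r = m, newr = a;
        while (newr != 0) {
            int64_t q = r / newr, tmp;
            tmp = t - q * newt; t = newt; newt = tmp;
            tmp = r - q * newr; r = newr; newr = tmp;
        }
        return (uint64_t)(t < 0 ? t + m : t);
    }

    int main(void) {
        uint64_t n = 3233, e = 17, d = 2753;       /* toy key: n = 61 * 101 */
        uint64_t c = modpow(42, e, n);             /* "ciphertext" of message 42 */

        srand((unsigned)time(NULL));
        uint64_t r;
        do { r = (uint64_t)(rand() % (int)(n - 2)) + 2; } while (gcd(r, n) != 1);

        uint64_t blinded   = (c * modpow(r, e, n)) % n;   /* blind the input */
        uint64_t m_blinded = modpow(blinded, d, n);       /* the timed operation */
        uint64_t m = (m_blinded * modinv((int64_t)r, (int64_t)n)) % n;  /* unblind */

        printf("recovered message: %llu\n", (unsigned long long)m);
        return 0;
    }

Since a fresh r is picked for every decryption, the attacker's carefully chosen ciphertexts never reach the exponentiation unmodified, which is what kills the bit-by-bit timing signal.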

This is the first semi-practical attack I've seen on SSL since about 1997. All other attacks have been harder than breaking into the end-system.

A PKI Your Mother Could Use
---------------------------

PKI is just too hard to use: there are thousands of pages of standards, and obtaining certificates is complex. For example, it takes about 18 forms on a web site to get a free certificate (vendor unstated, but presumably VeriSign). It takes 30 minutes to 4 hours just to get the certificate. And when you go through all that, all it says is that you were able to receive email at the address listed in the certificate... it says nothing about who you really are. There needs to be a simpler way.

Everyone knows that passwords are really a lousy protection scheme, but they're reality. So they propose a protocol for obtaining a private key & certificate using just a password. It's vulnerable to Man In The Middle (MITM) attacks just like SSH, but that's "good enough". It avoids all the effort required in certificate creation.

Seems like a good scheme to me, given its limitations.

Electronic Voting [Panel]
-------------------------
David Elliot, Washington State, Office of the Secretary of State; David Dill, Stanford University; Douglas Jones, University of Iowa; Sanford Morganstein, Populex; Jim Adler, VoteHere; Brian O'Connor, Sequoia [listed on the program, but didn't show up]; Avi Rubin, Johns Hopkins University & Technical Director of the Hopkins Information Security Institute

This wasn't a paper, but rather a panel with people from academia and e-voting companies. The accuracy of electronic voting has been getting a lot of attention recently due to a paper from Avi Rubin at Johns Hopkins Univ (see it at avirubin.com/vote if you're interested). What he points out isn't new: there's been research going back at least 10 years on problems with e-voting, and there are numerous concrete examples of how e-voting has failed. [BTW, this discussion is about using electronic voting machines, known as direct recording electronic (DRE), not voting over the Internet which is far worse.]

Doug Jones from Univ of Iowa sits on the Iowa board that reviews voting machines before they can be purchased. In that role he sees confidential reports from the labs that certify voting systems, and says that five years ago he identified some of the problems that Rubin recently reported. He was assured by the voting machine company (which is now part of Diebold) that they were fixed. Since they obviously were not fixed, he's calling for decertification of Diebold machines in Iowa.

Jim Adler is CEO of VoteHere, a company doing voting software. He described the risks of electronic voting, including undetected election compromise and secret ballot compromise. He says their product relies on multiple trustees taking responsibility, and suggests "live auditing" on election day, in addition to before/after the election.

David Elliot is the election administrator for the state of Washington. Many elections still use punch cards, largely because spending money on voting machines isn't popular ("do you want a new police car or fire truck or voting machines"). The Help America Vote Act (HAVA) throws federal money at the problem to eliminate punch cards and allow for voting by the visually impaired. There's a Jan 1, 2006 deadline for compliance. However, voting standards are all voluntary, so if they get too strict (e.g., in the interest of greater security) and drive the cost of the machines up, local jurisdictions may ignore the standards and buy whatever is cheapest.

Sanford Morganstein is from Populex, another voting machine company. They use the Mercuri method of voting (named for Rebecca Mercuri, a professor who proposed it). When you vote, a printed record is generated which you can inspect for correctness before you leave the voting booth. The paper copy (which is behind glass, to avoid tampering) is then put in the ballot box for reconciliation. That way you have a paper record validated by the voter which can be used to cross-check the results from the computer.

David Dill from Stanford has been doing research in this area. He says that voting rests on the shoulders of the computer security community, and we need to help. Mostly, though, the vendors don't want help. Most (all?) of the products are vulnerable to insider and outsider attacks. For safety, we need to stop the acquisition of the current paperless systems, and fix the regulatory framework that makes security optional. The key question is how to get better voting systems without squelching innovation... we can write requirements precisely enough to get safe systems, but that eliminates the innovation.

High Coverage Detection of Input-Related Faults
-----------------------------------------------

They've developed a method to find security vulnerabilities in C code by measuring which lines are actually exercised by input, and then seeing how other inputs would impact those lines. For example, they want to detect if buffer overruns could occur. It works by automatically instrumenting the code (by modifying GCC). There's major overhead... many times slower than standard generated code, so this is only suitable for a testing environment. They can't deal with multithreaded applications, which limits the usefulness. And there's some overlap between what they do and automated tools like Purify.

I like tools like this at one level, because they help improve the quality of C code. But I'm very disturbed at the amount of effort being invested in ways to make C/C++ less dangerous, rather than working in an inherently safer language like Java. This comment also applies to the two following papers. I commented on this to the author during the Q&A session, and several people came to me afterwards and said they agreed.

Address Obfuscation
-------------------

The idea here is to randomize locations of variables, which makes it harder for an attacker to do stack or heap overruns, because the locations in memory are unpredictable. They do simple transformations at load or link time. Other more complicated transformations (such as reordering variables on the stack) can only be done at compile time. It doesn't provide absolute protection, but it also doesn't require system-wide changes (e.g., recompiling everything).

The types of randomization they use include:
- Randomizing the location of the bottom of the stack by adjusting it upwards by 1 to 100MB (which takes up virtual memory but not physical memory).
- Rewriting binary code to add random padding onto the stack between function calls.
- Randomizing the location of the code and data segment base addresses.
- Randomly increasing the size of all malloc requests by 0-25% (see the sketch below).
Since buffer overruns frequently rely on knowing the absolute address of code or data, any/all of these defeat attacks that depend on fixed addresses.
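As a rough illustration of the malloc randomization in the last item, here is a hedged sketch written as a source-level wrapper; the paper's implementation applies the change at load/link time rather than in source, and the name padded_malloc is invented for this sketch.

    /* pad every heap allocation by a random 0-25% so the heap layout,
       and hence the addresses an exploit must guess, vary from run to run */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void *padded_malloc(size_t size) {
        size_t pad = (size * (size_t)(rand() % 26)) / 100;  /* 0% to 25% extra */
        return malloc(size + pad);
    }

    int main(void) {
        srand((unsigned)time(NULL));
        char *buf = padded_malloc(64);        /* actual allocation is 64-80 bytes */
        if (buf == NULL) return 1;
        printf("buffer lives at %p this run\n", (void *)buf);
        free(buf);
        return 0;
    }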

Another technique is to place unreadable pages randomly throughout memory, so attacks that randomly hit addresses will cause exceptions and crashes.

"Success" is defined as reducing a successful penetration to a system crash (i.e., a denial of service attack). That doesn't seem like such a good idea to me! However, there's essentially zero overhead, since it's just moving an address at load/link time.

In the future, they want to permute code & data, and do more obfuscation at runtime.

PointGuard
----------

This paper is part of a series of very effective technologies to make it harder to attack C/C++ code. The idea is very simple: modify the GCC compiler so all pointers are kept "encrypted" in memory, and decrypted whenever they're used. The encryption is an XOR with a randomly chosen value (randomly chosen whenever the program starts). If pointers get corrupted (e.g., by an attack), when they get decrypted, they'll point to a random memory location, thus crashing the program. It doesn't matter that the encryption is trivial, since the attacker doesn't get to see the key (which changes for every run), or the plaintext (the actual pointer), or the ciphertext (the encrypted pointer).
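A rough sketch of the idea, with invented macro names; in PointGuard the compiler emits the XOR on every pointer store and load automatically, rather than the programmer writing it by hand.

    /* keep pointers XOR-masked with a per-process random key while in memory,
       and unmask only at the point of use */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static uintptr_t pg_key;                          /* fresh random key per run */

    #define PG_ENCRYPT(p) ((uintptr_t)(p) ^ pg_key)   /* form that sits in memory */
    #define PG_DECRYPT(v) ((void *)((v) ^ pg_key))    /* form used to dereference */

    int main(void) {
        srand((unsigned)time(NULL));
        pg_key = ((uintptr_t)rand() << 16) ^ (uintptr_t)rand();

        const char *msg = "hello";
        uintptr_t stored = PG_ENCRYPT(msg);   /* this is what an attack could overwrite */

        /* if 'stored' were corrupted with a known address, decrypting it would
           yield an unpredictable pointer, so the program crashes instead of
           jumping where the attacker wanted */
        const char *use = PG_DECRYPT(stored);
        printf("in memory: %#lx  after decrypt: %s\n", (unsigned long)stored, use);
        return 0;
    }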

Static data is a special case because static pointers have to be initialized at runtime to the encrypted values, rather than to a statically set value.

There are some gotchas: the encrypted pointers need to be decrypted before any kernel calls that take pointers (e.g., any system call that needs a buffer), since the kernel doesn't know the encryption key for that process. And if you have mixed code (some that's been modified to use encrypted pointers and libraries that haven't), it gets nasty. For example, if you're trying to run an application that you don't have code for on top of a library that uses encrypted pointers, things get ugly. They extended the C pointer syntax to allow defining encrypted & unencrypted pointers (encrypted pointers look like "char *foo", and unencrypted pointers look like "char @foo").

The performance results are surprising... in some cases the code actually runs slightly faster, due to how register usage gets optimized to do the pointer decryption. In other cases, it's slightly slower, but little enough that it's not much of an issue.

They're planning to put out a Linux system completely recompiled using their encrypted pointers. But unless they can get application vendors to do the same, it's of limited value.

Static Analysis of Executables to Detect Malicious Patterns
------------------------------------------------------------

Polymorphic viruses are hard to find. Their goal is to identify malicious intent while remaining safe against the attack. They can detect and undo code reordering, no-op insertion, etc. The key problem is that they need a spec for the un-obfuscated code before they do anything, which renders it pretty useless!

The Internet as a Surveillance Network
[Invited Talk] by Richard M. Smith
--------------------------------------

RFID tags are everywhere, and will be in more places. Not just for toll taking on highways, but also embedded in clothing, cell phones, computers, toys, etc. Mailboxes won't accept your mail unless the stamps have RFID tags embedded in them. A database maintained by the private sector will be able to track where you are by correlating the RFID-tagged items you buy on your credit card with where those tags are later detected. You won't have to show your credit card, because the RFID in it will identify itself to the reader. Just walking around you'll be leaving a trail of evidence.

There are obviously many security & privacy issues associated with this. But it's no dream/nightmare... it's happening already. [The day after the conference, I read an article in the newspaper about a clothing designer that's embedding RFID chips within the fabric of the clothing.]

Trusted Computing [Panel]
-------------------------
David Farber, University of Pennsylvania; Lucky Green; Leendert van Doorn, IBM; Bill Arbaugh, University of Maryland; Peter Biddle, Microsoft

This was a review of Microsoft's Trusted Computing system, which encompasses Palladium, TCPA, etc. There was nothing really new here... just a rehash of the same old issues about whether it's Microsoft's stalking horse to take over the world by preventing non-Microsoft software from running, etc. There's an overview of TCPA and the terminology in this month's Linux Journal. They've renamed it NGSCB (which I can't pronounce, but they did) because Palladium is someone else's trademark! For more info, see www.microsoft.com/ngscb, www.trustedcomputing.org, or www.trustedcomputinggroup.org.

The Internet is Too Secure Already
[Invited Talk] by Eric Rescorla
----------------------------------

[Slides for this talk available at http://www.rtfm.com/TooSecure-usenix.pdf]

Eric Rescorla is very entertaining, as well as knowledgeable. His claim is that we have lots of good technology (e.g., crypto algorithms like RSA & AES), but virtually nothing gets secured. His hypothesis is that we have the wrong threat model - we worry about all of the known threats, even when they're unimportant (e.g., minor crypto weaknesses that are impractical to exploit) and ignore the big ones. So "too good security" is trumping deployment.

The current threat model is that attackers can see/modify all communication traffic, and end systems are secure. But in fact attackers generally CANNOT see comms traffic, and the end systems are weak. The reason for the disconnect is that security engineers want to avoid being embarrassed.

The real threat model is (1) remote penetration (via buffer overflows, etc) and exploitation, (2) malware (viruses & worms) and (3) Distributed Denial of Service (DDoS).

To counter these, we have two wins (SSL & SSH), three draws (IPsec, S/MIME, and PKIX, which aren't in widespread use), and one loss (WEP, which is just broken).

Wins: SSL/TLS is widely deployed, though it's used on only about 1% of all servers. It's succeeded because it's easy to use and doesn't require a lot of help to get set up. SSH is a de facto standard, but IETF has gotten bogged down trying to make it a real standard. It's totally displaced telnet. It can be deployed without any help, so it became ubiquitous. It's much easier to use than a VPN. Yes, it's vulnerable to Man In The Middle (MITM) attacks, but the risk is acceptable, and it makes deployment easy.

Draws: IPsec is way behind, even though it's built into most operating systems now. Only used for point-to-point VPNs, and even those are being replaced by SSL VPNs because they're easier to deploy. S/MIMEv2 is standardized (S/MIMEv3 has been stuck in standards hell), and is built into most mail clients, but it's used even less than PGP (which isn't used much). Problem is that getting your own key/certificate is too hard, and getting other peoples' certificates is even harder. Maybe people just don't want secure email? They keep saying they want it, and VCs keep funding it. Maybe the problem is barrier to entry? PKIX has made lots of progress, if you measure by volume of standards. Although there are lots of implementations, there's no interoperability. Deployment is limited to SSL.

Losses: WEP is "security" for 802.11, but badly broken. Deployment is good - 28% of wireless sites use it, and it's better than nothing.

The common themes are to use what's available, and that certificates are a roadblock.

Some possible explanations:
- Security is inherently hard: possible, but doesn't help
- Customers are stupid: probably true, but they're not getting smarter
- We're delivering the wrong products: either because the technology isn't mature enough or we have the wrong design criteria

Why do we do the wrong things? Because we think they're the right ones (e.g. IPsec) or we mis-prioritize features. Too much effort is going into new mechanisms and polishing existing protocols (e.g., fixing impractical attacks) rather than addressing the real threats.

All of the security protocol *implementations* have had buffer overflow attacks, which is much worse than the cryptanalytic failures in the protocols themselves. We should put the energy into fixing the bugs in implementations rather than the weird crypto stuff.

Customers say "security is really important" but what they really mean is "the *appearance* of security is really important". Customers say "security is more important than features" but what they really mean is "I want my dancing pigs". That is, features always win over security. Customers say "make security easy to install" and really mean it.

His proposal: go to a zero-based view: what do users *really* want, what are the *real* threats, etc. What market research says is (a) there are no cryptanalytic attackers, (b) protocol flaws are rare, (c) programming flaw attacks are common, (d) DDoS is common.

What can we do? Stop using C! Sandboxing, StackGuard, etc. No ideas on how to fix DDoS. Need real data on how much the techniques really fix the problem.

SCrash - Generating Secure Crash Data
-------------------------------------

The problem they're addressing is that sending crash data in some cases will include sensitive data such as documents, data put in web forms, etc. Some people are opposed to sharing that data with vendors; Dept of Energy put out a circular banning users from sending any crash reports, lest sensitive/classified data leak that way.

There are several problems: protecting the data in transit to the vendor site, and storing the crash data securely once it gets there. The trust model is that when you use software, you assume the vendor of the software isn't trying to sabotage you (or else you wouldn't use their software). [Not sure I agree with this one, but it's probably right.] But at the same time, you don't trust the vendor to store your data securely.

Their tool is designed to scrub sensitive data before it leaves the user machine by scanning source code to figure out what's sensitive, and scrub that out of the crash image. The idea is that they do data flow analysis on the source and then put all the sensitive stuff in one heap/stack, and the non-sensitive stuff in another heap/stack. You can then just delete the sensitive stuff before sending the crash.
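A hedged sketch of that separation follows; the names sensitive_alloc and scrub_sensitive are invented for illustration, and SCrash decides what is sensitive automatically via the data flow analysis rather than asking the programmer.

    /* keep data classified as sensitive in its own region so it can be wiped
       before the crash image is written out; everything else stays usable
       for debugging */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static unsigned char sensitive_pool[4096];   /* region for tainted data */
    static size_t sensitive_used;

    static void *sensitive_alloc(size_t n) {     /* bump allocator for that region */
        if (sensitive_used + n > sizeof sensitive_pool) return NULL;
        void *p = &sensitive_pool[sensitive_used];
        sensitive_used += n;
        return p;
    }

    static void scrub_sensitive(void) {          /* run just before the crash dump */
        memset(sensitive_pool, 0, sizeof sensitive_pool);
    }

    int main(void) {
        char *password = sensitive_alloc(32);        /* lands in the scrubbed pool */
        int  *retries  = malloc(sizeof *retries);    /* ordinary, shippable data */
        if (password == NULL || retries == NULL) return 1;

        strcpy(password, "hunter2");
        *retries = 3;

        scrub_sensitive();   /* the crash image keeps *retries but not the password */
        printf("after scrub: '%s', retries=%d\n", password, *retries);
        free(retries);
        return 0;
    }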

That sounds good, but data flow analysis is really hard. Their results show that depending on the program, as much as 97% of the data ends up classified as sensitive. The related question is whether, once you scrub the data, what's left is actually of any use in figuring out what the problem really was.

This solution can't deal with binaries, only source code, since it's a source-to-source transformation. There's relatively little performance impact (around 2-3%) at runtime, but compile time is much slower because of the data flow analysis.

[In my opinion, if the analysis is sensitive enough to get the sensitive stuff, you're not left with anything useful. For example, if "buf" contains sensitive data, so does "strlen(buf)". If you say "if strlen(buf) > 10 then i = 1 else i = 2", the value of "i" is sensitive. It just goes on from there...]

An audience member asked whether, if Microsoft used this technology, users would be induced to send crash reports. The authors think so, but I doubt it.

Implementing and Testing a Virus Throttle
-----------------------------------------

This paper talks about some research to try to stop worm spread by putting limits on outbound connections. The kernel is modified so it refuses to allow a host to contact more than one new host every second (where "new" means not already in the cache of the most recently used hosts). If a program tries to connect to more than one new host per second, it's suspended, so its connections only go out at that rate. Worm propagation is therefore greatly slowed: instead of connecting to and infecting thousands of hosts a second, it gets one a second. This doesn't prevent a given host from being infected, but slows down the rate of spread. As such, it's only useful if a significant fraction of people use it. They have nice graphs to show what happens to the spread rate at various percentages of people using the tool.
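Here is a toy user-space sketch of the throttle logic. The real mechanism lives in the kernel and suspends the offending process rather than sleeping inside it; the names and cache size below are mine, not the paper's.

    /* a small cache of recently contacted hosts, plus a rule that at most one
       *new* host may be contacted per second; repeats pass through immediately */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define CACHE_SIZE 5

    static char   cache[CACHE_SIZE][64];   /* most recently contacted hosts */
    static int    cache_next;
    static time_t last_new_host;

    static int in_cache(const char *host) {
        for (int i = 0; i < CACHE_SIZE; i++)
            if (strcmp(cache[i], host) == 0) return 1;
        return 0;
    }

    /* returns only when the connection is allowed to proceed */
    static void throttle_connect(const char *host) {
        if (!in_cache(host)) {
            if (time(NULL) - last_new_host < 1)
                sleep(1);                             /* a new host too soon: delay it */
            last_new_host = time(NULL);
            snprintf(cache[cache_next], sizeof cache[0], "%s", host);
            cache_next = (cache_next + 1) % CACHE_SIZE;
        }
        printf("connecting to %s\n", host);
    }

    int main(void) {
        const char *targets[] = { "10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3" };
        for (int i = 0; i < 4; i++)
            throttle_connect(targets[i]);   /* repeats are fast, new hosts rate-limited */
        return 0;
    }

A normal host rarely talks to more than one brand-new address per second, so legitimate traffic is barely delayed, while a scanning worm is forced down to one victim per second.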

They also have a version that does effectively the same thing on throttling SMTP connections (the subject of an earlier paper at ACSAC 2002), which slows down worms that use your email address book to spread themselves.

The biggest threat is that if the malicious software knows about the throttle, it could disable it before it starts attacking. So it's only an element in the escalating war between attackers & defenders.

Notes from Sven Dietrich

There was a Works-in-Progress session at Usenix Security this year, led by Kevin Fu. I walked in a bit late, only to catch the tail end of Adam Stubblefield's presentation.

Adam Stubblefield
Adam talked about the analysis of an electronic voting system, more to be found at: http://www.avirubin.com/vote

Kevin Fu - A WiP of ill repute
Kevin spilled water on and then destroyed "Avi's laptop", demonstrating how a reputation system works.

Clif Flynt - Validating a firewall
clif@noucorp.com
Clif discussed methods for validating a firewall, using a Master, an Assistant, and a Golem. More details to follow on the website: www.noucorp.com

Steve Bellovin - Link-cutting attacks
AT&T Research
Steve suggested cutting links to force traffic past "controlled" points. Calculations for various topologies take just a few seconds and offer a success rate of 80-90%. More at: http://www.research.att.com/~smb/papers/reroute.pdf

Simson Garfinkel - Stream - an e-mail encryption scheme
Simson Garfinkel discussed a new e-mail encryption scheme. He compared SSL/https vs. S/MIME & PGP. His scheme solves the e-mail encryption usability problem in an unobtrusive way. The keys are kept unencrypted on file systems. There are three components (sproxy, sfilter, ssendmail) and it's "Zero-Click". More at: http://stream.simson.net/web/ Future work: migrating keys between multiple laptops, and ways to handle webmail.

Algis Rudys - Wireless LAN Location-Sensing for Security Applications
Rice University
It can track cooperative users, but attackers won't be using coalitions. It can't train for everything an attacker will do. Algis provided a sample trace of an attack, showing some jitter. If the attacker varies signal strength, the jitter is more noticeable. This can locate any user within 3.5 meters. To be presented at WiSe 2003: http://www.cs.rice.edu/~arudys/papers/wise2003.html

Jose Nazario - Trends in DoS attacks
Arbor Networks
Jose talked about blackhole monitoring using a /8 network over a very long time at Arbor Networks in conjunction with Merit. The research looks at backscatter from various attacks. He noticed a shift from TCP to UDP, and mostly small packets. What can be highlighted is the cumulative effect of small attacks and that some attacks have more sources/bandwidth. In the arena of spoofed vs. non-spoofed, the "bot armies" don't care about disclosure of IPs.

Greg Rose - SonicKey
Qualcomm Down Under
Greg introduced the idea of an acoustic key. He showed a 1024-bit DSS key application on a phone, which allows one to sign a message and send the signature over the phone. He demonstrated this live. According to him, it enables electronic commerce. Supposedly there are no replay attacks, but there's no challenge-response yet; they're working on it. It has some hardware support for the modulo ops.

David Molnar - A Rekeying Protocol for Wireless Sensor Networks
Harvard University
He showed the Berkeley Mica2 Mote that runs TinyOS. TinySec does not provide runtime rekey support. Rekeying allows for forward secrecy and secure transient associations. The platform has an unreliable radio, a low power budget, low CPU power, and a small code area (128KB). A suggested application is triage in the field for vital-sign monitoring.

Nick Weaver - Wormholes and a honeyfarm
Nick talked about the problem of detecting new worms automatically. A set of k honeypots will detect a worm when roughly 1/k of the vulnerable machines are infected. It splits the traffic endpoints (the "wormholes") from the honeypots. A honeyfarm will use virtual machine images to implement honeypots. This will create a "vulnerability signature" and a possible attack signature. It works best for human attackers and scanning worms. However, one must trust the wormhole devices, not the honeyfarm operator. Detection is based on an infected honeypot, not on traffic from the wormhole. At this point the status is paper design. Suggested cost: $350 per wormhole node, but 1000 endpoints would be necessary.

Niels Provos - Honeyd - A Virtual HoneyPot Daemon
CITI
Niels talked about Honeyd, a virtual honeypot daemon that adopts different OS personalities and fools nmap and xprobe2. It features load sharing, provides network decoys, etc. Honeyd acts as a frontend for real high-interaction honeypots: it connects virtual IPs to real machines. You can (and should) sandbox Honeyd subsystems using systrace. It can be used for spam prevention, feeding a collaborative filter like Razor by simulating open proxies and mail relays and resending the mail to a spam trap. It can also act as a wireless honeypot with a fake Internet topology.

Scott Crosby - Regular Expression DoS
This talk was a bit cryptic. Scott talked about ways to "render SpamAssassin unusable" by feeding it input that makes its regular expressions take pathologically long to match. The attack is aimed at regular expression matching in Perl. No defenses were suggested. Hm.

Robert Watson - SEDarwin: Porting TrustedBSD MAC and SEBSD to Apple's Darwin
Robert talked about his experience in porting MAC and SEBSD to Mac OS X.

monkeys@monkeys.org
Still using cleartext after all these years.
One of the "monkeys" talked about the passwords collected from the wireless network at the conference. Reactions were mixed in the audience. They posted the actual passwords (sans usernames) in their slides. They claimed to have caught one ssh password via a MITM attack (sshd + dhcpd). This was meant as a follow-up to Dug Song's 2000 paper "Passwords found on a wireless network". More at: http://monkey.org/~marius/netics

The full Usenix Security 2003 WiP agenda can be found at: http://www.usenix.org/events/sec03/wips.html including some of the abstracts.