Review of the RSA Conference,
San Francisco, CA
February 14-18, 2005

Review by Jeremy Epstein
Senior Director, Product Security & Performance
http://www.webMethods.com
March 3, 2005

This was my first time attending, and I expected to be unimpressed, as I usually am at conferences that have lots of trade show / PR aspects. To my surprise, there were a lot of good talks, as well as goodies on the show floor.

The organizers claimed over 13,000 attendees, and it felt that way when trying to find a seat in some of the tracks, or somewhere to sit down for a conversation.

Predominant technologies in the trade show were:

  • Authentication devices - 2 factor, challenge response, biometrics, etc.
  • Anti-spam software, appliances, services, etc.
  • Security assessment services, including for regulatory compliance
  • Fair amount of IDS & IPS, with less hype than in the past
  • Wireless crypto devices
  • Security testing via static and dynamic analysis

I didn't see anything radically new and exciting on the show floor. I asked several people I know what excited them the most, and heard "nothing much".

    Highlights:

  • There's a lot more focus on getting confidence that systems are built securely
  • Everyone understands that web services have a lot of potential dangers, but many users are ignoring them and moving forward with implementation
  • The security field is progressing very slowly, and keeps making the same mistakes
  • There's a lot of talk about the SHA-1 break, but it's not practical at this point
  • Bill Gates may be rich as Croesus, but he's also a mediocre speaker

Some of the highlights follow.

    Cryptographer's Panel

    The cryptographers' panel focused on predictions for the future, as well as looking back at prior predictions. Video clips showed earlier predictions, and the panelists were asked to comment on how their views had changed and what they got right and wrong.

  • Passwords won't go away - the real reason IT wants to get rid of passwords isn't security, but rather password sharing. For many applications, passwords are good enough.
  • RFID tags are still coming. An interesting usage is in libraries to find misfiled books, which are otherwise effectively lost forever.
  • Provable security isn't much closer than it was 12 years ago, when it was predicted to be 10 years away.
  • Anonymous electronic money, which was predicted as a key technology for the Internet 10 years ago, isn't going anywhere; the real progress is in online credit cards. [There was an article in the Washington Post about small-value transactions a few weeks ago. See http://www.washingtonpost.com/wp-dyn/articles/A45393-2005Feb22.html].
  • Fraud isn't as big a problem as was expected.
  • Hardware cryptanalysis may start getting interesting - the new multi-core chips have the potential to allow one thread to do cryptanalysis of what another core is doing, by watching the timing and processor state. This is critical for Digital Rights Management (DRM).
  • There was a discussion of the new SHA-1 break found by a group of Chinese cryptographers who had been totally unknown until a year ago. Their progress is really surprising, because they don't come out of any recognized cryptographic community. They can find collisions in 2^69 tries rather than the 2^80 a birthday attack on a 160-bit hash would be expected to take. It's not severe enough to change things, but it is cause for caution. SHA-256, SHA-384, and SHA-512 increase the work factors enough to be safe for now.
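
    To put those numbers in perspective, here's a quick back-of-the-envelope comparison (my own arithmetic, not the panel's): a generic birthday attack on an n-bit hash takes about 2^(n/2) tries, so the new result is roughly a 2000x speedup over brute force, and the longer SHA variants push the generic bound far out of reach.

        # Rough collision work factors -- my own sketch, not the panel's.
        # A generic birthday attack on an n-bit hash takes about 2**(n/2)
        # tries.
        def birthday_bound(bits):
            return 2 ** (bits // 2)

        sha1_generic = birthday_bound(160)  # 2**80, the expected cost
        sha1_attack = 2 ** 69               # reported cost of the new attack
        print("speedup over brute force: %dx" % (sha1_generic // sha1_attack))  # 2048x

        for name, bits in [("SHA-256", 256), ("SHA-384", 384), ("SHA-512", 512)]:
            print("%s: ~2**%d tries for a generic collision" % (name, bits // 2))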

    Web services security
    There were a number of presentations on web services hacking, mostly pointing out that the same issues that apply in any other application apply to web services. Just because something is behind the firewall doesn't mean it's safe. There are SQL injection attacks, cross-site scripting, etc., plus a few unique ones, such as recursive XML entity definitions in DTDs that cause the XML processor to (effectively) perform a DoS attack on itself. WSDL and UDDI are great things that make web services useful, but they also give an attacker lots of information about what an attack should look like. Some companies are responding by keeping their WSDL proprietary (a security-through-obscurity approach).
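
    To illustrate the recursive-XML problem (my own sketch, not from any of the talks): each level of entity nesting multiplies the expanded size tenfold, so a payload of a few hundred bytes balloons to gigabytes inside any parser that naively expands DTD entities.

        # Sketch of a DTD entity-expansion ("recursive XML") payload -- my
        # own illustration, not from the talks. It only builds the document
        # and does the arithmetic; don't feed it to a parser that expands
        # entities.
        DEPTH = 9  # nine levels of tenfold expansion

        entities = ['<!ENTITY a0 "boom">']
        for i in range(1, DEPTH + 1):
            refs = ("&a%d;" % (i - 1)) * 10
            entities.append('<!ENTITY a%d "%s">' % (i, refs))

        payload = ('<!DOCTYPE bomb [%s]><bomb>&a%d;</bomb>'
                   % ("".join(entities), DEPTH))

        print("payload size: %d bytes" % len(payload))
        print("expanded size: ~%d bytes" % (4 * 10 ** DEPTH))  # "boom" = 4 bytes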

    What was both enlightening and alarming is that the few actual customer presentations on use of web services described nothing beyond SSL and username/password authentication, on the assumption that trading partners are entirely benign; minimal effort goes into protecting against attacks once a partner is authenticated. The one bit of good news is that since every web services installation is relatively unique (offering different web services), an attacker will need to go after each web service individually (as opposed to firewalls or operating systems, which are multiple instances of the same thing).

    Regulation for improved security
    This debate has been covered pretty extensively in the media, so I'll summarize it by noting that the breakdown was mostly Bruce Schneier and Dick Clarke (arguing that government has to force more security) vs. Harris Miller from ITAA and Rick White (who think the free market does fine and regulation is inherently evil). A hot topic of debate was the ChoicePoint case (theft of credit information), which happened the day before. Schneier pointed out that companies need to be actually responsible, not just give lip service to security. Specifically, ChoicePoint's customers aren't the people whose information was stolen, but rather other companies, so ChoicePoint has little non-regulatory incentive to do a better job. He recognizes that regulation *will* stifle innovation, but argues we need to make that tradeoff for the good of society. His summary was "When we have self-regulation, security becomes a PR issue". Dick Clarke also suggested that disclosure might work as a substitute for regulation, just as Y2K regulations forced companies to disclose, not to fix. Right now, accounting companies are setting the security bar via SOX compliance - is that really what we want?

    Static analysis
    There were several talks around static analysis, generally of source code. There's been a fair amount of progress in commercial products (several of which were exhibited on the show floor), although the field is in its infancy as far as practical use goes. Most of the academic research is in finding potential problems, and it's making good progress. However, the really critical parts are figuring out which of the potential problems are actual vulnerabilities that need to be fixed vs. which can be safely ignored, and then telling the user what's wrong and how to fix it. It's also harder than analyzing a single program, because modern systems are built out of a mish-mash of technologies in n-tier architectures that include C/C++, Java, HTML, Perl, JSPs, etc., and the problems can come from a combination of pieces that are individually "safe".
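
    As a hypothetical illustration of that last point (mine, not from the talks): each of the two functions below is reasonable for its own job, but their composition is injectable, and a tool that analyzes either tier in isolation won't flag it. The function names and the query are invented for the example.

        # Hypothetical two-tier example -- my own, not from the talks. Each
        # piece is "safe" for its own purpose; the combination is not.
        def presentation_tier(user_input):
            # Tier 1 strips angle brackets to block HTML/script injection.
            # Reasonable for its job, but it ignores SQL metacharacters.
            return user_input.replace("<", "").replace(">", "")

        def data_tier(name):
            # Tier 2 trusts its caller to have "sanitized" the value and
            # builds SQL by concatenation -- fine only if the caller had
            # done SQL escaping, which it didn't.
            return "SELECT * FROM users WHERE name = '" + name + "'"

        attack = "x' OR '1'='1"
        print(data_tier(presentation_tier(attack)))
        # -> SELECT * FROM users WHERE name = 'x' OR '1'='1'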

    Ten Bugs that Cost Our Customers Billions

    Ben Jun from Cryptography Research gave perhaps the most amusing talk. He reviewed systemic problems that cost $100M+ each.

    Briefly:

  • Bug #10: DVD CSS failure (trivially cracked), caused by bad crypto selection (no reason to mount any other type of attack, because the crypto failure is so bad). Lesson learned: bad crypto is inexcusable
  • Bug #9: Pachinko stored value fraud (Japanese electronic stored value cards, easily duplicated/forged). Loss was $600M+ from fraudulent cards exchanged for cash. Merchants were indemnified against fraudulent cards, so they had no motivation to watch for fraud. Some indications that North Korea was behind this. Regulators weren't technically savvy, and corporate culture was not to report bad news. Lesson learned: even though fraud can't be eliminated, it needs to be managed. Lesson learned: the security field should have the equivalent of the "morbidity and mortality" meetings hospitals hold to review what went wrong, without fear of lawsuits.
  • Bug #8: Y2K: This was not a security problem, but the costs were huge. Testing failed when products were originally built - there should have been tests for date rollovers. Lesson learned: test data fields more thoroughly.
  • Bug #7: Napster/P2P: Lesson learned: improved delivery mechanisms simplify content attacks. Countermeasures include new systems that ship the decoding code along with the content, making the attacker's job harder.
  • Bug #6: Spam: Email wasn't designed for a hostile environment. The economic disparity - as with telemarketers, it costs the sender much less than the receiver - makes spam hard to fix. Lesson learned: infrastructure retrofits are hard & expensive.
  • Bug #5: Mag stripe skimming: ATM card fronting/cloning is easy. Lesson learned: Need to replace dumb cards with chip-based cards.
  • Bug #4: Pay TV hacking: The international boundary system (TV regions) generates demand, because viewers can't get programs from certain countries. As a result, attackers are willing to spend $1M to duplicate/crack systems. Lesson learned: you don't need to make the system bulletproof, but you do need to make attacks expensive.
  • Bug #3: The PC platform: Saltzer & Schroeder identified the principle of least privilege in 1975, but their results are still largely ignored. Most users run as administrator on Windows boxes, which makes them more vulnerable to attack. We need firewalls inside computers, and use of all the rings (not just kernel vs. application). Lesson learned: don't sidestep partitioning.
  • Bug #2: 802.11b (WEP): Lousy use of crypto algorithms and no key management resulted in corporate rollouts being delayed ~18 months. There are real exploits, including database theft. The real threat is to non-PCs (e.g., PDAs, watches). Lesson learned: make it hard for users to make security mistakes; get expert assistance. (A sketch of why WEP fails so quickly follows this list.)
  • Bug #1: The unknown bug: How do we fail less than we have in the past? Learn from the Risk and Insurance Management Society.
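
    On Bug #2, here's the sketch promised above (my own arithmetic, not from the talk): WEP's per-frame RC4 key varies only in a 24-bit initialization vector, and by the birthday bound an IV repeat - which means a reused keystream that an eavesdropper can cancel out - is even odds after only a few thousand frames.

        # Why WEP's 24-bit IV fails so fast -- my own arithmetic, not from
        # the talk. A repeated IV means a reused RC4 keystream; XORing two
        # such ciphertexts cancels the keystream entirely.
        import math

        IV_SPACE = 2 ** 24  # WEP's initialization vector is only 24 bits

        def collision_probability(frames):
            # Birthday approximation: P(repeat) ~ 1 - exp(-k^2 / 2N)
            return 1.0 - math.exp(-float(frames) ** 2 / (2.0 * IV_SPACE))

        for frames in (1000, 5000, 12000):
            print("%6d frames: %2.0f%% chance of an IV repeat"
                  % (frames, 100 * collision_probability(frames)))
        # A busy access point pushes thousands of frames per second, so
        # keystream reuse is effectively guaranteed within seconds.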