Review of ACSAC, the Annual Computer Security Applications Conference
Miami Beach, Florida
December 11-15, 2006

Reviews by Tom Haigh, Charlie Payne, and Jeremy Epstein
1/09/07

ACSAC 2006 took place December 11-15 at the Miami Beach Resort and Spa. It was a very well-run conference with a strong program. The notes below reflect the interests of the reviewers (Tom Haigh and Charlie Payne), and so are not complete. Please see www.acsac.org for the full program.

Invited Talks

Distinguished Practitioner Dixie Baker (SAIC) described the interplay between security (privacy) and the objective of providing for a healthy society. She observed that detecting biological events near the time of initial exposure reduces public health risk but increases the risk to personal privacy. Hence, there is a trade-off to be made, and how to make it is a policy decision. She also talked about the somewhat baroque state, federal, and national systems, the interplay among them, and the impact of HIPAA. Along the way, she made an interesting distinction between the perception of federal control (not desired) and a willingness to engage in national-level cooperation (bottom-up, grassroots). This appears to be true for many aspects of critical infrastructure protection. How effective can this approach be? Her presentation, along with the presentation by Lillian Rostad reported below, suggests that there is a significant amount of work that could be done to improve both security policy and security mechanisms applicable to healthcare systems.

In his presentation, Invited Essayist Brian Witten (Symantec) described an approach for using a separation kernel architecture, formal methods, and automated binary code analysis (model checking) to build highly assured systems. In his introduction he made two very interesting assertions. First, he claimed that Symantec has anecdotal evidence of a pre-zero-day attack being used to create a botnet of over a million nodes, and that it existed for over a week before it was detected. So he claims we may have reached the point where the time to exploit, defined as the time from when a vulnerability is reported to when it is exploited, has gone negative. Second, he suggested that the reason there have not been any massive DoS attacks for several years is that it is now in the financial best interest of the attackers to keep the Internet running. This presentation stimulated a great deal of discussion, with two predominant themes. The first was the observation by many veterans that aspects of Brian's approach have been tried before. The second was a question about the relative merits of separation kernels and hypervisors as a foundation for assured computing.

In the Classic Papers session, Jeremy Epstein (webMethods) provided a very good review of the Trusted X Window System work. His presentation was a good stimulus for thinking about the problems associated with building a truly trustworthy windowing system, and he illustrated very clearly the challenges of building highly assured systems. His paper is good reading for both veterans and newcomers to infosec.

For the second Classic Paper, Peter Neumann (SRI) presented conclusions based on what he has seen in his Risks Forum. Read the paper for some amazing stories. Peter expressed concern over emergent properties in complex systems and suggested composability analysis as a way to address them. Afterward, John McHugh suggested it might not be reasonable to expect a priori analysis to identify emergent properties. It could have become a long and very interesting discussion had the moderator not stepped in. Definitely a fun question to think about.

Other presentations that caught the reviewers' attention:

"Shamon: A System for Distributed Mandatory Access Control" by Jonathan McCune (CMU grad student), Trent Jaeger (Penn State), Stefan Berger, Ramone Cackeres, Reiner Sailer (all IBM)
The problem they considered was ensuring that corresponding partitions on separate Xen hosts can communicate with each other, akin to what the reviewer knew as distributed type enforcement in the SCC LOCKed Workstation days. They run a MAC gateway in a separate Xen partition. The gateway can communicate with all the partitions on the local host and with other gateways, and it is trusted to keep information from different partitions separate. For communication with other gateways, it uses SELinux labeled IPsec. McCune gave the presentation and did a good job.
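
The gateway's job reduces to a simple check, sketched below in purely conceptual form (this is not Shamon code; the coalition labels and message framing are invented for illustration): bridge traffic only between partitions that carry the same MAC label, and tag the tunnel with that label so the peer gateway can make the same decision.

    from typing import Optional

    # Conceptual sketch of a label-checking gateway (not Shamon code);
    # the coalition labels and message framing below are invented.
    ALLOWED_LABELS = {"acme_web", "acme_db"}

    def forward(local_label: str, remote_label: str, payload: bytes) -> Optional[bytes]:
        """Return the data to send over the labeled tunnel, or None to drop."""
        if local_label not in ALLOWED_LABELS:
            return None                  # partition is not part of any coalition
        if local_label != remote_label:
            return None                  # never bridge across different labels
        # In the real system the label rides in the labeled-IPsec security
        # association; here it is simply prepended for illustration.
        return local_label.encode() + b"|" + payload

    print(forward("acme_web", "acme_web", b"hello"))   # forwarded
    print(forward("acme_web", "acme_db", b"hello"))    # dropped -> None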

"Backtracking Algorithmic Complexity Attacks Against a NIDS" by Randy Smith, Cristian Estan, Somesh Jha (all UW Madison)
This won the best paper award. They provided examples of Snort rules that can take unacceptably long to execute because of the way the Snort engine evaluates rules with relative predicates. When a predicate evaluates to false, the engine ignores everything it has learned about the truth of other predicates and starts over (too much backtracking). As a result, an attacker can disable a system with a relatively small amount of bandwidth (4 kbps). They then introduced an approach for avoiding this problem by saving information about predicates that have already been evaluated. They also observe that Bro is not susceptible to these backtracking attacks.
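
The essence of the fix is easy to sketch: once a (predicate, offset) combination is known to fail, record it so the matcher never re-explores it. The toy Python matcher below only illustrates that memoization idea (it is not the authors' Snort modification, and the patterns and payload are invented); on the repetitive payload shown, a naive matcher does work that grows roughly as n^k for k patterns, while the memoized version stays polynomial.

    def match_rule(payload, patterns):
        """Toy multi-pattern rule matcher: each pattern must appear after the
        end of the previous pattern's match (loosely modeled on Snort's
        content/distance options)."""
        # Remember (pattern index, search offset) pairs already known to fail,
        # so backtracking never re-explores them -- the memoization defense.
        failed = set()

        def search(i, offset):
            if i == len(patterns):
                return True                          # every pattern matched
            if (i, offset) in failed:
                return False                         # already proven futile
            pos = payload.find(patterns[i], offset)
            while pos != -1:
                if search(i + 1, pos + len(patterns[i])):
                    return True
                pos = payload.find(patterns[i], pos + 1)   # backtrack
            failed.add((i, offset))
            return False

        return search(0, 0)

    # A repetitive payload creates many partial matches; without the 'failed'
    # set this call backtracks excessively, with it the work stays modest.
    payload = b"AB" * 500 + b"C"
    print(match_rule(payload, [b"AB", b"AB", b"AB", b"XYZ"]))   # False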

"Practical Attack Graph Generation for Network Defense" by Kyle Ingols, Richard Lipmann, Keith Piworarski (Lincoln Labs)
They created what they call a "multiple-prerequisite graph" and claim generation times that grow nearly linearly with the size of the network. They reported that it processed synthetic networks with 50,000 hosts in 4 minutes. The reviewer is uncertain what they assumed about the hosts. They use Nessus to create their network model, and they include vulnerabilities found in CVE and NVD. They also collect configuration rules from Sidewinder and Checkpoint firewalls. They have applied it to a live network of 250 nodes and found a previously unknown configuration error.
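
As a rough illustration of what such a tool computes (this is not the Lincoln Labs multiple-prerequisite algorithm; the hosts, reachability rules, and vulnerability flags below are invented), a breadth-first closure over "an already-compromised host can reach X, and X has an exploitable vulnerability" captures the basic idea:

    from collections import deque

    # Toy attack-reachability sketch (illustrative only).
    reachable_from = {                 # network reachability, e.g. from firewall rules
        "internet": {"web"},
        "web": {"app"},
        "app": {"db"},
    }
    remotely_exploitable = {"web", "app"}   # hosts with a remote vuln, e.g. from Nessus/CVE

    def compromised_hosts(start="internet"):
        """A host is compromised if an already-compromised host can reach it
        and it has an exploitable vulnerability."""
        owned, frontier = set(), deque([start])
        while frontier:
            src = frontier.popleft()
            for dst in reachable_from.get(src, ()):
                if dst in remotely_exploitable and dst not in owned:
                    owned.add(dst)
                    frontier.append(dst)
        return owned

    print(compromised_hosts())   # {'web', 'app'} -- db is reachable but not exploitable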

"A Study of Access Control Requirements for Healthcare Systems Based on Audit Trails from Access Logs", by Lillian Rostad and Ole Edsberg (Norwegian University of Science & Tech)
This team analyzed access logs from the Siemens DocuLive Electronic Patient Record (EPR) system to see how often there were exceptions to the defined role-based access control policy. The notion of an exception is part of the policy; it is used when, in the judgment of the person granting or requesting it, access is necessary for the treatment of the patient. They learned that about 25% of accesses are exceptions and that they are concentrated in certain wards. Their conclusion is that exceptions are clearly not "exceptions," and a different sort of policy is required.

"Automatic Evaluation of Intrusion Detection Systems" by Frederic Massicotte (Communications Research Centre), Francois Gagnon, Yvan Labiche, Lionel Briand, Mathieu Couture (Carleton University, Canada)
They are building a system to generate attack traffic, and they want to make their traffic (and their system?) freely available. They have about 125 exploits against both Linux and Windows systems. In evaluations of Snort and Bro, they found that both systems missed about 25% of the attacks, and Snort did a poorer job of discriminating between successful and unsuccessful attacks. They are looking for partners in this work; learn more at networksystems-security@crc.ca.

From the vendor track:

Trusted Storage - Dave Anderson (Seagate)
Seagate is coming out with a line of secure hard drives for PCs based on the TPM. There are some public papers at http://www.trustedcomputinggroup.org/home.

Using Predictive Analytics and Modeling to Improve Insider Threat Detection and Cyber Threat Identification - Peter Frometa (SPSS, Inc.)
His approach combined data mining and text mining (linguistics-focused). For detecting insider threats, he used cluster maps to identify typical behaviors and outliers. Clusters are tagged, then rulesets are built to characterize each cluster. As new users enter the system, they are characterized by cluster type.
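
A minimal sketch of that cluster-then-classify pipeline might look like the following (illustrative only: the activity features are invented, and scikit-learn's KMeans stands in for whatever SPSS actually uses):

    import numpy as np
    from sklearn.cluster import KMeans

    # Per-user activity features: [logins/day, files read/day, after-hours fraction]
    activity = np.array([
        [ 5,  20, 0.05],
        [ 6,  25, 0.10],
        [ 4,  18, 0.00],
        [40, 300, 0.90],   # unusual profile
    ])

    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(activity)
    print("cluster labels:", model.labels_)

    # A new user is characterized by the nearest cluster centroid;
    # distance to that centroid can flag outliers for review.
    new_user = np.array([[38, 280, 0.85]])
    cluster = model.predict(new_user)[0]
    dist = np.linalg.norm(new_user - model.cluster_centers_[cluster])
    print("assigned cluster:", cluster, "distance to centroid:", round(float(dist), 1))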

From the Works in Progress session:

Federated Identity Architecture Evaluation. Fragoso-Rodriguez, Laurent-Maknavicius, Incera-Vieguez (Instituto Tecnológico Autónomo de México)
They are developing metrics and a method for evaluating the Liberty Alliance, Shibboleth, and WS-Federation architectures.

An Open Source High Robustness VMM. John McDermott and Myong Kang at NRL.
They are using Xen to build an EAL6 system. They are trying to do everything right, and this reviewer has some hope that they might pull it off, which would be great.


Further ACSAC Commentary by Jeremy Epstein

Dixie Baker (SAIC) kicked off the conference with "Privacy and Security in Public Health: Maintaining the Delicate Balance between Personal Privacy and Population Safety", a discussion of security in healthcare systems from a US perspective. She was the author of a trailblazing 1997 paper about PCASSO, which discussed using MLS technology for healthcare systems, but its successes with MLS have not been replicated. Dixie started with a very informative overview of what public health means in the US and what types of information are routinely collected. She then surveyed the legal situation, focusing on HIPAA, which allows for release of the "minimum necessary" information without patient consent. Just as security and usability frequently collide, she described how privacy and health risk can collide or at least be interdependent. For example, while issuing X.509 certificates to health care workers has many advantages, in the case of an epidemic where door-to-door surveys are required in a very short time window, waiting for certificate issuance is not feasible. Adding to the usual security issues, health care information is collected at many different levels and not routinely shared. For example, localities and states collect information that is not passed up to the federal level. This can be critical because some information is passed up in an anonymized form with a "linkback" ID, but if, due to an epidemic, some people need to be contacted, the privacy of the linkback must be breached very quickly.

Jonathan McCune (CMU/IBM) presented "Shamon: A System for Distributed Mandatory Access Control". Their goal is to have machines shared (e.g., for web hosting companies that may be competitors) using a hypervisor that provides separation. They provide protected communication between instances of the same label (i.e., the same company) on different machines. They've built a prototype using mostly open source tools and have it generally working. A concern from the audience was that this isn't really MLS, just isolation, because there's no communication across labels. Their eventual plan is to support more sophisticated policies.

Michael Korman (Univ of Connecticut) presented "An Internet Voting System Supporting User Privacy". Electronic voting is obviously a hot topic, and they want to provide a transparent solution that allows voting over the Internet. This has obvious advantages (e.g., allowing absentee balloting without mail-ins) but significant disadvantages as well (opportunities for vote buying, even if they address anonymity, accuracy, etc.). They get accuracy by allowing voters to see their ballots after the election in such a way that the ballots can be totaled but other voters can't tell how they voted. Their scheme is based on homomorphic encryption. [JE comments: even if this is totally workable, which I doubt, it fails the grandma test - it's just not understandable to average voters - or even to this security-savvy but non-crypto expert.] They recognize some of the hard problems, such as ensuring that voters' machines haven't been compromised (they suggest booting from CD-ROM before voting), denial of service attacks (they suggest puzzles to prevent them), and vote buying and coercion (not easy to address). As Peter Neumann commented in the Q&A session, "Vote selling is the key problem in this design. This is a nice technological solution to something that's not the real problem."
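
For readers unfamiliar with the underlying primitive, the toy below shows the additively homomorphic property such schemes rely on, using textbook Paillier encryption with tiny hard-coded primes (this is not the authors' protocol and is nowhere near secure parameter sizes): anyone can multiply published ballot ciphertexts to obtain an encryption of the vote total without learning any individual vote.

    import math, random

    p, q = 293, 433                     # toy primes; real deployments use ~1024-bit primes
    n, n2 = p * q, (p * q) ** 2
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # inverse of L(g^lam mod n^2)

    def encrypt(m):
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # Each ballot encrypts 1 (a vote for A) or 0 (a vote for B).
    ballots = [encrypt(v) for v in (1, 0, 1, 1, 0)]

    # Multiplying ciphertexts adds the hidden plaintexts, so the published
    # ballots can be totaled without decrypting any single vote.
    tally = 1
    for c in ballots:
        tally = (tally * c) % n2

    print("votes for A:", decrypt(tally))          # 3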

Brian Witten (Symantec) started the second day with his talk "Engineering Sufficiently Secure Computing". After a review of attack and response statistics, he presented a set of axioms, starting with "defense must cover all weaknesses - offense needs just one" and "perfection is not possible - there are no silver bullets; for every move, there is a counter-move". He then proposed that we base security on four technologies: cryptography, separation kernels (an old idea, but worth revisiting), formal verification (perhaps to verify pieces of the separation kernels), and static analysis (to reduce the number of flaws in code). He then delved into the four technologies, bringing some new perspectives (e.g., Intel uses automated formal verification because they have no other choice - the verification team was tripling in size every time the engineering team doubled). Static analysis isn't perfect, but it's a lot cheaper than formal verification, and gets us much of the way there. Topics raised in the Q&A session included how this is revisiting 30 year old ideas, and Brian's note that chip vendors are now willing to use silicon to improve security, since there's less need for more raw speed.

As part of the DHS panel chaired by Doug Maughan, Dave Jevans (Anti-Phishing Working Group and IronKey) described the state of the art in fighting phishers. While it's not hard to get phishing sites removed, the phishers are determined, and are fighting the countermeasures. The volume of attacks has doubled in the last two months. Russian sites are selling tailored malware to attack specific sites, and no longer just the big sites like eBay and Citibank, but also small banks and credit unions (I guess they've figured out free enterprise!). They've come up with new ways of getting cash out through manipulation of thinly traded stocks, buying stock using commandeered accounts. Despite common belief, a majority of phishing sites are in the US with many in China and Korea (the latter because of ubiquitous broadband and weak security controls).

The most interesting of the Works In Progress (WIP) session was by David Perlowski (NSA) on Software Assurance Analysis Methodology. They've recognized that existing DoD schemes are insufficient to measure software security - Common Criteria isn't timely and doesn't cover all software, and C&A is inadequate. Individual services have had efforts, but they haven't addressed timeliness, repeatability, quality, etc., and don't scale. Their goal is to "let the code speak" by performing time-constrained evaluations that are goal-based, highly automated, and repeatable. They'll define a diverse set of metrics that can be automatically calculated from code (source, binary, or script) and apply them, with weightings, to produce an indicator. The idea is to turn this into a high-level statement about the code, with a percentage confidence. So far they've done some pilot evaluations with the results published internally at NSA, with a second round soon.

Also in the WIP session, but not a WIP, was David Bell's follow-up from last year's "Classic Paper". He concluded after last year's pessimistic talk that he had been too optimistic, and set out to correct that impression. He reviewed the mistakes NSA has made (and is making) including MISSI (which killed off commercial high assurance systems), SE-Linux (which popularized MLS but isn't high assurance), the failure to provide a path for MLS vendors to move from TCSEC to Common Criteria, etc. An excellent set of slides and his paper are available at www.selfless-security.org.

Chongkyung Kil (North Carolina State Univ) presented "Address Space Layout Permutation: Towards Fine-Grained Randomization of Commodity Software", one of several papers at the conference on this subject. While Microsoft Windows Vista provides 8 bits of address space randomization (which can be effectively defeated), others have been working on greater amounts of randomization. Kil's approach is for Linux and uses binary rewriting to modify programs before they are loaded. The risk is that if the rewriting isn't done perfectly, the program will crash strangely, and it will be hard to figure out why.
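
For readers who want to see how much randomization their own platform provides (this is only an observation script, not Kil's binary-rewriting technique; it assumes Linux with libc.so.6 present), sampling a shared library's load address across fresh processes and counting the bits that change gives a quick estimate:

    import ctypes, subprocess, sys

    def libc_symbol_address():
        # Address at which a libc symbol landed in this process's address space.
        libc = ctypes.CDLL("libc.so.6")
        return ctypes.cast(libc.printf, ctypes.c_void_p).value

    if len(sys.argv) > 1 and sys.argv[1] == "--child":
        print(libc_symbol_address())
    else:
        # Spawn fresh interpreters so each one gets its own randomized layout.
        samples = [int(subprocess.check_output([sys.executable, __file__, "--child"]))
                   for _ in range(20)]
        changed = 0
        for s in samples[1:]:
            changed |= s ^ samples[0]
        print("sample addresses:", [hex(s) for s in samples[:3]], "...")
        print("address bits that varied across 20 runs:", bin(changed).count("1"))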

David Whyte (Carleton University) explained "Addressing SMTP-based Mass-Mailing Activity within Enterprise Networks". His premise is to use a simple form of anomaly detection: machines other than mail servers are very unlikely to do MX lookups. This will catch nearly all bots that try to bypass the enterprise mail servers and send mail directly. ISPs aren't particularly interested, since they avoid this problem by blocking port 25, but his solution allows unblocking port 25 without increasing spam. However, in the escalating spam wars, all a spammer has to do is provide their own DNS server (which will admittedly increase the size of the bot) and this scheme stops providing any protection.
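
The detection heuristic itself is almost trivially simple, as the hedged sketch below suggests (the log format, addresses, and mail-server list are invented; a real deployment would watch live port-53 traffic rather than read a file):

    import csv, io

    # Hosts legitimately expected to issue MX lookups (hypothetical enterprise MTA).
    KNOWN_MAIL_SERVERS = {"10.0.0.25"}

    # Invented DNS query log: source IP, query type, queried name.
    dns_log = io.StringIO("""src_ip,qtype,qname
    10.0.0.25,MX,example.com
    10.0.3.17,A,intranet.local
    10.0.3.17,MX,hotmail.com
    10.0.3.17,MX,yahoo.com
    """)

    suspects = {}
    for row in csv.DictReader(dns_log):
        ip = row["src_ip"].strip()
        if row["qtype"].strip() == "MX" and ip not in KNOWN_MAIL_SERVERS:
            suspects[ip] = suspects.get(ip, 0) + 1

    for host, count in suspects.items():
        print(f"possible mass-mailer: {host} ({count} MX lookups)")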

The "Highlights from the 2006 New Security Paradigms Workshop" is always a great panel, and this year met expectations. Lt. Col. Greg Conti (USMA, but speaking for himself!) talked about just how much information Google collects, and using the AOL dataset showed how much information can be used to trace back to individuals. If you're going to Google yourself, don't do it from the same machine where you look at porn! Carrie Gates (CA Labs) & Carol Taylor (U of Idaho) challenged the established intrusion detection paradigms, noting that the assumptions made in the early days of IDS (attacks are anomalous, distinguishable, rare; anomalous activity is malicious, and attack-free data is available) are all incorrect. Additionally, the early work was done in non-networked environments, and the assumption was made that the results carried out, but the assumption has never been vetted. Among their recommendations are the development of meaningful sample data for training and testing IDSs, unlike the Lincoln Labs data which has many unrealistic characteristics. Richard Ford (U of Florida) noted that spam occurs because spammers find it economically lucrative, and if we can tip the economic balance we can drive them out of business by making their business model unstable. To do so, he proposed a scheme of installing as much adware as possible on a protected network (thus getting fees from those who pay for the installation) and just as quickly wiping them out. This will force those who pay for spyware installation to revisit their model. Subsequent phases try to similarly break the economic model using automated clickthroughs of the spam ads that don't lead to sales. He then (briefly) raised some of the ethical issues of trying to kill off spammers.

ACSAC will be back in Miami Beach in December 2007, although the hotel has not been determined (there was much grumbling that the hotel was too far from restaurants and the food in the hotel was overpriced).