A Summary of NDSS 2000


Mahesh V. Tripunitara

CERIAS and CPlane



The Network and Distributed System Security (NDSS) Symposium for the year 2000 was held February 2-4 in San Diego, California, USA. As with the previous symposium, the location was the very tasteful Catamaran Resort. The resort offered not only excellent facilities for the presentations and discussions, but also opportunities for activities outside the symposium.


The first day of NDSS 2000 consisted of pre-conference tutorials on such topics as Network Security Protocol Standards, Deployed and Emerging Security Systems for the Internet, Mobile Code Security, Cryptography, and Intrusion Detection technology. The remainder of this note summarizes the program events that took place over the next two days.


Please note that all the papers and several of the presentations are available online at http://www.isoc.org/ndss2000/proceedings/.

Day 1


The general chair, Steve Welke, and the program co-chairs, Gene Tsudik and Avi Rubin, began the symposium by welcoming all attendees and discussing the nature of NDSS.


Steve Welke pointed out that the symposium emphasizes applied work over more theoretical work in information security. Gene Tsudik remarked that the NDSS has a unique profile and offered some statistics: there were 51 submissions (compared with 36 for the previous symposium), of which 15 were accepted. Each paper received at least three reviews.


The first session, titled “Software Assurance,” was chaired by Gary McGraw. The first presentation, by D. Wagner, was based on a paper titled “A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities.” He pointed out that buffer overruns have been rising as a fraction of CERT advisories. He proposed recasting the problem as an integer range analysis problem involving two integers per buffer: the allocated size of the buffer and the number of bytes currently in use. The standard C library functions are then modeled as imposing constraints on the ranges of these two quantities, after which any algorithm for integer range analysis can be applied. He reported that two previously unknown buffer overflow problems were found in Sendmail, though neither had security implications.
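The core idea can be sketched in a few lines. The following toy range analysis (all names and constraint models are illustrative, not from Wagner's tool) tracks each buffer as two ranges, its allocated size and its bytes in use, models strcpy and strcat as constraints on those ranges, and flags a possible overrun when the in-use range may exceed the allocation:

```python
# Toy sketch of integer-range analysis for buffer overruns.
# Each buffer is tracked as two integer ranges: alloc(b) and len(b).
# Library calls impose constraints; an overrun is flagged when len
# may exceed alloc.

class Range:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

class Buffer:
    def __init__(self, alloc_hi):
        self.alloc = Range(alloc_hi, alloc_hi)   # bytes allocated
        self.len = Range(0, 0)                   # bytes in use

def model_strcpy(dst, src):
    # strcpy(dst, src): dst now holds as many bytes as src may hold
    dst.len = Range(src.len.lo, src.len.hi)

def model_strcat(dst, src):
    # strcat(dst, src): lengths add
    dst.len = Range(dst.len.lo + src.len.lo, dst.len.hi + src.len.hi)

def check(buf, name):
    if buf.len.hi > buf.alloc.lo:
        return f"possible overrun in {name}: len<={buf.len.hi}, alloc>={buf.alloc.lo}"
    return f"{name} ok"

# Hypothetical program: char dst[64]; strcpy(dst, user_input)
user_input = Buffer(1024)
user_input.len = Range(0, 1024)   # attacker-controlled, up to 1024 bytes
dst = Buffer(64)
model_strcpy(dst, user_input)
print(check(dst, "dst"))          # flags a possible overrun
```

Because the analysis works on ranges rather than concrete executions, it is conservative: it can report warnings that are not real overruns, which is one trade-off Wagner discussed.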


The second presentation, by R. Sekar, was based on a paper titled “User-Level Infrastructure for System Call Interposition: A Platform for Intrusion Detection and Confinement.” He proposed user-level system call interposition as a strategy for intrusion detection. The interposer has both pre- and post-system-call extensions. Its architecture is object-oriented: supervisor objects are layered on top of an architecture/OS-independent portion, which is in turn layered on top of an architecture/OS-dependent portion. He also presented performance statistics: the interposer has low overhead (2%) under both low and high loads for some applications (e.g., gzip), but higher overhead (35%) under high loads for others (e.g., httpd).
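As an analogy for the pre/post extension idea, the sketch below interposes on an ordinary Python function rather than a real system call (the actual system works at the system call boundary via OS tracing facilities): a supervisor object runs a pre-extension that can deny the call outright, and a post-extension that audits the result.

```python
# Illustrative supervisor with pre/post extensions around an operation.
# The pre-extension can deny the call; the post-extension audits it.

class Supervisor:
    def __init__(self, op, pre=None, post=None):
        self.op, self.pre, self.post = op, pre, post

    def __call__(self, *args):
        if self.pre and not self.pre(args):
            raise PermissionError(f"denied: {args}")
        result = self.op(*args)       # the "system call" itself
        if self.post:
            self.post(args, result)   # post-call extension
        return result

audit = []

def no_etc(args):
    # confinement policy: deny access under /etc/
    return not args[0].startswith("/etc/")

def log(args, result):
    audit.append((args, result))

guarded_open = Supervisor(lambda path: f"<fd for {path}>", pre=no_etc, post=log)
print(guarded_open("/tmp/x"))          # allowed and audited
# guarded_open("/etc/passwd") would raise PermissionError
```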


The second session, titled “Group and Multicast Security,” was chaired by Thomas Hardjono. The first presentation in the session, by O. Rodeh, was based on a paper titled “Optimized Rekey for Group Communication Systems.” He presented a distributed secret-key management approach for group communication, and contrasted it with centralized key management schemes such as keygraphs. His scheme uses a balanced tree to group principals, as keygraphs do, but each group chooses a leader who is in charge of negotiating mutual keys between groups. He then argued for the safety and scalability properties of his solution.
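The scalability argument common to keygraph-style schemes is easy to illustrate: with members at the leaves of a balanced binary key tree, a departing member knows only the keys on its leaf-to-root path, so only O(log n) keys need replacing rather than O(n). A minimal sketch over a complete binary tree with heap numbering (the distributed, leader-based negotiation of the paper is not modeled):

```python
import math

# Members are leaves of a complete binary key tree; each member holds
# every key on its leaf-to-root path. On a leave, exactly those path
# keys must be replaced, so a leave costs O(log n) new keys.

def path_to_root(leaf, n_leaves):
    """Node indices (heap numbering, root = 1) from a leaf to the root."""
    node = n_leaves + leaf          # leaves occupy indices n .. 2n-1
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

def keys_to_replace(leaving_leaf, n_leaves):
    # every key the leaving member knew must change
    return path_to_root(leaving_leaf, n_leaves)

n = 1024
changed = keys_to_replace(leaving_leaf=5, n_leaves=n)
print(len(changed))                 # 11, i.e. log2(1024) + 1
```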


The second presentation, by R. Canetti, was based on a paper titled “An IPSec-based Host Architecture for Secure Internet Multicast.” He spoke about a host-based architecture for secure multicast. He focused on a Multicast Internet Key Exchange (MIKE) module that is deployed on each host to handle group membership and control, and of a Sender Authentication Module (SAM) that interacts with MIKE for key material. He pointed out that MIKE is only a framework that can deploy within it one of several multicast key exchange protocols. He then presented some test results based on a deployment on Linux systems.


The third session was a panel moderated by James Ellis and Gary McGraw. The title of the session was “The Economics of Security.” The panelists were Nicholas Economidis, Nick Pasciullo, Fred Chris Smith, and Laurie Wagner. The panel discussed the business and legal aspects of information security. Nick Economidis gave an industry perspective, Nick Pasciullo and Fred Smith gave legal perspectives, and Laurie Wagner gave a non-profit organization’s perspective.


Nick Economidis opined that information security currently consists of guarantees backed by security assessments, and that in the future it will involve sureties from software companies. Nick Pasciullo lamented that lawsuits are ineffective and that technological solutions are incomplete, and argued that security will become pertinent and effective only when Wall Street includes it among the criteria used to value a company.


Fred Smith gave a very entertaining talk replete with pertinent cartoons, and opined that litigation will drive product development from the information security perspective. He predicted that false insurance claims pertaining to information security will soon grow into a huge illegal business. Laurie Wagner pointed out that property damage lies at the heart of information security issues: only if property damage is involved is there a pertinent security issue. Most questions to the panelists concerned insurance and legal issues around information security, and the cost-benefit trade-offs of security solutions.


The final session of the day, titled “Protocols,” was chaired by Marc Dacier. The first presentation in the session was by A. Perrig, and he spoke on “A First Step Towards the Automatic Generation of Security Protocols.” He presented an automatic protocol generator that takes as input the security properties of the desired protocol (encoded in a specification language), a metric function, and an “initial setup.” It searches a space of candidate protocols and finds one that matches the inputs. The metric function is assumed to be monotonically non-decreasing, and typically captures the performance overhead associated with each protocol primitive. He motivated the discussion with an example, discussed the generation of an optimal protocol for a given input, and described a pruning algorithm that speeds up the generator.
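The monotonicity assumption is what makes pruning sound: once a partial protocol's cost reaches that of the best complete protocol found so far, no extension of it can do better. The branch-and-bound sketch below uses made-up primitives, costs, and a stand-in satisfies() check, purely to illustrate the search structure, not the paper's generator:

```python
# Branch-and-bound over sequences of protocol primitives. The cost
# metric only grows as primitives are appended, so partial protocols
# at or above the best known cost are pruned without losing optimality.
# Primitives, costs and satisfies() are hypothetical stand-ins.

PRIMITIVES = {"nonce": 1, "sign": 3, "encrypt": 2}

def satisfies(protocol):
    # stand-in for checking the desired security properties
    return "nonce" in protocol and "sign" in protocol

def search(max_len=3):
    best, best_cost = None, float("inf")
    stack = [((), 0)]
    while stack:
        proto, cost = stack.pop()
        if cost >= best_cost:
            continue                  # prune: the metric can only grow
        if satisfies(proto):
            best, best_cost = proto, cost
            continue
        if len(proto) < max_len:
            for prim, c in PRIMITIVES.items():
                stack.append((proto + (prim,), cost + c))
    return best, best_cost

print(search())   # finds a cost-4 protocol containing nonce and sign
```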


The second presentation was titled “A Revocation, Validation and Authentication Protocol for SPKI-based Delegation Systems,” by Y. Kortesniemi. He stressed the advantages of the Simple Public Key Infrastructure (SPKI) over Access Control Lists (ACLs) for authorization, and then focused on the certificate validation and revocation problem for SPKI. He commented that while validation and revocation have been discussed before in the context of SPKI, several details were missing. He filled in some of these details, including the state machine for a verification protocol, and proposed several changes to SPKI.


The last presentation was titled “Secure Border Gateway Protocol – Real World Performance and Deployment Issues,” by S. Kent. He discussed a prototype implementation and proof-of-concept deployment of S-BGP. The tests addressed two issues: whether S-BGP actually provides the intended protection for control traffic, and how much overhead S-BGP introduces over BGP. His results were encouraging, and he proposed security-related enhancements to the Internet infrastructure that would further improve performance and security in this context.


There was then an impromptu “Birds of a Feather” session by Peter Brundrett, from the Windows 2000 security group at Microsoft Corporation, on “The Security Architecture of Windows 2000.” He spoke extensively on the security features available within Windows 2000. In particular, he pointed to Kerberos v5, L2TP and IPSec support, and access to security services via the SSPI and CryptoAPI interfaces. He also discussed the protocols used to negotiate security parameters between clients and servers, and how tightly security is integrated into Windows 2000 networking.


Prof. Gene Spafford from the Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University was the banquet speaker for the evening. He gave a captivating and entertaining talk on the Y2K-related panic. He delved into history to discuss similar episodes, such as the panic surrounding the year 1000 (Y1K). He said that the problem lay in technical people being unable to place and see things in a broader context, which contributes to the lack of public awareness, and he opined that IT professionals need to be aware of the role of the media and the law. In conclusion, he pointed out that we lose more from productivity lost to software unreliability than from break-ins, and that to promote reliability we need to move towards stripped-down systems rather than cramming more features into them. He warned that laws addressing software quality are being drafted and will soon have an impact on the IT industry.

Day 2


The first session of Day 2 was titled “Protocols II” and was chaired by Paul Van Oorschot. The first presentation was based on a paper titled “Analysis of a Fair Exchange Protocol,” by V. Shmatikov. He spoke about a particular contract-signing protocol, and the use of the Murφ finite-state analysis tool to expose weaknesses in the protocol. Murφ also aids in repairing the protocol; indeed, the talk’s intent was to showcase the tool.


The second presentation, by M. Steiner, was based on “Secure Password-based Cipher Suite for TLS.” He spoke about the Transport Layer Security (TLS) protocol and its predecessor, SSL, and gave several reasons why the use of a PKI as part of TLS is unsuitable for some applications; one reason was that a PKI is not “lightweight” enough. He then discussed the EKE family of password-based key exchange protocols, and the integration of one variant, DH-EKE, into TLS.


The final presentation in the session was by T. Rabin, on “Chameleon Signatures.” Her presentation addressed two problems: sender non-repudiation and controlled dissemination. She discussed chameleon hash functions, which have the property that the holder of “trapdoor information” (which acts like a private key) can easily find several inputs that hash to a particular value. Using chameleon hash functions together with a signature algorithm, she showed how it can be arranged that the sender cannot repudiate a signature, since he does not have access to the trapdoor information, while the receiver cannot disseminate the information in an uncontrolled fashion, since he is capable of finding several inputs that hash to the value attached to the document. She also discussed the practical implications of the scheme.
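The trapdoor property is easy to demonstrate in the discrete-log setting. The toy sketch below (tiny parameters chosen for illustration only; a real scheme uses cryptographically large groups) shows that the trapdoor holder can find a second message/randomizer pair hashing to the same value, while anyone can compute the hash itself:

```python
# Toy discrete-log chameleon hash. The trapdoor x lets its holder
# find collisions at will; without x, finding them is as hard as
# computing discrete logs in the group.

p, q, g = 23, 11, 2          # p = 2q + 1; g generates the order-q subgroup
x = 7                        # trapdoor (kept secret by the recipient)
y = pow(g, x, p)             # public hashing key

def ch(m, r):
    """Chameleon hash CH(m, r) = g^m * y^r mod p."""
    return (pow(g, m % q, p) * pow(y, r % q, p)) % p

def collide(m, r, m2):
    """With the trapdoor, find r2 so that CH(m2, r2) == CH(m, r)."""
    return (r + (m - m2) * pow(x, -1, q)) % q

h = ch(m=3, r=5)
r2 = collide(m=3, r=5, m2=9)
print(h == ch(9, r2))        # True: a collision, easy only with x
```

The collision works because CH(m, r) = g^(m + x·r) mod p, so any (m2, r2) with m2 + x·r2 ≡ m + x·r (mod q) hashes to the same value; solving for r2 requires dividing by x, i.e., knowing the trapdoor.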


The second session was titled “Intrusion Detection” and was chaired by Douglas Maughan. The first presentation, by H. Debar, was based on “A Lightweight Tool For Detecting Web Server Attacks.” He discussed why “standard” network intrusion detection tools are inappropriate in the specific context of detecting intrusions against web servers, given the constraint that detection must be in “real time.” He then presented an informal categorization of attacks against web servers, and a signature-based scheme to detect those types of attacks. He also presented an architecture to support the scheme, along with statistics from running the tool at a commercial site for extended periods of time.
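A signature-based check over requested URLs can be kept cheap enough for near-real-time use. The sketch below uses two classic example signatures (not the paper's rule set) to show the shape of such a detector:

```python
import re

# Minimal signature-based detector over web server request URLs.
# Each signature is a pattern; a hit is reported as the request is seen.

SIGNATURES = [
    ("phf CGI probe", re.compile(r"/cgi-bin/phf")),
    ("directory traversal", re.compile(r"\.\./")),
]

def scan(url):
    """Return the names of all signatures matching the requested URL."""
    return [name for name, pat in SIGNATURES if pat.search(url)]

print(scan("/cgi-bin/phf?Qalias=x"))   # ['phf CGI probe']
print(scan("/index.html"))             # []
```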


The second presentation, by J. Loyall, was based on “Building Adaptive and Agile Applications Using Intrusion Detection and Response.” He spoke about the CORBA DOC paradigm and the Quality Objects (QuO) framework that is based on it. He then discussed the use of QuO as middleware for intrusion detection. The middleware is typically situated between a client application and its corresponding ORB proxy. He presented integration scenarios, especially of security within QuO, and experiences to date from the use of such a framework with COTS software.


The third session was chaired by Virgil Gligor and was titled “Distributed Systems.” The first presentation was by D. Shands on “Secure Virtual Enclaves: Supporting Coalition Use of Distributed Application Technologies.” She presented Secure Virtual Enclaves (SVE) as an infrastructure for multiple organizations to share information based on a security policy. She argued that SVE can be incorporated into COTS as middleware, with no change to existing code. She presented the SVE component architecture and the notion of enclaves as it pertains to SVE. She also presented example policies that can be supported within the framework. She commented on the performance and scalability features based on a prototype implementation.


The second presentation, by K. Hildrum, was based on “Security of Encrypted rlogin Connections Created with Kerberos 4.” She showed that rlogin sessions protected using Kerberos v4 are susceptible to TCP session hijacking. Her observations indicated that the Kerberized version of rlogin (and rsh) does not use the “standard” Kerberos encryption techniques that are immune to replay, but an alternative that is not. She pointed out (as did an audience member who happens to be one of the creators of Kerberos) that this flaw is not fundamental to Kerberos, but to its implementation in version 4.


The final presentation, by M. Humphrey, was based on “Accountability and Control of Process Creation in Metasystems.” He spoke about the metasystem concept: it involves the ability to present several heterogeneous entities across multiple administrative domains as part of a single system. He spoke about the problem of user identification and access control within such a system. He presented a specific metasystem, Legion, and addressed those problems in the context of that system. He presented a few different design approaches to the security subsystem and discussed their benefits and disadvantages.


The final session of the day, and of the conference, was a panel titled “Red Teaming and Network Security.” The moderator was Douglas Maughan, and the panelists were Brad Wood, Sami Saydjari and Michael Puldy. The panelists first spoke about the basis for red teaming and the attributes of “good quality” red teaming. They argued that red teaming can indeed be made scientific, with its basis in software testing approaches. They also spoke about the limitations of red teaming, and argued for its appropriateness and efficacy in today’s information security environment. They presented real-world statistics on red teaming and its costs, and stressed the differences between adversaries modeled within a red teaming scenario and real-world adversaries.


As the year 2000 conference indicates, NDSS continues to be a high-quality information security conference that attempts to include all important aspects of information security in its agenda. There were over 175 attendees from 19 different countries, spanning various fields within security and drawn from industry, academia and government. The conference has only one track, ensuring that attendees do not get “fragmented.” This author looks forward to attending the next conference. The call for papers is now out, and the website for it is http://www.isoc.org/ndss01/.