NSPW 2001 Conference Report
Cloudcroft, New Mexico, USA, September 10-13, 2001
Review by Mary Ellen Zurko
October 2, 2001


The New Security Paradigms Workshop was held September 10-13, 2001, in Cloudcroft, New Mexico, USA. It was sponsored by the Association for Computing Machinery with support from the Department of Defense, USA, and SEI/CERT. The first session began on Tuesday, September 11, after a recognition of the tragedy of the attacks and of various participants' needs to contact loved ones.

Session 1, Creative Mathematics, was chaired by Mike Williams.

The first paper was "Computational Paradigms and Protection" by Simon N. Foley and John P. Morrison. Simon presented. Their security model uses the sequencing control in parallel and concurrent programming to specify access control over the states in a transaction. In the traditional imperative paradigm, the programmer must explicitly specify the sequencing constraints on operations. The availability and coercion paradigms take a sequence of operations and drive it from either the beginning or the end result. In the availability paradigm, operations run when their input data is available. In the coercion paradigm, operations are executed when their results are needed. The authors suggest that casting protection in the availability or coercion styles provides the basis for more flexible and distributed control over the sequencing and mediation of the operations. They use the Condensed Graph model to specify the flow and triggers. An operation may be scheduled to a particular security domain if the domain is permitted to execute the operation. The example equates these permissions with user roles. A tenaciously protected operation will only produce results in an appropriate domain, while a fragilely protected operation may produce null if it is not in an appropriate domain when it fires. They have a prototype implementation of this system.
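
As a rough illustration of the availability and coercion styles, the following minimal Python sketch (with invented names such as Op and Domain; it is not the authors' Condensed Graph implementation) contrasts an operation that fires eagerly when its inputs arrive with one whose firing is deferred until its result is demanded, and shows tenacious versus fragile protection at firing time.

    # Illustrative sketch only; Domain, Op, and the firing rules below are assumptions.
    class Domain:
        def __init__(self, name, permitted_ops):
            self.name = name
            self.permitted_ops = set(permitted_ops)

    class Op:
        def __init__(self, name, fn, tenacious=True):
            self.name = name
            self.fn = fn
            # tenacious: refuse to produce a result outside a permitted domain;
            # fragile: fire anyway and yield a null result.
            self.tenacious = tenacious

        def fire(self, domain, *inputs):
            if self.name not in domain.permitted_ops:
                if self.tenacious:
                    raise RuntimeError(self.name + " must migrate to a permitted domain")
                return None
            return self.fn(*inputs)

    # Availability paradigm: fire as soon as the inputs are available.
    def eager_eval(op, domain, inputs):
        return op.fire(domain, *inputs)

    # Coercion paradigm: defer firing until the result is actually demanded.
    def lazy_eval(op, domain, inputs):
        return lambda: op.fire(domain, *inputs)

    payroll = Op("compute_pay", lambda hours, rate: hours * rate, tenacious=False)
    hr_domain = Domain("HR", {"compute_pay"})
    guest_domain = Domain("guest", set())

    print(eager_eval(payroll, hr_domain, (40, 25)))   # 1000
    thunk = lazy_eval(payroll, guest_domain, (40, 25))
    print(thunk())                                    # None: fragile protection fired outside HR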

The next paper was "Secure Multi-Party Computation Problems and their Applications: A Review and Open Problems" by Wenliang (Kevin) Du and Mikhail J. Atallah. Kevin presented. After an overview of what Secure Multi-party Computation (SMC) is and the related work in the area, Kevin discussed their framework for identifying and defining SMC problems for a spectrum of computation domains. Their transformation framework systematically transforms normal computations into SMC computations. Computations can be multi-input (often two) or single-input. The inputs are considered private, and sometimes the results are too; in the latter case, some participating party is not allowed to know the results. The SMC model assumes two parts of input coming from two different parties, each of whom keeps their input private. In the single-input case, the input is divided into two data sets. In the homogeneous transformation, each data item maintains its atomicity; in the heterogeneous transformation, each data item is split in two. Problems identified with this framework include privacy-preserving (PP) database queries, PP data mining, PP intrusion detection, PP statistical analysis, PP geometric computations, and PP scientific computations.
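
To make the "split each data item in two" idea concrete, here is a minimal sketch (my own illustration, not from the paper) using additive secret sharing: each private item is divided into two random shares held by different parties, and simple aggregate results can be reconstructed without either party revealing its share of any individual item. The function names split and reconstruct are assumptions.

    import random

    MODULUS = 2**32

    def split(value):
        """Split one data item into two random shares that sum to the value."""
        share_a = random.randrange(MODULUS)
        share_b = (value - share_a) % MODULUS
        return share_a, share_b

    def reconstruct(share_a, share_b):
        return (share_a + share_b) % MODULUS

    # Party A and party B each hold one share of every item in the data set.
    data = [13, 42, 7]
    shares = [split(x) for x in data]
    a_shares = [a for a, _ in shares]
    b_shares = [b for _, b in shares]

    # The parties can exchange sums of shares to compute a joint total
    # without revealing the individual items.
    total = reconstruct(sum(a_shares) % MODULUS, sum(b_shares) % MODULUS)
    print(total)  # 62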

The next paper was "Model-Carrying Code (MCC): A New Paradigm for Mobile-Code Security" by R. Sekar, C.R. Ramakrishnan, I.V. Ramakrishnan and S.A. Smolka. Sekar presented. The primary motivation is that with existing approaches, neither the producer nor the consumer can unilaterally determine the security needs of a mobile program. This vision includes consumers refining their policies when mismatches occur. In addition to conformance with the consumer's policy, their approach checks if the model represents a safe approximation of program behavior, based on the particular execution of the program at the consumer's site. With MCC, mobile code comes equipped with an expressive yet concise model of the code's security relevant behavior. The code can be restricted to that model, as accepted by the consumer, when it runs. Alternatively, the consumer can trust a signature over the code and model, or rely on proof-carrying code to check the code against the model. The approach is applicable to code written in C or C++; it is not language-specific or limited to type-safe languages. It is an alternative to approaches such as proof-carrying code and Java security. They use extended finite-state automata to represent program models. They have looked at compile-time analysis and machine learning to generate the models.
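
As a toy illustration of checking code against a behavioral model (a hedged sketch; the states, events, and policy below are invented and far simpler than the extended finite-state automata used in MCC), the consumer can walk an observed trace of security-relevant events through the model and its own policy:

    # Hedged sketch: a finite-state model of security-relevant events and a
    # conformance check; not the authors' MCC implementation.
    MODEL = {
        # state -> {event: next_state}
        "start":   {"open_config": "reading"},
        "reading": {"read": "reading", "close": "done"},
        "done":    {},
    }

    CONSUMER_POLICY = {"open_config", "read", "close"}  # events the consumer allows

    def conforms(trace, model, policy, start="start"):
        state = start
        for event in trace:
            if event not in policy:
                return False, "policy violation: " + event
            transitions = model.get(state, {})
            if event not in transitions:
                return False, "model violation: " + event + " in state " + state
            state = transitions[event]
        return True, "ok"

    print(conforms(["open_config", "read", "read", "close"], MODEL, CONSUMER_POLICY))
    print(conforms(["open_config", "connect"], MODEL, CONSUMER_POLICY))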

Session 2, Survivability, was chaired by Abe Singer.

The first paper of that session was "Heterogeneous Networking - A New Survivability Paradigm" by Yongguang Zhang, Son K. Dao, Harrick Vin, Lorenzo Alvisi, and Wenke Lee. Yongguang presented. Their paper proposes systematically increasing a network's heterogeneity to improve its defense capabilities without sacrificing interoperability. Their diversity space diagram organizes the functional capabilities of a network (protocols, routers) along the dimensions of operating system, communication medium, and service model. The distance between any two elements would represent their vulnerability to attacks. A key question is how many elements a survivable network needs to support. In principle, composing different selections of network elements from each functional capability layer can yield different versions of an end-to-end network service. Their methodology supports network reconstitution through heterogeneous replication and dynamic reconfiguration. IDS reports drive the dynamic behavior. As an example of their heterogeneous service model, they show the standard client/server WWW application over the Internet being replicated with a broadcast/filter information dissemination application over a satellite network. Discussion points that will be integrated into the final paper include problems with single physical points of failure (the dreaded backhoe attack) and studies showing that different programmers given the same problem produce solutions with overlapping vulnerabilities.
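
A crude way to picture the diversity space (my own sketch under simplifying assumptions, not the authors' model): treat each version of an end-to-end service as a point along the three dimensions and count how many dimensions two versions differ on, as a proxy for how unlikely they are to share vulnerabilities.

    from itertools import combinations

    # (operating system, communication medium, service model) for each service version.
    versions = {
        "v1": ("Linux",   "terrestrial Internet", "client/server WWW"),
        "v2": ("Solaris", "terrestrial Internet", "client/server WWW"),
        "v3": ("Linux",   "satellite",            "broadcast/filter"),
    }

    def diversity(a, b):
        # Number of dimensions on which the two versions differ (illustrative metric).
        return sum(1 for x, y in zip(a, b) if x != y)

    for (na, va), (nb, vb) in combinations(versions.items(), 2):
        print(na, nb, diversity(va, vb))
    # A survivable service would prefer replicas with high pairwise diversity.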

The next paper was "Safe and Sound: a safety-critical approach to security (Position Paper)" by Sacha Brostoff and M. Angela Sasse. Sacha presented. Their emphasis is on the socio-technical design of security. They note that safety-critical systems design has goals and issues similar to those of security design. One similarity is that failures in both types of systems may result in an attribution of failure that does more to identify who to blame than it does to fix the problem. One difference they point out is that violation of security rules is encouraged in certain contexts to identify security flaws. They suggest Reason's Generic Error Modeling System (GEMS) as a starting point. It identifies three error types: slips (attentional failures), lapses (memory failures), and mistakes (intended actions that lead to unintended results). Together with violations, these form the class of unsafe acts. An organization is described by decision-makers, line managers, preconditions, productive activities, and defenses. The model's distinction between active and latent failures offers a way to identify and address security issues that involve human behavior.

Session 3 was a discussion session, chaired by Bob Blakley, based on Victor Raskin's "Ontology in Information Security: A Useful Theoretical Foundation and Methodological Tool". Victor argues that the security community needs an ontology. He has seen researchers argue about the definition of terms such as anonymity, unlinkability, unobservability, and pseudonymity. An ontology is a highly structured system of concepts covering the processes, objects, and attributes of a domain in all of their pertinent complex relationships. An ontology organizes and systematizes all phenomena in a research purview. Most approaches gain from the induced modularity. It can predict additions from the full combinatorics of the compatible properties. Discussion questions included an ontology's relationship to glossaries, how practitioners would use and profit from it, and whether or not an ontology makes it easier to miss security flaws that have gone unnamed.

The first session on Wednesday, September 12, was Session 4: Innovative Solutions, chaired by Carla Marceau.

Carla also stepped in and presented "AngeL: A Tool to Disarm Computer Systems" for Danilo Bruschi and Emilia Rosti. This tool attempts to stop distributed denial of service attacks at the hosts that are used, without their owners' knowledge, to launch the attacks, as opposed to more traditional approaches that defend from the target side. The tool works for DoS attacks targeted at the local host as well. It can currently detect and block more than 70 documented attacks. They had presented their initial concept of connecting disarmed hosts at NSPW 2000. One of the things AngeL does is wrap the execve() system call. It checks the contents of environment variables for suspicious characters, it checks the status and privileges of the calling program, and it checks the parameters. There is also a module that can be integrated into the personal firewall capability of Linux, which looks for attacks that exploit network, transport, and application layer protocol vulnerabilities. A protection mechanism to keep the tool from being removed was also implemented. AngeL is a loadable kernel module. An MD5-hashed password is associated with it during loading, and the password must be written to the write-only /dev/angel device to allow removal. Some performance evaluation was also included in the paper.
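
The execve() wrapping can be pictured with a small user-space sketch (an assumption for illustration; AngeL itself does this inside a Linux kernel module, and the "suspicious" character set here is invented): check the environment and arguments before allowing the call to proceed.

    import os

    SUSPICIOUS = set("\x90;|`")   # e.g. NOP-sled bytes and shell metacharacters (illustrative)

    def guarded_execve(path, argv, env):
        for name, value in env.items():
            if any(ch in SUSPICIOUS for ch in value):
                raise PermissionError("blocked: suspicious characters in $" + name)
        for arg in argv:
            if any(ch in SUSPICIOUS for ch in arg):
                raise PermissionError("blocked: suspicious characters in arguments")
        os.execve(path, argv, env)   # only reached if every check passes

    # Example: the environment check blocks this call.
    try:
        guarded_execve("/bin/ls", ["ls"], {"PATH": "/bin", "EVIL": "\x90" * 64})
    except PermissionError as e:
        print(e)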

The next paper was "Survival by Defense-Enabling" by Partha Pal, Franklin Webber and Richard Schantz. Partha presented. Their work attempts to give applications attack survival and intrusion tolerance even when their environment is untrustworthy. They emphasize survival by defense, which aims to frustrate an attacker if protection fails and the attacker gains some privilege. This work assumes the ability to modify or extend the design of critical applications. The paper focuses on corruption that results from a malicious attack exploiting flaws in an application's environment. They discuss slowing down the attacker's acquisition of privilege by distributing the application's parts across domains, so that the attacker must accumulate privilege concurrently across a set of domains. Requiring application privilege separate from domain administrator privilege can also slow down attackers, as can redundancy, monitoring, and adaptation. The paper considers both direct attacks on the application itself and indirect attacks that target the resources applications need. Pro-active defensive adaptation can further slow the attacker. While their emphasis is on sequential attacks, they are looking at rapid reaction to anomalies for non-sequential attacks (such as DDoS).

The next paper was "A Trusted Process to Digitally Sign a Document" by Boris Balacheff, Liqun Chen, David Plaquin and Graeme Proudler. Graeme presented. This approach relies on the Trusted Computing Platform Alliance. The trusted process creates a signature over a digital image that represents the document. It uses a trusted display controller (TDC) and a smart card owned by the signer. The method relies on protected communications between the TDC and the user's smart card, and on privileged access to the computer's display. The trusted display controller is part of the video processing path and can display video data on a monitor without interference or subversion by any software on the platform. The smart card is able to authenticate the trusted display controller and demonstrate the results of that authentication to the signer via the trusted display controller, which uses a user-specified thumbnail image to mark its displays. This thumbnail seal image should be written to the smart card securely. The smart card signs image data on the authority of the TDC without direct authorization from the signer. The TDC generates an ASCII string nonce, which is also displayed with the thumbnail seal image. If the user wants to sign the document, they type the nonce on the normal keyboard.
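
The flow can be sketched roughly as follows (all class names and calls are illustrative stand-ins, not the TCPA design or an actual smart card API): the TDC displays the document image with the seal thumbnail and a fresh nonce, the user demonstrates intent by typing the nonce back, and only then does the smart card sign a digest of the image.

    import hashlib, secrets

    class TrustedDisplayController:
        def present(self, document_image, seal_thumbnail):
            self.nonce = secrets.token_hex(4)          # short nonce shown on the trusted display
            print("[TDC display] seal=" + seal_thumbnail + " nonce=" + self.nonce)
            return self.nonce

        def confirm(self, typed_nonce):
            return typed_nonce == self.nonce           # typing the nonce demonstrates user intent

    class SmartCard:
        def sign(self, data):
            # Stand-in for the card's private-key signature operation.
            return hashlib.sha256(b"card-private-key" + data).hexdigest()

    tdc, card = TrustedDisplayController(), SmartCard()
    image = b"rendered document image bytes"
    shown_nonce = tdc.present(image, seal_thumbnail="my-seal.png")
    typed = shown_nonce                                # the user retypes the displayed nonce
    if tdc.confirm(typed):
        print("signature:", card.sign(hashlib.sha256(image).digest()))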

Session 5, Less is More, was chaired by Sami Saydjari.

The first paper was "NATE - Network Analysis of Anomalous Traffic Events, a Low-Cost Approach" by Carol Taylor and Jim Alves-Foss. Carol presented. Their work is specifically designed for high-speed traffic and low maintenance. It features minimal traffic measurements, an anomaly-based detection method, and a limited attack scope. The expectation is that the anomaly-based approach combined with the simplified design will be more efficient in both operation and maintenance than other lightweight approaches. NATE only measures packet headers. They monitor counts of the TCP flags and the number of bytes transferred for each packet, and they aggregate sessions based on source and destination IP and port. They use cluster analysis, a multivariate technique, to find normal groups of TCP/IP sessions, and they use Principal Components Analysis for data reduction. They evaluated their method against the Lincoln Labs data set. They found it was successful in identifying the attacks that could be identified from analyzing network headers, and it had a low false positive rate.
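
A compact sketch of this style of detection (synthetic data and scikit-learn as assumptions; it is not the authors' code or their exact feature set): reduce simple per-session header counts with PCA, cluster the normal sessions, and flag new sessions that sit far from every cluster center.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Features per session: [SYN count, ACK count, FIN count, RST count, bytes transferred]
    normal = rng.normal(loc=[1, 20, 1, 0, 5000],
                        scale=[0.2, 4, 0.2, 0.1, 800], size=(500, 5))

    pca = PCA(n_components=2).fit(normal)
    reduced = pca.transform(normal)
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(reduced)

    # Distance to the nearest cluster center for normal sessions sets the threshold.
    threshold = np.percentile(np.min(clusters.transform(reduced), axis=1), 99)

    def is_anomalous(session):
        d = np.min(clusters.transform(pca.transform([session])), axis=1)[0]
        return d > threshold

    print(is_anomalous([1, 19, 1, 0, 5200]))   # header counts close to normal traffic
    print(is_anomalous([40, 0, 0, 30, 60]))    # SYN/RST-heavy probe, far from the clusters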

The next paper was "Information Security is Information Risk Management" by Bob Blakley, Ellen McDermott, and Dan Geer. Bob presented. The paper argues that information security technology deals with only a small fraction of the problem of information risk, and that the evidence increasingly suggests it does not reduce information risk very effectively. Generally speaking, businesses manage risk as part of their day-to-day operations. They may transfer liability for an adverse event to another party. A business may indemnify itself against the consequences, either by pooling with other businesses (insurance policies) or by hedging, placing a bet that the adverse event will happen (options). Risk can be mitigated by system or process redesign, or by reducing the damage that is likely to occur (building codes). Businesses can also retain risks, and either set aside funds to offset the cost or not. A table of information security products and processes shows them to be heavily clustered in the mitigation category. The FBI/CSI survey shows nearly universal deployment of security technology alongside rapidly and steadily rising losses from security incidents. Information security risk assessments should focus on quantifying risks, and security technology development and selection should be based on quantitative studies of effectiveness. The information that needs to be collected to do this includes vulnerabilities, incidents, losses, and countermeasure effectiveness. The authors ask: if the IT security industry can design countermeasures and counsel clients on how to defend their systems, why can't _we_ help underwriters develop assessment and underwriting tools and train claims professionals in the intricacies of IT losses? Do we have something more important to do?

Session 6 was a panel celebrating the 10th NSPW, called "The New Security Paradigm Workshop: Boom or Bust?" Steven J. Greenwald was Panel Chair. The panel statements were "Neither Boom Nor Bust" by Hilary H. Hosmer, "Tracking Influence Through Citation Index Comparisons and Preliminary Case Studies" by Mary Ellen Zurko, and "Thinking in an Age of Instant Communication; Communicating in a Time of Reflective Thought" by Marv Schaefer. Steve asked the question: has NSPW been effective for advancing new ideas, challenging old ones, and encouraging new authors? Paradigm shifts predicted by the first NSPW authors included a shift to application-level security, decentralized interoperable networks, systemic flaw reporting and correction, and enterprise modeling of the sociotechnological aspects of computers. Holly pointed out that the boom and bust model is not applicable. Her paper notes that it is a dynamic model with delayed feedback, resulting in the failure of the system to adjust rapidly enough to reach sustainable equilibrium; most new paradigms take at least a generation to win general acceptance, when the holders of the old paradigm die off. Mez presented some citation index numbers from CiteSeer comparing NSPW, the Computer Security Foundations Workshop, and the IEEE Symposium on Security and Privacy. She also had some feedback from authors of the most heavily referenced NSPW papers on CiteSeer. It was unclear from the data that influence could be tracked that way. Victor Raskin mentioned that universities are considering dropping citation indices as a criterion for promotion because the data is worthless. John McHugh recommended a more intelligent literature search tracking subfields such as inline reference monitors and immune system approaches. Marv emphasized the workshop nature of NSPW. Its value is that it gives individual participants more in return than they individually contributed, not just during hallway conversations but during the actual paper presentations as well. Putting the contributions of more traditional conferences in context, he noted that the security posture of most computer systems today is far weaker than ever before, and that every few years the past is recreated in our profession, because computer security professionals do not read the literature they cite. Sami Saydjari commented that the worth of NSPW is its willingness to accept papers outside the traditional categories.

The final session, on Thursday, September 13, was Session 7: Passwords Revisited, chaired by Tom Daniels.

The first paper was "A Note on Proactive Password Checking" by Jeff Jianxin Yan. Jeff argues that attackers may order brute-force password searches by entropy, trying low-entropy patterns first. He proposes entropy-based proactive password checking as an enhancement to the current use of dictionary-based checking. Proactive password checking examines a user's (new) password in order to determine its strength. He gives 12a34b5 as an example of a low-entropy password not currently caught by proactive password checkers. Using 7-character passwords as an example (because that length is widely used), he notes that the highest-entropy pattern is 5 alphabetic and 2 numeric characters, while 2 alphabetic and 5 numeric characters lies in an area of low entropy. The paper outlines a simple and efficient algorithm to detect low-entropy 7-character passwords. Future work includes extending the analysis to longer passwords. Much of the discussion centered on the potential pitfalls of attempting to deploy proactive low-entropy password checking.
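
The 7-character observation can be checked with a little arithmetic (my own worked example, assuming lowercase letters and digits only; it is not Jeff's algorithm): counting the passwords made of k letters and 7-k digits shows that the 5-letter/2-digit pattern class is by far the largest, while 2 letters and 5 digits is a comparatively tiny, low-entropy region.

    from math import comb, log2

    LENGTH = 7
    for letters in range(LENGTH + 1):
        digits = LENGTH - letters
        # positions of the letters * letter choices * digit choices
        size = comb(LENGTH, letters) * (26 ** letters) * (10 ** digits)
        print(letters, "letters /", digits, "digits: about 2^%.1f passwords" % log2(size))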

The final paper was "Pretty Good Persuasion: A First Step Towards Effective Password Security" by Dirk Weirich and Martina Angela Sasse. Dirk presented. This work starts from the assumption that in most organizations, users cannot be forced to comply with security policies. They must be persuaded to do so. The persuasion may rely on changes to the policies and the way they are enforced, or on changing the social discourse around the subject. This work focuses on rules around and use of passwords. They conducted semi-structured in-depth interviews to try to understand why some users are motivated to behave in a security-conscious fashion, and some are not. The interviews were guided by the theory of fear appeals, which says that to be effective, they must convince the recipient that the problem is serious, it may affect them, it can be avoided by taking appropriate action, and the recipient is capable of performing that action. Discourse analysis brought out a number of points. A large number of participants had mental constructs that make it almost impossible to use fear appeals effectively. There was a strong social element in sharing passwords; it is seen as a sign of trust among co-workers. People who behave in a security-conscious way are often described in negative terms such as "paranoid", even by themselves. Initial ideas on how to solve these problems include changing the discourse about password mechanisms using social marketing techniques, changing policy so that it will increase compliance, and designing mechanisms with their persuasive power in mind. A very extreme change that might be used for further study would be to implement a password mechanism that did not allow a user who forgot their password to change it for 24 hours, and which was unique and used without a name, to strengthen personal association.