Report on the 1997 New Security Paradigms Workshop
Langdale, Cumbria, England, September 23 - 26, 1997
by Mary Ellen Zurko, The Open Group Research Institute

New Security Paradigms Workshop '97 was held this year in Langdale, Cumbria, England, from September 23 - September 26. The goal of the workshop is to provide a highly interactive and supportive forum for brand new and radically different work in computer security. NSPW is an invitation-only workshop, generally restricted to accepted authors, organizers, and sponsors. Its home page is

Session 1 was on "Formalism and Pragmatism," and was run by Hilary Hosmer, who founded the workshop. The first paper was "Integrating Formalism and Pragmatism: Architectural Security" by Ruth Nelson. Ruth was motivated by her perception that nobody listens to security people. She suggested we can change that by focusing on things that matter to the world, and not just to us. While correctness is a really handy thing to worry about, universality is hard: getting the world right is difficult. She categorized security people as either formalists, who mostly worry about access control, or pragmatists, who mostly worry about intrusion detection. Neither school has great effect on system design. Someone suggested that penetrate-and-patch had an influence on system design by making systems like Java more complex (and hence worse). Ruth had several possible solutions to these problems. Architects can be intermediaries between the specific and the general. We could work on general but not necessarily universal countermeasures. We can try to get more input from the real world (such as from system managers). We shouldn't buy into any one model of the world, because the bad guys don't have to. Instead of discussing issues in jargon (which is good for hiding structure and information), we should discuss things in pidgins and creoles (inter-languages). We can take a risk-based approach and work with attacks and countermeasures. We can include actual users within the boundary of the systems we're designing. There is a tension in security work: security people come from the "don't do this" direction; functionality people come from the "do do this" direction.

The second paper was "A Practical Approach to Computer Security" by Darrell M. Kienzle and William A. Wulf. Darrell presented. The security people working on the Legion system needed a practical approach to evaluating security. It needed a minimal learning curve and had to allow formality to be applied incrementally, where it was most useful. The goal was to give the user (the Legion application writer) the information to decide whether the system is secure enough. They used system fault trees as the basis of their approach to produce a Methodically Organized Argument Tree (MOAT). They produce a tree of uninterpreted predicates that are combined with AND or OR gates. Arguments are accessible and amenable to discussion and inspection. Further refinements of the trees go after the next weakest link in the arguments, or the node that seems to have the best benefit/cost ratio. Chenxi Wang is doing research on assigning numbers to the nodes and composing them. One much-discussed aspect of the work was that it chose a tree representation instead of a graph; it is trying to represent a formal argument. The results from using this method with Legion were that it was inexpensive, formality proved unnecessary, and they benefited from some reuse. It provided a systematic approach to trade-offs, aided problem understanding, and uncovered implicit assumptions. It aided communication in that arguments were accessible to inspection and assumptions were well communicated. They need tools to manage larger analyses; the gestalt approach begins to break down as the system scales up. Dixie Baker suggested that this approach also needs to communicate to the customers how much risk is left.
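The tree-of-predicates idea can be sketched in a few lines of Python; the node encoding, predicate names, and example goal below are illustrative assumptions, not the Legion team's actual notation:

```python
# Minimal sketch of a MOAT-style argument tree: leaves are uninterpreted
# predicates (here, just labels); interior nodes combine children with
# AND or OR gates. Structure and names are illustrative only.

def evaluate(node, true_leaves):
    """Return True if the condition at `node` holds, given the set of
    leaf predicates assumed true."""
    if node[0] == "leaf":
        return node[1] in true_leaves
    _, gate, children = node
    results = [evaluate(child, true_leaves) for child in children]
    return all(results) if gate == "AND" else any(results)

# Hypothetical argument: the system is compromised if the attacker can
# (guess a password AND reach the host) OR exploit an unpatched flaw.
tree = ("node", "OR", [
    ("node", "AND", [("leaf", "guess_password"), ("leaf", "reach_host")]),
    ("leaf", "unpatched_flaw"),
])

print(evaluate(tree, {"guess_password"}))                # False
print(evaluate(tree, {"guess_password", "reach_host"}))  # True
```

Refining an argument corresponds to replacing a leaf with a new subtree, which is what makes the "next weakest link" strategy incremental.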

Session 2, on "Protection," was chaired by Steven Greenwald. The first paper in that session was "Meta Objects for Access Control: Extending Capability-Based Security" by Thomas Riechmann and Franz J. Hauck. Thomas presented. Their work was in a distributed, object-oriented system where an object reference is already a capability (controlled by the system and necessary and sufficient for calling an object). Their Security Meta Objects place restrictions on these capabilities. They are fully programmable, so they can be used for policies involving revocation and expiration. Bob Blakley pointed out that you can encode policy that changes member functions to return different values for different policy states. These SMOs can apply policy to references that are returned from calls to methods they point to, or simply disallow their return. This approach encourages library reuse by allowing object classes to be security-unaware. SMOs also work when their reference is passed as a parameter to a method on an untrusted object. One future direction mentioned was to integrate ACLs and principals into SMOs. They are implementing their work in a Java prototype. Bob Blakley pointed out that Java can turn references into strings, then back into references, which would circumvent this approach.
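The flavor of an SMO, a programmable check wrapped around an object reference, can be sketched as follows; the expiry/revocation policy and class names are example assumptions of mine, not the authors' API (their prototype is in Java):

```python
import time

class SecurityMetaObject:
    """Sketch of a security meta object: a programmable filter attached
    to an object reference (capability). Every use of the reference is
    checked against the policy; revocation and expiration are one
    illustrative policy, not the paper's only option."""

    def __init__(self, target, expires_at):
        self._target = target
        self._expires_at = expires_at
        self._revoked = False

    def revoke(self):
        self._revoked = True

    def __getattr__(self, name):
        # Called for every method lookup on the wrapped reference.
        if self._revoked or time.time() > self._expires_at:
            raise PermissionError("capability revoked or expired")
        return getattr(self._target, name)

class Printer:  # hypothetical security-unaware class, reused as-is
    def print_line(self, s):
        return "printed: " + s

ref = SecurityMetaObject(Printer(), expires_at=time.time() + 60)
print(ref.print_line("hello"))  # policy allows: "printed: hello"
ref.revoke()
# Any further call through `ref` now raises PermissionError.
```

Because the check lives on the reference rather than in the class, the `Printer` class itself stays security-unaware, which is the reuse point made above.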

The next paper was "A Tentative Approach to Constructing Tamper-Resistant Software" by Takanori Murayama, Masahiro Mambo, and Eiji Okamoto. Masahiro presented. They are working on defining tamper resistance as information stored in a device or software that is hard to read or modify by tampering. In terms of cost vs. performance and ease of handling, it is better to achieve tamper resistance without using any physical device. They are starting by making programs hard to analyze, so that it is uneconomic to guess the algorithm and impractical to guess the place in the module of a given piece of functionality. They attempted to generate hard-to-analyze programs without modifying the original algorithm (one attendee suggested they could hire some programmers who wrote spaghetti code that was hard to understand). Their basic techniques for generating elementary tamper-resistant code are to irregularly order the instruction streams, insert dummy code, and eliminate useful patterns and redundancy via optimization. Heather Hinton pointed out that their method of turning two lines of assembly language into nine would produce a significant size issue. Bob Blakley suggested looking over the obfuscated C handbook to see how to do things like this without unreasonable code expansion. Another concern was that if this process can be automated, then it can be undone by an automated process.
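One of the elementary techniques, dummy-code insertion, can be illustrated with a toy version that operates on a list of instruction strings; real tamper resistance works on machine code, and the insertion rate and "nop" dummy here are assumptions for the sketch:

```python
import random

def insert_dummies(instructions, dummy="nop", rate=0.5, seed=0):
    """Toy dummy-code insertion: pad an instruction stream with
    do-nothing operations so useful patterns are harder to spot.
    A fixed seed keeps the example reproducible."""
    rng = random.Random(seed)
    out = []
    for ins in instructions:
        out.append(ins)
        if rng.random() < rate:
            out.append(dummy)  # dummy does not change the computation
    return out

obfuscated = insert_dummies(["load r1", "add r1, r2", "store r1"])
print(obfuscated)  # original instructions interleaved with "nop"s
```

The original instruction sequence is preserved as a subsequence, which is the sense in which the algorithm is unmodified; it also hints at Heather Hinton's size objection, since the stream only ever grows.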

Session 3, "Design of Secure Systems," was run by Cathy Meadows, who authored the paper "Three Paradigms in Computer Security" that was used as a touchstone for this discussion session. Her paper was inspired by a panel she was on last year called "High Assurance Systems: The Good, the Bad, and the Ugly." The Ugly are solutions that are practical but messy and of doubtful assurance (like most things in the world). Cathy suggested that firewalls and virus checkers fall in this category. The Bad are sound but impractical solutions (the Orange Book being a favorite example). The Good are sound and practical solutions, such as connecting system-high computers at different security levels via one-way flow devices. Cathy's conjecture is that the difference between these approaches lies in their attitude toward existing infrastructure. She proposes three paradigms based on this conjecture: Live With It (Ugly), Replace It (Bad), and Extend It (Good). The Live With It Paradigm takes the infrastructure as a given and applies security patches without modifying or extending the underlying structure. The Replace It Paradigm states "Replace X with a secure X" while ignoring the entrenchment of X. The Extend It Paradigm pays close attention to infrastructure. It identifies necessary modifications, but keeps them to a minimum. Adding components to the infrastructure is usually better than trying to replace them. One question that received much discussion was "How do we distinguish between the different paradigms?", particularly between Live With It and Extend It. Extend It seems easiest to apply when a function does not yet exist or is inadequate. I pointed out that Digital's A1 Virtual Machine Monitor overcame the Replace It issues, but died for other reasons, so following the Extend It paradigm might be necessary, but not sufficient. One participant suggested coming up with a problem architecture before coming up with the solution architecture.

Session 4, "Trust and Distrust," was chaired by Marv Schaefer. The first paper was "Patterns of Trust and Distrust" by Daniel Essin. Dan took a look at policies in the context of a service organization such as a hospital, where high-quality work is the goal. Activities are highly regulated and there is a potential for catastrophic loss. Many tasks have a specification and require permission (not just data system tasks). Each component of an individual's work may be governed by different policies and permissions. For example, the organization may suspend the privileges to do elective surgery, but in an emergency they may be temporarily granted, if the suspension was due to non-compliance with administrative policy. The nature of the task and the circumstances may affect the decision to allow the action and the evaluation of its deliverables. There is no training on how to make policy; it's assumed people who can breathe can make policy. The number of policies defined goes up while the number carried out remains flat. One of the many questions raised was "How do you determine in retrospect what the situation was so that you can be sure that the right policy was applied?" It hinges on having a detailed, contemporaneous record. Actors' motivation is to have as few policies as possible, so that they are held to as few as possible. Trust represents a conclusion about whether an outcome will be positively affected by allowing an actor to interact with resources in the context of the risks involved. John Dobson pointed out that policy making is not a rational act; it's a visible manifestation of the exercise of power.

The next paper was "A Distributed Trust Model" by Alfarez Abdul-Rahman and Stephen Hailes. Farez presented. He quoted Diego Gambetta, a sociologist, from Can We Trust Trust?, "trust ... is a particular level of the subjective probability with which an agent will perform a particular action, both before [we] can monitor such action ... and in a context in which it affects [our] own action." This points out that trust is subjective and contingent on the uncertainty of a future outcome. Another quote states "Human interaction would be impossible without trust." Trust is made necessary by a lack of knowledge. Their approach is a generalization of existing approaches. Agents exchange (recommend) reputation information about other agents. The 'quality' of the information depends on the recommender's reputation. While Farez stated that managing your own policy is a downside to recommendation-based trust, many of the attendees disagreed, and considered it to be necessary (even full delegation was considered self management). Chenxi Wang stated that recommendation works well in a contract based society but not in a reputation based society like China or Russia. John Dobson pointed out that evidence from psychological studies showed that trust is not even partially ordered, and that you can change your amount of trust without any evidence. Ian Welch suggested the notion of back flow; if A recommends B, and B stings me, I don't trust A or B. Marv Schaefer pointed out that B could undermine A on purpose this way. Ruth Nelson noted that tracking who to refresh (trust values are refreshed by their recommenders) is unwieldy.
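The flavor of recommendation-based trust can be sketched as a discount applied along the recommendation chain; the multiplicative rule and the 0-to-1 scale below are illustrative conventions I am assuming, not necessarily the paper's exact semantics:

```python
def recommended_trust(recommender_reputations, recommended_value):
    """Discount a recommended trust value by the reputation of each
    recommender in the chain (all values in [0, 1]). Longer or less
    reputable chains yield lower confidence, matching the idea that
    the 'quality' of information depends on the recommender."""
    value = recommended_value
    for reputation in recommender_reputations:
        value *= reputation
    return value

# A recommends B, B recommends C: B's recommendation of C (0.9),
# discounted through B's reputation (0.8) and then A's (0.75).
print(recommended_trust([0.75, 0.8], 0.9))  # ~0.54
```

Note that John Dobson's objection cuts against exactly this kind of model: if trust is not even partially ordered, no single numeric scale can capture it.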

The last paper in this session was "An Insecurity Flow Model" by Ira S. Moskowitz and Myong H. Kang. Ira presented. Ira started by listing the issues and assumptions not covered by this work: it's insecurity flow, not information flow; time is not considered; the model cares about the invader getting in, not about anything getting out; the model does not address whether you know if anyone gets in; and all "layers" are independent (this simplifies the current model, though it is a very major assumption). Their motivation was Moore and Shannon on "Reliable Circuits Using Less Reliable Relays." Layers of defenses such as firewalls and access control systems are composed. They ask questions such as: What is the vulnerability for a path? How do they aggregate? Are there redundant or useless mechanisms? How do we analyze cost? 0 is secure; 1 is insecure (the probabilities measure intrusion). For parallel circuits, if there are 3 ways in, it's less secure than one way in. This matches our intuition. All probabilities are >= 0. In series, even cheap, high-probability components composed together are more secure than any one alone, since the probabilities multiply. The probability of insecurity flowing through two identical components in series that are again replicated in parallel is: 1 - (1-p^2)^2. When this function is plotted against the single-component option, it has a nice little wiggle in it and becomes more insecure than the single component at p ≈ .618. Ira then outlined the formalism for considering multiple paths. The probability of insecurity is the union of all of the "non-stupid" paths. They have some reduction formulas for common topologies.
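The series and parallel composition rules, and the crossover Ira described, can be checked numerically; the function names are mine, but the formulas follow the talk (the crossover sits at p = (sqrt(5) - 1)/2 ≈ .618, the root of 1 - (1-p^2)^2 = p):

```python
import math

def series(*ps):
    """Insecurity must flow through every component in series, so the
    intrusion probabilities multiply (composition is more secure)."""
    result = 1.0
    for p in ps:
        result *= p
    return result

def parallel(*ps):
    """Any one parallel path lets insecurity through: 1 - prod(1 - p)."""
    result = 1.0
    for p in ps:
        result *= 1.0 - p
    return 1.0 - result

def two_by_two(p):
    """Two identical components in series, replicated in parallel:
    1 - (1 - p^2)^2, the example from the talk."""
    return parallel(series(p, p), series(p, p))

# Below the crossover, the 2x2 arrangement beats a single component;
# above it, the extra parallel path makes things worse.
crossover = (math.sqrt(5) - 1) / 2
print(round(crossover, 3))                            # 0.618
print(abs(two_by_two(crossover) - crossover) < 1e-9)  # True
```

The wiggle is visible in the numbers: two_by_two(0.5) is 0.4375 (better than 0.5), while two_by_two(0.7) is about 0.74 (worse than 0.7).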

Session 5 on Emergent Systems was led by Cristina Serban. The first paper in that session was "Principles of a Computer Immune System" by Anil Somayaji, Steven Hofmeyr, and Stephanie Forrest. Anil presented. They looked at computers as complex systems, where software is updated, configurations change, and system administrators are overburdened. We build them, but we do not understand them. Ruth Nelson classified our attempts to make models that accurately reflect these systems as "a desperate move." Anil outlined what they considered to be general properties of complex adaptive systems: distributability, diversity, adaptability, disposability, and autonomy. They then tried to turn these properties into principles. For example, no small set of cells is irreplaceable. Steven Greenwald pointed out that infectious disease was once the major issue, but now auto-immune diseases are. However, immune systems are still essential. Tom Lincoln stated that medicine is 15% science and models of dubious completeness. The models give us great clinical courage. Anil pointed out how immune systems are not computer security systems. There is no confidentiality, no data integrity, little accountability, and (perhaps most importantly) no guarantees. There is a physical barrier. Dixie Baker suggested we consider analogies with the skin and brain as well as the immune system. Anil talked about identity being determined by behavior, which is spoofable, but intrinsic. In a sense, you have to become the thing you're pretending to be. Ruth Nelson pointed out that viruses spoof cells without being them. Anil concluded that we still need traditional security mechanisms. They provide the first level of defense.

The next paper was "Composition and Emergent Properties" by Heather Hinton. Heather started her research by looking at why composition fails and how properties emerge on composition. Previous approaches focused on information flow. Properties are the formal instantiation of the informal policy goals of a given system. While changes in a system and its environment may affect a system's properties, the system's policies are usually unchanged by composition. Heather classified properties by whether they are satisfied by the individual systems and/or by the two systems' composition. She defined emergent behaviors as new behaviors relevant to a composite system but not its components. Ruth Nelson asked about disappearing properties, which Mike Williams suggested were submergent behaviors. Heather indicated that emergent behaviors are unpredictable or surprising behaviors arising from a composition. This implies that they are subjective and might be learnable from experience (such as what happens when you mix colors). Bob Blakley suggested that as you compose more things, more of the system behavior is underspecified and surprising. Emergent behaviors may lead to the emergence of desired properties on composition. No matter how we represent the system, we have to deal with the effects of composition. Tom Lincoln was reminded of the difficulty of composing valid laws.

The final session, Session 6 on Networks, was chaired by Mike Williams. The first paper was "Protecting Routing Infrastructures from Denial of Service Using Cooperative Intrusion Detection" by Steven Cheung and Karl N. Levitt. Steven presented. Their models attempt to deal with a compromised router that can remove (blackhole) or misroute packets. Good routers diagnose each other to detect and respond to misbehaving routers. Their approach makes assumptions about the networks, develops system and failure models for routers, designs the diagnosis protocols, and proves detection and response properties of the protocols. They design their responses so that they cannot be used against them by an attacker. They assume that the well-behaved routers are connected (no partitioning by bad routers), that neighboring routers can send packets directly to each other, and that routers know the network topology and the cost of every link. They do shortest-path-based routing. Misrouting routers are routers that forward a transit packet to a router that is not on the shortest path to its destination. They developed a classification of the strength of a bad router. A bad router can misbehave permanently, probabilistically (on every packet with a certain probability), almost permanently, or intermittently. It can misbehave on all packets, packets determined by addresses, or packets determined by address and/or payload. For a permanent bad router that acts on all or address-specific packets, they can use distributed probing: if C is closest to B, it can send a packet to B and see if it gets it back. For diagnosing intermittently bad routers that may be address or payload aware, they can use flow analysis: the amount of transit packets going into B should be the same as the amount going out. A good router never incorrectly claims another router as a misbehaving router (soundness). If a network has misbehaving routers, one or more of them can be located (completeness).
Misbehaving routers will eventually be removed; good routers will remain connected (responsiveness).
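The core test each diagnosis protocol applies can be caricatured in a few lines; `send_via` below stands in for the network and the tolerance parameter is my assumption (the paper's protocols are distributed and come with proofs, so this is only the local check, not the protocol):

```python
def probe_ok(send_via, router, probe):
    """Distributed-probing sketch: a neighbor closest to the suspect
    sends a probe through it and checks the probe comes back intact.
    `send_via(router, probe)` is a hypothetical network callable that
    returns whatever the suspect forwards back."""
    return send_via(router, probe) == probe

def flow_conserved(transit_in, transit_out, tolerance=0):
    """Flow-analysis sketch: transit packets entering a router should
    equal transit packets leaving it; a deficit suggests blackholing."""
    return abs(transit_in - transit_out) <= tolerance

# A good router echoes the probe; a blackholing router returns nothing.
good = lambda router, probe: probe
blackhole = lambda router, probe: None
print(probe_ok(good, "B", b"ping"))       # True
print(probe_ok(blackhole, "B", b"ping"))  # False
print(flow_conserved(1000, 640))          # False: 360 packets vanished
```

The soundness property above corresponds to these tests never returning a false accusation against a well-behaved router; completeness requires enough probing and counting vantage points to localize at least one culprit.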

The concluding paper of the workshop was "A Security Model for Dynamic Adaptable Traffic Masking" by Brenda Timmerman. Her thesis is on traffic masking for traffic flow confidentiality, and this paper is one chapter of it. She calls her approach Secure Dynamic Adaptive Traffic Masking (S-DATM). The model can also be self-modifying. She began by outlining a variety of incidents from Desert Storm which would have allowed for traffic analysis. She said that about 2% of users need traffic flow confidentiality (TFC). TFC masks the frequency, length, and source and destination traffic patterns. The fundamental mechanisms for TFC are encryption, traffic adding and delaying, and routing controls. Her model supports adaptable traffic masking for trade-offs between user protection needs and efficiency and performance. Early work on TFC encrypted the link and kept it full of noise. More recent work on adaptable transport-layer traffic masking works well for normal low traffic, but large spurts make it adapt, which gives away information. Mike Williams suggested considering multiplexing across multiple channels. Cathy Meadows pointed out that you need the people doing the innocuous stuff to mask the people doing the important work that needs to be hidden. S-DATM masks statistical anomalies with bursts of traffic. It precisely specifies the acceptable ranges of system behavior and can model dynamic adjustments. Previous work required global knowledge, which didn't scale to the size of the Internet. A profile of masked traffic behavior is defined at installation time, including burst size, inter-arrival delay, and throughput. Statistics provides precision and reduces processing and storage. The model can change the statistical critical values. For example, in times of extreme crisis, people know you're going to react, so TFC can go down. Heather Hinton suggested considering an algorithm that ramps up and ramps down; prediction is hard.
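The fixed-profile masking idea can be sketched as a scheduler that emits constant-size bursts, padding with dummy bytes when real traffic falls short; the burst size, slotting, and greedy fill below are illustrative assumptions, not S-DATM's actual profile (which additionally adapts these values within specified ranges):

```python
def mask_schedule(real_sizes, burst_size, slots):
    """Schedule real messages (byte counts) into fixed-size bursts.
    Every slot emits exactly `burst_size` bytes: real traffic first,
    the remainder filled with padding. Returns (real, padding) pairs."""
    queue = list(real_sizes)
    out = []
    for _ in range(slots):
        sent = 0
        # Greedily pack whole messages that still fit in this burst.
        while queue and sent + queue[0] <= burst_size:
            sent += queue.pop(0)
        out.append((sent, burst_size - sent))
    return out

# Three real messages masked into four constant-size bursts:
print(mask_schedule([30, 50, 20], burst_size=60, slots=4))
# [(30, 30), (50, 10), (20, 40), (0, 60)]
```

Each slot carries the same total size, so an observer sees a constant traffic profile regardless of the real load, which is exactly why spurts that force the profile to adapt leak information.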