Report on the 1996 New Security Paradigms Workshop
by Mary Ellen Zurko and Cristina Serban

The 1996 New Security Paradigms Workshop was held at the UCLA Conference Center in Lake Arrowhead, CA, September 17 - 20. Hilary Hosmer introduced the workshop, which she started in 1992. It was originally a workshop on impossible problems in computer security. It now emphasizes work on very recent, unfinished ideas that can serve as inspiration for work by others.

The first session was "Security in Its Contexts" chaired by Cathy Meadows.

The first paper, "Harmonized Development Model for Information Security", by Jussipekka Leiwo of Monash University, had no presenter and was skipped.

The second paper was "Simulated Social Control for Secure Internet Commerce" by Lars Rasmussen and Sverker Jansson of the Swedish Institute of Computer Science, and was presented by Lars. They are trying to build a system with an emergent effect of reducing fraud to an acceptable level. Relying on game theory, they note that the iterated prisoner's dilemma rewards cooperation, while a single instance of the dilemma rewards defection. They are trying to achieve soft security, with no central point of control. They run simulations of social control with hundreds of agents. Some of the simulations have painful transition phases for the cooperative agents. Much of the discussion centered on identity, reputation, and trust, since an iterated prisoner's dilemma relies on stable identity for the parties. Questions about modeling included issues such as a referred trusted party not acting in a trustworthy fashion because "it doesn't like my face", and modeling stereotypes such as gender stereotypes. Discussion also dwelt on the singleton events that could produce bad results, rather than on when the tradeoff between good and bad events could be considered economically "good enough".
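The game-theoretic point can be illustrated with a minimal sketch (the payoff values and strategy names below are the textbook ones, not taken from the paper): defection wins a single round, but cooperation wins when the game is iterated against a strategy that remembers you.

```python
# Minimal prisoner's dilemma sketch with standard textbook payoffs:
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(s1, s2, rounds):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)   # each strategy sees the other's past moves
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# Single round: defection pays.
print(play(always_defect, tit_for_tat, 1))    # (5, 0)
# Iterated: mutual cooperation outscores chronic defection.
print(play(tit_for_tat, tit_for_tat, 100))    # (300, 300)
print(play(always_defect, tit_for_tat, 100))  # (104, 99)
```

The iterated case only rewards cooperation because identity is stable across rounds, which is exactly why the discussion focused on identity and reputation.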

The second session was "User Control of Security" chaired by Tom Lincoln.

The first paper, "User-Centered Security" by Mary Ellen Zurko and Rich Simon of OSF Research Institute, was presented by Mary Ellen. They surveyed the history of "psychological acceptability" in secure systems, and considered whether making secure systems usable is an inherently particularly difficult task. They defined user-centered secure systems as those with usability as a primary goal or motivation. They presented three methodologies for achieving user-centered security: applying usability methods to secure systems, generating security models or mechanisms for Computer Supported Cooperative Work (CSCW), and motivating the design of security features based on identifiable user needs (user-centered design of security). Much of the discussion centered on the tension between security and usability. While more informative errors can greatly enhance usability, they can also introduce covert channels. The lack of usability can work against security too; people try to work around things to make their life easier, at the expense of security, intentionally or not. One idea was to use knowledge bases to distinguish stupid behavior from malicious behavior.

The second paper was "A New Model of Security for Distributed Systems" by Chenxi Wang, William Wulf, and Darrell Kienzle of the University of Virginia. It was presented by William. They start from the claim that the TCB does not work for large-scale distributed systems. Legion, their system, has aggressive scalability goals, so it is assumed it will run on top of fragile, insecure systems. There is no one thing that the system requires its users to trust. In fact, the system does not trust itself, since it could have been corrupted before it was downloaded ("rogue Legionnaire"). There is no owner of the distributed system. It is made up of a federation of folks who willingly sign up. Security has costs; not everyone is willing to pay, and there is no single security policy. Legion is based on three design principles: 1) "First, do no harm": a correct Legion is not an avenue of attack. 2) "Caveat emptor: let the buyer beware": ultimately individuals are responsible for themselves; the system is responsible for very little. 3) "Small is beautiful": if it is little, it is hard(er) to corrupt it into doing something wrong. Legion is an object-oriented system. The designer of an object class is the responsible entity, and the entity to be protected is the object. The name of the object is its public key. Discussion centered on how poorly people do at difficult system management, and on issues with using a public key as a name, such as key changes and identity comparisons.
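The naming idea can be sketched in a few lines. This is a generic self-certifying-identifier illustration, not Legion's implementation: it derives a name from the key material with a hash (a common variant of "the name is the public key") so that identity checking needs no naming authority, and it uses hypothetical helper names; a real system would verify an actual signature under the key.

```python
# Sketch of self-certifying object names (illustrative, not Legion's code).
import hashlib

def make_name(public_key_bytes):
    # The name is derived solely from the key material, so no naming
    # authority is needed: the name certifies itself.
    return hashlib.sha256(public_key_bytes).hexdigest()

def same_object(name, presented_key_bytes):
    # An identity comparison reduces to recomputing the name from the key.
    return make_name(presented_key_bytes) == name

key_v1 = b"-----object key material v1-----"
name = make_name(key_v1)
assert same_object(name, key_v1)

# The key-change problem raised in discussion: a new key yields a new
# name, so the "same" object is unrecognizable under its old name.
key_v2 = b"-----object key material v2-----"
assert not same_object(name, key_v2)
```

The last assertion makes the discussion's point concrete: binding identity to key material makes key rollover an identity change, not just a credential change.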

The next session was "Agent Security", chaired by John Dobson.

The first paper was "Personal Security Assistance for Secure Internet Commerce" by Andreas Rasmussen and Sverker Jansson of the Swedish Institute of Computer Science. Andreas presented. To answer the question of how an end-user gains confidence and trust in downloaded programs, they presented a security assistant approach. It is an open architecture of agents acting as sensors. Each agent has responsibility for monitoring some aspect of a program's execution and notifying the user of unexpected activities. This relies on some statement of what a program should do. They hope to be able to categorize programs by expected behavior or class, such as all editors. These categories will list both expected and disallowed actions. Agents should be easy to find, add, and replace, but they have to be trusted. Most of the discussion centered on issues of coverage by the agents. For example, it was not clear what happens when an action occurs that is on neither the allowed nor the disallowed list.

The second paper was "Communicating Security Agents" by Robert Filman of Software Technology Center and Ted Linden of Lockheed Missiles and Space. Ted presented. Their work is motivated by the need for foolproof security controls for distributed systems that are both flexible and context sensitive. They suggested that since mammals devote a large fraction of their processing to security, it might not be unreasonable for computer systems to devote two orders of magnitude of processing power to it. They are developing Safebots to translate very high-level specification languages into executables that can wrap insecure components. In contrast to firewalls, this is a pervasive approach. Assurance comes from continuous mutual vetting of distributed Safebots. Discussion included considering English as a language for a security ontology (ontosec), and the depth and redundancy of security mechanisms needed for this approach.

The final session of the first day was "New Paradigms for Security Policies" chaired by Yvo Desmedt.

The first paper was "A Credibility-Based Model of Computer System Security" by Shaw Chuang and Paul Wernick of the University of Cambridge. It was presented by Shaw. They consider the credibility level of statements made on behalf of a particular user based on attributes such as context and time. Credibility is meant to be a qualitative, rankable value that can be compared within a system (and possibly across systems). It can be derived from statements from others, the quality of authentication mechanisms, the transmission medium, certification by a "trusted party", certification of a process (ISO 9000), knowledge/awareness of the assessor, and the assessor's perceptions of the principal's trustworthiness. The credibility applies to some statements but not to others. There are user interface issues with determining what level/value of credibility is required. Discussion emphasized questions about composing credibilities, whether negative credibility would be supported, previous work using fuzzy logic, and whether Bayesian theory would help.

The final paper of the first day was "The Emperor's Old Armor" by Bob Blakley of IBM NS Distributed Systems. Bob's discussion was mostly about the source of problems in the old paradigm, which he called the "Information Fortress Model." It relies on three problematic foundations: 1) system integrity: every part must be perfect, but humans and their artifacts are fallible; 2) cryptography for secrecy: again, implementations must be perfect, and people are bad at keeping secrets anyway; 3) policy: security policy scales poorly, and its administration is complicated and sensitive. This paradigm is used in a world where there are more systems which are less assured, the tasks they are used for are more complex, and we have made no progress in integrity, assurability, physical security, composability, or policy simplification. Cryptographic protocols are better, but we can't use strong cryptography in commercial products. Attacks can cause us to close the doors on our fortresses, causing a denial-of-service problem. One possible direction is to consider inherent vs. imposed properties of what we want to protect. Examples of inherent properties are size, weight, radioactivity, difficulty, and obscurity. Policies based on inherent properties are easier to maintain (for example, the size and weight of $1 billion in gold make it harder to steal than bits). Bob suggested making electronic cash big (many bits) to slow down stealing and illicit spending. Discussion against this idea included pointing out that convenience is a big selling point of electronic cash. Bob then pointed out that the strength of our cryptographic protections depends only on no one having found a way to break them. "How much of the wealth of the world are we willing to bet that our cryptographers are smarter than everyone who will ever be born?" One of the attendees told a story of a fortified town whose warriors had all gone off to engage in battle. An enemy army approached the town, and the wise old patriarch left in charge had to defend it using women, children, and old men. He opened the gate of the town and, sitting on top of the wall, invited the enemy commander in to talk. The commander considered the situation, determined that the invitation must be a trap, and left the town untouched. The moral: good enough is good enough.

The first session of the second day was "Architectures and Mechanisms" chaired by Pierangela Samarati.

The first paper was "Developing and Using a Policy Neutral Access Control Policy" by Duane Olawsky, Todd Fine, Edward Schneider, and Ray Spencer of Secure Computing Corporation. Duane presented. The work reported had developed a policy-neutral security server for the DTOS microkernel. Each microkernel service is mapped to a subject, object, and permission, and a single request can perform many services. Security IDs are associated with subjects and objects, and mapped to security information. A security policy specification can be used by many audiences: assurance, developer, evaluators, accreditors. It is also used to generate most of the code that checks permissions. They have implemented MLS and Clark and Wilson in this system. They are considering ORCON. It is difficult to say which services are needed and which are extraneous, since they have discovered that many policies need only a handful of permissions. Discussion centered on issues of complexity for the administrator and support of policies such as least privilege and time-based ones.

The second paper was "Run-time Security Evaluation: Can We Afford It?" by Cristina Serban and Bruce McMillin of the University of Missouri-Rolla. Cristina presented. She discussed the performance issues that had arisen in their earlier work on run-time distributed security evaluation in the context of a distributed application with message exchange. They require a security specification or policy to determine what correctness means. They want to flag errors resulting from faults or intruders at run-time during each execution. The assertions are coded into the application, and run-time event histories are generated by each component and shared with all the others. The causal structure of the execution gives a partial ordering of events (using vector clocks). Each node checks all the shared histories against the security specifications, looking for disagreement. They measured a 40% decrease in performance (which compared well with the earlier paper that suggested two orders of magnitude was allowable). They want to consider how to get some of the benefits while reducing the costs. Discussion included considering the security impacts of this kind of evaluation. Since security specifications don't include many events that shouldn't happen, these don't get checked for. Someone asked, "If I use this on an A1 system, and it detects an error, do I still have an A1 system?"
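The vector-clock machinery behind that partial ordering can be sketched briefly. This is the standard textbook construction, not the paper's implementation, and the function names are illustrative:

```python
# Minimal vector-clock sketch: each node keeps a counter per node,
# and messages carry the sender's clock.

def tick(clock, node):
    # A local event increments only this node's component.
    c = dict(clock)
    c[node] = c.get(node, 0) + 1
    return c

def merge(local, received, node):
    # On message receipt: component-wise max, then a local tick.
    keys = set(local) | set(received)
    merged = {k: max(local.get(k, 0), received.get(k, 0)) for k in keys}
    return tick(merged, node)

def happened_before(a, b):
    # a -> b iff a <= b component-wise and a != b; otherwise the two
    # events are concurrent (or b -> a), which is the kind of causal
    # information the shared histories are checked against.
    keys = set(a) | set(b)
    return all(a.get(k, 0) <= b.get(k, 0) for k in keys) and a != b

a = tick({}, "p1")                  # send event on p1: {p1: 1}
b = merge({}, a, "p2")              # p2 receives it:   {p1: 1, p2: 1}
c = tick({}, "p3")                  # independent event on p3
print(happened_before(a, b))        # True: a causally precedes b
print(happened_before(a, c), happened_before(c, a))  # False False
```

Because `a` and `c` are ordered in neither direction, they are concurrent, which is exactly what a total (wall-clock) ordering could not express.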

The second session was "New Paradigms For Access Control" chaired by Tom Haigh.

The first paper was "A New Security Paradigm for Distributed Resource Management and Access Control" by Steven Greenwald of the Naval Research Laboratory. Steven started by discussing the problems with the Jurassic Age Security Policy that dates back to the giant mainframe computers, lazily grazing on their data. A system administrator must be enlisted to manage the resources which users and applications need managed. This causes difficulties in distributed, heterogeneous environments. Users require the permission of system administrators to share their resources across domains. Steven considered maximizing the freedom of users while limiting system administration to only necessary functions. In the context of a Computer-Supported Cooperative Work (CSCW) application, he suggests that applications handle any needs for user IDs and roles, and that administrators merely need to give these applications access to their resources. They can then support anonymity and handle the accountability issues. They can manage distributed user identity and compartment information. Discussion centered on policies such as officer of the day for hospitals, and distributing the role of governor across individuals.

The second paper was "Access Control in Federated Systems" by Pierangela Samarati and Sabrina De Capitani di Vimercati of Universita di Milano. Pierangela presented. They considered the issues involved in trading off local and federated control of resources and login identities in a federated system of cooperative yet autonomous component systems. Access control may be specified independently, bottom-up, or top-down, raising the issue of consistency. Reconciliation may occur at different times, such as whenever the policies get out of synch, on demand, or whenever an object is accessed. Policies may also let the site retain full authorization power, give it to the federation, or require checks at both levels. Their work allows for negative authorizations at the local level only. Federation-level groups and wildcards help with the scalability issues. Users may belong to just one group. Objects can be exported to the federation in a limited manner (such as read-only). Discussion included the issues around trusting other administrations, and enforcing policies with a dynamic component such as ORCON.

The next session was "Distributed Systems" chaired by Cathy Meadows.

The first paper was "The Right Type of Trust for Distributed Systems" by Audun Josang of the Norwegian University of Science and Technology. His work attempts to understand trust as a human phenomenon and discusses how it applies to distributed systems. He tries to extract trust parameters from the real world. In a world of uncertainty, it becomes very important to figure trust out. He presented a model with two entity types and two trust types. Entities can be passionate (humans) or rational (systems), or combinations of the two. Passionate entities can be honest or malicious, and can be tempted. Rational entities can be secure or insecure, and can be threatened. The trusting entity is passionate; the rational entity uses belief. Trust is potentially unstable, and reflects the state of knowledge about security or honesty. Trust is diverse over functions (trusting for different purposes) and time (very dynamic). While it is sufficient to be rational to assess the reliability of a system, assessing the reliability of humans takes skill and experience. Audun concluded by saying that to have stable trust, we need good knowledge. Discussion included questions on how to treat rational systems written by passionate people, and other aspects of trust: it is brittle, trusting behaviour in animals is a result of adaptation, and trust is a hypothesis about future behaviour.

The second paper was "CAPSL: Common Authentication Protocol Specification Language" by Jonathan Millen of MITRE. The vision is that a CAPSL description of an authentication protocol (that uses cryptography) can be used as input to the wide variety of tools and approaches that are used to evaluate these protocols. This variety is needed since no one approach to vulnerability analysis is completely satisfactory. CAPSL is also meant to be usable by protocol designers/analysts to define protocols and to ease that task. The language takes a message list approach, and contains primitives for things like the data held by parties and their initial beliefs. Discussion centered on the details of the style choices in the language, such as using the same symbol for assignment and equality checking. Jonathan has an area on the Web for discussing these details collaboratively.

The final session of the day was "Availability" chaired by Dixie Baker. In health care, availability means you might save a life, while privacy means you might get sued. Both papers were presented by Hilary Hosmer.

The first paper was "Managing Time for Service and Security" by Ruth Nelson of Information Systems Security and Elizabeth Schwartz of the University of Massachusetts at Boston. Hilary began by outlining problems such as security mechanisms being exploited to shut a system down, and legitimate and reasonable use of a system producing unexpected denial-of-service problems (such as the day Jerry Garcia died, when Deadheads bogged down the Well). Some systems find a way to limit the load by gradually throttling certain types of inputs (White House email is using this approach). The basic questions for the designer are: What are the required system services, to whom, and how much? Is service more important than security? And what are the control mechanisms (measuring and monitoring, audit vulnerabilities)? She suggests that system management techniques are applicable to security. Discussion centered on other current examples of availability problems.

The last paper of the day was "Availability Policies in Adversarial Situations" by Hilary Hosmer of Data Systems Security. She called on us to rethink traditional availability policies such as "90% uptime". Availability is not always desirable; in the military, if your site is overrun, you must destroy it. In cyberspace we are interested in information availability. There are social threats to this (censors, marketeers). She wants to study policies where availability, confidentiality, and integrity are measured together. Discussion included references to real availability engineering as it is practiced today, identifying critical paths and single points of failure, and access control as a mechanism that can support availability. A final example of an availability problem was mentioned: Abbie Hoffman threw $300 in crisp new dollar bills on the floor of the NY Stock Exchange and brought it to its knees; what kind of attack is that?

The final session on the final day was "Paradigms from Other Fields" chaired by Hilary Hosmer.

The first paper of the day was "Positive Feedback and the Madness of Crowds" by Hilarie Orman of the University of Arizona. Rich Schroeppel presented. The title of the paper is taken from a Dover reprint, "Extraordinary Popular Delusions and the Madness of Crowds." Rich started by pointing out phenomena that build on positive feedback, such as the tulip pricing bubble (when tulips became extremely popular) and various forms of stock market madness. He suggested the study of the composition of dynamic systems as an approach to the availability issues being raised, particularly in the context of the whole network. Simple, unbounded loops are easy to build and can cause feedback problems. The canonical example is recursive email distribution lists. If you can identify a loop, you can limit the number of times a resource is consumed in the loop, or insert drag into it. Another type of feedback was dubbed "flash floods." An example is everyone going to the Cool Site of the Day on the Web. An approach to these problems is to insert drag or negative feedback into the system. One challenge is to add in random backoff without breaking things. One parameter that complicates the problem is time. The time constant ranges from nanoseconds to years (the latter is the case with recurring urban myths such as the Good Times Virus). The challenges include efficient specification, code analysis for potential resonances, and runtime adaptation. Discussion included current problems (such as having a single Cool Site of the Day instead of redundant copies) and attempted solutions (such as Ethernet's exponential backoff on collision and the use of priorities).
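The "random backoff as drag" idea mentioned in the discussion can be sketched as follows. The constants are illustrative (Ethernet works in slot-time units and caps the retry count); the point is that the retry window doubles with each collision and a random draw desynchronizes the contenders so they stop retrying in lockstep:

```python
# Sketch of randomized exponential backoff as negative feedback
# against resource-consumption spirals (constants are illustrative).
import random

def backoff_delay(attempt, base=1.0, cap=64.0):
    # The window doubles with each failed attempt, up to a cap;
    # the uniform draw spreads contenders across the window.
    window = min(cap, base * (2 ** attempt))
    return random.uniform(0, window)

# Successive collisions push retries into exponentially larger windows.
for attempt in range(5):
    print(f"attempt {attempt}: retry within {min(64.0, 2.0 ** attempt):g}s, "
          f"e.g. {backoff_delay(attempt):.2f}s")
```

The cap matters: without it, a long burst of collisions would insert so much drag that the system trades a feedback problem for an availability problem.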

The final paper of the workshop was "Just Sick About Security" by Jeff Williams of Arca Systems. It was presented by Bill Wilson. It explored the analogies between responses to disease, and healthcare in general, and computer security. While this is not a perfect analogy, we may be able to learn from the differences as well as the similarities. As we try to deal with increasing complexity in computer systems, we look to the complexity of organisms for ideas. There is a tradeoff between performance and security in organisms; deeply recessed eyes are protected but less effective. Some of the comparisons he made were: fight or flight vs. shutdown, pain vs. auditing, the FDA vs. evaluation organizations, and checkups vs. certification and accreditation. Can we leverage "survival of the fittest" to produce selection criteria that tend to promote better security? In healthcare, wellness is a process. Should security professionals be certified? Can we make use of warning labels? We hope that there's no Ebola coming, and we may use laws and law enforcement against those who intentionally do harm. Discussion included exploration of the analogies, such as the struggle for life, federation wrappers as a sense of self, and evolution as a process of running as hard as you can to stay in place. Software is currently a process of "survival of the fit enough." The state of software was likened to 19th-century patent medicine. In response to the anthropomorphic analogy, Steven Greenwald suggested that computers really don't like it when you anthropomorphize them :-).

Cathy Meadows was tasked with producing a synthesis of the ideas. She noted the emphasis on decentralized, distributed systems; more user control; more flexibility in describing and enforcing policy; and interaction with global policy. Two other threads were: 1. assurance: making a convincing argument that a system will work, using game theory, dynamic systems, medicine, or runtime evaluation; and 2. how can different ideas support each other, using systems, policies, models, and metaphors?

The workshop concluded with a business meeting. Proceedings will be put together in the near future. The initial call for papers for NSPW '97 will be available at NISSC in late October. We're still looking for a publications chair for the next workshop.