NSPW '98: New Security Paradigms Workshop
September 22-25, 1998, Charlottesville, Virginia

by Mary Ellen Zurko, Iris Associates

I apologize to the attendees whose comments I noted but whose names I
failed to record for attribution.

New Security Paradigms Workshop 1998 was held at the Boar's Head Inn in
Charlottesville, VA, and chaired by Bob Blakley (IBM). NSPW is a workshop
set up to foster and explore new approaches to security. The workshop
founder, Hilary Hosmer (Data Security Inc.), chaired the first session, The
Software Life-Cycle.

The first paper, "Toward A Secure System Engineering Methodology", by
Chris Salter (NSA), O. Sami Saydjari (DARPA), Bruce Schneier (Counterpane
Systems), and James Wallner (NSA), was presented by Sami. Sami is a DARPA
PM interested in redirecting community energy toward solving the hard
problems of information system security engineering. He suggests that the
government contribute threat models and the community contribute its
expertise in quality system engineering. He believes that current DoD
issues, such as the need to adopt new advances in network technology and
to fully integrate all aspects of the system, reflect industry issues. The
DoD needs to use commercial technology, standards-based applications,
crypto and public key infrastructures, and needs to ensure that the
information it manages is available to the users who need it. They need
automated systems that
remove the latencies in the system. Sami sees technology as the key driving
factor; if the technology is irresistible, the DoD will use it. Sami
suggests that we work on determining and then strengthening the weakest
link: the place where the work factor for the adversary is lowest. We
should also model both adversary behavior and the effectiveness of
countermeasures to
determine the likeliest targets. We need to determine how to build a secure
system from flawed and tampered pieces. We need to take a look at the
community that creates reliable systems out of unreliable components. We
need to make flexible, controllable systems so that they can be changed as
the adversary's strategy changes. He recommends the Joint Vision 2010 paper
for understanding the future of warfare. There was much discussion about
whether the DoD recognizes this sea change, and if so, what parts. Mike
Williams pointed out a paper on a defect-tolerant computer architecture
from Science this year, by the UCLA department of chemistry and HP.
Another participant suggested that our old adversarial model is obsolete.
It's
not a simple chess game but a game with multiple sides with shifting
alliances, cheating, shifting rules, and so on. Another argued that
efficiency is often the enemy of survivability. Sami argued that we need to
do them both; that's the paradigm shift he's looking for.

The second paper was "Security Engineering in an Evolutionary Acquisition
Environment" by Marshall D. Abrams (The MITRE Corporation). His paper is
about engineering as opposed to computer science and advocates evolutionary
system development as a new paradigm to help understand large systems.
Since you cannot state the requirements at the beginning he is trying to
merge the evolutionary spiral model with security engineering to put
security in the system from the beginning. He is working on an FAA project
right now that is attempting to do this. They are going to learn from
experience; the first iteration will be highly imperfect. The cycle is
called "the wheel of reincarnation". They will analyze the security risk
(which involves considerable judgement), develop a risk management plan
(where the biggest risk is career risk), and choose a mix of
countermeasures that provide adequate protection within available funding
without impeding the efficient flow of information to those who require
ready access to it. Then they'll develop the system, apply system security
engineering tools, test and verify. At that point, the next iteration
begins. They're about ¾ through the first iteration of trying this model
out. Establishing and controlling requirements and reviewing against them
has worked really well. The contractor has worked less well; it's been hard
getting everyone working on the same objectives. They've insisted on putting
all security features in the prototypes, because prototypes get fielded.
Some discussion touched on whether to assume that all software was already
subverted. Someone called it the "rhinoceros in the parlor": something
that we all know about but don't want to discuss. Another attendee argued
that older logic bombs may be less useful than newer ones. It was pointed
out that releases are concurrent, not sequential, in industry. There was
also a question about what happens when the architecture rules out needed
changes. The only answer seemed to be you would try something else.

Session 2, on Protection, was chaired by John Michael "Mike" Williams.

Its first paper was "An Integrated Framework for Security and
Dependability" by Erland Jonsson (Chalmers University of Technology). His
goal was to provide a framework that encourages measurements of security.
His aim is to make the concept of security clearer (as it must be
well-defined to be measured), though he may have simply taken the confusion
to a higher level. He postulates that security and dependability are
complex attributes of the same meta-concept. His approach is a regrouping
and redefinition of the concepts within a novel framework based on the
preventative (input) and behavioral (output) characteristics of an object
system. The measurements are also divided between input and output and
address the difference between authorized and non-authorized users (users
and non-users). He suggests defining preventive measures based on the
effort it takes to make an intrusion (clearly a theme of several of the
papers here). Data collection for the modeling of the effort parameter can
be done by means of performing intrusion experiments in a reference system.
His framework rearranges and merges the dependability attributes of
reliability, availability, and safety and the security attributes of
availability, confidentiality, and integrity. He categorizes integrity as
preventative and confidentiality and availability as behavioral. He wants
to measure fault introduction, failure, and the correctness of the system. An
attendee noted that the non-repeatability of measurement is an issue.

The next paper was "Meta Objects for Access Control: A Formal Model for
Role-Based Principals" by Thomas Riechmann and Franz J. Hauck (University
of Erlangen-Nurnberg). Thomas presented. His Security Meta-Objects (SMOs)
are attached to object references. They can intercept calls, control
propagation and check for attachments. SMOs can implement access checks and
provide principal information. All aspects but this last one were discussed
in a previous paper. Principal SMOs provide information on whose behalf
calls are executed. A reference with a principal SMO is a capability for a
call on behalf of the principal. As a pure capability, it can't leave the
application part. Newly obtained references inherit the principal SMO. The
principal SMO detaches itself when the reference is passed out of the
application part. It is automatically invoked by the runtime system when
references are passed. His paper contains a formal model of domains, object
references, method invocations, the principal SMOs and their global policy.
SMOs can also be programmed to act as roles. He has a Java prototype of his
work. Discussion included whether this was a delegation mechanism (it's
not) and whether anonymity at the target object is supported. Simon Foley
posited that the paradigm is a separation of concerns, separating security
functionality out from functionality. He asked about interactions with
other meta objects for things like synchronization. Bob Blakley pointed out
that in CORBA, they have to do certain things in certain orders but haven't
run into any cycles yet.
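
To make the flavor of the mechanism concrete, here is a rough sketch of my
own in Python (the authors' prototype is in Java, and these class and
method names are invented, not theirs): a wrapper plays the part of a
reference with an attached principal SMO, intercepting each call for an
access check and detaching the meta-object when the reference leaves the
application part.

    class Principal:
        def __init__(self, name):
            self.name = name

    class PrincipalSMO:
        """Toy security meta-object: carries principal information and
        checks every call made through the reference it is attached to."""
        def __init__(self, principal, allowed_methods):
            self.principal = principal
            self.allowed = set(allowed_methods)

        def check_call(self, method_name):
            if method_name not in self.allowed:
                raise PermissionError(
                    f"{self.principal.name} may not call {method_name}")

    class SecureRef:
        """A reference with an attached SMO; every method call is intercepted."""
        def __init__(self, target, smo):
            self._target = target
            self._smo = smo

        def __getattr__(self, name):
            self._smo.check_call(name)           # access check before dispatch
            return getattr(self._target, name)

        def detach(self):
            """Leaving the application part: hand out the bare reference."""
            return self._target

    # Usage idea: wrap a reference so it acts as a capability for one principal,
    # e.g. SecureRef(printer, PrincipalSMO(Principal("alice"), {"print_job"})).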

Session 3 on Integrity was chaired by Cristina Serban (AT&T Labs).

The first paper was "Evaluating System Integrity" by Simon Foley
(University College, Cork). He posits that there is no such thing as a
security property, just correctness or dependability properties. Thus, the
new paradigm is the old paradigm for correctness. He goes on to consider
what system integrity is. Biba and Clark & Wilson define how, but not what,
to achieve. There is no guarantee that a user cannot use some unexpected
route to bypass integrity controls. He considers system examples involving
both humans and computers. He defines dependability as a property of an
enterprise about the service it delivers, whereby the enterprise may
include details about the nature of the reliability of its infrastructure.
Assurance is the argument that the system is dependable. A safety
refinement specifies that everything the implementation does is legal. We
might be talking about functional correctness all the time; the failure
model is the thing that changes. Statistical attacks can't be captured in
the framework as it currently exists. It does provide a definition for
external consistency: correspondence between the data object and the real
world object it represents. Simon then summarized. The same property
characterizes different attributes of dependability; there are different
attack and fault models. When a designer claims a system is fault tolerant
or that a protocol properly authenticates, the designer is claiming the
system is dependable. Security verification may be correctness
verification. An attendee asked what it takes to create a dependable
internet worm. There was also discussion about duration of dependability
needing to be covered.

The next paper was "Position Paper: Prolepsis on The Problem of
Trojan-Horse-Based Integrity Attacks" by John McDermott (NRL). A prolepsis
is an argument against an objection before the objection is raised. While the
previous paper covered process integrity, this is data integrity. His paper
is attacking a weak link, and talking heuristics, not formalisms. John
stated that when people consider whether trojan horses are really a
problem, they tend to respond either that they're not there or that it's too
hard a problem to consider. One attendee asked about the difference between
a trojan horse and erroneous code. John stated that there is not a lot of
difference between a trojan horse and a DLL. One attendee suggested the
difference is that one is designed to do something bad for you while the
other is badly designed to do something for you. Sami took the contrarian
position that people underestimate how hard it is to create trojan horses
to do anything other than denial of service. His specific point of view was
that it is difficult to have goal-directed national impact. Marv responded
that all you need is Visual Basic for applications or access through a
similar meta trojan horse. John pointed out that if you're using commercial
off-the-shelf technology, knowledge of the system is available and not a
barrier. Trojan horses are easy to write; they're not very big. John then
asked us how we'd prevent the propagation of trojan horses. Hilary
suggested configuration management procedures. Another attendee suggested
code signing. John pointed out that Authenticode can't be applied to a DLL.
He stated that Byzantine general and threshold schemes for keys work, but
they won't make it into real products and systems, because of the overhead.
He advocates the use of logical replication, session replay, and
pre and post condition checks. He noted that redundancy is expensive, and
bulk replication is the cheapest strategy. One attendee asked how you tell
which is the correct copy. John suggests using a person to look at it. Bob
pointed out that logs of updates may help. An attendee noted that updates
are accepted fairly blindly now.
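
As a rough illustration of two of the heuristics John mentioned, here is a
small Python sketch of my own (not from the paper): a majority vote over
replicated copies of a data item, and an update guarded by pre- and
post-condition checks. The function names are invented.

    from collections import Counter

    def majority_copy(replicas):
        """Return the value held by a strict majority of replicas, else None."""
        value, count = Counter(replicas).most_common(1)[0]
        return value if count > len(replicas) / 2 else None

    def checked_update(record, update, precondition, postcondition):
        """Apply an update only if its pre- and post-conditions hold."""
        if not precondition(record):
            raise ValueError("precondition failed; update rejected")
        new_record = update(record)
        if not postcondition(record, new_record):
            raise ValueError("postcondition failed; update rejected")
        return new_record

    # Three copies of a balance, one tampered with; then a guarded withdrawal.
    print(majority_copy([100, 100, 999]))                        # -> 100
    print(checked_update(100, lambda bal: bal - 30,
                         precondition=lambda bal: bal >= 30,
                         postcondition=lambda old, new: new == old - 30))  # -> 70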

The first session on Thursday was on Assurance, chaired by Marv Schaefer
(Arca Systems, Inc.).

The first paper was "Death, Taxes, and Imperfect Software: Surviving the
Inevitable" by Crispin Cowan, Calton Pu (both of Oregon Graduate Institute
of Science & Technology), and Heather Hinton (Ryerson Polytechnic
University). Crispin presented. Their work is aimed at surviving attacks
against insecure systems. Security bugs are chronic and normal, so they are
promoting security bug tolerance. The paper categorizes techniques for
doing this. Crispin stated that it's hard to produce perfect security and
its overhead degrades the appeal of the system. Customers don't purchase
secure OSes. It's completely rational for vendors to give customers what
they want. Attendees countered, pointing out that virus protection is a
thriving industry and that many ads do mention security. Crispin responded
that ads don't mention correctness. There are a variety of bug
tolerance techniques. For games, no one cares if they crash. People using
editors checkpoint regularly. When the OS crashes, they reboot. Replication
is effective against independent failures, but an attacker may explicitly
corrupt backups. An attendee noted that checkpointing is only successful if
you know what a secure state is. For surviving attacks that exploit bugs,
Crispin noted that fault isolation has worked. Each component should stop
exporting its bugs and stop exporting other components' bugs. Their work
categorizes survivability adaptations by what is adapted vs. how it is
adapted. The "what" can be the interface or the implementation; the "how"
can be a restriction or a permutation. Interface restrictions, such as
access control,
can be static or dynamic. The paper gives examples within each of the
categories. Crispin said that intrusion detection is vital to dynamic
restrictions; however, an attendee pointed out that the *-property as
initially defined was adaptive to history, not intrusion detection. Crispin
suggested adding redundant checks to code. An attendee pointed out that
could add bugs. Crispin suggested using automatic mechanisms. The attendee
countered that those could also have bugs. Someone pointed out that those
tools could be written by people who care more, like security people. The
alternative technique is removing vulnerable code, such as when you
configure a firewall or (attempt to) turn off Java in your browser.
Restrictions make attacks less damaging, while permutations make attacks
more expensive to mount. Permutations offer the benefits of security
through obscurity persistently. An attendee pointed out that "security
through obscurity" traditionally refers to algorithms. Interface
permutations may make the attacker cautious, and they increase the
complexity of the search space. Fred Cohen's Deception Toolkit was the
only example cited for this class. An
attendee pointed out that implementation permutations may insert a lot of
bugs. Crispin suggested this was only difficult because our programs are
over-specified in languages that are too low level. An attendee asked how
you can measure how good it is, a theme that recurred throughout the
workshop.
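
For one cell of that what/how grid, a dynamic interface restriction might
look something like the following Python sketch (my own illustration with
invented names, not code from the paper): a wrapper narrows a component's
interface when an intrusion detector raises a flag.

    class GuardedService:
        """Dynamic interface restriction: risky operations are withdrawn
        once an intrusion detector reports suspicion."""
        def __init__(self, service):
            self._service = service
            self._restricted = False

        def intrusion_suspected(self):
            # Called by a detector; from now on, refuse risky operations.
            self._restricted = True

        def export_file(self, path):
            if self._restricted:
                raise PermissionError("export disabled while under suspicion")
            return self._service.export_file(path)

        def read_status(self):
            # A low-risk operation stays available even when restricted.
            return self._service.read_status()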

The next paper was "A Graph-Based System for Network-Vulnerability
Analysis" by Laura Painton Swiler and Cynthia Phillips (Sandia National
Laboratories). Cindy presented. They are interested in quantitatively
measuring the operational security of a system in comparison with others.
The quantities are estimates for gross comparisons that identify the
dominant vulnerabilities and enable you to examine the configuration and
policy issues for a given system. They represent a system as a graph with a
goal node. A path from some start node to the goal node represents an
attack. It can be as comprehensive as the set of attacks you understand.
The analysis produces the "best" paths from the point of view of the
attacker. It may also provide useful simulations. There are a variety of
inputs into the system. An attacker profile has capabilities that can be
changed as the attack proceeds. Default profiles represent stereotypes. They
are steering clear of human behavior issues; they can't figure out how to
quantify cleverness or intelligence. They plan on prototyping the system in
a year, which won't be as general as their model. A configuration file
documents the system hardware, software, network, routers, and so on. The
attack template contains information about generic and atomic steps in
known or hypothesized attacks. These steps have a start node, an end node,
and conditions that must be met. The edges between the nodes have weights.
The source of weights can be expert opinion, statistical data, or
experimentation. Edges represent a change in state in the attack graph. The
system may be able to recognize new permutations of attacks. They want to
generate the set of near optimal paths as a reflection of total system
security. They are modeling at a very fine granularity and hoping that
produces reasonable probabilities for each step. This approach only signals
vulnerabilities that can be part of a complete attack. There was some
discussion over whether that was good or bad. They are starting to look at
Prolog matching and unification, and would like to look at optimal defense
placement. There was some discussion on how much will be classified (for
example, the profile of national scale attacker). They hope to make as much
as possible unclassified. An attendee pointed out that some actions can
sometimes be benign or part of an attack, and that particular input into
the system could be very large.
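
The flavor of the path computation can be captured in a few lines. The
Python sketch below is mine, not Sandia's tool: edges carry a cost to the
attacker, and Dijkstra's algorithm returns the cheapest path from a start
node to the goal node. The node names and weights are made up.

    import heapq

    def cheapest_attack_path(edges, start, goal):
        """edges: {node: [(next_node, cost), ...]}; returns (total_cost, path)."""
        queue = [(0, start, [start])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, step_cost in edges.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (cost + step_cost, nxt, path + [nxt]))
        return float("inf"), []

    edges = {
        "outside":     [("web server", 3), ("modem pool", 6)],
        "web server":  [("file server", 4)],
        "modem pool":  [("file server", 2)],
        "file server": [("payroll db (goal)", 1)],
    }
    print(cheapest_attack_path(edges, "outside", "payroll db (goal)"))
    # -> (8, ['outside', 'web server', 'file server', 'payroll db (goal)'])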

Session 5 was Tough Stuff, chaired by Cathy Meadows (NRL). Cathy humorously
pointed out that the session was so tough that we had to put a lunch break
in the middle of it.

The first paper was "The Parsimonious Downgrading and Decision Trees
Applied to the Inference Problem" by LiWu Chang and Ira S. Moskowitz (NRL).
Ira presented, though he redirected several of the tough questions to LiWu.
This paper deals with declassification of information from high to low. In
the ideal world (from high's perspective), low is stupid and stays stupid.
In reality, low is stupid and gets smart. Stuff has to be sent from high to
low. There is a national order to declassify more stuff, more quickly.
Their work uses a decision tree technique to determine the probability of
the value of missing data based on the existing base set. Their paper has a
simple, easy to follow example involving hair color, lotion use, and burn
state. They calculate conditional entropy; 0 is best. Their technique gets
the "best" rules for interpreting the data based on information theory
(which values in which columns imply which other values in the column of
interest). They make the maximum use of the available information. They
then go on to use parsimonious downgrading to keep low from learning the
rules. The goal is to just take out the right piece of high data to mess up
the entropy/temperature. They use a Bayesian approach, which is extremely
controversial. Ira noted the difficulty in finding the right data to remove
in terms of both functionality and security requirements. He suggested we
might use utility functions from economics. Dan Essin was concerned about the
damage from the extra information that is withheld, particularly if it is
concerned with personally identified records. Someone noted that the census
bureau will put out data that is changed to control inference. Ira is not
against putting a person in the loop. Another attendee suggested they
investigate how expert downgraders do it.
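
To show the sort of calculation involved, here is a small Python sketch of
conditional entropy over a toy table in the spirit of the paper's
hair/lotion/burn example (the rows below are invented, not the paper's
data). The attribute with the lowest conditional entropy yields the "best"
rule, and downgrading then aims to remove just the data that spoils such
rules.

    import math
    from collections import Counter, defaultdict

    rows = [
        {"hair": "blond", "lotion": "no",  "burn": "yes"},
        {"hair": "blond", "lotion": "yes", "burn": "no"},
        {"hair": "red",   "lotion": "no",  "burn": "yes"},
        {"hair": "brown", "lotion": "no",  "burn": "no"},
        {"hair": "brown", "lotion": "yes", "burn": "no"},
    ]

    def conditional_entropy(rows, attribute, target="burn"):
        """H(target | attribute) in bits; 0 means the attribute fully
        determines the target column."""
        groups = defaultdict(list)
        for r in rows:
            groups[r[attribute]].append(r[target])
        h = 0.0
        for values in groups.values():
            weight = len(values) / len(rows)
            counts = Counter(values)
            h += weight * -sum((c / len(values)) * math.log2(c / len(values))
                               for c in counts.values())
        return h

    print(conditional_entropy(rows, "lotion"))   # about 0.55 bits
    print(conditional_entropy(rows, "hair"))     # about 0.40 bits: the better rule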

The next paper was "Server-Assisted Cryptography" by Donald Beaver
(IBM/Transarc Corp). Don is interested in parsimonious trust and
server-assisted (commodity-based) cryptography. He wants to obtain crypto
resources from service providers and increase their robustness through
composition. He doesn't want to have to go to trusted places. He wants to
pick random places because it's hard to corrupt everyone in the world. The
client should only have to use simple software (not full crypto) to use
these resources. He uses composition to get the opposite effect of weakest
link. His talk covered the evolution of large systems, modeling trust and
risk, changing the trust model for trusted third parties (TTPs), changing
the interaction model for threshold cryptography, and some examples. The
evolution and design of large systems involves division of labor,
specialization, replication, compartmentalization, differentiation,
increased functionality, and translation. Don considers if and how these
apply to cryptography, security, and crypto tools. He focuses on division
of labor, specialization and differentiation. An attendee noted that
separation of duties is a division of labor. The common extremes of trust
models are "do it yourself crypto" which may not scale well, and big trust
in particular parties, which scaled moderately well using coordinated,
compartmental systems. He suggests democracy as an uncommon extreme (or it
may be communism, he's not sure): trust everybody but trust nobody. He
points out that threshold signature schemes are complex and highly
interactive (unscalable). He aims to make TTPs less vulnerable, and
democracies less interactive. We trust the TTP for functionality and
discretion. He would like to minimize vulnerability by not relying on the
TTP's discretion. Information only flows from the third party, not to it.
One example that he gave was to take one-time pads from multiple parties
and XOR them. As long as one is good, you get an improved one. Discussion
concerned just how much discretion we require from TTPs such as CAs, and
the fact that no zero knowledge proof can prove that an authentication is
correct.
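
A minimal sketch of the one-time-pad example, assuming pads fetched from
several randomly chosen providers (the fetch step is faked here with local
random bytes): XOR the pads together, and the combined pad is at least as
strong as the best single contribution, so no one provider's discretion
has to be trusted.

    import secrets

    def combine_pads(pads):
        """Bytewise XOR of equal-length pads from independent providers."""
        length = len(pads[0])
        assert all(len(p) == length for p in pads)
        combined = bytearray(length)
        for pad in pads:
            for i, b in enumerate(pad):
                combined[i] ^= b
        return bytes(combined)

    # Stand-ins for pads bought from three commodity servers.
    pads = [secrets.token_bytes(16) for _ in range(3)]
    pad = combine_pads(pads)

    message    = b"attack at dawn!!"                             # 16 bytes
    ciphertext = bytes(m ^ k for m, k in zip(message, pad))
    recovered  = bytes(c ^ k for c, k in zip(ciphertext, pad))
    assert recovered == message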

The final session of the day was chaired by Mary Ellen Zurko (Iris
Associates). It was a discussion topic called "What is the Old Security
Paradigm?" led by Steven J. Greenwald (Independent Consultant). Discussion
began with the paper's title. A reviewer had suggested that there might not
be just one. Someone suggested the evaluation, pluggability, layerability
paradigm from the rainbow series, which assumes that systems can be
separately evaluated and composed. Someone else suggested confidentiality,
availability and integrity (CAI), which was the thrust of Steve's discussion
paper. Steve suggests we might want to formalize or model one or more "old"
paradigms so that we can be rigorous and to allow for comparison with
suggested new paradigms. The old paradigm is not necessarily obsolete. An
attendee commented that just because we're using it right now doesn't mean
it's not obsolete. The old ones are still working, sometimes in new-paradigm
systems. They are useful for teaching, preserve knowledge and history, and
allow us to learn from our past. Steve suggested three contexts for the old
paradigm: government, military, and commercial. An attendee added the
personal context, which may be only for new paradigms. Steve's survey
starts in 1880, with the census, moves to 1945 when computers emerge,
transitions in the 1950's when computers were mass produced, and targets
the late 1970's as the beginning of the current era when computer security
emerges as a truly separate field. An attendee questioned his emphasis on
computing while ignoring communications. An attendee commented that by
the mid-1960's, when most commercial banks had installed computers, the
Y2K problem was already being inserted. Someone noted that they had to
worry about people born in the 1800's who were still alive. There was
discussion on the
Ware report (which was from the 1940's but classified until 1969) and the
Anderson report (which provided the initial formal model). Someone noted
that just because it's not one of the old paradigms we have identified
doesn't mean it's a new paradigm. Our memories tend to be somewhat
selective. There are current myths about the Orange Book: that it was all
or nothing (nope, there were levels) and that we thought it solved
everything (nope; there was "beyond A1" and the authors were deeply aware
of many unsolved issues). Someone noted that the Ware and Anderson reports
were definitive, brilliant, written in English, and currently hard to find.

The final session was on Availability, chaired by Brenda Timmerman
(California State University, Northridge).

The first paper was "Tolerating Penetrations and Insider Attacks by
Requiring Independent Corroboration" by Clifford Kahn (EMC Corporation).
His talk covered the notions of independent corroboration and compromise
tolerance, a formal model, grounding the model, and limitations and
directions. The goal is to tolerate compromise of (diverse) information
sources. The work is applicable when information may be compromised, there
are redundant, somewhat independent information sources, but they are not
too redundant (only a few information sources know whether a given
assertion is true). The word "independence" covers a lot of common-sense
(hard) reasoning, yet humans make fairly good judgments about independence.
We consider how trustworthy we think each party is, whether we know (or
suspect) any compromising connections between the parties, what barriers we
know of between the parties, and the set of interests relative to which we
are judging. We use a diagnostic approach. Cliff's work models trust as a
number between 0 and 1, which indicates the probability of compromise of
the information source. It also models compromising connections as a set of
influences (institutions, vulnerabilities, relationships (marriage)) and a
strength-of-influence matrix (with a row for each influence and a column
for each principal). Entries are numbers between 0 and 1. Barriers are
modeled with a similar matrix. He also models the set of interests. The
analyzer's interests affect the trust metrics. The model gives the analyzer
the full trust metric of 1. An attendee pointed out that that assumes you
make no mistakes. We cannot estimate the probability of compromise with
precision. A rough estimate may suffice. We might tune it with a learning
algorithm and train the learning algorithm with human oracles. The model
has no influences on influences, so it has to flatten the chains of
influence. Attendees pointed out that the model assumes influences are
tree structured; marriage is a circular graph. A compromise-tolerant
system keeps
working even if some of the components are compromised, including human
operators. An attendee noted that removal of an operator doesn't remove the
influence. Hilary pointed out that the model misses the lone prophet who
says something is going to change tomorrow.
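
A crude Monte Carlo sketch (my own simplification, not Cliff's model)
shows why the influence matrix matters: two sources that look independent
on their base trust numbers become far more likely to fail together once a
shared influence with a high strength-of-influence entry is added. The
numbers are illustrative.

    import random

    def p_both_compromised(trials, base, influence_prob, strength, seed=0):
        """Estimate P(both sources compromised).
        base:           each source's probability of compromise on its own
        influence_prob: probability that the shared influence is hostile
        strength:       per-source probability the hostile influence subverts it
        """
        random.seed(seed)
        both = 0
        for _ in range(trials):
            hostile = random.random() < influence_prob
            compromised = [(random.random() < base) or
                           (hostile and random.random() < s) for s in strength]
            both += all(compromised)
        return both / trials

    print(p_both_compromised(100_000, base=0.05, influence_prob=0.0,
                             strength=[0.9, 0.9]))  # ~0.0025: naive independence
    print(p_both_compromised(100_000, base=0.05, influence_prob=0.1,
                             strength=[0.9, 0.9]))  # ~0.08: shared influence dominates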

The final paper of the workshop was "A New Model for Availability in the
Face of Self-Propagating Attacks" by Meng-Jang Lin, Aleta Ricciardi (both
of The University of Texas at Austin), and Keith Marzullo (University of
California at San Diego). Meng-Jang presented. The attacks they consider can
replicate themselves and propagate to other systems. This models push
services and mobile agents, among others. They looked at the effect of
scale and dissemination strategies. Their work is based on an
epidemiological model of a simple epidemic and rigid rules for mixing. An
infection cannot be cured. The mixing rules are homogeneous; all processes
are interconnected. They use availability as a metric. The spread of
infection is modeled as a stochastic process. They ask, what is the
availability of the system after having run for some period of time? And,
how long can a system run until the availability is unacceptably low? They
consider four dissemination strategies: peer, coordinator-cohort, ring and
tree. They take these strategies from existing multicast protocols. The
probability of a process being infected when it receives the infection is
always the same. Discussion questioned whether that assumption should be
weakened for some systems. They can approximate a diverse population by calling
them the same with a particular probability. An attendee noted that this
describes password cracking well but describes a denial-of-service attack
on Unix less
well, particularly if there's someone invulnerable in a ring. It also
doesn't model carrier states. Their simulations indicate that the ring is
best for the defender but has the worst latency (these two are directly
related in their measurements). The coordinator-cohort is best for the
attacker. In between those two, the peer spreads more than the tree. An
attendee noted that this assumes that all nodes are equally important.
Perhaps more surprisingly, the larger the number of nodes, the more
messages are sent, so that the availability goes down. In the future, they
may look at information propagation and gossip protocols (or advertising)
where infection is desired. Hilary suggested that epidemiological
communication may be analogous to the grapevine.
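
For intuition, here is a toy simulation in the spirit of the model (my
sketch, not the authors' code): homogeneous mixing, an incurable
infection, a fixed per-contact infection probability under a peer-style
dissemination, and availability measured as the fraction of processes
still uninfected after each round.

    import random

    def peer_epidemic(n=100, p_infect=0.3, rounds=10, seed=1):
        """Return the availability (fraction uninfected) after each round."""
        random.seed(seed)
        infected = {0}                       # one initially compromised process
        history = []
        for _ in range(rounds):
            newly = set()
            for _src in infected:
                dst = random.randrange(n)    # peer strategy: contact anyone
                if dst not in infected and random.random() < p_infect:
                    newly.add(dst)
            infected |= newly
            history.append(1 - len(infected) / n)
        return history

    for r, avail in enumerate(peer_epidemic(), start=1):
        print(f"round {r:2d}: availability {avail:.2f}")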