Mary Ellen Zurko's Report on Security and Privacy '99

Symposium Program with URLs for papers and related links.

The 1999 IEEE Symposium on Security and Privacy was held in Berkeley,
CA, May 9 - 12 (at the traditional Oakland venue, The Claremont, which
changed its mailing address from its front door in Oakland to its back
door in Berkeley several years ago). It was the 20th anniversary of
the symposium, and was particularly well attended and lively. I
apologize for not recording the names of the majority of questioners,
and for any names that I got wrong.

The first session, Monday, May 10, was on Systems, chaired by Roger
Needham (Microsoft Research).

The first paper presented was "Hardening COTS software with generic
software wrappers" by Timothy Fraser, Lee Badger and Mark Feldman (TIS
Labs at Network Associates, Inc.). Timothy presented. The paper
addresses the problem of adding security to Commercial Off The Shelf
software that was not designed with security in mind, particularly in
the face of the limited security support in widely deployed OSes like
Linux and WNT. In addition, they want to add security functionality on a
per-application basis, not just limit what applications can do. All this
must be done at an
acceptable cost. They chose the approach of developing and deploying
generic software wrappers, which intercept all interactions with the
OS, including inter-process communications. The wrapper is a small
state machine that examines the parameters and may take any action,
transform the call, and so on. They have wrappers for Solaris and
FreeBSD, and one with less functionality for WNT, available for
downloading. They can perform Mandatory Access Control, do intrusion
detection, provide micro firewalls around processes, provide sandbox
execution environments, participate in new protocols, and increase
assurance by restricting behaviors. However, they cannot provide
information flow control on covert channels, as they can only monitor
explicit events. They can't observe behavior in inverted system calls,
where the request loop runs inside the kernel. They do not provide an
Orange Book style layered assurance argument. They express rich
functionality with a Wrapper Definition Language, provide platform
independence with abstraction, manage and ensure wrapper state with an
Sql-like interface to share and store it, automate deployment with
Boolean expressions, and provide non-bypassable control with a
loadable kernel module architecture. A wrapper references an interface
and says which events it wants to intercept from that
interface. Timothy presented some performance numbers, indicating
about 1.5% slower end user application performance, a 3% penalty for
an HTTP Webstone benchmark, and a 6% penalty for a kernel build (all
unoptimized). Wrappers are duplicated for forked processes. In wrapper
composition, the first wrapper gets to intercept a call out first, the
last works on a call back in first. Several questioners brought up the
issue that this still provides no protection from a flawed OS, though
Timothy pointed out they do run in kernel space. One questioner asked
about the relationship between a parent and forked child
database. There are three levels of scope in the WDL; global database
tables, a middle level for all instances of a wrapper, and one
wrapper on one process. The child wrapper can see the parent's. Another
questioner asked how programs are identified.  It's by the name of the
binary.
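
To make the interception model concrete, here is a rough Python sketch
(mine, not the authors') of a per-process wrapper acting as a small state
machine. The real wrappers are written in the Wrapper Definition Language
and loaded as kernel modules, so the class name, the policy it checks, and
the open() event here are purely illustrative.

    # Illustrative only: the real wrappers are WDL state machines loaded
    # into the kernel; this user-space sketch just mimics intercepting an
    # event, examining its parameters, and denying or transforming it.
    class MicroFirewallWrapper:
        def __init__(self, allowed_prefixes):
            self.allowed_prefixes = allowed_prefixes  # hypothetical policy
            self.denials = 0                          # wrapper-local state

        def on_open(self, path, mode):
            # Examine the intercepted call's parameters.
            if not any(path.startswith(p) for p in self.allowed_prefixes):
                self.denials += 1
                raise PermissionError("wrapper denied open(%r)" % path)
            # Or transform the call, e.g. force read-only access.
            return path, mode.replace("w", "r")

    # Composition order from the talk: the first wrapper sees an outgoing
    # call first; the last wrapper sees the returning call first.
    def call_out(wrappers, path, mode):
        for w in wrappers:
            path, mode = w.on_open(path, mode)
        return path, mode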

The second paper was "Firmato: A novel firewall management toolkit" by
Yair Bartal, Alain Mayer (Lucent Bell Labs), Kobbi Nissim (Weizmann
Institute), and Avishai Wool (Lucent Bell Labs). Avishai
presented. Their paper addressed the problem of telling firewalls what
to do. This generally requires lengthy and complicated lists in a
large environment. Firewalls are configured separately, and they may
be from different vendors. Globally, security managers hope that they
get the policy they want. Their work tries to separate the design of
the global security policy from the firewall vendor specifics, and
separate the design of that policy from the network topology, by
automatically generating the configuration files. This can also allow
high level debugging of the rule bases and policies. A model
definition language describes what the security manager wants to
achieve. It goes through a compiler and generates the rule bases for
the firewalls. The rule illustrator takes a rule base and displays it
graphically, in a hopefully easy to understand manner. They use the
term "role" for a property which may be assumed by hosts on the
network. Roles define positive capabilities between peers. You need a
pair of roles to specify that capability. Whatever is not specifically
allowed is disallowed. They also have inheritance. They support host
groups in an IP range, and any role for the subgroup is inherited by
member subgroups. The notion of a "closed role" is
 used for roles that should not be inherited. Policy and network
topology are also input to the parser. This generates rules in a
central rule base, which can be optimized. The rule distributor cuts
the rule base into pieces and puts the pieces on firewalls. A rule
illustrator helps with debugging the compiler. You can look at the
rules generated. It also allows backwards engineering. If a customer
has hand written rules, it can help the transition to Firmato. The
graphical display shows host groups, location with respect to
firewalls, containment trees, and services. The Firmato compiler is
a working prototype that currently supports the Lucent firewall
product. Questioners asked if they had shown the rule illustrator
output to any security managers. They tried it out on their home
system and showed it to the people running it. They found rules that
were left over, and there were some surprises in the structure of host
groups. Another questioner asked about correctness preserving
properties between the compiler and illustrator. They were implemented
by a small group of people. Avishai stated that "It's a good idea to
get both right." Another questioner noted that the visualizer does
some decompilation, and asked if they had thought about decompiling
the rule base into policies. Avishai stated you can lose information
on the way. For example, some firewalls don't support what their
language supports. Another questioner noted that it was disconcerting
to bind topology at compile time, since it changes frequently and
unexpectedly. Avishai replied that they are modeling a limited part
of the topology intentionally. It does not include routing
information. In response to questions, Avishai noted that it can't
help with deciding what goes on each side of the firewall, and that
detecting policy conflict was future work. They are not showing
application proxy firewalls,
 just packet firewalls.
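
As a rough illustration of the compilation step (my own sketch, not
Firmato's model definition language), the following turns role pairs plus
a toy topology into per-firewall allow rules, with anything not explicitly
allowed implicitly denied. All names, addresses, and the data layout are
invented.

    # Rough sketch only: roles grant positive capabilities in pairs, and
    # the compiler combines policy with topology to decide which firewall
    # gets which allow rule. The default is deny.
    host_groups = {"corp_net": "10.1.0.0/16", "web_srv": "192.0.2.10/32"}
    roles = {"web_client": "corp_net", "web_server": "web_srv"}
    policy = [("web_client", "web_server", "http")]  # (initiator, peer, service)
    topology = {"fw_dmz": {"web_srv"}, "fw_internal": {"corp_net"}}

    def compile_rules():
        rules = {fw: [] for fw in topology}
        for src_role, dst_role, service in policy:
            src, dst = roles[src_role], roles[dst_role]
            for fw, behind in topology.items():
                if src in behind or dst in behind:  # rule is relevant here
                    rules[fw].append(
                        (host_groups[src], host_groups[dst], service, "allow"))
        return rules

    print(compile_rules())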

The third paper was "Flexible-policy directed code safety" by David
Evans and Andrew Twyman (MIT). David presented. Their goal is to
prevent programs from doing harmful things while allowing useful
work. They want to protect against malicious attacks, as patching code
to prevent them after the fact is a losing battle. They are also
concerned about buggy programs (which are more common) and user
mistakes (from bad interfaces). Each of these may be represented by
different kinds of policies. For example, you can be more precise
about buggy programs. Their work takes a program and a safety policy
and outputs a safe program. They want to reuse their descriptions of
safety policies across multiple platforms. Examples of different
policies include access constraints (including JDK policies), resource
use limits, application specific policies (such as one for tar),
behavior modifying policies, and soft bandwidth limits (that slow down
the application). The user view is of abstract resources like files,
not system calls and disks. They want to express the policies at
the user level but enforce them at the system level. They enforce them
at the level of the system library. A safety policy definition
includes a resource description, a platform interface, and a resource
use policy. The resource description is a list of operations with
parameters and a comment. The platform interface gives it meaning. A
policy is a constraint on the resource manipulations. They can enforce
any policy that can be defined by checks that can occur on resource
operations. They cannot constrain CPU usage. To enforce policies, they
run them through a policy compiler to produce a policy enforcing
system library. An application transformer takes a program and makes
it policy enforcing by forcing it to use that library. They have code
working on JavaVM and Win32. It scans for kernel traps, so certain
good programs won't run. It uses software fault isolation to prevent
jumps around the wrappers. They replace DLL names in the import
table. "The most surprising result is that it works." They can
support policies on Microsoft Word. Their testing showed that PKZip
performance degraded 12% using a policy that limits writes.  The
overhead will depend on the policy and application. It's faster than
JDK because it generates policies statically.  They can optimize out
unnecessary checking. Their JavaVM needs to be attacked. It's
available on the web. You can upload a program, select a policy, and
have it execute on their machine, though "you don't actually have to
bring down our network." One questioner said he had had problems with
certain programs when replacing DLL names in import sections. The
authors had not yet run into difficulties. Questioning brought out
that they don't allow self modifying code, which precludes Lisp. They
also have to worry about reflection being used in Java. When asked
about covert channels, they stated that while their web site will send
out the violation message, in practice, no information goes back to
the code provider. Only the user sees the policy violation.
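
The enforcement idea can be sketched as a policy-enforcing stand-in for a
library write routine, with the check made at the level of the abstract
resource rather than the system call. This is only a toy analogue of the
generated policy-enforcing system library; the limit and all names are
made up.

    # Toy analogue of a policy-enforcing library routine: a resource-use
    # policy (a cap on total bytes written) checked where the abstract
    # "write to a file" operation happens, not at raw system calls.
    class PolicyViolation(Exception):
        pass

    WRITE_LIMIT = 1_000_000  # made-up policy parameter
    _bytes_written = 0       # resource-use state kept by the policy

    def checked_write(fileobj, data):
        """What the transformed application calls instead of write()."""
        global _bytes_written
        if _bytes_written + len(data) > WRITE_LIMIT:
            # Only the user sees the violation; nothing is reported back
            # to the code provider.
            raise PolicyViolation("write limit exceeded")
        _bytes_written += len(data)
        return fileobj.write(data)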

The second session was on Policy, chaired by Ravi Sandhu (George Mason
University).

The first paper in the session was "Local reconfiguration policies" by
Jonathan K. Millen (SRI International). His work covers the
intersection of multilevel database security (the aggregation problem)
and system survivability (fault tolerance and reconfiguration). A
reconfiguration is a state change, usually forced by a component
failure. Services should continue to be supported if possible. A
reconfiguration policy says what state changes are supported. For
example, multiple services may be supported by different combinations
of resources. A state is a set of services and a composition of
resources. A configuration has to map the services to the
components. For example, a router may be shared for both printing and
dialin. Components can be non-sharable. In a multi-level secure
database, records have sensitivity labels. A low level collection can
be more sensitive than might be expected. An example is the NSA phone
book. So, sensitivity levels apply to aggregates as well as
datasets. Information flow is related to an aggregation. It can be
allowed or not between certain aggregates. Flow is permitted if the
sensitivity of the recipient dominates (a safe flow). You also need to
make sure the union of datasets is allowed to have information flow if
both have flows to a single recipient, since it's a new
aggregation. Meadows showed that there is a unique, maximal safe flow
policy that can be computed. Jon then drew an analogy between
aggregation and reconfiguration. Aggregates are like compositions, and
datasets are like components. He needed a special lambda to correspond to
sensitivity level. Flow policies are like reconfiguration policies. An
induced reconfiguration policy is a flow on states. If the flow is
allowed and realizable, it is allowed in the reconfiguration
policy. He viewed a system as a dataset and examined the consequences
of maximality. First he needed to invent a notion like sensitivity. It
is based on the notion that the more services a component supports,
the more sensitive it is. The sensitivity level of a composition is
the set of service-sets of realizable states supported by that
composition. He uses this formal analogy to discuss safe flow policies
and maximal flow, and to determine if an induced reconfiguration
policy is service preserving. Determining the maximal safe flow gives
a localizable policy (component replacements).  A questioner asked
about future generalizations. Jon wants to extend it to hierarchies of
support.  He would like to apply it to mission critical systems. It
can minimize the amount of information needed to make rational
decisions about substitutions.
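
A tiny sketch may help fix the analogy's dominance check: treat the
"level" of a composition as the set of service-sets of realizable states
it supports, and allow a flow only when the recipient's level dominates
(modeled here as set containment) the sender's. This is my own
illustration of the construction, not Jon's formalism, and the example
states are invented.

    # Illustration only: a composition's "level" is the set of service-sets
    # of realizable states it supports; a flow from src to dst is safe when
    # dst's level dominates (contains) src's.
    def level(composition, realizable_states):
        return {frozenset(services)
                for services, resources in realizable_states
                if resources <= composition}

    def safe_flow(src, dst, realizable_states):
        return level(src, realizable_states) <= level(dst, realizable_states)

    # States are (services, resources-needed) pairs; the router is shared.
    states = [({"print"}, {"router", "printer"}),
              ({"dialin"}, {"router", "modem"})]
    print(safe_flow({"router", "printer"},
                    {"router", "printer", "modem"}, states))  # True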

The next paper was "A user-centered, modular authorization service
built on an RBAC foundation" by Mary Ellen Zurko (Iris Associates),
Richard T.  Simon (EMC), and Tom Sanfilippo (Groove Networks). I
presented. (This part of the writeup is based on my presentation
notes, which explains its length.) Our work is called Adage
(Authorization for Distributed Applications and Groups). The goals
were to provide user-centered authorization support for security
administrators and applications, to be policy neutral, to be a modular
part of an evolving infrastructure and to take advantage of new
services as they were needed, and to use a role-based (RBAC)
backend. Our GUI provided affordances for new and intermittent users,
while our authorization language provided extensibility and
programmability for power users. Our APIs were as simple as possible
for application developers. We support a variety of policies,
including Bell and LaPadula, Biba, Chinese Wall, ACL granularity,
roles, separation of duty, and ad hoc policies observed in
nature. Our RBAC backend provided excellent support for user queries
and allowed us to experiment with deployment issues for RBAC. The user
interface supports three basic entities; actor, application action,
and target. Actors can hold all the principals associated with a
single person, and are needed for separation of duty policies. Targets
use application-specific namespaces. It supports heterogeneous and
homogeneous groupings of those entities. The modular architecture takes
inputs from authentication and attribute authorities and responds to
authorization requests from applications. The front end has been
integrated into the Apache web server, Tcl, CORBA, and third-party
administration clients written in Java. The backend can use a variety
of database and communications technologies. Two databases represent
the user view of authorization information and a simplified engine
view. The authorization language is a declarative language for
defining and performing operations on authorization objects. It's
built on a more general purpose language (currently Tcl). The
management APIs are Tcl pipes which don't need to be altered when new
management commands are added to the system. The RBAC authorization
engine uses roles derived from UI groups to determine the relevant
rules. Attributes on entities and groups determine their levels, and
constrain when actions may be performed. Rules are made up of actors,
actions and constraints, and targets. Policies group rules and form
the context for queries. Only one policy is active at any time. We
conducted two types of usability testing, in addition to our initial
user-centered design techniques such as scenario-based design, to
verify our claim that the system is easy to use. We performed
contextual usability interviews, which capture a rich view of the
problem domain and are good at finding unknown conceptual
problems. Our unstructured interviews included the user's background,
job responsibilities, organizational culture, work habits and
processes, and security policies (if they existed). Our formal
usability testing elicited detailed feedback on a specific GUI
prototype. Users performed typical tasks while thinking out loud, and
an exit questionnaire provides measurement of the user's
experience. This testing verifies that simple tasks are easily
executed, gauges the satisfaction of novice users with the initial GUI
implementation, documents novice user errors and confusions, and
provides fodder for future changes. We asked the user to accomplish
four tasks in one hour; remove a user from the system, add a new user
with appropriate controls, service a complaint about lack of access to
a resource, and design a policy to control the release of
information. We used five subjects from two organizations who had
experience doing security administration with distributed systems. Our
contextual interviews showed that a security administrator's job is
unobtrusive, dynamic, cooperative, and learning oriented. No subject
had a documented, detailed authorization/security policy. They
preferred GUIs, but required command interfaces as well for situations
when they couldn't use a GUI. The formal usability testing verified
that novice users can perform meaningful tasks in under an hour. They
asked how rules combined, and were satisfied with the answer that all
must be passed. They like the notion of flexible, named groups for
targets but underutilized them. We had hoped they would use those
groupings for the fourth task, but most relied on namespace groupings
instead. All of the subjects were unfamiliar with static separation of
duty. The third task was constructed so that the obvious solutions
would run afoul of separation of duty. The error message made the
problem clear to most of the users.  Several administrators suggested
simply circumventing the constraint. On a 6 point scale (1 being
best), they rated ease of viewing the database at 1.4, ease of finding
things at 2.6, and overall satisfaction at 2.8. One questioner asked
about sharing of information across administrative domains. We had
hoped to incorporate Hosmer and Bell's work on multiple policies, but
lacked time. Another asked how difficult it was for users to translate
their policy into the GUI for task 4. Our GUI supported the notions
they wanted and expected to use, like wildcarding. Another asked about
changing constraints and checking backward compatibility. TIS had a
paper on this issue at last year's Oakland. Another pointed out that
testing on military system administrators would likely produce
different results.
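
Since how rules combine came up during testing, a toy sketch of the
evaluation model may be useful: a rule names actors, actions (with
constraints), and targets, and a request succeeds only if some rule
applies and every applicable rule in the active policy passes. This is a
simplified stand-in of my own, not the Adage engine or its authorization
language.

    # Simplified stand-in for rule evaluation: all applicable rules in the
    # single active policy must pass; anything not covered is denied.
    class Rule:
        def __init__(self, actors, actions, targets, constraint=None):
            self.actors, self.actions, self.targets = actors, actions, targets
            self.constraint = constraint or (lambda ctx: True)

        def applies(self, actor, action, target):
            return (actor in self.actors and action in self.actions
                    and target in self.targets)

    class Policy:
        def __init__(self, rules):
            self.rules = rules  # only one policy is active at a time

        def authorize(self, actor, action, target, ctx=None):
            applicable = [r for r in self.rules
                          if r.applies(actor, action, target)]
            if not applicable:
                return False    # not explicitly allowed, so denied
            return all(r.constraint(ctx or {}) for r in applicable)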

The first afternoon session was on Verification, chaired by John
Mitchell (Stanford University).

The first paper was "Secure communications processing for distributed
languages" by Martin Abadi (Compaq Systems Research Center), Cedric
Fournet (Microsoft Research) and Georges Gonthier (INRIA). Cedric
presented. They want to reason about security properties, while still
shielding high level code from both network security concerns and the
complexity of the network. They aim for a high level model with strong
implicit security and network transparency. Their work is based on
distributed implementations of secure RPC and RMI that do
serialization and call cryptographic APIs. The high level model
represents programs in the join calculus. This ensures correct
protocols and is translated to their distributed implementation,
programs in the sjoin-calculus. One correctness theorem is that the
translation does not enable attacks. This means that distribution does
not enable attacks (of the servers, between machines). It is simpler
to reason about security in the high level model than the low level
cryptographic calls. Join-calculus is a simple model of distributed
programming. Processes can send messages on channels. Processes can
also define new channels.  Channel definitions can express
synchronization patterns. Two processes are equivalent when they have
the same behavior in every context of the join-calculus. The goal is
that, no matter what the attacker does, it won't get information by
interacting with a process. Their work is based on capability-based
control. It also deals with integrity and anonymity. Correctness
ensures that various program transformations are sound. At the
implementation level, such properties are hard to achieve. Processes
cannot communicate through private channels. Contexts can express many
realistic low level attacks. Sjoin-calculus uses more details and is
more explicit. It supplements join-calculus with public key
cryptography. It adds keys and ciphertexts, and encryption and
decryption constructs. Two channels represent the public network,
which is available to any process. Processes use these two channels
only to emit and receive. Messages can be observed/intercepted on the
network. The anonymity of a sender is not guaranteed, even if the key
is kept secret. A protocol is correct when its processes satisfy a few
properties in the sjoin-calculus (e.g., communication occurs
silently). They set up countermeasures for message interception,
replay, session interaction, and traffic analysis. The protocols
combine several standard techniques (message replication, challenge
responses, nonces, confounders, noise). For a given process, they
program a filtering context that intercepts every message that escapes
this process and goes to another. Whenever a message crosses the
filter, new filters and processes may be unfolded, to ensure that all
further messages are filtered. The translation depends only on the
interface of the process. The translation reflects the actual
communications processing in the implementation of the distributed
language. Protocols can be uniformly applied to obtain a secure
distributed implementation of a programming language with a precise
definition (as a translation), with correctness theorems, and "with
considerable expense!" Their results do not depend on the particular
program being considered, its runtime distribution, and its
interaction with untrusted machines, for the equations to be
preserved. A questioner asked what they mean by trusted. They mean
running on a machine implementing their language.


The next paper was "Verification of control flow based security
policies" by T. Jensen, D. Le Metayer, and T. Thorn (IRISA). Thomas
Jensen presented. They discuss how to use techniques from the
semantics of programming languages to verify security properties. They
produced a formal program model and specification language for
verifying global security properties of code that may contain local
security checks. They model security properties related to the flow of
control in the program. This can model various Java security
architectures, such as sandboxing (code can call methods from its own
site/sandbox or (certain) local methods), resource protection (A can
only call C via B, no other way), and stack inspection (a method can
be called only if all code participating in the call on the control
stack has permission to do so). Verification is automatic based on
static program analysis and checking. The program model is a control
flow graph. Call edges indicate control, nodes mark that methods have
finished. A program is modeled by a set of call stack
traces. Properties on nodes include origin, endorsement, read/write
permissions, and so on. Properties of traces hold if all stacks
satisfy a given property. If there are loops, the stacks can grow
infinitely high, so can't be verified statically. For given property
and program, they find a bound on the size of the stacks on which a
property has to be checked. Intuitively, the number of next nodes in
the formula decides how many times to unfold the loop. They apply this
work to Java. In related work, Schneider uses a finite state automata
to describe secure behavior and enforce security in a system by
parallel composition with a secure automaton. Wallach and Felten
formalized stack inspection as a belief logic and propose secure
passing style as a generalization. Programs are transformed so that
procedures/methods take an extra argument, their permissions. This
work produced a formalism for specifying security properties based on
temporal logic, and a semantics-based method for automatic
verification of security properties. There is no GUI; no one but
experts can use it. The framework can model the stack inspection from
the new Java security architecture. In the future, they like to take
data flow into account, deal with mutually recursive calls across
protection domains, and specify dynamic segregation of duty-style
properties such as the Chinese Wall. A questioner pointed out that the
control flow information is not necessarily perfect. If the code calls
a base method, it could go many places. In that case, you have to say
it can go anywhere.
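
The stack-inspection property as stated (a method can be called only if
all code participating in the call on the control stack has permission)
is easy to sketch; the frames and permission table below are invented
stand-ins for the formal control-flow model, not the authors' notation.

    # Sketch of the stack-inspection property: the call is permitted only
    # if every code unit on the control stack holds the permission.
    def stack_inspection_ok(call_stack, permission, perms):
        return all(permission in perms.get(unit, set()) for unit in call_stack)

    perms = {"core": {"net", "file"}, "library": {"net", "file"},
             "applet": {"net"}}
    print(stack_inspection_ok(["core", "library", "applet"], "file", perms))  # False
    print(stack_inspection_ok(["core", "library"], "file", perms))            # True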

A panel called "Brief History of Twenty Years of Computer Security
Research" closed the day. The chair was Teresa Lunt (Xerox PARC).

G. R. Blakley (Texas A&M University) started with 20 years of
cryptography in the open literature. Engineers like to make problems,
not solve them.  The Wright Brothers didn't solve any existing
problems; people were institutionalized if they wanted to fly, or burned
at the stake if they did fly. He started with the Paleolog era. In 1974
George Purdy had a high-security log-in scheme based on sparse polynomials,
publicizing the idea of one-way functions.  In 1976, Diffie and
Hellman introduced the general idea of one-way functions, trapdoor one
way functions, and digital signing. In 1977 the US National Bureau of
Standards adopted DES. There was a lot of worry then about trapdoors
in the algorithm. In 1978 RSA was published. Merkle and Hellman
published their knapsack public key cryptosystem. In 1979 Shamir and
Blakley published threshold schemes. In 1980 people wanted to show
that cryptographic systems are secure. Even and Yacobi published a
cryptosystem that was both NP-hard to break and almost always easy to
break. This seemed to support Bob Morris' assertion that crypto
systems have lifetimes. "We are just leaving a period of relative
sanity in cryptography that began shortly after the 1st World
War. During that time, people spoke of crypto systems that were secure
for hours, ... sometime years. Before and after they spoke of crypto
systems that were unbreakable". In the Mesolog, in 1980, the IEEE
Symposium on Security and Privacy began. Davida pointed out that
threshold schemes could exhibit security very far from Shannon perfect
security (merely cryptographic security). In 1984, Meadows and Blakley
introduced ramp schemes that were more economical than threshold
schemes, but exhibit only Shannon relative security. In 1985 Miller
introduced the elliptic curve PKC. In 1994, NIST adopted DSS. That
brings us to the Neolog (today). There is still no proof of security for
DES, knapsack, or RSA. Trapdoor knapsack is basically
broken. Secrecy systems, authentication systems, integrity systems;
few have proofs, all have lifetimes. Electronic signing and sealing
are replacements for inked signatures.

Virgil Gligor (U Maryland) then spoke on 20 years of OS Security (Unix
as one focus), moving us to the inflammatory. 35 years ago, we had
Operating System security. Multics, PSOS, Hydra, CAP, KVM, KSOS,
PDP-11, capabilities, and so on. Much of what followed was based on
this early work. The reference monitor is the way of the future and
always will be. It encapsulated all the properties of the security
policy; you could look at it, prove it, and were done. Then we made
some concessions. Some properties were embedded in outside trusted
processes. Privileged processes could circumvent isolation. Then, we
went to many reference monitors, used recursively at application
level. Untrusted subjects became smaller and smaller, more had to be
trusted and verified. The reference monitor may be necessary, but not
sufficient for policies we're interested in. We can't encapsulate, for
example, denial of service. Information flow property is not a safety
property. Fred Schneider showed we can encapsulate safety properties
in a reference monitor. Otherwise, have to look at the untrusted code
itself. We want to make sure no Trojan horse exploits information
flow. We prove information flow of approximations of systems (not
abstractions). We do not prove the unwinding theorem because the OS is
a large covert channel, so non-interference proofs will always
fail. Penetrate and patch wins in the short run. It requires only a
small investment, assuming penetrations are rare. Open OS is useful;
everyone can take a crack and post the fixes. Open may be more secure
in the long run. In Linux, problems get fixed. He's not sure about
proprietary operating systems. Cookbooks of OS security assurance have
regrettably had a limited role so far. There are no metrics to measure
and they offer largely theoretical advice. Practical advice assumed
that security was a central concern. It really comes after both
functionality and performance. Intrusion detection is of fundamental
concern. We still have to find out what insiders have done. We need
anomaly detection and event signatures for accuracy. Intrusion detection
will fail unless
we can improve performance and filter the events that we really want
to audit. The difference between security and privacy is that we need
privacy guarantees.

Steve Lipner (MITRETEK) spoke on 20 years of criteria development and
commercial technology. He said Virgil gave about half his talk. He
chose a political theme. He asked, Are you better off than you were 20
years ago? Are you more secure than you were 20 years ago? Well,
maybe. We have laptops and constant electronic connection. We might be
holding even. In 1980, we had some identification and
authentication. Most security was in the OS. We had DAC, auditing, and
basic assurance from the vendors. IBM was committed to penetrate and
patch. We wanted MAC, higher assurance by neutral third parties
(beyond the author saying "yep, it's good alright"; we thought the
Orange Book and Evaluated Products List were the answer). We wanted to
process classified information in environments with users not all
cleared to that level. We didn't want information to leak out over the
Internet.  Many sessions early in this conference focused on
evaluation and criteria. We got the Orange Book and evaluations, we got
products into evaluation. Every major vendor through the 80s was
working on B and A level systems. There was an initiative to populate
EPL; C2 by 92! C2 was supposed to be easy. By 1992, we got
identification and authentication, DAC, auditing, and basic assurance
by neutral third parties. But users bought what they bought. The C2
evaluation was after the product shipped.  In 1999, it's the Internet,
stupid! We have firewalls, intrusion detection, cryptography and
public key infrastructures, and strong authentication. You can buy
lots of security products; products and technologies that actually
improve security, are compatible with the technology that users want,
perform well, and are not evaluated. Steve sends encrypted email to
his colleagues pretty easily. Everyone wanted to connect up. Everyone
was spooked by it and went looking for products. Money was an amazing
motivator. They're not perfect, but they improve. For evaluation for
the new century we have the common criteria, which Virgil said won't
work. We have commercial evaluation labs that evaluate any feature
set. The user must understand the evaluation and integrate
products. They have explicit responsibility.  Government users still
need MAC and higher assurance by neutral third parties. Government
users insist on COTS products and features. Will the market demand
mandatory security and assurance?

Jonathan K. Millen (SRI International) spoke next on 20 years of
covert channel modeling and analysis. A covert channel is a form of
communication.  He likened it to a game of volleyball. There are
rules. Some restriction on communication is needed (access control). You
can communicate in spite of that by affecting a shared resource, but
it might not be the usual form of communication. With information
hiding, you're allowed to communicate, but the referee isn't supposed
to notice an additional message. There are several branches of study
of covert channels; modeling, theory of information flow, searching in
the code and specifications, bandwidth estimation, bandwidth
reduction, audit, and so on. Jon also mentioned some basic papers from
the Lampson note on the confinement problem from '73 to Hu reducing
timing channels with fuzzy time in '91; the Denning lattice on
information flow, Gypsy, the Goguen and Meseguer paper that led to
noninterference work, Kemmerer's practical approach - the shared
resource matrix, Millen's finite-state estimate of the capacity of
covert channels based on Shannon's work, and Hu's work on fuzzy time
(the notion was that defense against timing channels was to perturb
the system realtime clocks). It was the first published paper on
hardware channels. Early work assumed the channels were noiseless as a
conservative, simplifying assumption. This produced an upper
bound. Moskowitz and Miller looked at the effect of random delays from
an interfering process. When Jon saw the function they were using,
he decided to change his focus to protocol analysis.  In database
systems, they looked at polyinstantiation and transaction
processing. In cryptosystems, they researched subliminal channels and
protocol attacks. Jon then shared some questions and answers from the
field. Have covert channels ever been exploited? Yes. Do they require
a Trojan horse? Yes, and a nondiscretionary security policy. How fast
can they get? Thousands of bits per second. What's the difference
between storage and timing channels? No one knows. Can you detect all
potential storage channels? Yes, and then some, by analysis of the OS
and trusted software. Can you eliminate all covert channels? No,
unless you're willing to redesign the system in a fairly drastic way.

John McLean (NRL) spoke about 20 years of formal methods, and why some
may be of limited use. He had several evocative soundbites, such as,
"What's the use of being formal if you don't know what you're talking
about?" and "Algebraic symbols are what you use when you don't know
what you're talking about.". Formal methods are a system of symbols
together with rules for employing them. The rules must be recursive
and computer processable. He searched the CD of all the past Oakland
proceedings for "formal methods" and started counting the
papers. There were papers on formal models, flow analysis,
verification, and application to A1 OSes in the first few years. '84 had
a whole session on formal methods; verification methods, security
models, and flow. He then showed us the results he got with Excel, the
2nd great program he learned to use after becoming a manager
(Powerpoint being the first). In '84 there were 14 formal methods
papers. That began the heyday of formal methods, which lasted through
'96. In '84 there was work on Ada and an A1 OS. Millen's interrogator
foreshadowed cryptographic work in '90. In '89, after system Z, he
learned not to take on the NSA lightly. Most of the Clark and Wilson
impact happened outside of this conference, though the paper itself
was published here. John had several lessons learned. If you're going
to say something, make a mistake, or no one will write about
it. Security is not simply good software engineering, and security
verification is not simply an issue of software
correctness. Correctness is not preserved by refinement or
composition, and testing doesn't work either. Proofs can be flawed or
they can prove the wrong thing. A large verified OS is too costly and
stagnant. Formal methods are not appropriate for lightweight
security. Some security properties are not treated by formal methods
(integrity). It's good for small, complex specifications or
programs. Formal methods can shed insight into properties they have
been used to analyze. There are lots of algebras, and so on, and all
have different uses. Formal methods require supporting
technologies. To be used by practitioners, formal methods must be
built around the practitioners' established ways of doing things. They
need to show added benefit when the developers don't do anything
differently. Formal methods isn't all we have to think about.

Steve Kent (BBN/GTE), the last presenter in the panel, spoke on
Network Security: Then and Now or 20 years in 10 minutes. He showed a
comparison on a variety of topics between '79 (though it probably
should have been '80) and '99. In key management, we had Key
Distribution Centers (KDCs) in '79, we have Public Key Infrastructures
now. We're still worried about what happens when crossing
administrative boundaries. Net encryption for the DoD has the same
goals as then, to interpose between trusted and definitely not trusted
parts of the network. Net encryption for the public used proprietary
link devices, and now we have IPsec. There was little of it back in
'80. Some people used DES. In Crypto modules, today custom chips
implement part of the protocols. For public algorithms, we had and
still have DES and RSA. DES was new back then; RSA was becoming
known. Today we have DES, RSA, DH, and the advanced encryption
standard. In access control and authentication, we had MAC and DAC
then and now, and now we have role based access control too. For
access control models we had and have ACLs and capabilities. For
perimeter access controls we had guards and now we have firewalls and
guards. Firewalls are now the major form of perimeter control, even
sometimes in the military. For user authentication we had and have
passwords. We also now have one time password systems, fancy tokens,
increased use of certificates, and more biometrics than is good for
us. Now we send passwords over SSL. In net intrusion detection we used
to say "huh?" and now there are many alternatives. For network
protocols we need to secure, we used to have telnet, ftp, email. We
still have all those, and the list goes on, including HTTP, BGP, SNMP,
and so on. In '79, for security protocols, "yeah, we made 'em up!" Then
Jon Millen started analyzing them and took all the fun out of 'em. Now
we have IPSEC, SSL, GSSAPI, SOCKS, and so on. The IETF still has many
cowboy properties. For assurance we had SFA and code reviews. Now we
have FIPS 140-1, TNI, ITSEC, and common criteria, and "we are so, so
much better for it." "Every good talk should end with assurance,
though we do it up front."

Then came the question and answer period. Snow said that the emphasis
on penetrate and patch saddens him very much. It assumes the best
penetrators will post a patch. Virgil said it was economic
justification. There's zero up front cost so no large investment, no
extension of time to market, and they're betting that there will be
few intrusions. Lipner said that sadly, Virgil is exactly right. Time
to market is important. Companies start collecting revenue and beat
their competitor and get market share. Mike Reiter pointed out that
penetrate and patch doesn't hold in avionics; they don't let a plane
crash and fix it later. Lipner said that software is soft, and the
criticality of applications isn't acknowledged. They can patch after
it's fielded. People dying gets more lasting attention than flurries
around a visible computer flaw. VLSI folks do assurance much more than
OS vendors, because a flaw in the field (any flaw) is a big
deal. McLean said it's good enough for the markets; they'll respond
with insurance, lawyers, interest rates, and so on. "In the Navy, we
classify our mistakes." (so that outsiders don't find out about them.)
McHugh suggested we fight efforts to absolve developers of
liability. Olin Seibert said we focused on OSes to stop people from
doing things. Melissa shows that programs are designed with features
that allow these bad things. We can't stop a program from sending mail
when a user can. Where should we go? Lipner said you want to get away
from programming generality. It's a matter of design. Drew Dean asked
for reactions to open systems, where problems are found and
fixed. Also, what about the folks who don't update because they have a
system that works (or almost works); how do we deal with legacy?
Virgil said it was the same as other problems like choosing good
passwords. Users and system administrators have a responsibility, and
must live with the consequences. Kent pointed out it's human
nature. Home burglar alarms are mostly installed after the client or a
neighbor was burgled. Prevention is not high on the list. Peter Neumann
said that the Web paradigm takes care of upgrades. You download just
what you need, do dynamic linking, reload, all in a secure fashion. On
CERT, Steve Bellovin went back and looked at advisories for the
previous year. 13.8 of them are buffer overflows. We've been talking
about buffer overflows for 30 years. We're talking about software
engineering; good practice is necessary. Hilarie Orman said that
OpenBSD is committed to eliminating buffer overflows from source. Kent
pointed out that people got excited about using SSL to protect credit
cards for all the wrong reasons. To steal credit cards, all you have
to do is set up a site and ask for them, and be sure to use SSL. A
questioner asked, except for cryptography, has anything else in the
security community changed the outside world? Virgil said we have had
an impact, but its not measurable.  Vendors do know more, know what it
costs. McLean said we have graduate students now. Bob Blakley said
that he made an economic argument 2 years ago about why security isn't
going to work. He suggests something deeper, like an adaptive response
to complexity. We can get 5 billion people to do half the work for
free. Is there any other adaptive response? McLean said we rely very
much on cryptography. Virgil said that investing in up front costs is
one response, another is penetrate and patch, and the 3rd approach is
to educate students and future designers. He asked how many don't use
structured programming? We were taught how to do that in
universities. A questioner asked about approaches to the problem of
trusted insiders doing harm. Kent said that network security says
perimeter security. Commercial clients realize it's inadequate. They
deploy network security in a more layered fashion, separating parts of
network. Lipner said we have been beating on technology, technologists
and suppliers. End users have to take some level of
responsibility. Organizations have to establish policies, and put
controls in place. One of the most valuable employees he ever had was
a great system administrator. He read the audit trails, and knew who
was doing what.

Tuesday started with a session on Intrusion Detection, chaired by
Cynthia Irvine (Naval Postgraduate School).

The first paper was "A data mining framework for building intrusion
detection models" by Wenke Lee, Sal Stolfo, and Kui Mok (Columbia
University).  Wenke presented. After outlining problems with
traditional approaches, he outlined their approach based on data
mining. Their work automatically learns detection rules, selecting
features using patterns from audit data. A set of data mining
algorithms computes frequent patterns. The performance of their
approach was tested at a DARPA evaluation. Traditional approaches
consist of pure knowledge engineering. They handcode patterns for
known intrusions or select features based on experience or
intuition. There is a push for an automated approach. They call their
work MADAM ID. It learns detection models automatically. It selects
features using frequent patterns from audit data. A frequent pattern
is like a statistical summary of system activities. It combines
knowledge engineering with knowledge discovery. The raw audit data
(such as tcpdump packet data) is not suitable for building
models. They process it into packet based ASCII data and summarize it
into connection records. The tool then mines patterns and suggests
additional features, then builds detection models. They build
classification models, such as intrusion vs normal, determine
relations between attributes, and find sequential patterns. They use
various algorithms to find each. The key is to include useful features
into the model. They automate feature selection by mining frequent
sequential patterns from the audit data. They identify intrusion only
patterns to construct temporal and statistical features. The basic
algorithms are based on association rules and frequent
episodes. Extensions exploit the characteristics of audit data.  This
approach is efficient and finds only relevant and useful
patterns. Examples include "30% of the time when a user sends email it
is from a particular host in the morning; this accounts for 10% of the
user's overall activity". They have extended the basic data mining
algorithms. An axis attribute is the service used, like
HTTP. Reference attributes include the subject of a sequence of
related actions. They construct features from parsed patterns. The
DARPA Intrusion Detection evaluation was in '98. They got 7 weeks of
labeled training data (tcpdump and BSM output). It was labeled normal
or intrusion. They then got 2 weeks of unlabeled test data. They were
to label each connection and send it back. There were 38 attack types
in 4 categories (denial of service (dos), probing, remote gaining
access to local (r2l), and user gaining root privilege (u2r)). Some of
the attacks were new. They used features constructed from mined
patterns. There were temporal and statistical traffic features that
describe connections within a time window, such as % of rejected
connections. They had a content model for TCP connections. For
example, sometimes a user gets root going through su root, sometimes
through buffer overflow. They had a traffic model for all
connections. They had a very good detection rate for probing and an
acceptable rate for u2r and dos. They did poorly for r2l. There are many
different ways to mount r2l attacks, and there was a lack of
representative data in the training. In the future, they want to do
anomaly detection for network
traffic. They can currently classify only at the end of the
connection; they also want to get evidence during. One questioner
asked what types of probes were detected. Some were slow moving; 1 per
1/2 hour for instance. They can sort by destination host. It's an
expensive computation to sort that way. A questioner asked why they
believe that some subspaces are more dense than others. Wenke replied that
it is the nature of intrusion; you have to do bad things quickly. A
questioner pointed out that denial of service can be one packet. Wenke
said that the model is not 100% accurate. A questioner asked about the
right time threshold level for detecting attacks. They don't
know. They would like to incorporate a sense of cost into intrusion
detection activities; how much time or resources to commit. Some
people do and some don't care about slow probing.
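
One of the constructed traffic features mentioned, the percentage of
rejected connections to the same service within a recent time window, can
be sketched roughly as below; the connection-record fields and the
two-second window are my assumptions, not their exact definitions.

    # Rough sketch of one constructed feature: the fraction of connections
    # to the same service in the last `window` seconds that were rejected.
    # Record fields ("time", "service", "flag") are invented stand-ins.
    def rejected_fraction(records, current, window=2.0):
        recent = [r for r in records
                  if r["service"] == current["service"]
                  and 0 <= current["time"] - r["time"] <= window]
        if not recent:
            return 0.0
        return sum(r["flag"] == "REJ" for r in recent) / len(recent)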

The next paper was "Detecting intrusions using system calls:
Alternative data models" by Christina Warrender, Stephanie Forrest,
and Barak Pearlmutter (University of New Mexico). Christina
presented. They did a comparison of several different analysis methods
for intrusion detection. UNM showed that you can monitor program
behavior to detect intrusions by watching just the system calls, not
any arguments. While the traces of calls can vary widely, small
patterns within them are characteristic of normalness. They use simple
pattern matching techniques which are, well, simple. They asked, can
we do better? New and more sophisticated analysis techniques can be
applied to the same data. They chose some and tried them on lots of
different data sets. Their original technique was to use fixed length
windows, say of 3 or 6 calls, and extract all the fixed length
sequences from the trace.  In testing they look for anything not in
the database of normal sequences. A single mismatch may just be a new
normal. But clumps of mismatches in a short portion of the trace may
indicate a problem. There are also frequency-based methods, like
EMERALD, that look at which sequences appear and how often. There are
data mining methods that identify the most important sets of system
calls. Finite state machines describe the process that produces the
traces in their entirety. There are numerous language induction
methods, neural networks, time-series prediction. Traces have
relatively large alphabet sizes (20 - 60 is typical) and varying
lengths. History matters locally but not globally. They are generated
by deterministic computer programs, so stochastic methods might not
help. Their goals were to provide precise discrimination, efficient
on-line testing (catch it while it's happening) and reasonable off-line
training. The representatives they chose to test were their sequence
time-delay embedding (stide), where patterns never seen before are
suspicious, stide with a frequency threshold (rare patterns are
suspicious), rule induction (RIPPER) where sequences violating
prediction rules are suspicious (it has a smaller data representation
and better generalization), and Hidden Markov Model, where a trace
that is difficult for HMM to produce is suspicious (this allows for
nonstationary behavior). HMM can run for hours or days for training;
they ran it for 8 weeks. They used a variety of data sets to do an
exploratory study. They collected data while users were going about
their daily business. Some were constructed through explicit test runs
as well. Most of the detection methods have sensitivity thresholds
changing how many intrusions are detected and how many false alarms
are triggered. HMM has less sensitivity and RIPPER has a very narrow
range in the true positive vs.  false tradeoff. Their evaluation of
these was not symmetric. For true intrusions, either it detected it or
it didn't. However, they count each alarm for false positives. They
compared the results to a random classifier; all did better. Their
simple frequency-based method was not as good as the others. All the
others have points that are near ideal, but none are exactly
ideal. With average data, HMM seems closest to ideal. There were
widely varying results across the different data sets. They posit that
if they add another data set, the average and median results for each
will change. Their conclusions were, most methods perform well, the
computational effort is not directly related to the results, large
differences in performance are due to the individual data sets, and
finding data appropriate to the problem is as important as using
sophisticated analysis methods.  One questioner asked, if I was an
attacker outside trying to get in, what fingerprint could I see
external to the system that would allow me to adjust strategy. A coworker
answered you could try to figure out which programs are
traced. Whether a fingerprint is left depends on how it is
implemented.  Another asked if they had looked at how quickly they can
detect and identify an intrusion before it begins altering the
system. Most are currently off line tests of what they hope to be
online. A questioner asked where their window size came from. It was a
tradeoff.  They found 6 - 10 worked well. With smaller, there was less
information. With larger, the larger database made it more time
consuming. Another asked how you know that normal is normal. There are
no guarantees. The method they use didn't really address that. Another
asked if you'd need training sets for each part of an
organization. Yes, the definition is very sensitive to the system and
users.
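
Their original fixed-window technique is simple enough to sketch: collect
every length-k sequence of system calls seen in normal traces, then count
test-trace windows absent from that database, watching for clumps of
mismatches. The traces and window size below are illustrative only.

    # Minimal sketch of the fixed-window ("stide") idea: build a database
    # of length-k call sequences from normal traces, then count mismatching
    # windows in a test trace; clumps of mismatches suggest an intrusion.
    def windows(trace, k):
        return [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]

    def train(normal_traces, k):
        normal = set()
        for t in normal_traces:
            normal.update(windows(t, k))
        return normal

    def mismatches(test_trace, normal, k):
        return sum(w not in normal for w in windows(test_trace, k))

    normal_db = train([["open", "read", "mmap", "read", "close", "exit"]], k=3)
    print(mismatches(["open", "read", "mmap", "execve", "close", "exit"],
                     normal_db, k=3))  # 3 of the 4 windows are new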

The final paper of the session was "Detecting computer and network
misuse through the production-based expert system toolset (P-BEST)" by
Ulf Lindqvist (Chalmers University of Technology) and Phillip
A. Porras (SRI International). Ulf presented. The talk outline covered
expert systems for misuse detection, P-BEST integrated into EMERALD, a
P-BEST language introduction, rule examples for P-BEST for the audit
trail and network, and student testing of the usability of the
tool. The desired characteristics for a misuse detection system are
that it is fast, with low resource consumption, has the ability to
describe all types of intrusions, known and not, the language is easy
to understand and use, and it supports manual and automatic update of
rules. Rule based expert systems have rules and facts, with a forward
chaining system, which is data-driven by facts and uses rules to
produce new facts. There is a separation of rules, facts, and the
inference engine. The rules are production rules (if situation then
action (antecedent -> consequent)). There are traditional reasons
against expert systems as intrusion detection systems. The performance
is too low, they are designed for user interaction, not for automatic
event analysis, they are difficult to integrate with other systems,
and the language complexity puts up a learning threshold for new
users. P-BEST is a general purpose forward chaining inference
engine. Rules are translated into C code, not interpreted, which
produces a significant performance advantage. It's ready for extension
and integration; you just write another C function and call it. You
can call the engine from other C programs. The language is small and
thus easy to comprehend for beginners. P-BEST was first used in MIDAS
which monitored Dockmaster and did analysis on a separate Lisp
machine. P-BEST was used in IDES and NIDES. It has 39 rule sets and 69
rules in NIDES. An EMERALD monitor is a generic architectural building
block. A monitor instantiation watches over a particular target. A
resource object describes the target. The profiler engine performs
statistical anomaly detection. The signature engine does misuse
detection. The resolver compiles reports from the engines and responds
with warning messages, and so on. There is distributed monitoring with
communication between the monitors. In the P-BEST language, a fact is
declared as a pattern type (ptype). P-BEST rules have an antecedent
and a consequent. The antecedent is a bit like a template over
attributes that need to match. The consequent can extract event
information and produce side effects, and manipulate the event as
well.  It makes sure a rule is not eligible to fire again by deleting
the event. This also keeps the fact base from running out of space. It
can keep state between rules, as in repeated authentication
failures. For example, if bad logins are older than a time period, it
can remove the facts and decrement a counter. Ulf showed an example of
network monitoring for an FTP server. It tries to detect the result of
the attack, instead of the method of attack, such as an anonymous
login occurring. He also gave a SYN flooding example. The SYN
detection "pool guard" shows an alert when too many people are in the
pool too long. At a threshold, it sends an alert, and deletes old
facts/events. His student exercise was to monitor an FTP server. They
were evaluated against recorded real data. There were 46 groups with
87 students. 25 were completely correct; 8 misinterpreted the vaguely
formulated requests. Some suggested improvements to the usability of
the tool. In conclusion, Ulf said that P-BEST is powerful in
performance and functionality, flexible in the problem domain, easy to
use for beginners and experts, and qualified for student exercises and
deployment in modern intrusion detection systems. I asked for more
information about student reaction. Ulf said that they said it was fun
to work with P-BEST, and that they questioned everything. A questioner
asked if the level of abstraction is too low. It can be difficult to
implement even a simple state machine, but after implementing the
first, it's easier. Another questioner asked what happens if two rules
require the same fact. You can mark the facts; each rule has its own
unique mark. The lowest priority rule deletes the fact.
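
To give a feel for the rule shape, here is a Python stand-in of my own
(real P-BEST rules are written in its own small language and translated
to C) for the repeated-authentication-failure example: the antecedent
matches facts in the fact base, and the consequent reports and then
deletes the matched facts so the rule cannot fire on them again. The
thresholds and field names are invented.

    # Python stand-in for the shape of a P-BEST-style rule: facts sit in a
    # fact base, the antecedent matches over them, and the consequent
    # reports and deletes the matched facts so the rule is no longer
    # eligible to fire on them (which also keeps the fact base small).
    fact_base = []  # dicts standing in for 'ptype' facts

    def assert_fact(fact):
        fact_base.append(fact)

    def rule_repeated_bad_logins(now, threshold=4, window=60.0):
        recent = [f for f in fact_base
                  if f["type"] == "bad_login" and now - f["time"] <= window]
        by_user = {}
        for f in recent:
            by_user.setdefault(f["user"], []).append(f)
        for user, facts in by_user.items():
            if len(facts) >= threshold:
                print("ALERT: repeated authentication failures for", user)
                for f in facts:
                    fact_base.remove(f)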

The Tuesday panel was "Near Misses and Hidden Treasures in Early
Computer Security Research." It as chaired by Stan Ames (MITRE). Stan
introduced his panelists (Tom Berson of Anagram Labs and Xerox PARC,
Dick Kemmerer of UC Santa Barbara, and Marv Schaefer of Arca) as Tom,
Dick and Hairy (you had to be there, or know Marv, to see the
humor). They allocated chunks of the past conference proceedings to
the panelists chronologically.

Tom began with the first several years of the conference, calling it
"Mining the Codex Oaklandius '80 - '82." In '80, the hotel was in
Oakland. The near miss was it could increase its rates if it moved to
Berkeley, so it changed its postal address to the back door instead of
the front door. Tom's criteria for selection were papers not enough people know
about or were almost right but didn't quite make it. Carl Landwehr
thought this panel up. Tom looked for papers that were prescient,
excellent and/or fun. He looked over 60 odd papers; it was like being
on a program committee. Some he remembered with a certain thrill, but
were perhaps forgotten. It was fun to reread them knowing what we know
now. This all happened before the web. He noted that there was
currently sloppy scholarship, here and elsewhere, and that faculty is
largely to blame, while the program committee also bears some of the
responsibility. For this presentation, his own blinders leave out
things that happened but didn't get recorded here and things that happened
before this conference. For each year he presented papers that were
Notable (still have a message), Quotable (fun), and Hidden
Treasures. In '80, the Notable papers were by Liu, Tucker Withington,
and Simmons. Liu's "On Security Flow Analysis in Computer Systems"
discussed expression flows to certify systems and static authorization
functions. Tucker Withington's "The Trusted Function in Secure
Decentralized Processing" said what Virgil said yesterday. Simmons
never double spaced his papers. His "Secure Communications in the
Presence of Pervasive Deceit" discussed how symmetric and asymmetric
techniques differ only in the secure exchange, and the problem of
authenticating a public key directory. The '80 Quotable paper was by
Ames and Keeton-Williams, "Demonstrating Security for Trusted
Applications on a Security Kernel Base". "The problem with engineers
is that they tend to cheat in order to get results. The problem with
mathematicians is that they tend to work on toy problems in order to
get results. The problem with program verifiers is that they tend to
cheat at toy problems in order to get results." The '80 Hidden
Treasure was by Merkle, "Protocols for Public Key Cryptosystems". It
anticipates the use of signed mobile code distributed over
communications networks, timestamping, witnessed digital
signatures. It discusses digital signatures with no disputes signed by
a network administrator for distributing code. In '81 the Notable
papers were by Millen and Simmons. Millen's "Information Flow Analysis
of Formal Specifications" built a tool to analyze systems. Simmons'
"Half a Loaf is Better Than None: Some Novel Message Integrity
Problems" shows that signatures can be disavowed by publishing or
claiming to have published the secret key.  He had three Quotable '81
papers. Tasker's "Trusted Computer Systems" was a PR piece where the
government decided to get commercial vendors to submit designs for
evaluation. "Involvement of computer manufacturers is, of course,
essential. The DOD does not want to be in business of building and
maintaining its own line of computers and operating systems." Ames's
paper, "Security Kernels: A Solution or a Problem?": "Why then is
there such a growing concern about computer security? I believe that
the answer lies in two areas of computers that are somewhat
unique. The first is that management has little, if any, idea of what
is being done in their computer centers, or how vulnerable they are. This
could be referred to as the "Ostrich" problem. The second problem is
that the computer industry has grown out of a technology based upon
program hacking. The general rule of thumb has been to build it now,
design it later, and only be concerned with making it work after
delivery while on a maintenance contract." The government was paying
for the patching; now no one is. In a panel on "Cryptography" chaired
by Davida, Wilk said "The main message is that government policies
which apply to cryptography are a few decades old - formed at a time
when nondefense interests were trivial. These policies may need to be
updated to accommodate evolving private sector needs." And Morris said
"Another observation I can't help making from being at this meeting
and from previous experience is this: I talk to all of you, I look at
your name tags and see the organizations you represent, and I see that
with very few exceptions you represent suppliers of cryptography or
security systems and I keep asking the question, Where are the users?
Which of you represent the potential users of cryptographic services?
I've met 3 or 4.  Crocker bank is represented, for example, and there
are some others.  Where is the market? How are the people doing
selling DES chips? There's a warning here." The '81 Hidden Treasure
was by Miller and Resnick, "Military Message Systems: Applying a
Security Model." It said that the Bell & LaPadula model is neither
adequate nor appropriate as the basis of a military message system. It
anticipates object oriented security policies, and defines policy
from the user's point of view of the system. Applying the Orange Book
outside of its appropriate scope was a tragedy for security
research. '82 had three Notable papers. Lipner while at DEC wrote
"Non-Discretionary Controls for Commercial Applications".  He said
that the lattice model may be applicable to commercial processing, but
it requires different ways of looking at it. Goguen and Meseguer wrote
"Security Policies and Security Models", the foundation of information
flow. It was impossible to read then, and it's still impossible to
read. They are not modest and they don't mince words. Kemmerer wrote
"A Practical Approach to Identifying Storage and Timing Channels",
which had a tremendous impact. It put forth a shared resource matrix
methodology for all phases of the software lifecycle. Four papers were
Quotable in '82. Millen's "Kernel Isolation for the PDP-11/70"
includes a two paragraph review of the state of play of security
kernels and the current state of research on them. Turn's "Privacy
Protection in the 1980s" for shadowed the current state of privacy,
including the recent Economist cover story that concluded privacy has
eroded in last 20 years, and will continue. The introduction to
Anderson's "Accelerating Computer Security Innovations" neatly lays
out the tensions between defense needs, commercial need, defense
deployments and commercial deployments. Purdy, Simmons, and Studier's
"A Software Protection Scheme" is a plea for tamper resistance. The
'82 Hidden Treasure was Simmons and Holdridge's "Privacy Channel". It
might be a near miss. It warns against the small-codebook problem in
public-key cryptography. It prefigures the PKCS#11 padding
attack. Tom's final message was beauty is altogether in the eye of the
beholder; search for your own hidden treasures.

Dick had '83 - '85, though they all looked through all 10 years
too. He looked at the cryptography vs. non-cryptography papers. There
were 10 non-cryptography and 8 cryptography papers in the first year. They
did all cryptography papers in a row, and each camp didn't listen to
the other. He looked for papers that he would have students go back
to. Rushby and Randall's "A Distributed Secure System" discussed
standard Unix systems and small trustworthy security mechanisms. It
was based on the separation kernel idea. The best work is always based
on previous work. Millen's "The Interrogator: A Tool For Cryptographic
Security" was the first published paper on tool-based analysis of
encryption protocols using formal methods.  It was Prolog
based. Goguen and Meseguer's "Unwinding and Inference Control" was based
on their '82 paper. Unwinding reduces the proof of satisfaction of a
policy to simpler conditions, which by inductive argument guarantee
that the policy holds. It was the foundation for the non-interference,
deducibility, and non-deducibility work. It made things more
practical. In "Fingerprinting", Wagner gives a taxonomy for
fingerprints. He suggests a subtle encoding for computer
software. This work is mostly unknown in the fingerprinting
community. Bonet in '95 does refer to it; otherwise there are no
references to it. The taxonomy includes logical (machine processable)
vs. physical (on a drinking glass), degree of integration (from
perfect, where if it is altered, the object is unusable, to
statistical, where to some level of confidence, if there are n altered
objects, you can determine source), association method, granularity
(discrete or continuous). These were Dick's terms. Dick also noticed
that nobody's affiliation seems to be constant. He asked how many
others are still working in the same place as 19 years ago (a few
raised their hands). He closed by asking, Whatever happened to Ada?
SDC? Don Good? The details are all in the papers; enjoy reading them.

Marv remembers when hacking was a job qualification, not a federal
offense. He had '86 - '89. The times were very trying. We were trying
to do things, trying to get things evaluated and couldn't, trying to
build practical systems, trying to build Seaview. He did a breakdown
on the papers in those years. Approximately 18 were dedicated to
modeling, 17.5 on database management (The .5 was on encrypted
database mgmt), 15 on formal verification and methods, 14 on secure
systems and architecture, 11 on operating systems by themselves, 7 on
networks, 5 on cryptography and key management (a key generating and
encrypting system was on the cover for the first two of those years), 5 on
information flow and covert channels, 5 on viruses (all in same
issue), 3 on authentication, 1 on graph theory applications, 2 on
intrusion detection, and 1 on A2 criteria. He read all the papers
twice each for hidden gems. There were fads that came and went
quickly. There were 3 papers in one session on capability based
systems, and 3 in a row on Compartmented Mode Workstations. The Q&A
destroyed the continuity and a special session was called for that
evening. It created quite a flurry. The covert channel papers were fun
in the beginning. There was 1 in '86, 4 in '87, 1 in '88, and 1 in
'89. There were designer channels in the megabit per second
range. Papers on the philosophy and foundations of computer security
appear in '87 - '88. McLean and Bell wrote papers producing a
tremendous amount of insight based on the Bell & LaPadula model. The
Clark & Wilson model was published in '86. Seaview had 1 paper '86, 1
in '87, 2 in '88. Virgil has changed his position on something. In
'86, he said of Secure Xenix that you cannot have compatibility,
performance and security. This year he said you can't have
functionality, performance and security. There were important papers
and a lot of very good papers that were not followed through on
today. In '86 Chalmers wrote about military and commercial security
policies. Also in '86 Dobson and Randall wrote "Building Reliable
Secure Computing Systems Out of Unreliable, Insecure Components",
which anticipated the systems we have today. Birrell, Lampson,
Needham, and Schroeder wrote "A Global Authentication System Without
Global Trust." Dan Nesset wrote on distributed system security,
crystallizing the understood issues for building a distributed secure
system. Gligor wrote about a new security testing method, defining
black, white, and grey box testing. '87 was the year of CMW and the Clark
and Wilson paper. McLean wrote on reasoning about security models. It
was the most important paper that year. A year later he wrote on the
algebra of security models. McCollough wrote about the hookup property
showing that two secure systems together could be not secure. Berenson
and Lunt modeled the Berlin subway. Akl and Denning wrote on checking
for consistency and completeness. Kemmerer upset the NSA by using
formal method techniques to analyze encryption protocols. The Trojan
horse made the cover in '88. There was McLean's algebra paper and
Bell's rebuttal on modeling techniques for B&L. Go back and read the
papers. Clark Weisman's paper didn't get published that year. It was
on Blacker A1 security for the DDN. It won the excellent paper award and
was published in '92 as an appendix.  Jackson Wilson's paper on views
of secure objects in MLS is still cited. Gligor wrote on a bandwidth
computation of covert channels. McCollough wrote on non-interference
and composability. He showed how you can build a very insecure network
only from secure components. Gligor's paper on the prevention of
denial of service was a hidden gem until the Morris worm
manifested. We got a virus cover in '89. Lunt's paper on aggregation
and inference was a capstone on 15 years of research. Williams and Dinolt
wrote on a trusted file server model; there was much discussion. Gong
published his secure identity based capability system and Karger wrote
on new methods for immediate revocation. Brewer and Nash published the
Chinese Wall and Crocker and Pozzo wrote on a virus filter. "With
Microscope and Tweezers" was a good panel on the Morris worm. Badger
published multi-granularity integrity policies, and Estrin wrote on
security issues and policy writing. Schaefer et al. showed that a
system meeting the Trusted Network Interpretation could fail in 3
interesting ways: leaking information, being remotely modifiable, and
falling to denial of service attacks.

It was then time for the question and answer period. A questioner
asked if we had grown in strength or weakened. A panelist answered
that in '80 it was fascinating, and he expects to leave wide eyed
today. There are fads or fashions, particularly fashions aligned with
one's own interest. The prejudices in the program committee react to
those fads. Another questioner asked when "research" came and went
from the symposium title. In the beginning, it was applied work, real
examples of systems being developed. Later, verification, modeling,
and equations became more prevalent. The selection process determined
what should be here. It's been an awfully good conference with awfully
good people. In year 1 there were 40 people. In year 2, everyone came
at the last minute. A questioner asked if you can correlate fashions
with DoD
money in those areas. Someone answered "You know we're all above
that." Someone else answered "You know we're all below that." A
questioner asked what about ideas that were bad and not forgotten fast
enough. Early on there was a wide variety of wild ideas. Funding
monocropped the conference to what was relevant to the DoD. Someone
asked which papers made a difference. McLean's work, non-interference
by Goguen and Meseguer, Millen's tool-based work, Firetag and Berenson
on login and challenge-response authentication. Formal protocol
analysis affected how cryptographic protocols were developed. In early
conversation, we asked what would happen if we really had a
penetration. Intrusion detection and insider attack papers were looked
down on, as beyond A1 would keep the TCB small and safe. A questioner
asked if there were any rejected papers that were fantastic. Dick
answered, yeah, he had a couple.

The next session on Information Flow was chaired by John McHugh
(Portland State University).

The first paper was "A multi-threading architecture for mulitlevel
secure transaction processing" by Haruna Isa (US Navy), William
R. Shockley (Cyberscape Computer Services) and Cynthia El Irvine
(Naval Postgraduate School). Bill presented, as Haruna called up to
Europe the night before. Bill started by saying that the counter
reformation is in full swing. This may be last paper on MAC and
security kernels accepted here. Why did they target distributed
transaction processing? That's where the money is. His talk outline
included discussion of how their architecture differs from earlier
approaches, technical issues, and current status. The Palm Pilot is a
good place for high assurance, as it has a trusted path. How good
security is depends on how good the keys are. The master keys to the
world should not be used very often. You need to store those someplace
really safe.  That's also a good target for high-assurance
security. Systems that implement critical functions for very large
outfits are very high value targets, as are systems that support
multiple users who demand isolation but need to be hooked up to the
web. The value throughput is huge.  Traditional Transaction Processing
(TP) involves a DBMS, TP Monitor, and TP programs coupled with
queues. Queues make it simple to couple programs and make them easy to
rearrange. There are known show stoppers: an unacceptable performance
penalty, a security architecture that supports the wrong processing
style, an architecture that does not support incremental migration
with a low barrier to trial, an architecture that does not accommodate
heterogeneous levels of assurance. Most PCs will not be secure. You
want to add high security for certain critical functions. The paper is
a piece of this. They adapt an existing security kernel design
(SASS). It's a paper design that does static allocation at
initialization time. They want to support multi-threaded processes and
multiple execution points in the same server space, queue-driven
server processes, and a mandatory policy with a gazillion potential
categories (one per dot.com), so you can protect your own stuff. They
retained the minimized TCB B3 style architecture. Their first
unsuccessful idea was to use one server process per access class, with
an isolated address space.  The gazillion access classes made this a
performance show stopper. That suggested the need to re-use
pre-initialized processes. The second unsuccessful idea was a single
trusted server process, which is the current de facto approach. The
one trusted server creates a thread per request. The performance is good,
but the security is poor. The per thread context held in a single
address space is shared by all threads. The ideas set up a false
dichotomy. The first incorrect assumption: there is only one address
space per process. The Intel CPU architecture provides two. They
associated one with the process, and one with the thread. No one else
uses that second address space. The second incorrect assumption was
that access classes of incoming requests are random. They postulate
that there may well be exploitable locality of access class. They have
a three tier process architecture. Ring 0 is SASS. Ring 1 is the
trusted minimized per process task manager with invariant TCB
code. Ring 2 holds the untrusted per access class tasks. All tasks in
same process share a read-only Global Descriptor Table (GDT) tailored
to their query. Ring 3 holds the per query threads. They have a
multilevel secure transaction processing architecture with isolation
of tasks by process. The process in ring 1 is multi-level, in GDT
space. The tasks are in Local Descriptor Table (LDT) space. They are
not yet at the demonstration phase. They have much of the
framework. They can start a process and switch LDTs and GDTs. They
would like a segment descriptor table per ring in the CPU. There were
required kernel modifications including functions to manage an
additional table and modifications to the scheduling algorithm. "And
yet it moves."

The next paper was "Specification and enforcement of classification
and inference constraints" by Steven Dawson (SRI International),
Sabrina De Capitani di Vimercati (University of Milan) and Pierangela
Samarati (SRI International). Pierangela presented. They address the
existence of internal databases that have to undergo external
release. Not all the information has to be disclosed. Information
release is to be controlled by a mandatory policy. There are
classification requirements that explicitly assign access classes to
data and establish constraints between classifications of some data
with respect to the classification of other data. There are simple
classification constraints that explicitly assign access classes to
attributes, possibly depending on conditions. Association constraints
put constraints on the classification of the combination of
attributes.  Inference constraints are constraints due to inference
relationships existing among attributes. Classification integrity
constraints are constraints required by the underlying multilevel
DBMS. Previous related work has been done on view based
classification, classification upgrading to block inference channels,
database design, and data upgrading and controlled release. In their
approach, they input an unprotected database, a classification
lattice, and a set of constraints (classification requirements). The
desired output is a classified DB that is secure (satisfies all the
constraints) and minimizes information loss (does not classify data
more than needed to provide protection). They use a set of
classification assignments that satisfy all the constraints. A
constraint is a labeling expression and a selection condition. A
labeling expression is a level, or a relationship to a level or
another classification. A selection condition is a conjunction with
operations on attributes (like a template). They want a correct and
minimal classification. One possible approach is to evaluate all
constraints against the DB until a fixed point is reached. There may
be multiple solutions, and it may find a non-minimal solution. Also,
the DB may be large. Their approach is to evaluate the constraints
once, against the schema. They derive a set of classification
assignments that can be enforced with one pass over the DB to produce
a correct and minimal solution (possibly satisfying some preference
criteria). Least upper bound (lub) expressions have different
solutions to a set of constraints from the different combination
(strategies) of the ways each lub constraint is solved. As a
simplification, each lub expression can be solved by requiring
dominance of one of the attributes in the expression. They decompose
each complex constraint in a set of simple constraints. They construct
all possible combinations of simple decomposed constraints plus simple
constraints. They define a set of satisfiable combinations of
conditions appearing in the classification constraints. For each
condition pattern there is at least one strategy that produces a
minimal solution. The space of strategies is independent of
complicated patterns. The solution to a set of simple constraints
(strategy) is straightforward. They achieve correctness, minimality,
completeness, and consistency. Complete means everything gets a
classification. Consistent means no two classification assignments
specify different security levels for the same DB element. This
supports explicit classification as well as inference and association
constraints and ensures minimal information loss. In the future,
they'd like to work on efficient approaches to produce a solution,
relaxation of restrictions due to the application of the approach at
schema level, partially ordered lattices, enrichment of constraints,
derivation of constraints, modifications to the DB, and insertion of
cover stories. A questioner asked if their approach was NP
complete. The optimal solution is NP complete. You pay an exponential
price. A questioner noted that this is good for DBs of low
dimensionality that are densely packed but bad for CAD or web DBs.
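
A minimal sketch of the one-pass labeling idea described above,
assuming the complex (lub) constraints have already been decomposed
into simple constraints: each simple constraint pairs a selection
condition with a lower bound on one attribute's label, and the label
finally assigned is the least upper bound of all applicable bounds.
The lattice, constraints, and example tuple are invented for
illustration.

# Assumed totally ordered lattice; the paper also treats partial orders.
LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}

def lub(a, b):
    return a if LEVELS[a] >= LEVELS[b] else b

# Simple constraints: (condition on the tuple, attribute, minimum level).
constraints = [
    (lambda t: True,                   "name",   "U"),
    (lambda t: t["dept"] == "weapons", "salary", "S"),
    (lambda t: t["salary"] > 100_000,  "salary", "C"),
]

def classify(tuple_):
    """One pass per tuple: apply every matching constraint and lub the
    bounds, so data is not classified higher than the constraints require."""
    labels = {attr: "U" for attr in tuple_}
    for cond, attr, level in constraints:
        if cond(tuple_):
            labels[attr] = lub(labels[attr], level)
    return labels

row = {"name": "Smith", "dept": "weapons", "salary": 120_000}
print(classify(row))   # {'name': 'U', 'dept': 'U', 'salary': 'S'}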

The final paper of the session was "A test for non-disclosure in
security level translations" by David Rosenthal and Francis Fung
(Odyssey Research Associates). David presented. Sometimes there is a
need to translate MAC security levels between domains that have
different levels. For example, sending email in MISSI. Ideally a
complete non-disclosure analysis should occur when a common domain is
designed. This common domain information can be used to properly build
translation functions. If a level is named the same but has slightly
different qualifications, what do you do? They developed and analyzed
properties that could be used to check for non-disclosure based on
more limited information that may be operationally available
- the orderings of the levels of the two domains and the translation
functions. The non-disclosure property that they would like is the B&L
Simple Security Property. A relabelling should not result in lower
level users being able to read higher level information. They want a
level increasing property; relabelling can only further restrict
access. They asked what property should we use for analyzing just the
security orderings of the
domains and the translation functions? They introduced the Security
Level Translation Property (SLTP). Start at a level, map across to the
other domain, pick any level higher, map back to the original domain,
and make sure you're higher than when you started. Why is this the
best possible condition? SLTP holds precisely when there is some
potential common domain in which the translation functions are secure
(level increasing). They call these potential common domains
comparison domains. The existence of comparison domains means that it
is possible that mappings really are secure, given only the limited
information being analyzed. Note that this best possible
non-disclosure check is not a sufficient condition. The potential
comparison domain may not be the real common domain. It's just the
best they can do; it's not for sure. The construction of the
comparison domain is based on extending the level ordering to obtain
the right transitivity conditions with the translation functions. They
introduce another property called order compatibility. It is not
necessary for non-disclosure but it is usually desired. They don't
want to unnecessarily translate levels too high. If the translations
are total and order compatible, SLTP can be expressed in a simpler
form. They applied their analysis to a particular representation of
military messages and allowable translation functions. The levels
consisted of a hierarchical part, restrictive categories, and
permissive categories. They provide a restatement of SLTP in terms of
this representation. They don't want to talk about all possible
subsets. For a restrictive category, if you apply SLTP, and have a
subset ordering, the result must include the one you started with. For
permissive categories, after you map, you have to end back where you
started, because of reversed subset ordering. SLTP could be used in
automated checks to see
if the representation of translation information would lead to
insecure translations. A questioner asked if it works with more than 2
domains. For multiple hops, it works. For multicast, they have to
handle each case separately. I asked if it was good enough for MISSI
use. They could use it as a check on the translation functions.
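
A minimal sketch of checking SLTP over small finite domains, as the
property is stated above: for every level in the first domain,
translate it across, take every level in the second domain that
dominates the image, translate that back, and require the result to
dominate the starting level. The example orderings and translation
maps are assumptions.

from itertools import product

# Orderings as sets of (lower, upper) pairs, translations as dicts.
ORDER_A = {("U", "U"), ("U", "S"), ("S", "S")}
ORDER_B = {("LOW", "LOW"), ("LOW", "HIGH"), ("HIGH", "HIGH")}
A_TO_B = {"U": "LOW", "S": "HIGH"}
B_TO_A = {"LOW": "U", "HIGH": "S"}

def dominates(order, x, y):
    """True when x >= y in the given ordering."""
    return (y, x) in order

def sltp_holds():
    for a, b in product(A_TO_B, B_TO_A):
        if dominates(ORDER_B, b, A_TO_B[a]):     # b dominates the image of a
            if not dominates(ORDER_A, B_TO_A[b], a):
                return False                     # came back below the start
    return True

print(sltp_holds())   # True for this pair of translation functions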

The final session of Tuesday was the Work-In-Progress (5-minute
Presentations), chaired by Heather Hinton (Ryerson Polytechnic
University). The talks were:

"Information Power Grid: Security Challenges for the 21st Century" by
Thomas H. Hinke (University of Alabama in Huntsville)

"The History and Future of Computer Attack Taxonomies" by Daniel
L. Lough, Nathaniel J. Davis IV, and Randy C. Marchany (Virginia
Polytechnic Institute and State University). Dan presented.

"Simulating Cyber Attacks, Defenses, and Consequences" by Fred Cohen
(Sandia National Laboratories and Fred Cohen & Associates).

"Developing a database of vulnerabilities to support the study of
denial of service attacks" by Tom Richardson, Jim Davis, Doug
Jacobson, John Dickerson, and Laura Elkin (Iowa State University). Tom
presented.

"System Assurance Methodology Using Attack Trees to Drive Design
Improvements" by Dale M. Johnson (TIS Labs at Network Associates).

"NetHose: A Tool for Finding Vulnerability is Network Stacks" by Anup
Ghosh, Frank Hill, and Matt Schmid (Reliable Software
Technologies). Anup presented.

"High Assurance Smart Card Operating System" by Paul A. Karger, David
Toll, Vernon Austel, Elaine Palmer (IBM T.J. Watson Research Center),
Jonathan W. Edwards (IBM Global Services), and Guenter Karjoth and
James Riordan (IBM Zurich Research Laboratory). Paul presented.

"NCI Security Architecture" by Peter Xiao, Garret Swart, and Steve
Weinstein (Network Computer Inc.). Peter presented.

"A Scalable Approach to Access Control in Distributed Object Systems"
by Gregg Tally, Daniel Sterne, Durward McDonell, David Sherman, David
Sames, Pierre Pasturel (TIS Labs at Network Associates). Lee Badger
presented.

"Using Software Fault Isolation to Enforce Non-Bypassability" by Mark
Feldman (TIS Labs at Network Associates).

"Porting Wrappers from UNIX to Windows NT: Lessons Learned" by Larry
Spector and Lee Badger (TIS Labs at Network Associates). Larry
presented.

"Sequence-Based Intrusion Detection using Generic Software Wrappers"
by Douglas Kilpatrick and Lee Badger (TIS Labs at Network
Associates). Lee presented.

"Implementing Specification-based Intrusion Detection using Wrappers"
by Calvin Ko (TIS Labs at Network Associates).

"JSIMS Security Architecture Development Process" by Richard B. Neely
(SAIC) and Bonnie Danner (TRW). Rich presented.

"Security Approach for a Resource Management System" by Cynthia Irvine
and Tim Levin (Naval Postgraduate School). Tim presented.

"High Assurance Multilevel Secure Mail Service: Session Server and
IMAP Server" by Steven Balmer, Susan Bryer-Joyner, Brad Eads, Scott
D. Heller and Cynthia E. Irvine (Naval Postgraduate School). Steven
presented.

"Integration of Natural Language Understanding and Automated Reasoning
Tools within a Policy Workbench" by James Bret Michael (Naval
Postgraduate School).

"Data Damming in Proprietary/Public Databases" by Ira S. Moskowitz and
Li Wu Chang (Naval Research Laboratory). Li Wu presented.

"Interactive Zero Knowledge Proofs: Identification Tools Based on
Graph Theory Problems" by C. Hernandez-Goya and P. Caballero-Gil
(University of La Laguna, Spain). Hernandez-Goya presented.

"A Note on the Role of Deception in Information Protection" by Fred
Cohen (Sandia National Laboratories and Fred Cohen & Associates). Fred
gets the quotable quote of the session: "All conferences should do
these 5 minute sessions - they're the way to go."

"Sequential and Parallel Attacks on Certain Known Digital Signature
Schemes" by M. I. Dorta-Gonzalez, P. Caballero-Gil and
C. Hernandez-Goya (University of La Laguna, Spain). Dorta-Gonzalez
presented.

"Athena, an Automatic Checker for Security Protocol Analysis" by
Xiaodong Dawn Song (Carnegie-Mellon University).

The first session on Wednesday was on Authentication and Key Exchange,
chaired by Dieter Gollmann (Microsoft Research).

The first paper was "Software smart cards via cryptographic
camouflage" by D. Hoover and B. N. Kausik (Arcot Systems). Doug
presented. They use cryptographic camouflage to protect private keys
in public key infrastructure. Putting the protection in software makes
it cheaper and easier to manage. Public keys are scalable, provide
distributed authentication for entities that don't need to be known in
advance, and you can add users easily. Their work resists dictionary
attacks like a smart card does. The most common form of authentication
is login passwords. A password is easy to remember, but not very
strong (small and portable, but vulnerable like a house key). A
software key container of the conventional kind is a file somewhere
encrypted with a password containing a private key. This provides a
strong key, but users can't remember it, so they leave it under the
equivalent of a door mat that someone could break into. You also can
protect keys with a smartcard (like a credit card or PCMCIA card). It
costs more to have smart card readers and everyone thinks it's a pain
to manage. Their notion is to put a lot of keys under the doormat. If
the burglar picks the wrong one, the alarm goes off.  The key
container must be ultimately protected by something like a
password. A smart card avoids dictionary attacks by locking up after
some number of tries. They can't use your key if it's
stolen. Dictionary attacks work because the password space is small
enough to search and you can tell when you've guessed the right one,
by recognizing some structure or context. If you make the password
space large, people can't remember their passwords and put notes on
their machines with the passwords. Another traditional approach is to
try to make decryption slow, such as PBE with a salt and iterated
hashes. This is vulnerable to increases in computer power. Camouflage
makes the plaintext unidentifiable.  They only encrypt random-looking
plaintext. Symmetric keys are usually random. Private keys are either
random (DSA X) or are essentially random (RSA private exponent). If
there is a test by contextual evidence they make sure many false
decrypts will pass it. Somebody has to tell you whether you've got the
right private key. With camouflage you can't distinguish the right key
until you try to authenticate with a server, who can take defensive
action. This prevents dictionary attacks. Camouflage requires a number
of variations from usual practice. It denies even probabilistic
tests. With RSA, the private key is determined by the modulus and
private exponent and the public key is determined by the modulus and
public exponent. This is bad practice with camouflage. You don't
encrypt a structured encoding like BER. You don't encrypt several
objects with known relations (like n, p, q, d). You don't encrypt
something with known mathematical properties, like the modulus
n. Therefore, you can only encrypt the private exponent d. They allow
a d of a fixed length longer than n. It is not obvious how to test
whether a candidate could be a private exponent for n. Prime factors
should be included in the private key. They provide a test for a valid
private exponent. They permit the private exponent to be computed from
the public exponent. Then you cannot use a short public exponent to
make server side operations more efficient. If the public exponent is
known, you test a candidate modulus. With camouflage, only trusted
entities have access to the public exponent. A full hash serves as a
test for the correct password. A partial hash can be kept to catch
typos by the user. The security of the system comes from the number of
passwords that can pass the hash test. A known signature attack is
possible. If the attacker has both a message and that message
encrypted with a private key, she can see if she gets the same
result. They randomize signatures by padding randomly as in PKCS #1
instead of deterministically. Theirs is a closed PKI; public keys are
only revealed to trusted parties. They encrypt the camouflaged public
key with the agent's public key before embedding it in a
certificate. The servers don't have to know you, but they must be
known at certification time. Thus, they have two-factor
authentication (what you know and what you have), but the key
container can be copied, thus having the key container is a bit weaker
than usual. It's as strong as storage smart cards, but weaker than
cryptocards. They can use this for user authentication/login, document
signatures for verification by trusted parties, and electronic
payment. It's not useful for stranger to stranger authentication such
as email. Extra computing power does not help to break it until it
breaks the cryptography (e.g., factors the modulus). The security is
comparable to a somewhat weaker than usual storage smart card. A
questioner asked how this is different from symmetric key
encryption. The server doesn't have to know you in advance. You could
use symmetric encryption and put the key in a certificate encrypted
the same way. The security of the server is slightly more sensitive. If
someone breaks into a server and observes your public key, they still
need to steal your key container to break into your account. Another
questioner asked, if the signature is also a secret, how do you
protect the key. For RSA, they just randomize the padding, they don't
keep the signature secret. They would have to with DSA, so it would
only be good for authentication where it's thrown away
immediately. Another questioner asked whether an attacker with a
tracer could break camouflage offline by recording a successful run of
an authentication, trying trial keys, and replaying the previous run
of the protocol. No, the signature is randomized for that
reason.
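
A minimal sketch of the camouflage idea, not Arcot's implementation:
encrypt only a random-looking secret (here, stand-in bytes for an RSA
private exponent) under a password-derived key stream, so every
candidate password yields an equally plausible "decryption" and only a
server that keeps the public key to itself can reject a guess. The
stream construction and sizes are illustrative assumptions.

import hashlib, os

def stream(password, n):
    """Derive n key-stream bytes from the password (illustrative KDF)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(password.encode()
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

secret_d = os.urandom(32)          # random-looking private exponent stand-in
container = xor(secret_d, stream("correct horse", len(secret_d)))

# Offline guessing: every password yields *some* 32 random-looking bytes.
# No header or known structure tells the attacker which candidate is right;
# that test exists only at the server, which can lock out after failures.
for guess in ("password1", "letmein", "correct horse"):
    candidate = xor(container, stream(guess, len(container)))
    print(guess, candidate == secret_d)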

The final paper of the symposium was "Analysis of the Internet key
exchange protocol using the NRL protocol analyzer" by Catherine
Meadows (Naval Research Laboratory). Applying formal methods to
security protocols is a growing problem. We are increasingly relying
on networked computer systems to perform sensitive transactions. We have
more complex protocols supporting specialized transactions, and more
general protocols supporting a broad range of transactions. These
protocols allow a choice of algorithms, mechanisms and
sub-protocols. We must avoid insecure interactions between
protocols. Cathy extended the reach of formal analysis of
cryptographic protocols by looking at a good sized cryptographic
protocol with different related sub-protocols. She used the NRL
protocol analyzer (NPA) to look at Internet Key Exchange (IKE). It
does key generation and negotiation of security associations. This
work led to improvements in the NPA. IKE had already been looked at by
a number of people. She wondered if you can learn anything new by using
formal methods. NPA is a formal methods tool for analysis of
cryptographic protocols. It uses a model of intruders that's standard;
they can read, intercept, and so on. It can determine if it is
possible to get to a bad state, such as a secret leaking or a
participant being spoofed. You specify the insecure state and NPA
works backwards trying to find a path. The search space is initially
infinite, so it needs to be cut down with filters using lemmas. It
uses a generate and test strategy. This can be slow, but it is easy to
put in new lemmas. IKE is a key agreement protocol developed by the
IETF for IP. It is currently in RFC status. It agrees on security
associations as well, which include the keys, algorithms, protocol
formats, and so on for subsequent communications. There are two phases
whose variations make up 13 different protocols in all. In phase 1,
the security association is negotiated and key exchange occurs. In
phase 2, you negotiate and exchange session keys for communication. In
1, there are 4 choices of authentication mechanisms (shared key,
digital signature, and two types of public keys). There are two modes:
the main one hides identities until relatively late in the protocol
while the aggressive one reveals them earlier allowing fewer
messages. Thus, there are 8 possibilities in phase 1. In phase 2, you
may or may not exchange identities, may or may not use Diffie-Hellman
(DH), plus there's a new group mode.  Reminding the audience of the
comment earlier about engineers cheating to get results, Cathy stated
that this qualifies as an engineering paper.  However, she wants to be
honest about the cheating. She assumed that the initiator sends a list
of security associations and the responder picks one.  It's actually
more complex, so she only looked at 10 out of the 13 variants. The new
group mode is technically difficult and generates an infinite
regression. One of the public key protocols was introduced after she
started. She didn't go back and respecify everything. She looked at
the two phases separately, so she didn't find interactions. DH relies
on commutativity, which NPA doesn't model. She hacked the
specification to work around commutativity. So, she looked at the two
specifications, phase 1 with 6 protocols and phase 2 with 4
sub-protocols. She looked at the other public key protocol later. NPA
gives more user control of the search strategy. You can discard some
search states as trivial. It allows users to specify state members as
trivial, as they may not be important to an attack. If it finds an
attack, you need to verify that it's a true attack. She added a rule
tester to the NPA that generates states that are likely to be
encountered. It speeded up the analysis by about 170%. Her results
verified the standard properties of IKE. Keys are not compromised. If
a key is accepted, it and the security association were offered by the
claimed participant. There is prevention of replay attacks. In phase
2, there is perfect forward secrecy with DH. There were no major
problems. There was a funny "attack" originally found by Gavin
Lowe. The ambiguity could lead to implementation problems. There was
an incomplete specification in phase 2 where you can omit transmitting
identities when they're identical to IP addresses. You're going to see
them anyway, but they're not authenticated.  Shared Keys are indexed
by identities so spoofs can't be decrypted. The "attack" relies on a
particular implementation assumption. Participant A thinks it's
sharing two keys with B, but it's really sharing the same key with
itself, twice. It can converse with itself, thinking it's B. A is its
own man in the middle. What was missing? A message ID is pseudo
random, so it's unique. The protocol checks the message ID. If there
is none, it's assumed to be the initiation message. If there is one,
it assumes the next message is in the ongoing phase 2 exchange. So,
the attack is impossible, but it took a long time to figure that
out. The pseudo random requirement was not in IKE, it was in
ISAKMP. Some implementers weren't doing it. Nowhere did it say the
message ID must be used to determine what the message is for. It was
just easier to check a message ID than decrypt and read the
message. But it's security relevant, so it should be stated. Cathy
concluded that formal methods can be useful even in protocols that are
much discussed. For example, they can identify hidden assumptions. The
IETF specification was written by volunteers and tended to be
sketchy. She identified some potential problems outside of main
requirements. Lowe's attack was probably not much of a threat now, but
now they know about it. The exercise also improved NPA. A questioner
asked how many hours she spent on this including modifying NPA. She
said months, though she didn't keep track. She did a complete redesign
of NPA. Another asked if she did any regression testing to determine
if NPA works on previous problems better. Yes.  The changes only help
with larger protocols. For example, on Needham-Schroeder, it only
helped 10%. You get exponential help as the protocol gets
bigger. Another questioner stated that Windows 2000 has IKE; does
Microsoft know of her results? She told the results to the IETF and
they said they would pay attention. Finally, she was asked if she'd
trust her privacy and security to IKE. "As much as to any protocols
out there."

The final session was a panel called "Time Capsule - Twenty Years From
Now" chaired by Jim Anderson. Jim framed his predictive talents by
saying that in '80 he predicted Oakland would be over in 5 years.

The first speaker was Mike Reiter (Lucent Bell Labs), on the future of
computing, filling in for Mark Weiser, who was chief scientist at
PARC. Mark was diagnosed with cancer and died 3 weeks before. After 15
seconds of silence, Mike spoke representing Mark's opinion based on
his reading of Mark's text. He spoke on how computers will be used
differently in the next 20 years. The relationship of people to
computing is the most important factor.  It has changed twice already;
we are in the midst of a third change. The first was the era of
mainframes. They were for specialized use, by specialists, and the
machines were nearly worshipped; they got the air conditioning, the
nice raised floors, and 24 hour attention. There was a von Neumann
quote that it would be a waste to use computers on bookkeeping; it was
too menial. The second was the era of the PC. They are our special
helpers. We use them for chatting, games, and taxes. We can use them
for hours, because they are dedicated to us. We can reboot them and
no one else is affected. The third is the era of the Internet. It
involves hundreds of machines in everything you do (web access,
searches). It is the inverse of the mainframe. Now the computers are
abused, and we get the air conditioning, and so on. It is the start of
the transition to ubiquitous computing. The PC diminishes as the
interface. Appliances become computers (tables, chairs). They are
nearly invisible, but they make us smarter in almost everything we
do. The hardware is low energy, wireless and uses the TCP/IP
infrastructure. The form factor is getting smaller and becoming
invisible. Gentle sounds will change to convey information about
friends, family, stocks, work projects. Displays built into objects
will tell you your to-do list, who is calling you, what's for dinner,
where to park. They will do it without any login; the room knows it's
you. Smart cameras will watch you pay bills, read the newspaper, and
pre-fetch additional information you will need if you chose to have it
watch. "The heroes of the 20th century will be the teams of designers,
anthropologists, psychologists, artists, and (a few) engineers who
integrate the power of technology with the complexity of the human
interface to create a world that makes us smarter, with us, in the
end, barely noticing."

Roger Needham (Microsoft Research, Cambridge) spoke on the future of
hardware technology. It's always difficult to predict the future,
particularly before it happens. He shares much of Mark's vision. If
there's a difficulty, the technology might make it go away. This is a
problem for the researcher, who sees an opportunity in every
problem. Things are getting bigger and faster, there's more available
bandwidth. This takes away the need to be clever. It is impossible to
foresee the consequences of being clever. Therefore you should avoid
it whenever you can. He sees it in cryptographic protocols. We put as
little in the messages as possible. This has a strong tendency not to
work, and is a consequence of being clever.  We can use cycles to buy
simplicity, which is a wonderful thing. In the next 5 years, a paper
on OS security will be redundant. Most computers have one user, or
application, or both. The conceptual operators from the 60s, that there
are hostile multiple users, don't apply. The application knows what
the security requirements are. If it's doing something for one person,
there are no security issues. There will be division of function,
using the network as an insulator, not a connector. This is a bit of a
despondent conclusion. What if it makes things more difficult? Mark's
vision is fine if you're doing things that don't matter. If they're
used for anything important, say a computer in every shirt button,
rather a lot of things, the way we identify is liable not to
work. Objects can come together in a short term alliance, like renting
a car. They will be of large enough scale to do the paperwork and
associate you and the car. This is on a slow time scale - days. What
if there is a bowl in a hospital with smart thermometers, each
emitting a temperature gained by radio. You establish a relationship
between the doctor, thermometer, and patient so that something
undesirable doesn't happen. In a few years, papers will appear on how
to establish and tear down these relationships reliably, for serious
purpose, with short term relationships between smart objects. Privacy
will be somewhat at risk. It was bad enough having dumb cameras all
over the place.  Britain now has the most intensive surveillance of
anywhere. Something could record everything said by and near you for
your life. Either we get over our decreasing privacy, or there are
serious opportunities for technical people like us, to try to push
back, and to go and get grants. If you look 20 years forward, you see
how important our subject will be. Roger won't be around in 20
years. It will be an active but smaller field, with many of the things
we are struggling with replaced by conventional engineering. There
will be room for some good people on the sharp edge.

Howard Shrobe (MIT AI Lab) spoke on the future of software
technology. We will be computing in a messy environment with an
emerging computational infrastructure. There will be several orders of
magnitude variation in bandwidth in devices. The networks will be
unreliable and subject to chaotic excursions. There will be no
centralized control and we will be vulnerable to intentional
attack. There are two responses; "life sucks and then you core dump"
or engineer to be bullet proof. Just because you're paranoid doesn't
mean that they're not out to get you. Software technology is not
ready. We think we can give up and start again (like rebooting a
PC). That will be unacceptable. We can think in terms of a fortress or
an organism; rigid vs. adaptive. "Almost everything I predict is
wrong." It's more likely to be like Visual C++, "have a nice day". The
right things happen, they just take longer than he likes. He sees systems
as frameworks for dealing with a particular problem, like
robustness. There are dynamic domain architectures. In domain
engineering, you look at a domain and find a lot of variability within
common themes. You capture the themes, reify them as services, and
deal with the variability in lower level routines. Reuse is at the top
layer. You reuse the asset base with variant implementations of common
services. There is an analogy with Object Oriented programming. You
get a lot of method calls for certain tasks. You use the type
signatures as a description of when an approach is applicable. For a
given component, you want to know how likely it is to succeed and what
is the cost, so you can try maximizing the benefit/cost ratio. If it
fails, you debug where you are, and try something else. Every super
routine has a plan. Plans are more general than super routines. A
rationale can guide the system. In the development environment, we
focus on super routines. At runtime, instrumented code, synthesized by
development tools, monitors, gives feedback, and so on. It will do
precisely the things programmers do badly, like error management. We
will have to think about what's going on in the future in the runtime
world. We will need to diagnose and recover from failure. There are
two mindsets about trust; binary vs. continuous. There will be
cumulative, distributed partial authorities that are goal based and
cope with partition and probability. There will be an active trust
management architecture that is self adaptive and makes rational
decisions. It will be focused on noticing attacks. You want to know
how the system has been compromised. There is a difference between
privacy and performance compromises. We will make the system behave
like a rational decision maker without very expensive computations
that seem necessary. The future of programming will be dominated by
domain analysis. Through community process and virtual collaboration
we will come to agreement about the structure of a domain, including
plans and rationale. The programming environment will act as an
apprentice. It synthesizes the code that's tedious and hard to think
about, that collects data from running applications. It asks for
justification for the non-obvious in the rationale. Debugging involves
dialog with this apprentice.

Hilarie Orman (Novell, Inc.) spoke on the future of networking,
calling it "Networks a Score Hence." She was impressed by the backward
looking talks; they had so many facts and were so authoritative. Dave
Farber says you can only look 10 years out realistically. Hilarie
thought this was great; she won't have to be realistic. She talked to
Bob Kahn, who helped Gore invent the Internet. She said to him, surely
you knew how amazing it would be. No, he said he had very little
expectation of it taking off. He understood all the realistic
obstacles. The power and value of networking is undeniable.  She
collected some pithy quotes. "IP Over Everything" - Vint Cerf. It's a
great architecture. "We don't believe in kings, presidents or
voting. We believe in rough consensus and running code." - Dave
Clark. The Net Will Rock You! Communication is a fundamental force in
human civilization. Only two media will remain - speech and the
network (today as the web). Things will replace the web as a dominant
paradigm. Non-networked existence will be a lesser form of
participation in civilization. A higher percentage of the world
population will be involved. 2/3 of the world has never made a phone
call. That will have pluses and minuses. We can empower more in more
interesting ways. Things will go faster, faster, faster.  Maybe
instead of Moore's law we'll have more laws in 20 years as we hit the
wall. Networks will be more ubiquitous. They will be completely global
and beyond. They will be deeply serviced with pervasive objects,
location, location, location, and instant notifications. Naming,
caches, middleware and applications will all join the
infrastructure. Various ways of slicing light will enable this. There
will be optical switching for global light pipes.  Globally engineered
protocols will provide multicast and a programmable infrastructure. It
will be tuned for a global structure. There will be large distributed
systems with 100 million users and 100 billion
objects. Synchronization and replication will be stressed. In terms
of wireless communications, "I have one word to say to you,
batteries." Security will change a lot. There will be tradeoffs about
who's ahead (protectors vs. hackers). Cryptography and the speed of
light and quantum states will influence this. We are at the frontiers
of virtual legalism. We need a complete reconsideration of
intellectual property (Farber). There will be heterogeneous
interoperating systems for identification, authentication, and
authorization. Video, video, video and global commerce will be
important. There will be an empowerment of resource limited
countries. It will be a world that models itself. We will have a
touchy society with instant fads, extreme behaviors, and a tower of
Babel. The net responds quickly and instantly. Look at day trading
today and the rapidity of news. There will be an end of privacy and
property. Surveillance, surveillance, surveillance is already
happening. The context you use property in will have the value, and
maybe there will be value using where it lives.  "This is the first
century of the rest of your millennium." There will be nothing common
in 20 years that we don't see today. There will be an accelerating
rate of technical change, becoming chaotic.

Brian Snow (National Security Agency) spoke on the future of
security. He's been with the NSA since '71. The future is not assured
- but it should be!  He contrasted the previously ominous "I'm here to
help you" with the fact that he is here to help us. It's not your
father's NSA anymore. They're polite and collegial, and a little less
arrogant. He feels like Cassandra, who was loved by the god
Apollo. She told the future, but those who heard her would not
understand, or would not act. "It's ASSURANCE, Stupid." Buffer
overflows will still exist in "secure" products. They should not be
called secure, but will be sold as such. Secure OSes will still crash
when applications misbehave. We will have sufficient functionality,
plenty of performance, but not enough assurance. He's very confident
of these predictions. Resource sharing was the bottom line. A cloud of
processes will follow you about in your suite of computers. The nodes
you walk by will be supporting hordes of people in the mall, matching
their taste that day. Bandwidth will be cheaper but never will be
free. There will be no real sense of locality. Computing will be
routed through timezones for less busy processes. There will be
non-locality and sharing. For security, you need separation to keep
the good safe from the bad. This requires a strong sense of locality
(the safe, the office). How will it fit on top of an industry relying
on sharing and non-locality? Your job won't go away. There will be very
hard work to do in 20 years. There is a great fear of being as
effective as we need to be. Security functionality has been around for
centuries as confidentiality. Other aspects such as integrity,
availability, and non-repudiation did not all occur at the same
time. Availability as security is recent. Non-repudiation was not on
the list for the electronic domain 20 years ago. Something else will
happen like this. Something from the social world will find enabling
technology. They might not really be necessary. We have a fair amount
of mechanism today; we might not need much more. But the new thing
will look necessary. Predicting 20 years out is hard, so he did a
little cheating. He limited himself to 20 net years, which is 5 years
real time, about which he has great confidence. There will be little
improvement in assurance and little true security offered by
industry. We will counter many hack attacks but will not be safe from
nation states. We will think we are secure and act accordingly. We
will be vulnerable. He's delighted we're getting so much
attention. Once we get the mechanisms to stop the hacker, we'll think
we're done. It's a two sided sword. The last nine yards are the
hardest. The hacker just wants to score; intelligence looks for a
source. You'll never know the other guy is there. The mechanisms won't
hold up in a malicious environment, and there's plenty of that. We need
advances in scalability, interoperability, and assurance. The market
will provide the first two. We don't need new functions or
primitives. We fail to fully use what we have in hand. The area
needing help is humans seeking convenience over security. We need
mechanisms designed in development, but present throughout the life
cycle. We need to do more in the way of design processes,
documentation, and testing, which will delay time to market. The OS
should make a digital signature check prior to module execution to
keep it benign (a minimal sketch of such a check follows this
paragraph). This can add enough performance degradation to bring a
machine to its knees. This can also kill viruses. There are
assurance issues
in the signing. OS security is the basis for secure systems, and the
current black hole of security. Software modules need to be well
documented from certified development environments, tested for
boundary conditions, invalid inputs, and improper sequences of
commands.  Formal methods need to show how it meets the specification
exactly, with no extraneous or unexpected behavior. In hardware, we
need smart cards, smart badges, tokens or other devices for critical
applications. We need an island of security in a sea of crud. We need
third party testing from NIST, NSA labs, or industry consortia to
provide a test of vendor claims. This will only be successful if users
see the need for other than proof by emphatic assertion. The results
must be understandable by users. Integration is the hardest part and
we're dumping it on the user. We need independent verification of
vendor claims, legal constraints, due diligence, fitness-for-use
criteria, and liability for something other than the media the
software is delivered on. Y2K may provide a boon. Most attacks result
from the failures of assurance, not function. Each of us should leave
with a commitment to transfer the technology to industry. We need more
research in assurance techniques. The best way to predict the future
is to invent it.
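
To make the signature-check-before-execution idea concrete, here is a
minimal sketch (mine, not the speaker's) of a user-space analogue of
the OS-level check described above. The file names, the RSA key type,
and the use of the third-party Python "cryptography" package are all
assumptions for illustration only.

    # Hypothetical user-space analogue of an OS-level "verify before
    # execute" check. File names and key type (RSA) are assumptions.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def run_if_signed(module_path, sig_path, pubkey_path):
        with open(pubkey_path, "rb") as f:
            public_key = serialization.load_pem_public_key(f.read())
        with open(module_path, "rb") as f:
            code = f.read()
        with open(sig_path, "rb") as f:
            signature = f.read()
        try:
            # Raises InvalidSignature if the module was not signed by
            # the trusted key or was modified after signing.
            public_key.verify(signature, code,
                              padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            raise SystemExit("refusing to run unsigned or tampered "
                             "module: " + module_path)
        # Only reached when the signature checks out.
        exec(compile(code, module_path, "exec"), {"__name__": "__main__"})

    if __name__ == "__main__":
        run_if_signed("module.py", "module.py.sig", "trusted_pub.pem")

Hashing and verifying every module on every execution is exactly the
kind of per-load cost Brian alluded to, and the trust in the check
still rests on the assurance of the signing process itself.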

A questioner asked how open source changes assurance and the
development of security. Howie said it's not such a new thing. From
the mid '80s Symbolics has been doing it. They did have people point
out vulnerabilities. The ratio worked in their favor. Mike said it's
not a replacement for assurance; it can find the low hanging
fruit. Another questioner asked if there will be a Luddite effect in
the next 20 years. Roger said obviously yes. Brian said society is
fragmenting; the bulk will go to connectivity, but who will take
care of the have-nots? Hilarie expects Luddites to express themselves
vigorously over the Internet. A questioner asked if digital signatures
had stood the test of court. Brian said they are not yet part of the
full body of law. Somebody has to stand up and start doing it. Then
those who
don't will be liable. Hilarie said we need a contest in court and to
get a ruling in favor of digital signatures. Roger said we should use
a paper to say that we are bound by our electronic signature. That
ends the legal question. There was a quantum computing
question. Brian said they've started working on a replacement for DES
called AES. Insurance against quantum computing is covered. Mike said,
at the risk of disagreeing with someone at NSA, and he's not a
cryptographer, the biggest risk is to public key algorithms, not
secret key algorithms. Will public key survive? There are known
algorithms for factoring that could be used on a quantum computer.
What happens if it all falls down? An audience member suggested that
only the NSA will be able to afford quantum computing. Mike said
no. Predictions that it's expensive may be wrong. Scientists can
impress us with what they achieve. Hilarie said that people who work
with symbols are impressed with people who work with
physics. Factoring larger numbers doesn't break all algorithms. It
changes the lifetime factor (the scaling behind this is sketched
after this paragraph). A questioner asked about export control
restrictions on secure OSes above B2. Brian wanted to go back to the
last question on quantum computing. He said that export controls are
not just from the NSA. Congressional hearings are the place to work
this out. You may not like the answer that results. He may be
discontent. Work the process through the political system. In quantum
computing, he agrees with Hilarie. It doesn't destroy cryptography, it
just changes matters of scale. Key management or initial material is
more expensive.
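
To put rough numbers on the "matters of scale" point, here is my
summary of the standard complexity figures, not something the panel
stated in these terms:

    % Best known quantum vs. classical costs, for an integer N to be
    % factored and an n-bit secret key to be searched.
    \[ \text{Shor (quantum factoring):}\quad O\big((\log N)^{3}\big) \]
    \[ \text{GNFS (classical factoring):}\quad
       \exp\!\Big(\big((64/9)^{1/3}+o(1)\big)(\ln N)^{1/3}(\ln\ln N)^{2/3}\Big) \]
    \[ \text{Grover (quantum key search):}\quad O\big(2^{n/2}\big)
       \quad\text{vs. classical } O\big(2^{n}\big) \]

On those figures, factoring-based public-key systems lose their
margin outright, while secret-key ciphers recover theirs by doubling
the key length, which is one way to read the "lifetime factor" and
"matters of scale" remarks.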

A questioner asked, if the market won't provide assurance, should we
require assurance by legal means? Jim said we see people redefining
the problem so that low levels of assurance are adequate. We only
need high assurance in a small number of cases. Brian said yes, it's
a good
way to go about it as well. Y2K may help. We like to sue. The other
levels of assurance are pathetic. Open source will help, but not
enough. A questioner asked about licensing engineers. Brian said in
his talk that it would help. Hilarie replied "we don't need no
stinking licenses." Howie said if architecture is done wrong it always
has serious consequences. This is not true for the bulk of
software. It fails often, and we don't care. There are different
categories of software. Plane software is different. Maybe we should
have licensing for producing software with serious consequences if it
fails, but it's wrong for everyone else. Roger stated that it is known
what other engineers ought to do. Brian pointed out that Romans built
marvelous bridges, but overcompensated. It was practice, not
science. Now we have less of a safety margin. If we lack science and
engineering principles, we use art. We're still at the art
practitioner stage. It's hard to license. We should reduce it to
practice.
A questioner asked how long until smart cards are as popular as credit
cards in the US. Roger said that smart cards are quite commonly used
in Europe.  Local phone calls are billed for in Europe, so there is an
economic advantage in doing any transaction without the phone. Going
back to the licensing discussion, a questioner asked what happens when
Powerpoint is a potential part of plane software. Hilarie reacted "God
help us." People want analogies that may not be possible because of
the rate of technical change. She learned to save her files every 5
minutes on early machines as they failed every 15 minutes on
average. We may work out new paradigms for assurance. Roger said the
comparison with software for anti-lock brakes might be a better
analogy. Huge amounts of money are spent on minute programs, looked
at by more than one independent testing outfit. Then they sell it in
large multiples. We write huge software for which the concept of right
doesn't exist. Hilarie said embedded software is getting larger. A car
bus may be running a group membership protocol to agree on which
wheels are working. Everything will be drive-by-wire. You have to be
able to afford quality software in that environment. A questioner
pointed out that the Navy uses NT to drive its ships. That software
wasn't developed for embedded systems. We should be licensing the
integrators; it's a huge-dollar industry, and they decide what
components go in. Roger said that makes a lot of sense. Hilarie said
that that's the person who delivers the claim. It will drive
assurance back up the food chain or produce new players for new
niches.