Eighth USENIX Security Symposium
Washington, DC, USA
August 23-26, 1999

by Kevin E. Fu fubob@mit.edu
PGP key: finger fubob@monk.mit.edu

Keynote Speech: Peter G. Neumann

Peter Neumann of SRI International substituted for Taher Elgamal as
the keynote speaker.  Most of the talk dealt with issues such as
security, survivability, reliability, and predictable behavior.
Neumann has many claims to fame including the moderation of comp.risks
and a 1965 publication on file system access control in Multics.
Neumann used stories, quotations, and ever-so-clever puns to discuss
principles for good software engineering.

Past efforts fundamental to software engineering include Multics, the
T.H.E. system, domains of protection, principles, confined execution,
PSOS (Neumann's Provably Secure Operating System), and isolated
kernels.  But most
existing commercial systems are fundamentally inadequate with respect
to reliability and security.  Survivability is also not well
understood.  Unfortunately, not much good research is finding its way
into products.  Developers ignore the risks, and the same old flaws
appear over and over.  For instance, 8 of 13 CERT reports this past
year resulted from buffer overflows.  Programming languages do little
to prevent security problems.  Few systems are easy to evolve.

Neumann challenged the old adage of KISS (Keep It Simple, Stupid).  He
argued that such advice does not work for extremely complex systems.
He also disagreed with Brooks's "build one to throw away" principle
because it encourages companies to perform beta testing on ordinary
users.

There remains much to be learned from past mistakes; these problems
are highly multidimensional.  However, Neumann reasoned that we do
not learn much from failures because the builders tend to move on to
something else or build another system with the same problems.

The insider problem is largely ignored.  This is a difficult problem
to solve because the insider is already authorized.  In defense of Wen
Ho Lee, Neumann referenced the Los Alamos National Laboratory incident
as an example.  Merely shutting out outsiders will not solve the
problem; you have to worry about insiders as well.

Neumann recommended that developers specify general requirements,
create stronger protocols, use good cryptographic infrastructures,
design for interoperability, force commercial developers to do the
right thing (maybe with open source), perform network monitoring and
analysis, and find a good way to integrate systems.

The Q/A section mainly consisted of additional stories from the
audience.  Matt Blaze asked the rhetorical question, "Why in poorly
designed systems does everything become part of the trusted computing
base?  Why limit [this characterization] to only poorly designed
systems?"

A punny John Ioannidis explained that Neumann is not "preaching to the
choir" but to "lots of soloists."  People do not listen to the
soloists.  Ioannidis argued that education of users is just as
important as proper engineering practice.  Otherwise, uneducated users
will simply bring in the Trojan horse.  The discussion led to an
insane number of analogies about singing and security.

Causing the crowd to erupt with laughter, Greg Rose observed that it
is amazing that we can build great flight simulators, yet we cannot
build good air traffic control systems.

A philosophical Dan Geer quizzed Neumann on the difference between
brittle and fragile.  Neumann responded that brittle means an object
breaks when tapped a bit; fragile means the object will probably fall
off the shelf.  In Multics, a failure in ring 0 would cause the whole
system to collapse.  A failure in ring 1 would only cause the
process to crash.  In ring 2, a failure might only generate an error
message.  The rings contained the error.

What is Neumann's final prognosis?  All reports conclude that the
state of practice is not good.  Risks abound.  For related information
including a pointer to the Risks archive, see
http://www.csl.sri.com/neumann/


PDAs Refereed Papers Track
Session Chair: Jim Duncan, Cisco Systems, Inc.


----
The Design and Analysis of Graphical Passwords

Ian Jermyn, New York University; Alain Mayer, Fabian Monrose, Michael
K. Reiter, Bell Labs, Lucent Technologies; Aviel Rubin, AT&T Labs -
Research

Fabian Monrose analyzed the security of graphical passwords and
proposed two graphical password schemes which could achieve better
security than textual passwords.  This paper not only won the best
student paper award, but it also won the best overall paper award.

Despite their vulnerability to dictionary attacks, textual passwords
still serve as the dominant authentication mechanism today.  Graphical
passwords, on the other hand, show convincing evidence of better
memorability and reveal less information about their distribution.

Results from cognitive science show that people can remember pictures
much better than words.  Combining this with the commonly found
Personal Digital Assistant (PDA) allows new graphical input
capabilities.  Graphical schemes can decouple position from the
temporal order in which a password is entered.

Monrose listed three desired properties of graphical passwords.
First, it must be at least as difficult to exhaustively search as
traditional passwords.  Second, keys must not be stored in the clear.
Third, graphical passwords need to be repeatable and memorable.

After building a strawman scheme in which graphical passwords emulate
textual passwords, Monrose described a second scheme, dubbed "draw a
secret" (DAS).  It takes as input a picture and the order in which the
picture is drawn.  The scheme keeps track of grid-boundary crossings
and pen lifts from the screen.  This information then passes through a
hash function to produce a raw bit string as a password.
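
To make the scheme concrete, here is a minimal sketch of a DAS-style
encoding in Python (the names and the choice of hash are mine; the
paper's actual encoding differs in its details).  A drawing reduces to
the ordered sequence of grid cells the pen crosses, with a
distinguished pen-up symbol separating strokes, and that sequence is
hashed into a raw bit string:

    import hashlib

    PEN_UP = (-1, -1)  # distinguished marker for lifting the pen

    def encode_drawing(strokes, grid=5):
        # strokes: list of strokes, each an ordered list of (col, row)
        # cells the pen crossed on a grid x grid canvas
        events = []
        for stroke in strokes:
            for col, row in stroke:
                assert 0 <= col < grid and 0 <= row < grid
                events.append((col, row))
            events.append(PEN_UP)  # the pen lift ends the stroke
        return events

    def derive_key(strokes):
        events = encode_drawing(strokes)
        raw = ",".join("%d:%d" % cell for cell in events).encode()
        # Only the hash is kept; the drawing itself is never stored.
        return hashlib.sha256(raw).digest()

    # Two strokes: an L shape, then a single dot in the corner cell.
    key = derive_key([[(0, 0), (0, 1), (0, 2), (1, 2)], [(4, 4)]])

Because only the cell sequence matters, the user need not redraw the
picture pixel-perfectly, only cross the same cells in the same order.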

Monrose pointed out that there is yet no characterization of the
distribution of graphical passwords.  This means one picture is not
known to be more likely than another as a password.  On the other
hand, textual passwords have well-known distributions related to
dictionaries. 

Rather than focus on the unrealistic upper bound of graphical password
choices, Monrose analyzed a lower bound by creating a grammar (similar
to LOGO) that describes common ways to enter a graphical password.  By
further assigning complexities to the terminals, he offered evidence
that graphical passwords requiring only a few pen strokes can surpass
the strength of textual passwords.

Peter Honeyman went into evil-thesis-committee mode, shaking the earth
with many difficult and antagonistic questions.  Assuming at
least 60 bits of entropy are necessary to protect confidential
information, Honeyman questioned whether 5 pen strokes in a 5x5 grid
is enough.  Monrose did not answer the question directly.
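
A back-of-the-envelope calculation (my own arithmetic, not figures
from the talk) shows why the answer depends on stroke length as well
as stroke count.  If every pen event is one of the 25 cells of a 5x5
grid or a pen-up, a password of n events carries at most
n * log2(26) bits:

    from math import ceil, log2

    bits_per_event = log2(26)          # about 4.7 bits per event
    print(ceil(60 / bits_per_event))   # -> 13 events needed for 60 bits

Five one-cell strokes produce only 10 events (5 cells plus 5 pen-ups),
or roughly 47 bits at best, which is why Monrose's grammar assigns
complexities to strokes rather than merely counting them.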

Another audience member asked what a typical graphical password would
be (a smiley face, a picture of a dog, etc.).  Suggesting that
attacks similar to dictionary attacks may exist, the audience member
asked how to classify what is memorable.  Monrose explained that his
team did not have enough experience to characterize distributions.

Another audience member asked for anecdotal evidence about
memorability.  Monrose explained that in practice, a 6x6 grid results
in poor memorability.  It's simply too fine-grained for the average
user to repeatedly enter a graphical password correctly.  The 5x5 grid
creates a good balance between security and memorability.

Peter Neumann pointed out that written Chinese has a known key stroke
order that may impose a distribution.  Graphical passwords may pose a
challenge for writers of Chinese.  While a native writer of Chinese
confirmed Neumann's point, she believed graphical passwords still have
merit.

Asked about incorporating timing and pressure as sources of entropy,
Monrose replied that his team had not yet considered them.  Neumann
added that his group
found pen pressure useless as a source of entropy.

While there is no concrete proof that graphical passwords are stronger
than textual passwords, Monrose gave convincing evidence that
graphical passwords stand a good chance and at least deserve further
analysis.  In the future, Monrose hopes to better classify
memorability of graphical passwords.  For source code and further
information, see http://cs.nyu.edu/fabian/pilot/

----
Hand-Held Computers Can Be Better Smart Cards
Dirk Balfanz, Edward W. Felten, Princeton University 


Dirk Balfanz explained how a PDA can serve as a better smart card.  By
implementing a PKCS#11 plugin for Netscape Communicator and the 3Com
PalmPilot, Balfanz reduced the number of components in the Trusted
Computing Base (TCB).  The trusted part of the system remains on the
Pilot.  Smart cards usually do not have a trusted authentication path
to the user.  The user often communicates a secret PIN to the smart
card via a PC and smart card reader.  This places the PC and smart
card reader in the TCB!  Any malicious software on the PC could then
easily obtain digital signatures on bogus data.  Of
course, a user interface or warning light on the smart card could
prevent such malicious use.  Unfortunately, most smart cards do not
have a user interface.

The implementation of PilotKey currently works for Linux, Win9X,
Solaris, and Windows NT in concert with Netscape Communicator and the
Pilot.  Preliminary results show a 512-bit RSA signature takes about 5
seconds, and key generation about 2-3 minutes.  However, a 1024-bit
RSA signature takes 25 seconds and a whopping 30 minutes for key
generation.  Balfanz pointed out that the Pilot uses something like a
16MHz processor.  He expects the processor to speed up in the future.

The PilotKey program implements the PKCS#11 functions for the Pilot.
The program enables a user to provide randomness via the user
interface.  Because PilotKey works directly with the user, the PC does
not see the PIN.  Finally, the program lets the user choose whether to
send decrypted messages back to the PC.  Incidentally, PilotKey cannot
securely handle email attachments unless you can perform base64
decoding in your head.
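
The essence of the split of trust is easy to sketch (the protocol and
names below are illustrative; this is not PilotKey's actual code).
The PC hands the PDA only the data to be signed; the PIN prompt, the
confirmation display, and the private-key operation all happen on the
PDA:

    def prompt_on_pda(message):
        # Stand-in for the Pilot's own screen and buttons.
        return input(message + " ")

    def display_on_pda(document):
        print("About to sign:", document[:60])

    class TrustedPDA:
        # Runs on the Pilot; the PC never sees the PIN or the key.
        def __init__(self, sign_fn, pin):
            self._sign_fn = sign_fn     # e.g., an RSA signing routine
            self._pin = pin

        def sign(self, document):
            # PIN entry and confirmation use the Pilot's trusted UI,
            # so a compromised PC can neither capture the PIN nor
            # substitute a bogus document unnoticed.
            if prompt_on_pda("Enter PIN:") != self._pin:
                raise PermissionError("bad PIN")
            display_on_pda(document)
            return self._sign_fn(document)

    def request_signature(pda, document):
        # All the untrusted PC can do is submit data and collect
        # the resulting signature.
        return pda.sign(document)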

Alas, the Pilot does exhibit some problems acting as a smart card.
First, the Pilot is not tamper resistant, and the operating system
does not provide memory isolation among programs.  Hence, one should
not use untrusted software on the Pilot in conjunction with PilotKey.
Second, it is important not to lose the Pilot.  While the secret
information is password-protected, this offers only limited
protection.  Third, the PilotKey is not appropriate for
"owner-is-enemy" applications.  For instance, a program keeping track
of a cash balance is inappropriate.  The Pilot owner could easily
change the balance.  Finally, HotSyncing could reveal secrets to the
host computer.  Balfanz claimed these problems are not showstoppers.
Inciting a few chuckles, he explained that you "just need a secure OS
on the Pilot."  In the paper, Balfanz makes some good suggestions on
how to secure the Pilot OS.

Peter Honeyman began the inquisition with a hail of questions.
Another audience participant reminded the rest of the crowd to turn
off the Pilot's IR port when not in use.  One questioner asked why we
should not just fix the OS on the PC if, as suggested, we can fix the
OS on the Pilot.
Balfanz admitted this is the same problem, but fixing the OS on the PC
is not any easier.

At this point, a punny Peter Neumann sought attention.  He pointed out
that this was the first talk where Bob does not appear with Alice.
Alles in Ordnung ("everything is in order").  You would have had to be
there to understand....

A participant suggested that splitting trust seems to simply push the
problem down the line.  People will want more features (e.g., signed
Excel spreadsheets).  Balfanz explained that shifting trust to the PDA
comes at the expense of usability.  Another participant argued that
removing all the software from a Pilot results in a relatively
expensive smart card.

Questioned about Europe's desire for autonomous smart card readers and
smart cards with displays, Balfanz responded that he would not trust
the ATM or the reader.  Bruce Schneier wrote a similar paper on
splitting trust at the first USENIX Workshop on Smartcard Technology
last May.
For more information on PilotKey, follow up with
balfanz@cs.princeton.edu.

----
Offline Delegation in the File Repository
Arne Helme, Tage Stabell-Kulø, University of Tromsø, Norway 

Arne Helme explained mechanisms for offline delegation of access
rights to files in the context of a distributed File Repository (FR).
In this model each user has her own file repository, but would like to
share files.  Offline delegation refers to delegating authority
without network connectivity.  For instance, one could use verbal
delegation.

PDAs challenge centralized security models, provide users with
personal TCBs, and support the construction of "delegation
certificates."  This meshes well with the design criteria for offline
delegation: a delegation should not allow impersonation, credentials
must form valid and meaningful access rights, and the authority
granted in a certificate should not be transferable or valid for
multiple use.

The implementation consists of an access request protocol and a
prototype for the 3Com PalmPilot.  The software contains a
parser/generator for SDSI-like certificates and a short digital
signature scheme using elliptic curve cryptography.

The access request protocol (analyzed with BAN logic in the paper)
describes the process of creating, delegating, and using offline
certificates.  The delegator must read 16 4-digit hexadecimal numbers
to convey an offline certificate (e.g., by phone).  Helme's Pilot
software allows the delegatee to quickly receive these numbers via a
convenient user interface.
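
The verbal channel is easy to picture (the exact grouping below is my
assumption, not necessarily the paper's): 16 four-digit hexadecimal
groups carry 16 x 16 = 256 bits, enough for a short elliptic-curve
signature, and the grouping keeps reading aloud tractable:

    import secrets

    def to_groups(blob):
        assert len(blob) == 32             # 256 bits = 64 hex digits
        hexstr = blob.hex()
        return [hexstr[i:i + 4] for i in range(0, 64, 4)]

    def from_groups(groups):
        return bytes.fromhex("".join(groups))

    sig = secrets.token_bytes(32)          # stand-in for the signature
    spoken = to_groups(sig)                # e.g., ['9f3a', '07c1', ...]
    assert from_groups(spoken) == sig      # the delegatee re-enters them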

Helme clarified that certificates in this scheme are valid for only
one use.  Servers keep a replay cache.  One problem with the current
signature scheme is that the FR cannot distinguish between two
different files which had the same file name at different times.

Helme concluded that offline delegation is a natural extension of
services to the File Repository, PDAs support verbal delegation, and
performance is satisfactory.  For more information, contact
arne@acm.org.



Keys Refereed Papers Track
Session Chair: Carl Ellison, Intel Corporation

----
Building Intrusion-Tolerant Applications with ITTC
Thomas Wu, Michael Malkin, and Dan Boneh - Stanford University 

Tom Wu discussed how applications can store and use private keys in an
intrusion tolerant manner.  As a side benefit, Wu's methods also
provide high availability of private keys.  Existing public key
architectures rely on non-shared keys or on split-key storage in which
the private key must be reconstructed for use, creating a single point
of failure.
Methods in Intrusion Tolerance via Threshold Cryptography (ITTC)
create no single point of attack because private keys are never
reconstructed.

In Shamir's Threshold Scheme, a dealer generates a key and splits it
into several shares for trustees.  The trustees can later reconstruct
the original key at a central location.  In this scheme a dealer sees
the original key and all the shares, creating a single point of
failure.  Compromise to the dealer's memory results in total
disclosure of the key.  Wu uses an intrusion-tolerant scheme which is
not vulnerable to such a single point of failure.  He employs methods
due to Boneh and Franklin to generate a private key already in a
shared form.  To create shares, Wu uses an idea due to Frankel.  Share
servers apply the share to an operation (e.g., sign, decrypt, encrypt)
rather than give the share to the application as if it were a dealer.
Without a threshold number of results from the share servers, an
adversary cannot reconstruct the result of a private-key operation.
SSL protects the communication between the application and share
servers.  In order to break the ITTC scheme, an attacker must
compromise multiple systems in a short period of time.  One can also
identify malicious share servers.
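
The "never reconstructed" property is easy to demonstrate with a toy
version using additive RSA key shares (a drastic simplification of the
Boneh-Franklin and Frankel techniques ITTC actually uses; here all
shares are required, rather than a true k-of-n threshold):

    import random

    # Textbook RSA parameters, for illustration only.
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)     # n = 3233, phi = 3120
    e, d = 17, 2753                       # e * d = 1 (mod phi)

    # Split d additively among three share servers.
    d1 = random.randrange(phi)
    d2 = random.randrange(phi)
    d3 = (d - d1 - d2) % phi              # d1 + d2 + d3 = d (mod phi)

    m = 65                                # message representative

    # Each server computes a partial result on its own host ...
    partials = [pow(m, share, n) for share in (d1, d2, d3)]

    # ... and the application multiplies the partials together.
    sig = 1
    for part in partials:
        sig = sig * part % n

    assert sig == pow(m, d, n)            # same as using the whole key
    assert pow(sig, e, n) == m            # verifies under the public key

No host ever holds d itself, so a single compromised machine yields at
most one share.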

Wu's implementation adds intrusion tolerance to a Certificate
Authority and Apache web server.  Integration of ITTC was trivial with
the OpenSSL code.  By relying on load balancing and multiple threads
for parallelism, ITTC costs only a 17% drop in throughput and a 24%
penalty in latency compared to non-ITTC RSA.  Wu pointed out that
this tolerable slowdown is insignificant considering the overhead of
SSL session establishment.

Wu concluded that ITTC improves security and reliability for servers
and CAs.  Meanwhile, the performance impact is minimal.  An audience
member asked about the security consequences of an intruder obtaining
a core file from such an intrusion-tolerant web server.  Wu happily
replied that the intruder could only retrieve the result of a single
decryption, not the private key of the web server.  At no single point
is the key entirely reconstructed.  

For more information, see http://www.stanford.edu/~dabo/ITTC/.

----
Brute Force Cryptanalysis on UNIX Passwords with SIMD Computer
Gershon Kedem and Yuriko Ishihara, Duke University 

Gershon Kedem gave an overview of brute force cryptanalysis.  He
listed the primary barriers to brute force: expertise, non-recurring
time and cost, access to tools and technology, replication costs,
reusability, and performance.  Based on work with a SIMD computer, he
proposed a design of a SIMD computer dedicated to brute force
cryptanalysis.

Kedem spent a long time on a table comparing the experience, cost, and
time necessary for brute force when using software, FPGAs, ASICs, and
custom chips.  See the paper for the table.  Because the talk spent so
much time summarizing past research, the audience mostly lost touch
with the contributions from this paper.

A Single Instruction Multiple Data (SIMD) machine is made of a large
array of small processors.  This configuration makes it possible to
get close to the theoretical limits of the processor.  PixelFlow is a
SIMD machine made of many flow units.  Each flow unit includes an
array of 8192 processing elements.  Using 147,456 PixelFlow processing
elements (18 flow units), Kedem was able to perform brute force
cryptanalysis of
40-bit RC4 (38,804,210 key checks/second) and the UNIX crypt scheme
(24,576,000 UNIX password checks/second).  PixelFlow could try all
40-bit key combinations for a particular RC4 ciphertext in about 7.87
hours.
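
The 7.87-hour figure checks out with simple arithmetic:

    keyspace = 2 ** 40                       # all 40-bit RC4 keys
    rate = 38804210                          # key checks per second
    print(round(keyspace / rate / 3600, 2))  # -> 7.87 hours

The crypt(3) rate tells a similar story: at 24,576,000 checks per
second, a full 8-character lowercase-alphabetic password space
(26 ** 8 keys) falls in roughly 2.4 hours.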

Because UNC Chapel Hill created the PixelFlow machine for image
generation, it does have some limitations when used for cryptanalysis.
It has few registers, no dedicated shift unit, no pipelining, and no
memory indexing.  The lack of memory indexing prevents fast table
lookups.  Had PixelFlow used memory indexing, Kedem explained there
would have been a 64X speedup for RC4 and a 32X speedup for DES (but
the speedups from Kedem's talk are twice that of the figures in the
paper).  These limitations are specific to PixelFlow, not SIMD
machines in general.  Kedem then proposed a SIMD design for brute
force cryptanalysis and compared this to FPGA-based machines.

Adi Shamir's TWINKLE project was one buzzword mentioned in this talk.
However, an audience participant pointed out that TWINKLE does not
take into account that LEDs fade over time.  

For information related to brute force cryptanalysis, see
http://theory.lcs.mit.edu/~rivest/bsa-final-report.ps or contact Kedem
at Duke University.

----
Antigone: A Flexible Framework for Secure Group Communication
Patrick McDaniel, Atul Prakash, and Peter Honeyman - University of Michigan 

Patrick McDaniel introduced Antigone, a flexible framework for
defining and implementing secure group policies.  To demonstrate the
usefulness of Antigone, McDaniel integrated it with the vic video
conferencing tool to produce a secure videoconferencing application.

Antigone is unique in that it focuses on the following goals:

* Applications can flexibly use a wide range of security policies
* The system supports a range of threat models
* It is independent of specific security infrastructures
* It does not depend on the availability of a specific transport protocol
* The performance overheads are low

Taking into account that security policies vary from application to
application, Antigone provides a basic set of mechanisms for
implementing a range of security policies.  First, a session rekeying
policy determines which membership changes (join, leave,
process_failure, member_eject) trigger distribution of a new session
key.  Second, an application message policy selects per-message
guarantees such as confidentiality, integrity, group authenticity, and
sender authenticity.  A third policy specifies what membership
information other members can obtain.  The fourth policy determines
under what circumstances Antigone can recover from failures.
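
A sketch of how such a policy might be expressed (the names here are
mine, not Antigone's actual API) makes the four mechanisms concrete:

    from dataclasses import dataclass, field

    @dataclass
    class GroupPolicy:
        # 1. Session rekeying: which membership events force a new key.
        rekey_on: set = field(default_factory=lambda: {
            "join", "leave", "process_failure", "member_eject"})
        # 2. Application message policy: per-message guarantees.
        guarantees: set = field(default_factory=lambda: {
            "confidentiality", "integrity", "group_authenticity"})
        # 3. Membership policy: what members learn about each other.
        expose_membership: bool = False
        # 4. Failure policy: whether the session recovers on its own.
        recover_from_failures: bool = True

        def needs_rekey(self, event):
            return event in self.rekey_on

    # A sensitive conference rekeys on every change; a broadcast
    # application might drop "join" from rekey_on to reduce churn.
    policy = GroupPolicy()
    assert policy.needs_rekey("member_eject")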

One participant asked how to demonstrate that all policies are
complete.  McDaniel explained that his current work does not have a
complete answer.  Another participant asked for comparisons between
Antigone and other software to implement security policies.  McDaniel
responded that Antigone currently does not take into account peer
groups, voting, negotiating protocols, or ciphers.  However, he sees
no fundamental reason preventing Antigone from addressing these
issues.

In the future, McDaniel hopes to investigate adaptive and role-based
policies, implement new mechanisms, benchmark Antigone, and integrate
the software with real applications.  For more information, see
http://antigone.citi.umich.edu/ or email pdmcdan@eecs.umich.edu.



Security Practicum Refereed Papers Track
Session Chair: Wolfgang Ley, DFN-CERT

----
The Design of a Cryptographic Security Architecture
Peter Gutmann, University of Auckland, New Zealand

With longer hair than last year, Peter Gutmann gave a fast-paced talk
on how to design a versatile, multi-platform cryptographic
architecture.  The implementation works on many platforms, ranging from
16-bit microcontrollers to supercomputers and ATMs.

Most security toolkits specify an API, not an architecture.  In
contrast to this traditional outside-in approach, Gutmann works from
the inside out: he starts with a general cryptographic architecture and
then wraps an interface around it.

Gutmann built his architecture based on two concepts: objects
encapsulate the architecture functionality while a security kernel
enforces a consistent security policy.  Each object has tightly
controlled I/O for security reasons.  Objects are either action
objects (e.g., encrypt, decrypt, sign) or container objects.  The
containers further decompose into three object types: data containers,
key and certificate containers, and security attributes.

Gutmann found that C did not work well for implementing this
object-based architecture.  The interface is available in several
languages, ranging from C/C++ to Perl and Visual Basic.  Gutmann also
wrote a formal specification and used the Assertion Definition
Language (ADL) to verify his code.

An object can be in one of two states, low or high.  In the low state,
one can perform all allowed operations on the object.  In the high
state, one can only perform a limited, safe subset of those
operations.  An audience member asked whether Gutmann's design
prevented an object from having more than two states (i.e., whether
the state was implemented as something like a single-bit flag).
In Gutmann's experience, two states are sufficient.  The security
kernel supports an infinite number of states, but expecting the user
to manage them all is very complex (they would have to track a complex
FSM as an object moves from one state to another) and so far there has
not been any real need to use more than two.
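
A sketch of the low/high discipline (simplified; in cryptlib a central
security kernel mediates every message rather than the object checking
itself):

    class ActionObject:
        # In the low state all permitted operations are allowed; once
        # a key is loaded the object moves to high, where only a safe
        # subset remains.  The transition is one-way.
        def __init__(self):
            self.state = "low"
            self._key = None

        def load_key(self, key):
            if self.state != "low":
                raise PermissionError("object is high: key is fixed")
            self._key = key
            self.state = "high"

        def encrypt(self, data):
            if self.state != "high":
                raise PermissionError("no key loaded yet")
            return bytes(b ^ self._key for b in data)  # stand-in cipher

    obj = ActionObject()
    obj.load_key(0x5A)
    ciphertext = obj.encrypt(b"secret")
    try:
        obj.load_key(0x42)          # rejected: cannot replace the key
    except PermissionError:
        pass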

Questioned about flexibility of message passing, Gutmann replied that
his architecture supplies a fixed set of standard messages.  Another
participant felt that the cryptography seemed incidental to the talk.
Finally, someone asked about performance issues with message passing
through the security kernel.  Gutmann found surprisingly little
overhead from the message passing.  He believes that the heavy use of
multithreading mitigated such performance hits.

Everything is implemented in Gutmann's cryptlib library, which lives
at http://www.cs.auckland.ac.nz/~pgut001/cryptlib/.

----

Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0
Alma Whitten, Carnegie Mellon University; 
J. D. Tygar, University of California at Berkeley

Giving a breath of fresh air to the USENIX community, Alma Whitten
raised serious issues about deficiencies in cryptographic user
interfaces.  The audience asked lots of questions and received the
talk extremely well.  After introducing user interface (UI) issues,
she presented the critical results of a usability evaluation of PGP 5.0.

Recognizing that security is fundamentally different from
traditional software, Whitten defined usability for security as:

* A user can tell what needs to be done
* A user can figure out how to do it
* A user does not make dangerous errors
* A user does not get annoyed and give up

Whitten noted that usability work on applications such as word
processors focuses on the second point, but in security one cannot
stop there.  For instance, no undo function can reverse the accidental
mailing of plaintext.

Whitten explained that "security is not a word processor" because of:

* Unmotivated users (security is a secondary goal)
* The barn door problem (the undo problem)
* The abstraction problem
* The software is only as strong as the weakest link
* Lack of feedback

Whitten chose PGP 5.0 with the Eudora plugin on a Macintosh for a case
study because the interface was reasonably well designed by general
consumer software standards.  Her team used two methods to evaluate
PGP: laboratory testing (objective, limited, and expensive) and
cognitive walkthroughs (subjective, comprehensive, and cheap).

The cognitive walkthrough showed that the visual metaphors needed more
thought.  For instance, the quill-pen signature icon can confuse users;
one user mixed up a .sig file with signatures.  There is also
information overload: displaying the whole key ring along with its
metadata is too much visual data.  With respect to the barn door and
feedback problems, the interface needs more protection against
irreversible errors such as sending plaintext.

In the laboratory test, users had 90 minutes to perform a series of
actions.  Users worked in the scenario of a political campaign.  The
user had to coordinate securely with five virtual campaign members.
The 12 participants ranged in age from 20 to 49, in education from some
college to a doctoral degree, and in background from fine arts to
computer technology.  Half of the participants were male.

Nearly everyone could generate key pairs (after a few users
misunderstood and generated keys for each campaign member).  A quarter
of the users (3 of 12) sent plaintext instead of ciphertext; 2 of these
3 realized the mistake.  Several users encrypted the message with their
own public key instead of the recipient's key; nearly everyone fell
into this trap.  Eventually, after receiving error messages from the
virtual campaign members, the participants were able to send encrypted
mail correctly.  Half of the participants eventually encrypted a
message, and 25% did it without much help.

By the time the first tests were done, only 5 people had gotten far
enough to receive an encrypted message, and most of them could decrypt
it.  However, some mixed up ASCII-armored keys with PGP messages, since
the two look the same.  The study concluded that although PGP 5.0 with
Eudora has a nice UI, competent people in the target group could not
use it effectively.  Whitten suggests that to fix these deficiencies,
one should simplify the UI, minimize what is less important, and add
the right kind of help.

A participant suggested that maybe the problem is the PGP model.  Can
we get a reasonable interface with PGP?  Whitten responded that her
group looked at where the PGP model did not match users' expectations.

Avi Rubin asked how much documentation and training the test subjects
received.  Whitten gave each subject printed, bound copies of the PGP
and Eudora manuals in addition to a quick tutorial on how to send email
with Eudora.  In other words, the test subjects had more resources
than most users.  Moreover, the test subjects read the manuals.

Another audience member asked how many users did an out-of-band check
to see if the encrypted messages worked.  These details are in the
paper, but Whitten noted that one technical person noticed the keys
were signed and understood that trust in the key had to do with
signatures on the key.  The majority of users did not understand the
trust metric.

Derek Atkins humorously defended himself by saying that he designed
much of PGP's core API, but not its UI.  He asked about problems
relating to
confusion between various key types.  Whitten said that one virtual
campaign member had an RSA key and would complain to the test subjects
about problems decrypting email.  Only one subject figured out that
the email had to be encrypted once with RSA and once with DSA.

Greg Rose added his anecdote that USENIX used to run a heavily
scripted keysigning.  It worked well, but about 2/3 of messages would
require human interaction.

Another participant asked about the importance of interruptions (e.g.,
dialog box warnings) to the user about security.  Whitten explained
that there was no time to look at user tolerance levels.  Ideally one
would make the UI sufficiently obvious to prevent such dangers in the
first place.

Questioned about how much of the problems resulted from poor
interaction with the Eudora plugin versus the core PGP interface,
Whitten explained that it is hard to separate the two.  The UI needs
to make it obvious what is encrypted and what is not.  For further
information, email alma@cs.cmu.edu.

Good quotation: "Security is not a word processor" - Whitten

----
Jonah: Experience Implementing PKIX Reference Freeware
Mary Ellen Zurko, John Wray, Iris Associates; Ian Morrison, IBM; Mike
Shanzer, Iris Associates; Mike Crane, IBM; Pat Booth, Lotus; Ellen
McDermott, IBM; Warren Macek, Iris Associates; Ann Graham, Jim Wade,
Tom Sandlin, IBM

John Wray described the reference implementation of the Internet
Engineering Task Force's (IETF) Public Key Infrastructure (PKIX).
Wray explained the motivation behind the project.  IBM needed a
pervasive, open PKI for business; major vendors pledged support for
PKIX; and there was a need for reference code to exercise the PKIX
specifications.

The team implemented the PKIX specifications, which consist of several
RFCs covering X.509, Certificate Management Protocols (CMP), Certificate
Request Message Format (CRMF), and LDAP v2, as a C++ class library.

Wray highlighted what the team learned from this experience.  What did
they do right?  They used:

* A single data structure for protocols and persistent data
* C++ destructors to minimize memory leaks
* A single backend good for promoting code reuse
* An unfamiliar platform (NT) for development

However, Wray also noted what they did wrong:

* No proper error architecture (but Wray has never seen a good one)
* Interoperability testing came too late
* Sloppiness with respect to case sensitivity.  NT is case insensitive,
  but UNIX is case sensitive.
* Standard Template Library (STL) problems


Asked how easy it is to use the CDSA architecture, Wray replied that
CDSA 1.2 did indeed present difficulties.  Questioned about certificate
revocation, Wray replied that the implementation publishes certificate
revocation lists.

For more information, see http://web.mit.edu/pfl/ or read the archives
of imc-pfl@imc.org on http://www.imc.org/imc-pfl/.