USENIX ECommerce '98: The Third USENIX Workshop on Electronic Commerce
September 1-3, 1998, Boston, Massachusetts

by Radha Poovendran, University of Maryland,
and Kevin Fu, Bellcore

[The report authors provided notes independently on selected papers,
which we have hacked together. ---Ed.]

The 3rd USENIX Workshop on Electronic Commerce was held in Boston, MA,
September 1-3, 1998, preceded by a day of tutorials on August 31.
Bennet Yee from the University of California, San Diego, was the
program chair, and Daniel Geer from CertCo, LLC, was the PKI sessions
coordinator.  One exciting conference moment came when a fire alarm
went off during the night of September 1, rousing attendees from their
beds.  Efforts to locate the source of the alarm proved futile.

Stuart Feldman from the IBM Institute for Advanced Commerce was the
keynote speaker.  His talk was on research directions in electronic
commerce.  He focused on the communications, computing, commercial,
and privacy aspects of e-commerce.  He identified (a) the Internet, (b)
WWW, (c) Crypto Protocols, (d) Payment Technologies, (e) Recommender
Systems as some of the research directions and went on to identify the
following areas as some of the important specific areas: (a) privacy,
(b) variable prices and negotiated deals, (c) evolving market place,
(d) managing the end customer, (e) impact of globalization, (f) system
foundation.  From the service side, he identified speech processing,
agent technology, complex multimedia including video as potential
problems to be addressed. In discussing issues related to the
communications, he referred to the last mile problem indirectly by
mentioning the issues related to the bandwidth problems for the local
end customer and also at the international level.  The role of quality
of service as a metric was mentioned, and it was noted that the metric
may suit business-critical and high-end applications.  Examples of the
Nagano Olympics (>= 100k hits/min) and Wimbledon (>= 145k hits/min)
services on the web were presented.  In discussing e-commerce-related
privacy, he noted that many companies may not have appropriate policies
regarding customer privacy, a hot-button issue in 1998 for both the US
and the EU.  It was noted that from the privacy view the
following are critical for customer confidence:  (a) preventing the
leakage of customer information, (b) developing anonymous transaction
technologies, (c) need for having an information policy, (d) need for
establishing credibility and enforcement of the policy.  In terms of
establishing an e-commerce setup (a) image building, (b) variable
prices and negotiated deals such as auctions were discussed in detail.
In discussing technology issues he noted that (a) rapid yet correct
implementation, (b) deploying the new technology without disturbing the
old one, (c) auditable business process, (d) identification of new
market places and implications, (e) insurance and travel, all are to be
carefully addressed.  He also noted that with the e-commerce come the
following set of new dynamics: (a) new intermediaries, (b) hierarchies
vs. new markets, (c) breaking and reformation of large firms, (d)
unpredictable surges of demand, (e) unknown interaction patterns and
implications, (f) complex software interactions. Scalability was
discussed in terms of (a) number of agents each customer may have, (b)
network scale, (c) computing scale, (d) mobile human and agent support.
Database-related issues such as integrating new and old customers were
discussed.  Questions were raised about the Nagano and Wimbledon web
access numbers provided.  Someone noted that it was not clear whether
the numbers given were peak hit numbers or averages.  Another person
pointed out that they were probably peak numbers, since many people
check the results of certain games and not all the games.

Advances in Payment Technology: Chaired by Clifford Neuman, University
of Southern California

Electronic Commerce and the Street Performer Protocol
Bruce Schneier and John Kelsey, Counterpane Systems

Bruce Schneier presented two peer-to-peer software-payment systems
designed so that every user can be a buyer as well as a seller.  One
used online clearing and the other off-line clearing.  They noted that
any payment system should meet the following criteria: (a) secure, (b)
cheap, (c) widely available, (d) peer-to-peer.  They noted that making
the protocol lightweight meant using few or no public keys, and hence
interacting with a central server, which drives up communication
costs.  In their online clearing, they allowed the
users to hold only local secrets and allowed the banks to hold the
global secrets. Users authenticated themselves to the trusted server of
the bank instead of to each other.  This protocol requires that the
person accepting the payment have an accurate clock, and it is very
similar to the Kerberos protocol.  A small amount of memory to keep
track of recently transferred amounts is also needed.  All the peers
have sequence numbers that should never repeat or go backward; they
increment one value at a time.  If Alice wants to make a payment to
Carol, Alice first forms a message with her current sequence number,
payment request, payment amount, and a hash of the audit log.  Alice
chooses a random key K_0 and encrypts it with the key she shares with
the bank, K_A.  She then encrypts the message with the random key and
sends her ID and the encrypted messages.  The bank checks for
correctness and, for "good" requests, generates and sends the needed
authorization and an additional verifier for Carol via Alice.  Carol
can check the amount and the authorization.  They modified the protocol
based on the observation that the synchronization may be lost at times.
The modified protocol is more complicated and calls for verifying from
time to time if the users have received all the deposits they were
supposed to at the time of verification. For the case of off-line
clearing, they note the similarities to a checking account. It uses
public keys and certificates with no CRL. The certificates are short
lived. Clock synchronization is implicit in this protocol as well.
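
The online-clearing exchange described above can be sketched roughly
as follows.  This is a hedged illustration, not the authors' protocol:
the function names are hypothetical, HMAC stands in for authentication
under the shared key K_A, and the actual encryption of the message
under the random key K_0 is elided.

```python
import hashlib
import hmac
import json

def make_payment_request(seq, amount, payee, audit_log, k_a):
    """Alice's request: her current sequence number, payment details,
    and a hash of her audit log, authenticated under the key K_A she
    shares with the bank (a stand-in for the encryption in the paper)."""
    body = {
        "seq": seq,                       # must only ever move forward
        "payee": payee,
        "amount": amount,
        "audit_hash": hashlib.sha256(audit_log).hexdigest(),
    }
    msg = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(k_a, msg, hashlib.sha256).hexdigest()
    return body, tag

def bank_check(body, tag, k_a, last_seq):
    """The bank verifies the tag and that Alice's sequence number
    incremented by exactly one, rejecting replays and rollbacks."""
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(k_a, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and body["seq"] == last_seq + 1
```

The sequence-number check is what makes the bank's small memory of
recent state sufficient to stop replayed payment requests.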

Variety Cash: A Multi-Purpose Electronic Payment System
Mihir Bellare, J. Garay, C. Jutla, M. Yung

Variety Cash has the issuer act as the mint.  Coins are tokens
authenticated under the issuer's master key.  The system is online:
the issuer has a coin database, and the merchant checks with the
issuer for the validity of a token.  The master key resides only at
the issuer site, and spent coins can be erased from the database.  At
the time of withdrawal, the issuer checks the association of the user
ID and the token, but the database of ID associations is separate from
the coin database that the merchant can look up.  They noted that
coins can be bought in any number or denomination and that the
protocol provides atomicity for withdrawal and spending.  They noted
that the main cost arises from the requirement for an online issuer.
This leads to some investment in processing capability for the issuer
but may be reasonable for a moderate load of users.  Jutla noted that
they have an implementation in place.
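
A toy sketch of this flow, with hypothetical names: a coin is a token
MACed under the issuer's master key, the issuer keeps an online coin
database that merchants query, and spent coins are erased so they
cannot be double-spent.

```python
import hashlib
import hmac
import secrets

MASTER_KEY = secrets.token_bytes(32)   # held only at the issuer site
coin_db = {}                           # serial -> value; no user IDs here

def mint_coin(value):
    """Issuer mints a coin of the requested denomination."""
    serial = secrets.token_hex(8)
    tag = hmac.new(MASTER_KEY, f"{serial}:{value}".encode(),
                   hashlib.sha256).hexdigest()
    coin_db[serial] = value
    return serial, value, tag

def redeem(serial, value, tag):
    """Merchant forwards the coin; the issuer checks the MAC and the
    database, then erases the coin so it cannot be spent again."""
    expected = hmac.new(MASTER_KEY, f"{serial}:{value}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected) or coin_db.get(serial) != value:
        return False
    del coin_db[serial]
    return True
```

Keeping the coin database separate from the ID-association database,
as the paper notes, is what keeps merchant lookups unlinkable to users.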

Netcents: A Lightweight Protocol for Secure Micropayments
Tomi Poutanen, Michael Stumm, Heather Hinton

NetCents supports transactions from a penny up to larger amounts.  It
is an off-line scheme and does not require the issuer to be present at
the time of transaction.  The key idea is the extension of the scrip
in Millicent to a floating scrip.  A NetCents floating scrip is a
signed container of electronic currency passed from one vendor to
another such that at any time it is active at only one vendor
location.  The NetCents scrip is not vendor specific and contains a
public part (also called the vendor scrip) and a private part.  The
vendor scrip contains the public key and the monetary balance; it is
signed by the issuing authority and distributed to the vendor upon
customer request.  Purchasing is executed with the help of an
electronic purchase order (EPO), which contains a snapshot of the
scrip signed by the private key in the customer scrip.  The EPO
identifies the payer, payee, purchased item, and the balance
remaining.  Because the NetCents scrip is not vendor specific, it
eliminates the need for any broker services.  NetCents proposes to
prevent vendor fraud by having the vendor pay an up-front fee of some
sort to the bank as insurance against such fraud.  NetCents is atomic
in money and goods but does not support Tygar's notion of certified
delivery.  NetCents provides online arbitration with an audit of the
signed EPO.
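
The EPO described above might look roughly like this.  The sketch is
hypothetical: an HMAC under a key held only by the customer stands in
for the public-key signature a real NetCents EPO would carry, and the
field names are illustrative, not the paper's.

```python
import hashlib
import hmac
import json

def make_epo(customer_key, payer, payee, item, balance_after):
    """Snapshot of the scrip state, 'signed' by the customer."""
    epo = {"payer": payer, "payee": payee, "item": item,
           "balance": balance_after}
    blob = json.dumps(epo, sort_keys=True).encode()
    sig = hmac.new(customer_key, blob, hashlib.sha256).hexdigest()
    return epo, sig

def audit_epo(customer_key, epo, sig):
    """Online arbitration: re-verify the signed EPO, as in the audit
    the paper describes (a toy arbiter holding the verification key)."""
    blob = json.dumps(epo, sort_keys=True).encode()
    expected = hmac.new(customer_key, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```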

Session 2:

Public Key Implementation Case Study
Presenter: Juan Rodriguez-Torrent, IBM and NACHA
Respondent: Steve Cohen, nCipher Inc.

Juan noted that the mission of the NACHA Internet council is to
"facilitate the development of global electronic commerce by enabling
businesses and consumers to utilize present and future payments over
open networks in a secure and cost-effective manner".  He noted that
there are three Internet council working groups based on three
components of e-commerce, namely (a) trust, (b) risk, (c) payments.
Pilot assumptions were: (a) financial institutions function as CAs for
their customers, (b) the pilot uses the process of authorizations for
pre-authorized debits as the testing environment, (c) the pilot
requires sufficient diversity to demonstrate interoperability, (d) the
pilot will test online validation and CRLs.  Pilot participants are:
Bank of America, Citibank, Mellon Bank, Zion's Bank, CertCo & Digital
Trust Company, Entrust, GTE CyberTrust, IBM Corporation, and VeriSign.
Other organizations involved that will form the CARAT are NASIRE,
NASPO, NASACT, individual states, federal government agencies, etc.
agencies etc.  Pilot components included technical implementation,
business practices, legal agreements, and certification policy.  He
then presented the four-corner model and identified the bank,
customer, and merchant communication technology requirements.  Lessons
learned to date were summarized as follows.  From the legal team:
certificate policy is the glue.  From the technical team: (a) clarity
and specificity matter, (b) agreements at the technical level are not
sufficient, (c) there is no such thing as a small project when
competing interests are present.  He identified the showstoppers as
(a) lack of generalized client-server software, (b)
lack of user interface feedback, and (c) lack of consistency in the
user interface.

Auction Markets: Chaired by Avi Rubin from AT&T Research

In a humorous effort to address the research topic of the session, Avi
began the session by conducting an English auction. The initial asking price
was $40.  I was tempted to offer $10 below his asking price.  With different
bidders from the audience willing to pay more money, the auction was won by
Win Treese, who paid $90 for a plaid USENIX dress shirt, a Crowds T-shirt,
a ribbon that labeled him a child process (he had a baby girl a couple
of weeks earlier), and the computer security book of his choice. Avi
used a protocol with Bennet Yee, the program chair, as the trusted third party
and the $90 went to the charity of the winner's choice.

The Auction Manager: Market Middleware for Large-Scale Electronic
Commerce
Tracy Mullen, Michael Wellman, University of Michigan

Tracy presented. Their paper addressed the problem of hiding the
complexity of purchasing from a vast and dynamic array of goods and
services and presented a solution using the auction manager. Their
auction manager model was part of the University of Michigan Digital
Library commerce infrastructure.  Their solution was based on applying
inference rules for a specific buyer and seller and the possible
market offerings at the time of the query.  They also noted that their
model was capable of responding to the dynamic nature of demand by
composing or decomposing market offerings.  The task is accomplished by
generating and tracking auctions, matching agents to potential markets,
and providing means to notify agents when the markets of interest to
them are being offered. Tracy noted that they intend to experiment with
various policies for auction creations in the future.

Internet Auctions
Manoj Kumar, Stuart Feldman, IBM T. J. Watson Research Center

Manoj presented.  Early in his talk, Manoj noted that his work was
implemented so that it reused several parts of already existing IBM
software products, reflecting Stu Feldman's keynote point about
deploying new technology without disturbing the old.  Manoj noted that
an auction application should be flexible enough to support various
types of auctions around the global market if it is to be used
successfully.  Manoj and Mike (co-author of the next paper) took time
to review different types of auctions.  In an English
Auction or Open-Cry auction, buyers gather physically or virtually at a
prespecified location at a fixed time. Each buyer is allowed to hear
the bids of the competitors and is allowed a time window within which
he/she has to offer a higher bid to be the successful bidder.  In a
sealed-bid auction, buyers are required to submit their bids before a
deadline, and the bids are kept secret until the deadline.  At a
certain time after the deadline, the bids are revealed and the winner
is chosen.  Sealed auctions can be conducted in a multiround format to
simulate the "excitement" of the frenzy seen in the open-cry auction
(which Avi conducted).  Dutch auctions are better suited for
perishable goods: the auctioneer asks a high price at the beginning
and keeps decreasing the price until a buyer emerges at the asking
price.  In this type of auction, not all buyers will be given the same
price.  The nature of the negotiated price depends on the
"desperateness" of the auctioneer.  One example is airline service,
which has very high price fluctuation depending on the time of
purchase.  Manoj noted that security is a major issue and referred to
Franklin and Reiter's earlier work on a secure auction protocol.  His
talk touched upon legal issues, cheating, sabotage, scalability and
availability, social issues, double auctions, and stock exchanges.
His talk created a flurry of questions, as in a panel discussion.
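
The price dynamics of the Dutch auction reviewed above can be
illustrated in a few lines.  This is a toy model, not from the paper;
the names and parameters are invented for illustration.

```python
def dutch_auction(start_price, decrement, buyers):
    """buyers maps a buyer's name to the highest price that buyer will
    accept; the auctioneer descends until the first buyer emerges."""
    price = start_price
    while price > 0:
        takers = [name for name, limit in buyers.items() if limit >= price]
        if takers:
            return takers[0], price   # first buyer to emerge wins
        price -= decrement
    return None, 0                    # no buyer emerged
```

The decrement schedule is where the auctioneer's "desperateness" shows
up: a steeper decrement concedes price faster.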

Electronic Auctions with Private Bids
J. D. Tygar, Michael Harvey, CMU

Mike presented.  They used the concepts of secure computation with
polynomial schemes to develop computational methods for first-price
and second-price auctions without revealing the individual secrets to
the auctioneers.  Making use of the results of the BGW paper and the
error-correcting polynomial constructions therein, the authors were
able to choose appropriate conditions on the polynomials to ensure not
only anonymity but also error correction in their computational model.
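
The polynomial secret sharing at the core of this construction can be
sketched as plain Shamir sharing over a prime field; the full protocol
layers BGW-style secure computation and error correction on top, which
this sketch does not attempt.

```python
import random

P = 2**31 - 1   # prime modulus for the field

def share_bid(bid, t, n):
    """Split a bid into n shares: the bid is the constant term of a
    random degree-t polynomial, so any t+1 shares reconstruct it while
    t or fewer reveal nothing about the bid."""
    coeffs = [bid] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```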

Wednesday September 2

Trust Models
Presenter: Paul van Oorschot, Chief Scientist, Entrust Technologies

Respondent: Bill Frantz, Electric Communities 

After an announcement that both Northwest and Air Canada were now on
strike, Paul van Oorschot talked about trust models for Public Key
Infrastructures (PKI).  Bill Frantz responded to van Oorschot's
definitions.  Van Oorschot received his doctorate from the University
of Waterloo and is a co-author of the Handbook of Applied
Cryptography.

Bill Frantz has been working in the computer security business for over 25
years.  He has worked on security for commercial timesharing systems,
private communication systems, and systems designed for the open Internet.
He is one of the early designers/implementors of the KeyKOS operating
system, a secure capability-based system designed to meet B3 security
requirements.

Van Oorschot defined general ideas concerning trust.  The statement "A
trusts B" denotes that "A assumes that B will behave exactly as A
expects."  In questions after the session, Greg Rose suggested that
this definition was enlightening - in fact, it explains why we all
trust Bill Clinton.  The rest of the presentation concerned trust
relationships and trust models.

Common mechanisms to establish trust include hard-coded information in
software (e.g., Netscape certificates), an out-of-band digest (e.g.,
certification validation via fingerprints), secure communication
protocols, and signatures from a trusted authority.  In selecting a
mechanism, one considers cost, the requirement for trust maintenance,
compatibility with existing organizational capabilities, and the
ability to scale.

Van Oorschot outlined four basic trust models: hierarchical, enterprise
(or distributed), browser, and personal.  All the models utilize the
concept of a certification authority (CA) for scalability and
distribution of risk.

In the hierarchical trust model, all parties start with a CA's public
key.  In order to establish trust in another party, one needs a chain
of certificates from the root CA to the party in question.  The
hierarchical model makes sense when there is a closed system or an
obvious entity to play the role of the root CA.  For instance, this
makes sense for SET where there is one root -- organizations such as
VISA would operate below the root.  However, it is rare for an
organization to be truly hierarchical, except for a dictatorship or
centralized corporate structure.  Moreover, the root CA becomes a
single point of failure.  This would be particularly problematic if
the PKI were part of a critical infrastructure.  Van Oorschot also
noted that a PKI should be about building communities of trust you
want to belong to.  This differs from the case of a closed system
where everyone would more likely join a single community of trust.
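
The chain-of-certificates requirement can be reduced to a schematic
check.  Certificates here are bare (issuer, subject) pairs; a real
validator also verifies each signature, validity period, and policy.

```python
def chain_ok(chain, root):
    """A leaf is trusted only if every certificate in the chain was
    issued by a party already trusted, starting from the root CA."""
    issuer = root
    for cert_issuer, subject in chain:
        if cert_issuer != issuer:
            return False          # broken link in the chain
        issuer = subject          # the subject may issue the next cert
    return True
```

The single anchor at the top is exactly what makes the root CA a
single point of failure in this model.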

In the enterprise (or distributed) trust model, parties trust local
CAs.  Although there is no all-powerful authority, qualified
relationships can be established between local CAs.  Moreover, special
cases allow for hierarchies and spoke-and-hub (e.g., ANX automotive
exchange) models.  But if everyone trusts the hub to certify, the
model becomes analogous in some ways to a rooted hierarchy.  The key
difference is that in a rooted hierarchy trust is anchored at the
root, i.e., that is where trust chains start; in the hub model,
however, trust flows through the hub.  The
enterprise model makes sense when there is no obvious entity to play
the role of root.  Additionally, this model allows bottom-up growth
(like the Internet) and no CA becomes a single point of failure.

In the browser trust model, each CA operates as a root for its own
hierarchy.  The browser comes stocked with a set of hard-coded CA
public keys.  This model is capable of large scale.  However, the end
user has no idea which CA key is being relied on to establish trust
relationships.  Also, a typical user will simply click notices away to
clear the dialog boxes.

In the personal trust model (related to the web of trust), all
entities are end users.  Since there are no CAs, interactions are one
on one.  The end user imports public keys from other end users.  This
model best suits security-aware individuals.  The model has poor
scalability.

It may be desirable to be able to revoke a certificate.  For instance,
one could specify an expiration date or certificate revocation list
(CRL).  In practice, revocation is the hardest problem.  The
difficulty arises not in the cryptography, but in the software
engineering.  Van Oorschot emphasized that short lifetimes do not work
well for signature keys.  A signature key typically needs to last a
long time.  However, short lifetimes can work when certifying
encryption keys.
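
The two revocation mechanisms mentioned, expiration dates and CRLs,
combine into a simple validity check.  The field names are
hypothetical; the hard part, as van Oorschot notes, is the software
engineering of distributing fresh CRLs, which this sketch ignores.

```python
def cert_valid(cert, now, crl):
    """A certificate is accepted only if it has not expired and its
    serial number is absent from the current revocation list."""
    return cert["not_after"] >= now and cert["serial"] not in crl
```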

[Radha also observed:
Van Oorschot noted that the characterizing questions/issues of trust
models are: (a) who certifies the keys, (b) how easy it is to create,
maintain, and update, (c) granularity of trust, (d) ability of the
technology to adapt to support existing businesses, (e) how easy it is
to revoke.  He noted that the granularity of trust increases in the
order hierarchy --> browser --> enterprise --> web of trust (personal
trust), and the increasing capability to represent inter-business
trust is in the order hierarchy --> browser --> personal -->
enterprise.  He summarized his talk by pointing to the issues of (a)
manageability of trust relationships, (b) each model has its place
(which I call the PVO Lemma). ]

In closing, van Oorschot predicted that if a global PKI arises, we
will see a variety of trust models in use.

Next, Bill Frantz responded to a few points from van Oorschot's
presentation.  He began by telling a story of trust, "I trust my wife;
she meets the definition of trust."  But it is not clear whether this
model is useful for commerce.  Frantz also questioned van Oorschot's
loose definition of trust.  One trusts someone or something for a
particular purpose.  Trust is not binary nor is it transitive.

What level of trust is needed for commerce?  Frantz drew an analogy to
trade in Malaysia.  Commerce is as natural to humans as is breathing.
A vendor trusts the paper money.  A customer trusts that the goods
will fulfill an expectation.  There is a minimal level of trust necessary
for commerce.

Several audience members lined up to ask questions.  An audience
participant asked van Oorschot why he discussed trust as a binary
relationship rather than a degree of trust.  For instance, it may be
common to ask with how much money do you trust a person.  Van Oorschot
responded with a question.  Do you trust someone 60% of the time?  Is
there a limit to trust?  As far as transitivity goes, it does not hold
up well.  In practice, one needs contractual agreements behind all
these statements.  Another person asked why van Oorschot's model does
not say, "A trusts B to degree D."  Van Oorschot simply responded that
you need to walk before you run.

Greg Rose pointed out that trust comes with risk.  Whenever we
introduce trust, there is automatically a risk.  Rose asked whether
this can work backwards.  That is, whenever there is risk, is there
some implied level of trust?  Frantz gave an inconclusive answer, but
he believes trust and risk are related in the majority of cases.

An attendee stated that the browser trust model left out the browser
manufacturer and delivery agent as trusted parties.  Users are
trusting more than just an explicit CA.  Van Oorschot agreed that
there is some degree of trust, but he chose to answer the question

Dan Geer pointed out that trust is the issue, but revocation is the
hard problem.  Is the main issue how risk is propagated?  Bankers
think about cashing in by packaging risk.  Aside from management of
risk, a problem comes up with how to resolve disputes.  Was a
certificate revoked at a certain time?  Frantz responded that in terms
of risk management, we should not move risk around.  We pay for risk
one way or another.  Rather, one should work on a system to reduce
risk.  Geer agreed, but he reasoned that if CAs are driven by banks in
the future, banks will consider risks as something bought and sold at
a profit.  Frantz claimed this is already true for credit card based
Internet commerce.  It all comes down to insurance and a 3%
transaction fee.  Geer finally asked whether revocation models and
trust models are necessary one for one or whether they can exist in an
overlapping world.  Van Oorschot said there is no single answer.
Whoever issues certificates should be responsible for revoking its own
certificates.  Frantz evaded the question by saying, "I hope to read
the proceedings in the next conference."

The next question involved identification.  Van Oorschot explained
that an organization selling certificates could sell a money-back
guarantee that a particular individual is bound to a particular key.
In $10 million transactions, there is no way to expect a $10
certificate to hold water.  Authorization is different from
endorsing a public key.  It's a matter of trust in a public key versus
trust in what the public key stands for.

Another audience member asked whether bootstrapping based on passwords
is weak.  "All cryptography reduces you to trusting keys," van
Oorschot reminded the audience.  The cheapest method is a password.
If one is willing to pay more, one can require a hardware token.
Biometrics are another option.  Frantz further commented that password
security depends on physical security.

The following question began with a monologue, summarized below, on "a
neat toy" called public key signatures.  Since we have this high tech
wax seal, we are tempted to find new uses for it.  For years, people
have made commercial transactions under common law which dictate how
to deal when someone cheats or refuses to pay.  It seems as if we try
to solve everything with public key cryptography.  We are
technologists, not attorneys.  Why fit business models to tools
instead of fitting tools to business models?  Van Oorschot responded,
"If you can save money."

Another audience member asked how Entrust Technologies intends to
apply its patent on CRLs.  Entrust is making it available royalty free
on the condition that those who want to use the CRL technology must
make any of their related technology free as well.  This statement
drew much applause from the audience.

Earlier Frantz advised reducing risk rather than moving risk around.
An audience member commented that one can shift risk to a place you
do not care much about, but you are almost always moving risk.
Frantz responded with a counterexample.  Online credit card validation
reduces risks when compared to offline credit cards.  A clerk could
thumb through revoked credit card numbers -- or not bother.
Technology has reduced the risk.  Van Oorschot hinted that looking at
who owns the risk is a good place to start.

Secure Systems
Session Chair: Mark Manasse, Compaq Systems Research Center 

A Resilient Access Control Scheme for Secure Electronic Transactions
Jong-Hyeon Lee, University of Cambridge 

Jong-Hyeon Lee spoke about a way to authenticate customers without
having to disclose customer secrets to a merchant.  Lee is a student
of Ross Anderson.  Incidentally, Lee is also capable of security in
two dimensions -- Aikido.

Despite the vulnerability to copying, passwords and Personal
Identification Numbers (PINs) commonly authenticate customers to
service providers.  Lee sought a simple and secure electronic
transaction model that does not explicitly transfer customer secrets
and does not use public key cryptography.  A scheme by Needham to
control PINs is simple, provides for privacy, separates capabilities,
and is customer-oriented.  However, it is susceptible to replay
attacks and bogus ATMs.

Inspired by Needham's scheme to control bank PINs, Lee developed a
customer-oriented transaction model in which the customer generates
and maintains personal secrets.  The model allows for a transaction
procedure among three principals: a customer, a merchant, and a bank.
Principals can participate in registration, transaction, or
secret-revocation procedures.  A somewhat lengthy protocol specifies
the communication among the principals.  By using only hash functions,
Lee's model enhances privacy for the customer and ensures that
customer secrets are never disclosed to the merchant.

The registration procedure mimics that of Needham's scheme and the
transaction procedure uses a technique from KryptoKnight.  In Lee's
online scheme, the customer is involved with all procedures.  An
offline scheme works in a similar manner, but there is some extra
communication between the merchant and customer.
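
The hash-functions-only flavor of the scheme can be conveyed by a
simple challenge-response.  This is a hedged sketch, not the paper's
actual protocol messages: the point it illustrates is that the
customer's self-generated secret never crosses the wire and no
public-key operations are needed.

```python
import hashlib
import secrets

def fresh_challenge():
    return secrets.token_bytes(16)   # a nonce defeats replay attacks

def respond(secret, challenge):
    """Prove knowledge of the secret without transmitting it."""
    return hashlib.sha256(secret + challenge).hexdigest()

def verify(stored_secret, challenge, response):
    expected = hashlib.sha256(stored_secret + challenge).hexdigest()
    return response == expected
```

Because each challenge is fresh, a recorded response is useless later,
which is precisely the replay weakness of plain PIN schemes.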

Asked whether there exists an implementation, Lee explained that
there is no implementation of this scheme yet, but there is one for
Needham's scheme.

Trusting Trusted Hardware: Towards a Formal Model for Programmable
Secure Coprocessors
Sean W. Smith and Vernon Austel, IBM TJ Watson Research Center

Sean Smith from the Secure Systems and Smart Cards group of IBM
presented his findings on proving the security of secure coprocessors
with respect to FIPS 140-1 level 4 certification.  His group worked on
three goals: achieving level 4 certification as a research project,
verifying the soundness or finding the holes in the coprocessor, and
formally describing the coprocessor.

The Federal Information Processing Standard (FIPS) 140-1 specifies
security requirements for cryptographic modules.  The most stringent
level in the standard, FIPS 140-1 level 4, requires a formal model of
a system and a formal proof of security.  As of this writing, level 4
remains an unachieved grail.

A secure coprocessor is a piece of hardware that must survive in a
hostile environment.  It must guarantee that memory contents will be
zeroized upon any foreseeable attack.  A secure coprocessor needs to
defend against threats such as changes in voltage, temperature, and
radiation. Such a programmable device is useful for e-commerce.

A mechanical theorem prover iterated over a logical abstraction of the
coprocessor.  First, a formal model was made from a finite state
machine.  Then a specification was written in LISP to prove
simple properties of security.  The proof must show that the
coprocessor maintains its security guarantees despite hardware
failures and hardware attacks.  Guarantees for security fall into
three categories: safe execution, safe access, and safe zeroization.
Other assertions include authenticated execution, recoverability, and
fault tolerance.  The proof involves 2000 lines of C, 75 execution
states, and 7500 lines of a mechanical proof.
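
The flavor of the proof obligation can be shown with a toy state
machine.  This model is purely illustrative (the real effort used a
mechanical theorem prover over 75 execution states and 7,500 lines of
proof); it exhaustively checks a "safe zeroization" invariant over all
reachable states.

```python
# Transitions of a miniature coprocessor model; any unlisted "tamper"
# transition is taken to leave the device zeroized.
TRANSITIONS = {
    ("running", "tamper"): "zeroized",
    ("running", "power_fail"): "zeroized",
    ("zeroized", "reboot"): "running",
}

def reachable(start):
    """All states reachable from the start state."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for (src, _event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def zeroize_invariant_holds():
    """Safe zeroization: a tamper event from any reachable state must
    land in (or stay in) the zeroized state."""
    return all(TRANSITIONS.get((s, "tamper"), "zeroized") == "zeroized"
               for s in reachable("running"))
```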

Right now just the hardware and bootstrap is being submitted for level
4 certification.  IBM's plans for actual certification are still
undecided.  In this research, IBM went through a lot of the legwork
for the boostrap layer as an exercise.  However, Smith notes it would
be "really cool" to go all the way with it.  In the future, Smith
hopes to evaluate the programs on the coprocessor.  However, Smith
expects complications since the hardware could interrupt the software
and the software could start interrupting the software.

Pointing out that FIPS is aging, an audience member asked Smith to
share hints on where FIPS is falling short and where it goes too far.
Smith replied that on the too-stringent side, FIPS requires the use of
DSA for signatures. Everyone wants to use RSA, but to be FIPS
compliant the coprocessor must contain algorithms no one wants to use.
On the other hand, FIPS does not address security requirements of the
manufacturing process.

Another audience member brought up the topic of differential power
analysis with current fluctuations.  Many security attacks result from
crossing levels of abstraction (power analysis, buffer overrun, etc).
Smith was ambivalent on whether good proof techniques can capture
these attacks.

For more information, see the IBM 4758 product brochure G325-1118.


On Secure and Pseudonymous Client-Relationships with Multiple Servers
Daniel Bleichenbacher, Eran Gabber, Phil Gibbons, Yossi Matias, and
Alain Mayer, Lucent Technologies, Bell Laboratories

Alain Mayer talked about Janus, a cryptographic engine to establish
and maintain pseudonymous relationships.  Mayer enjoys hacking
JavaScript and having fun on the web.  Coincidentally, Mayer uses the
same Microsoft clip art in his presentation as does the Crowds project.

Janus facilitates relative pseudonymity.  That is, a client is
anonymous with respect to the client population (e.g., an ISP customer
base).  The server knows a message came from a particular client
population, but it does not know which member of the population.
Janus also allows for persistent relationships between clients and
servers.  Weak or strong authentication via passwords or keys allows
for repeat visits.

Absolute anonymity is hard to achieve.  There is a penalty in ease
of use and performance.  The work on Janus is complementary to other
anonymizing efforts and can be combined with other techniques.

There is a distinction between data anonymity and connection
anonymity.  In data anonymity, data flowing over a connection does not
reveal an identity.  In this case the adversary would attack server
endpoints.  In connection anonymity, the connection itself does not
reveal an identity and the vulnerability is traffic analysis.

There are several candidate Janus functions.  Mayer has three
requirements of the function.  First, it must ensure uniqueness of
aliases among clients and resist impersonation.  In other words, it
must be hard to find an input that results in the same alias.  Second,
the function must not reveal information about the client.  Third, there
must be forward secrecy and statelessness for client mobility.  Mayer
described one such function involving a password-keyed hash of a
client identifier, server identifier, and a usage tag.  Mayer finds
the CBC-MAC approach more promising than a simple hash because secrecy
under a chosen message attack implies secrecy of passwords.  The
CBC-MAC approach fulfills all three requirements.
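
The alias derivation described above can be sketched as follows. This is
an illustrative stand-in, not the paper's exact construction: the paper
uses a password-keyed CBC-MAC, while this sketch substitutes HMAC-SHA256
as the keyed MAC, and the function and field names are hypothetical.

```python
import hashlib
import hmac

def janus_alias(password: bytes, client_id: str,
                server_id: str, usage_tag: str) -> str:
    """Derive a stable pseudonym for a (client, server, usage) triple.

    Keyed by a client-side secret, so the alias reveals nothing about
    the client, is hard to forge without the key, and can be recomputed
    statelessly on any machine (client mobility).
    """
    msg = b"|".join(s.encode() for s in (client_id, server_id, usage_tag))
    return hmac.new(password, msg, hashlib.sha256).hexdigest()[:16]

# Same inputs always yield the same alias (persistent relationships) ...
a1 = janus_alias(b"secret", "alice", "shop.example", "mail")
assert a1 == janus_alias(b"secret", "alice", "shop.example", "mail")
# ... while different servers see unlinkable aliases for the same client.
assert a1 != janus_alias(b"secret", "alice", "news.example", "mail")
```

The key property is that the alias is deterministic for repeat visits to
one server but unlinkable across servers without the client's secret.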

Janus works with email aliases.  Aliased email can also help filter
junk mail.  A client may have a different mailbox for each server.
One can filter (even by a third party) by ignoring mail to a
particular alias for a server.

Mayer also explained several places to house a Janus engine.  In a
local approach, the Janus engine lives in the client.  Aliases would
be routed through a proxy.  This minimizes outside trust and
cooperates with mobile code and Platform for Privacy Preferences (P3P)
repositories.  In a gateway approach, a client need not download
software.  This allows easy upgrades and maintenance.  In a third
party approach, the Janus engine would exist in the outside world.
The third party preserves subnet anonymity.

Mayer pointed out that with the gateway or local approach, the domain
name or IP address reveals neither the alias nor the real address.
A vendor could ask for a credit card for identity validation.

An audience participant asked whether anonymity is really useful in the
real world, beyond research.  Mayer responded that according to
surveys on electronic commerce, end users worry about privacy.  A high
percentage of users leave sites that present a fill-out form.
demonstrate practicality, Mayer offered the example of personalized
web pages.  A user no longer must remember passwords for services such
as My Yahoo or NYT.  Janus can be a tool to make personalized sites as
easy to visit as regular sites.

The Lucent Personalized Web Assistant uses a Janus engine.  See for more information.

Luncheon Talk:  Digital Bearer Settlement and the Geodesic Economy
Robert Hettinga, Philodox Financial Technology Evangelism

Robert gave me a site for details on his work.  He mentioned in his
talk that it is not about privacy; it is about reducing risks and
financial costs.  He noted that building hierarchical societies was
part of the road to civilization, and that applying overly paranoid
models leads to non-profitability.  Applying financial cryptography to
bearer settlement should be cheaper.  Since his talk was directed at
showing that there is no strict need for structural procedures, someone
pointed out that a "bunch of strangers collaborated on the Titanic,"
and someone else pointed out that the Titanic was a disaster.  I could
not really extract more out of his talk.

Panel Discussion on Electronic Commerce Needs No PKI
Presenter: Win Treese, Open Market
Respondent: Joan Feigenbaum, AT&T Labs- Research

Win asked why we need certificates at all. He noted that Cisco, Amazon,
and Dow are already online without a PKI, and that PKI is not an
enabler for e-commerce. He then presented the following trap that is
often used: "e-commerce needs security --> PKI is needed --> bring in
PKI".  He noted that before plugging in a PKI, one needs to look at
what the business model is and how the money is supposed to be made.
He presented the following examples:

                              Basic Models
                   |   Transaction        |   Relationship      |
      Retail       |                      |   Consumer Reports  |
                   |                      |   Business Week     |
                   |                      |   Financial Times   |
      Business to  |   Computers          |   Supply chain      |
      Business     |   Office Supplies    |   EDI               |

He noted that PKI systems need to have the following properties:
(a) simple, (b) usable, (c) understandable, (d) solves business problems,
and (e) a framework simple enough that users need not worry about it in
terms of legal issues and business implications.
He pointed out that the journey is as important as the direction, and concluded.

Joan responded by noting that the commonly assumed phone-book metaphor
for PKI is not "good" and pointed out that the Diffie-Hellman paper
probably led to this interpretation. Instead of narrowly defining PKI
as identifying an individual to a key (as in a phone book), she
preferred to bind a key to an authorization related to some privilege.
She noted that in e-commerce, a public key may be used to authorize a
transaction by signing it; the act of signing is an authorization bound
to the key, not a directory-listing-based binding to a name. Joan
further pointed out that the binding of credentials to a key should
incorporate more information related to the authorization the key
carries for different applications. She also noted that more e-commerce
applications will benefit from widespread deployment of an
application-independent, general-purpose notion of "proof of compliance".

One audience member kept arguing that PKI is absolutely not needed.

Deployable Internet/Web Services
Session Chair: Doug Tygar, CMU

Secure WWW Transactions Using Standard HTTP and Java Applets
F. Bergadano and M. Eccettuato, Universita di Torino, Italy;
B. Crispo, University of Cambridge and Universita di Torino

Francesco Bergadano presented an alternative for securing HTTP
transactions.  This solution uses local Java applets on the client
side to establish a secure link with the server.

Existing solutions include modifications to the application protocol
(e.g., SHTTP), a secure transport below the browser (e.g., SSL/TLS,
DCE-Web transport APIs), proxy-based services, and network layer
changes (e.g., IPsec).  Bergadano's group wanted to achieve privacy,
authentication, and possibly non-repudiation.  However, they did not
want to implement a new browser or modify existing browsers.
Moreover, they wanted to provide strong cryptography and make the
source code freely available.

The proposed architecture uses normal HTTP, TCP, and a Java-capable
browser.  Essentially the client runs an applet from the server.  This
applet triggers a local applet that communicates with a local
application on the client.  This application in turn creates an
encrypted channel with the server.

This approach requires relatively few changes.  More important,
Bergadano claims it does not require trust of the browser.  It is
desirable to separate security routines from the browser.  This
approach is similar to a proxy-based approach.  However, a proxy must
intervene with all communication.  Bergadano's approach only becomes
active when an HTTP transaction is explicitly asked to be secure.

Launching several questions, Avi Rubin asked Bergadano to answer just
one: Where did you put security, is it better than SSL, why can't you
run a simple proxy, and are you assuming you can change a firewall
configuration?  Taking a deep breath, Bergadano jokingly asked what
time is dinner.  He chose to answer the SSL and firewall question.  In
the case of SSL, one needs a trusted browser which supports SSL.  In
Europe, one cannot easily obtain a standard browser with strong
cryptography.  As for the firewall, Bergadano reported that the
implementation was run on an open network.  He was unsure about
interactions with a firewall since a secondary channel must be
established between the client and server.

Another USENIX attendee commented that if this approach becomes widely
used and works well, it would be absorbed into browsers.

For more information and the source code, see

SWAPEROO: A Simple Wallet Architecture for Payments, Exchanges,
Refunds, and Other Operations
Neil Daswani, Dan Boneh, Hector Garcia-Molina, Steven Ketchpel, and
Andreas Paepcke, Stanford University
{daswani, dabo, hector, ketchpel, paepcke}

Neil Daswani, a graduate student at Stanford, presented the SWAPEROO
digital wallet project.  Started in September 1997, this project aimed
to identify desirable wallet properties and features, define a wallet
interaction model, define clean APIs for a wallet and its components,
and build a prototype.

Daswani's group decided that a generalized wallet should be
extensible, non-web-centric, symmetric, and client-driven.  First, a
wallet architecture should be extensible.  Rather than being
completely proprietary, it should support multiple instruments and
protocols.  Second, a wallet architecture should not rely on a web
interface as the sole common interface.  The basic architecture
should be written once to be run anywhere.  This enables the use of
alternative devices such as Personal Digital Assistants (PDAs).
Third, symmetry allows for common services across commerce
applications.  Current wallet implementations are often non-symmetric;
little infrastructure is shared between the client and server sides.
Fourth, a wallet architecture should be client-driven.  The user
should initiate all transactions.  Vendors should not be capable of
automatically invoking a client's digital wallet.  After all, would
you want a vendor reaching for your wallet as soon as you enter a store?

Next, Daswani described a wallet interaction model.  This has many
steps and is included in the proceedings.  After starting a
transaction, wallets can negotiate on a protocol.  Because of
symmetry, the user and vendor have similar wallets.

SWAPEROO has been implemented in C++ (PalmOS) and Java (Windows).
Future work includes populating the wallet, experimenting with other
devices (e.g., smart cards), working on the architecture, and
abstracting out the data manager.

One question was asked about symmetry.  Since everyone would have
wallets of a similar design, is there any reason clients would not
want to communicate with each other?  Daswani responded that there are
no restrictions.

Another question involved protection of the wallet's memory.  Given that
the wallet must be run in some protected memory, how are new instruments
and protocols securely installed and initialized?  Daswani answered that
for PalmPilots, this is a problem.  However, by running the wallet in an
environment with a capabilities-based security model, such as the Java
Gateway Security Model, new modules could safely be linked into the wallet
from trusted third parties.

A related paper on the PalmPilot implementation will appear in the
future.  The PalmPilot implementation lets a user buy a food item from
a particular vending machine at Stanford.  For more information, see

Their wallet has a somewhat more complicated architecture, shown below:

                User Profile        User                User Interface
                Manager             Interface           API
                       ^              ^
                        \             |
                         \            v
                Instrument  <---->  Wallet              Client API
                Manager             Controller
                       ^              |
                        \             |
                         v            |
                Protocol              |
                Manager               |
                     ^                |
                     |                v
                    Communication Manager

The Eternal Resource Locator: An Alternative Means of Establishing
Trust on the World Wide Web
Ross Anderson, Vashek Matyas, and Fabien A. Petitcolas,
University of Cambridge
{rja14, vm206, fapp2}

This paper presented the authors' experience developing an
infrastructure to support reliable electronic distribution of medical
books, breaking news, and regulations. The authors noted that the use
of X.509 was strongly opposed by the medical community, and that the EU
rules are based on an argument by Alexander Rossnagel directing that
electronic structures should reflect professional practice.  Moreover,
attempts by the German and Austrian governments to use smart cards as
access tokens for both patients and doctors failed because the cards
had centralizing effects. The authors noted that in their efforts to
implement the system, they came to realize that a tree of hashes should
be the primary mechanism for protecting the information, with the X.509
mechanism used as a secondary one for limited tasks in medical and book
publishing.  They then showed how this can be used for general web
publishing.  Instead of signing the whole web page, they proposed to
sign a part of the page denoted as the HASHBODY, using the algorithms
specified in a HASH element. The HASH element contains (a) methods
specifying the hash algorithms, (b) the value of the hash, (c) a
hash-chain path that can be used to check the integrity of a given
page, and (d) an optional URL attribute indicating where the page
normally resides.  The authors note that checking the hash involves
computing the hash value over all the bytes of an HTML document between
the hash-input border tags and comparing it with the one provided
within the hash input. The authors call this URL-with-hash an ERL, or
"eternal resource locator," since it makes static objects unique forever.
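
The hash check described above can be sketched as follows. The HASHBODY
name comes from the paper, but the tag-pair syntax, the choice of SHA-1,
and the function name are assumptions for illustration only.

```python
import hashlib
import re

def verify_erl(html: str, expected_hash: str) -> bool:
    """Check page integrity: hash the bytes between the border tags
    and compare against the hash carried in the ERL."""
    # Hypothetical border-tag syntax; the real HASH element also names
    # the algorithm, whereas SHA-1 is hard-coded here for brevity.
    m = re.search(r"<HASHBODY>(.*?)</HASHBODY>", html, re.S)
    if m is None:
        return False
    digest = hashlib.sha1(m.group(1).encode()).hexdigest()
    return digest == expected_hash

page = "<HTML><HASHBODY>exam results ...</HASHBODY></HTML>"
good = hashlib.sha1(b"exam results ...").hexdigest()
assert verify_erl(page, good)                             # untouched page
assert not verify_erl(page.replace("exam", "fake"), good) # tampered page
```

Because only the bytes between the border tags are hashed, a page can be
re-served with different surrounding boilerplate without invalidating
the ERL.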

Vashek Matyas presented the results of an alternative means of
managing trust in electronic publishing.  He spoke about WAX, a
proprietary hypertext system for medical publishing.  WAX uses hashes
in combination with HTML links as an Eternal Resource Locator (ERL).
Matyas is also the co-editor of the Global Trust Register, a massive
directory, with its own rating scheme, of "top-level" PGP keys and
X.509 certificates.

In the hierarchical WAX system, there are shelves owned by publishers,
books owned by editors, and chapters owned by authors.  WAX must
protect against several threats: book contents could be altered, an
incorrect book source could be claimed, or a publisher or author could
deny past content.  Matyas stressed that there are no confidentiality
or audit requirements - only integrity and authenticity.

The WAX system originally used RSA for digital signatures.  However,
problems cropped up.  In particular, RSA digital signatures require a
Public Key Infrastructure (PKI), expiring keys cause problems for
long-lasting information, compromised keys are difficult to address,
and RSA-DSI royalties were expensive.  As a result, WAX uses one-time
signatures as an intermediate solution.

New HTML elements allow hashes and public keys to be embedded in
documents.  In addition to the standard linking information, the A
element also includes a HASHVALUE parameter.  When a browser follows a
link, it can hash the appropriate contents and verify whether the
document is authentic.  For instance, a link to an examresults page may
carry the hash value in its A element; the examresults page would
contain further information to reconstruct the hash.

Pure ERLs apply easily to static texts (e.g., health care, law and
contracting, banking).  One can also store hashes with bookmarks for
change control.  Additionally, this system can interact with public
key mechanisms.

Currently, work progresses on medical applications (WAX, the British
National Formulary), incorporation of XML (discussed with industrial
partners), and formalization of the ERL logic extended by public key
mechanisms.

For more information, email or visit or


Detecting Hit Shaving in Click-Through Payment Schemes
Michael Reiter, AT&T Labs - Research;
Vinod Anupam and Alain Mayer, Lucent Technologies, Bell Laboratories

"Sheriff" Mike Reiter analyzed several mechanisms to calculate upper
and lower bounds on referrals to another site.  This is particularly
useful in web advertising where an advertiser receives a payment
directly proportional to the number of "click throughs" generated.
This paper received the best paper award.  Reiter received his
doctorate from Cornell, then moved to AT&T labs.  He is now moving to

A user U "clicks through" site A to site B if A serves a page to U and
then U clicks on a link in A's page to reach B.  Here A is the
referrer and B is the target.  In a click-through payment scheme, B
pays A for each referral that A gives to B.

There are two common forms of fraud in click-through payment schemes.
Hit shaving results when site B fails to credit site A for referrals.
Hit inflation results when site A causes bogus referrals to site B.
This paper discusses practical and immediately useful techniques to
detect hit shaving.

Reiter described two classes of solutions to detect hit shaving.  In a
heuristic approach, the target site need not cooperate or even have
knowledge of the process.  But in a cooperative approach, one can
achieve better accuracy and non-repudiation of click throughs.  For
both classes, the detection techniques are mostly invisible to the user.

The detection process must enable site A to monitor how often site B
receives a request from any user U with a referrer field indicating A.
This leads to the question of how to calculate upper and lower bounds
on hit counts.  Site A can record an upper bound on its referrals to
site B with no cooperation from B.  When user U clicks on a link to site
B, A is told about the click.  Then user U continues to B.  One can
implement this using HTTP redirection or a CGI script.  A second
approach uses JavaScript and an invisible frame to notify site A of
the intent to follow a link.  These techniques produce an upper bound
because one cannot be sure whether B actually receives the hit.  The
notification represents the intention to visit site B, but not a
guarantee to visit site B.
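
The redirection technique above can be sketched as a small handler on
site A. The handler and helper names are hypothetical; the paper
describes the technique, not this code.

```python
# Links on A's pages point at this handler, e.g. /click?to=http://b.example/.
# The handler logs the referral, then bounces the browser on to site B.
from http.server import BaseHTTPRequestHandler
from urllib.parse import parse_qs, urlparse

def record_referral(counts: dict, target: str) -> None:
    """Count the *intent* to visit the target.  B's receipt of the hit
    is not guaranteed, which is why this yields only an upper bound."""
    counts[target] = counts.get(target, 0) + 1

referral_counts: dict = {}  # per-target upper bounds kept by site A

class ClickThroughRedirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = parse_qs(urlparse(self.path).query).get("to", [None])[0]
        if target is None:
            self.send_response(404)
            self.end_headers()
            return
        record_referral(referral_counts, target)
        self.send_response(302)               # HTTP redirection to site B
        self.send_header("Location", target)
        self.end_headers()
```

The JavaScript/invisible-frame variant replaces the redirect with a
background notification but records the same intent-to-visit event.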

Techniques to calculate a lower bound are not so clean or simple.
After a user follows the link on site A to reach site B, the user
notifies site A.  A receives notification only if the connection to B
worked.  Reiter described a rather complicated procedure which spawned
a new browser window and used JavaScript.  Since one window cannot
access another window's namespace, there are a few hoops to jump
through.  A detection window probes the namespace of the window
attempting to contact site B.  When the detection window is no longer
allowed to probe the other window, it knows the connection to site B
was successful.  The detection window then notifies site A by
requesting a particular URL.

The lower bound technique has a few caveats.  The user might close the
window before A is notified.  Additionally, this only detects that
some page is loaded; the user may have stopped the request to site B
and traveled elsewhere.  A few tricks (e.g., hiding the toolbar) can
make it hard for the user to bypass the notification process, but they
can also annoy the user.

Reiter suggests using both lower and upper bound detection on
referrals.  The two measurements should be fairly similar.

In the cooperative approaches, site B acknowledges each referral as
the referral happens.  In a naive solution, B would open a connection
to A for each hit.  In a distributed approach, B's page would make the
user request another page from site A as an acknowledgment.  It is
also possible to provide for non-repudiation with digital signatures.
B includes a digital signature while serving a page.  However, this
could easily become prohibitively costly.  Hash chaining can alleviate
some of the performance problems.
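
The hash-chaining idea can be sketched as follows. This is a generic
one-way hash chain, offered as an illustration of how chaining
amortizes signature cost; it is not necessarily the paper's exact
construction, and the function names are hypothetical.

```python
import hashlib

def make_chain(seed: bytes, n: int) -> list:
    """Build a hash chain seed -> H(seed) -> ... -> H^n(seed).
    The final element (the anchor) is the only value B must sign."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

# B signs only the anchor once; each subsequent referral is acknowledged
# by releasing the next preimage, which A verifies with one cheap hash
# instead of one expensive signature verification.
chain = make_chain(b"B's secret seed", 100)
anchor = chain[-1]                 # signed once, up front
ack1 = chain[-2]                   # acknowledgment for referral 1
assert hashlib.sha256(ack1).digest() == anchor
ack2 = chain[-3]                   # acknowledgment for referral 2
assert hashlib.sha256(ack2).digest() == ack1
```

Because each released value hashes to the previous one, B cannot later
deny having acknowledged any prefix of the referrals.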

Reiter revealed a few disadvantages of hit shaving detection.  There
is a negative impact on user privacy.  Web sites can discover your
browsing habits.  The schemes are also incompatible with anonymizing
services such as Crowds or LPWA.

Questions began on a humorous note.  How did Reiter become involved
with this project? The saga began when Reiter placed his email address
on a web page.  A spammer sent an email about click-through payments,
saying that a 1998 Corvette would be awarded for the highest number of
click throughs.  Thinking something must be fishy, Reiter began to
analyze click-through payment schemes.

A few questions about ethics and morality popped up.  All concerned
impediments to the user (e.g., awkward windows popping up) and
pornography.  Reiter cleverly escaped the questions with witty
remarks.  However, Reiter made it clear that improving the porn
industry is not his goal.  Click-through payment schemes are relevant
for all types of web advertising.

Finally one attendee pointed out that these schemes act like a poor
man's Remote Procedure Call via URLs.  Asked whether he was on to
something bigger, Reiter replied that there might be overlap or some
related opportunities.

Thursday: Sept 3rd

Consumer Services, chaired by Win Treese, Open Market, Inc.

Sales Promotion on the Internet
Manoj Kumar, Quoc-Bao Nguyen, Colin Parris, and Anant Jhingran,
IBM T.J. Watson Research Center

Manoj presented a sales promotion application for distributing and
redeeming coupons on the Internet. The talk focused on fraud-related
issues and the economics of pricing. Manoj also noted that a model was
implemented at IBM.

[Our report authors were unable to attend the remainder of the
sessions. The rest of the conference schedule is as follows.---Ed.]

General-Purpose Digital Ticket Framework
Ko Fujimura and Yoshiaki Nakajima,
NTT Information Communication Systems Labs

Towards a Framework for Handling Disputes in Payment Systems
N. Asokan, Els Van Herreweghen, and Michael Steiner,
IBM Zurich Research Laboratory

Current Mapping of PKI to Law
Presenter: Dan Greenwood, Commonwealth of Massachusetts
Respondent: Jane Winn, Southern Methodist University School of Law

Name-Centric vs. Key-Centric PKI
Moderator: Bob Blakley, IBM
Key-Centric Presenters:  Carl Ellison, Intel
                         Perry Metzger, Piermont Information Systems
Name-Centric Presenters: Warwick Ford, Verisign
                         Steve Kent, CyberTrust Solutions, GTE

Schedule of Short Talks/Works-in-Progress Reports (WIPs)

Secure JavaScript in Mozilla
Vinod Anupam & Alain Mayer, Bell Labs
Murali Rangarajan, Rutgers University

Electronic Commerce on the Move
John du Pre Gauntt, Public Network Europe, The Economist Group

Electronic Multidimensional Auctions
Otto Koppius, Erasmus University, Rotterdam

Multi-Agent Contracting
Maksim Tsvetovat, University of Minnesota

Smart Card Deployment Within a Closed PKI
Bob Carter, InterClear

Onion Routing Status Report
Paul Syverson, Naval Research Laboratory

A Trustee-Underwriter Model for Digital Bearer Transaction Settlement
Robert Hettinga, Philodox