The Internet Society Symposium on Network and Distributed System Security (NDSS '99)
February 03 - February 05, 1999.
San Diego, California

by Tatyana Ryutov (tryutov@isi.edu)

[Another writeup of this symposium by Mahesh Tripunitara can be found
on the Web at
www.cs.purdue.edu/homes/tripunit/NDSSReport.htm (html version)
www.cs.purdue.edu/homes/tripunit/NDSSReport.txt (text version)
-Paul Syverson]

The Network and Distributed System Security Symposium was held in
San Diego, California, from February 3 to February 5, 1999.

The goal of the symposium was to provide a highly interactive and
supportive forum for new research in Internet security.  The symposium
home page is http://info.isoc.org/ndss99.  There were over 200
participants representing 18 countries from business, academia, and
government with interests in cryptology and computer security.  My
apologies in advance for not knowing the names of some attendees whom I
quote below.  I missed the first session and parts of some other
sessions, so I was not able to describe the audience questions and
responses for the first session or some of what was said in the panel
discussions.
 
The first session was "USER AUTHENTICATION AND PUBLIC KEY CRYPTOGRAPHY,"
run by Jonathan Trostle (Cisco Systems). 

The first paper was "Secure Password-Based Protocol for Downloading a
Private Key" by Radia Perlman (Sun Microsystems Laboratories, United
States) and Charlie Kaufman (Iris Associates, United States).  The
goal of the proposed protocols is to securely download a user's
environment from the network given only a user name and password.  The
protocols are variants of EKE and SPEKE, modified for better
performance.  Additional advantages are resistance to denial-of-service
attacks, stateless servers, and the ability to use salt.  Related
protocols providing similar capabilities were discussed: some require
knowledge of sensitive information (a server public key), while others
are stronger than the stated goal requires and need more messages or
more computation.
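As a concrete illustration of the general idea (not the authors' actual
protocol), here is a minimal Python sketch of a SPEKE-style exchange
whose session key is then used to deliver an encrypted private-key
blob; the prime, the salt format, and the key wrapping are all
illustrative toy choices.

    import hashlib, secrets

    # Toy safe prime (p = 2q + 1 with q = 5003); hopelessly small,
    # for illustration only -- a real deployment uses a large safe prime.
    P = 10007

    def speke_base(password, salt):
        # SPEKE-style generator: square of a hash of the salted password.
        h = int.from_bytes(hashlib.sha256((salt + password).encode()).digest(), "big")
        return pow(h % P, 2, P)

    def keystream_xor(key, data):
        # Simple hash-based keystream, standing in for a real cipher.
        out = bytearray()
        counter = 0
        while len(out) < len(data):
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, out))

    password, salt = "correct horse", "alice@example.com"
    g = speke_base(password, salt)

    # Client and server each pick an ephemeral exponent and exchange g^x mod P.
    x_c, x_s = secrets.randbelow(P - 2) + 1, secrets.randbelow(P - 2) + 1
    pub_c, pub_s = pow(g, x_c, P), pow(g, x_s, P)
    k_client = hashlib.sha256(str(pow(pub_s, x_c, P)).encode()).digest()
    k_server = hashlib.sha256(str(pow(pub_c, x_s, P)).encode()).digest()
    assert k_client == k_server

    # The server returns the user's private-key blob encrypted under the session key.
    blob = b"-----BEGIN PRIVATE KEY----- ..."
    ciphertext = keystream_xor(k_server, blob)
    print(keystream_xor(k_client, ciphertext) == blob)   # True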

The next paper was "A Real-World Analysis of Kerberos Password
Security" by Thomas Wu (Stanford University, United States).  This
paper discusses the well-known Kerberos vulnerability to off-line
dictionary attacks, presenting and analyzing the results of an
experiment that used an Internet password-cracking package.  The input
dictionary included a precompiled word list, user-specific information,
and transformations that users often apply when selecting passwords,
e.g. adding digits as prefixes or suffixes and capitalizing letters.
Over 2000 passwords (from a Kerberos realm containing over 25,000 user
accounts) were guessed.  Analysis of the successfully guessed passwords
provides interesting insights into the ways users select their
passwords.  The author recommended protecting the authentication system
from dictionary attacks by using preauthentication combined with a
secure remote password protocol (e.g. SRP, SPEKE) and light password
strength checking, rather than administratively enforcing harder
passwords.
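The style of attack is easy to illustrate.  The Python sketch below
(with SHA-256 standing in for the Kerberos string-to-key function, and
made-up salt and word lists) applies the same kinds of transformations
to a small dictionary and tests each candidate off-line against a
captured verifier.

    import hashlib

    def string_to_key(password, salt):
        # Stand-in for the Kerberos string-to-key function; any salted,
        # deterministic one-way function illustrates the off-line attack.
        return hashlib.sha256((salt + password).encode()).hexdigest()

    def candidates(words, user_info):
        for w in list(words) + list(user_info):
            for v in {w, w.lower(), w.capitalize(), w.upper()}:
                yield v
                for d in "0123456789":
                    yield v + d      # digit suffix
                    yield d + v      # digit prefix

    # Demo: recover a weak password from its captured verifier.
    salt = "ATHENA.EXAMPLE.COMalice"
    target = string_to_key("Secret1", salt)
    wordlist = ["password", "secret", "kerberos"]
    user_info = ["alice", "smith"]

    for guess in candidates(wordlist, user_info):
        if string_to_key(guess, salt) == target:
            print("guessed:", guess)
            break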
 
The first session ended with "Secure Remote Access to an Internal Web
Server" by Christian Gilmore, David Kormann, and Aviel D. Rubin (AT&T
Labs Research, United States).  One goal is to allow access to the
internal web server from outside the firewall without any modification
to the internal infrastructure (the firewall and the internal web
server).  Another goal is to make the web browsing session appear the
same as when users connect from behind the firewall.  The first goal is
achieved using strong authentication based on a one-time password
scheme on top of SSL.  The firewall prohibits connections initiated
from outside, so outside connections are handled by a proxy server with
one component operating behind the firewall and another on the outside.
The inside component maintains a control connection to the external
one, which is used to receive browser requests from outside and forward
them to the internal web server.  The second goal is addressed by
rewriting URLs: new URLs are constructed from the original ones and
include some security information.  One limitation of the presented
system was pointed out: when URLs pointing behind the firewall are
dynamically generated by scripts, there is no way to parse and
therefore rewrite them, which causes those requests to fail.
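A toy version of the URL-rewriting step might look like the Python
sketch below; the proxy hostname, internal domain, and token format are
invented for illustration and do not reflect the actual AT&T
implementation.

    import re

    PROXY = "https://proxy.example.com"                 # external proxy endpoint
    INTERNAL = re.compile(r'(href|src)="(https?://[\w.-]*\.corp\.example\.com[^"]*)"',
                          re.IGNORECASE)

    def rewrite(html, token):
        # Rewrite links to internal hosts so the browser fetches them back
        # through the proxy, carrying the per-session security token.
        def repl(match):
            attr, url = match.group(1), match.group(2)
            return f'{attr}="{PROXY}/{token}/{url}"'
        return INTERNAL.sub(repl, html)

    page = '<a href="http://intranet.corp.example.com/report">report</a>'
    print(rewrite(page, "otp-3f9a"))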


A panel (session 2), "SECURITY AND THE USER", was moderated by Win 
Treese (Open Market, Inc., United States).
          
Mary Ellen Zurko (Iris Associates, United States), in her talk
"User-centered Security," discussed two approaches to system design:
the user-centered and the traditional one.  She pointed out the
conflict between them and suggested some ways to deal with it.  First,
Mary gave the user-centered perspective: the user should be in control
of the system, and the system should provide a clear interface and
correct information; if there is a problem with the use of the system,
it is the system's fault, not the user's.  Then she presented the
traditional information security perspective, whose main purpose is the
protection of organizational resources and which requires the
cooperation of administrators, programmers, and users; if there is an
error, the system is not always the only one to blame.  To cope with
this conflict, a user-centered approach to security is required:
include the user in the model of the security system and apply good
software engineering techniques (design an easy, understandable user
interface and perform usability tests on security software).


Mark Ackerman (University of California at Irvine, United States)
presented a talk on "Usability and Security."  Mark discussed usability
and its main aspects: social, historical, and physical.  He tackled
issues of human factors and human-computer interaction.


The following report was kindly given to me by Dr. Peter Neumann (SRI,
United States). I am quoting it without modification.

"Peter Neumann (who went last) noted a possibly connection between
NDSS and the butterfly theme of Hans's luau: Franz Kafka
(Metamorphosis).  Security is still in the larval stage, and solutions
when they emerge tend to be short-lived. Network and distributed
system security is certainly a Kafka-esque nightmare, having
metamorphosed into a gigantic army of bugs.

PGN stressed the importance of broadening our narrow concerns about
security to include reliability and system survivability in the face
of various adversities.  By addressing the more general problem, we
can come up with better solutions.  Perhaps most significant, he
emphasized the importance of good software engineering practice.
(Mary Ellen Zurko later noted the repeated occurrences of buffer
overflows over the past many years, as an example of why good software
engineering is so important.)

PGN considered the fact that system users (e.g., airplane pilots) are
often blamed for disasters that were in part attributable to bad
system design and bad interface design -- and gave several
illustrations.  For example, see his Website
(http://www.csl.sri.com/~neumann).  He observed that system/network
admins are also users, and that they are also victimized.  He stressed
the importance of better user authentication -- going beyond fixed
reusable passwords -- and the need for people-tolerant systems.
"Fool-proof" is a bad metaphor, because the experts can also mess up.

PGN discussed the importance of open-source software as an alternative to
bloatware systems with incredibly complicated user interfaces, along with
the need to make open-source systems significantly more robust.  One of the
biggest impediments to security created by closed-source proprietary systems
is that intrinsically lousy security cannot be openly assessed and cannot
be easily improved by admins and users."

The question and answer session touched on the analogy between a
cryptographic key and a regular key, and between a digital signature
and a handwritten one.  These notions are very different; how far will
the analogy go?  What are the aspects of usability for users?  Some
were concerned with the risk of bringing physical-world experiences
into the Internet world.  Someone noted that the analogy is good.
Another replied, "I do know what is going to happen in the physical
world with my key; however, I do not know much about my software key:
what system is it going to go to?"

Some expressed a pessimistic view: the user is at risk no matter what
he does because the system itself is not secure; servers, web
interfaces, and operating systems are insecure.  Alfred Osborne asked
about real solutions for users.  The answer was that there are no total
solutions.  Partial solutions include increasing the robustness of the
system, ensuring a trustworthy software distribution path
(authentication, versioning), and emphasizing an "open design" rather
than a "hide everything" approach: hide only implementation details and
leave everything else open.

A participant raised the issue of user education.  There are two steps
in getting familiar with a product: (1) the product itself and (2) its
security issues.

An audience member asked whether a national education standard should
be established that includes security as a required course.  One
opinion was that security, reliability, and good software design should
be widely taught.  Another opinion was that security must be embedded
throughout the entire curriculum, not taught separately.  A participant
argued that security education will never be established for users, or
even for developers.  Another participant objected that the speaker had
probably forgotten that he was taught some basic security in elementary
school.  Another remark: people do not forget to lock their doors; they
forget to upgrade their doors (or keys).

One questioner asked: whom do we blame if something goes wrong?
There was a variety of answers, including:
- everybody except me
- blame the one who can fix it, even though it is not his fault
- the user is not the one to blame; the fault results from a
  concatenation of different things
- the system should be "people-proof" (fool-proof and expert-proof) and
  user friendly

Someone observed that network security has some particular
characteristics: a robber who breaks into your house will not take
everything from your home at once, but on the Internet all your stuff
is gone immediately, and it is all over the Internet.

Someone asked about liability: to what extent can it be adopted?  To
what degree should insurance enter the Internet world?  One opinion was
that this is beginning to happen, though it serves less as insurance
than as leverage over system developers.  One participant joked, "if
the software fails we will give you another copy of the software."
Another participant noted that insuring software is not doable because
faults are often the result of human behavior; it is a human problem.
How can we educate users?
Someone suggested certifying users :)

In conclusion, there was a question: what should we do when we return
to our work?
- use a language without buffer overflows
- use good toolkits
- take a good attitude toward the authentication problem


The third session "CRYPTOGRAPHIC PROTOCOLS"  was hosted by David 
Balenson (TIS Labs at Network Associates, United States).

The session began with the paper "Experimenting with Shared Generation
of RSA Keys" by Michael Malkin, Thomas Wu, and Dan Boneh (Stanford
University, United States).  The goal of the paper was to investigate
practical aspects of distributed shared RSA key generation.  This
method does not require the involvement of a trusted dealer (who would
introduce a single point of attack).  In this scheme k servers generate
a modulus N=pq and exponents e and d, where e is public and d is
private.  The private key d is shared among the k servers.  The servers
perform a private distributed computation to make sure N is the product
of two primes, yet none of the servers knows the factorization.  The
key d is never reconstructed at a single location and can be shared in
a way that allows "t out of k" threshold signature generation.  This
provides fault tolerance and lets the servers issue certificates
without ever reconstructing d.  One practical drawback of this
distributed key generation scheme is that it takes more iterations: the
worst-case running time of the algorithm for finding a suitable N is
O(n^2), compared to O(2n) for single-user generation.  The author
presented practical optimizations based on distributed sieving and
multithreading of key generation, and gave reasonable experimental key
generation times: 91 seconds for a 1024-bit shared RSA key (three
333 MHz PCs on 10 Mbps Ethernet) and 6 minutes across a wide area
network.
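The benefit of never reassembling d is easy to demonstrate.  The Python
sketch below uses a tiny toy RSA key and only simulates the sharing (a
locally generated d is split additively among three servers); in the
actual scheme the shares come out of the distributed generation itself,
and no party ever knows d or the factorization.

    import hashlib, secrets

    # Toy RSA key (tiny primes, illustration only).
    p, q = 10007, 10009
    n, phi = p * q, (p - 1) * (q - 1)
    e = 65537
    d = pow(e, -1, phi)                  # modular inverse (Python 3.8+)

    # Simulated sharing: split d additively among three servers.
    d1 = secrets.randbelow(phi)
    d2 = secrets.randbelow(phi)
    d3 = (d - d1 - d2) % phi

    def partial_sign(m, d_i):
        # Each server exponentiates with its own share only.
        return pow(m, d_i, n)

    msg = int.from_bytes(hashlib.sha256(b"certificate to be issued").digest(), "big") % n
    sig = (partial_sign(msg, d1) * partial_sign(msg, d2) * partial_sign(msg, d3)) % n
    print(pow(sig, e, n) == msg)         # True: signature verifies, d never reassembled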

Steve Kent asked whether, if more than n/2 of the participating parties
are bad guys, they would be able to get sensitive information.  The
answer was that the algorithm is secure if more than n/2 of the
participants generate the key properly.

Someone asked: the communication is based on SSL, which means unicast;
what are the performance implications of having no broadcast?  The
reply was that with a large number of servers performance degrades; it
works better with a smaller number of parties.

The next paper was "Addressing the Problem of Undetected Signature Key
Compromise" by Paul C. van Oorschot (Entrust Technologies, Canada) and
Mike Just (Carleton University, Canada). Mike Just presented.  The
purpose of the paper was to study undetected key compromise, to
motivate others to consider this problem, and to provide solutions for
detecting a compromise and preventing acceptance of forged signatures.
The key idea is a second level of authentication, in which a signature
over the signed message is returned to the originator of the message by
a trusted register.  This allows the recipient of the signed message to
make sure that the message was signed by the legitimate party, so that
possession of the signature key alone is insufficient for an attacker
to forge an acceptable signature.  The main distinction of the proposed
scheme is that two independent secrets are maintained by the signing
user: one for the original signature and one for the secondary
authentication.  This provides a second level of protection and
increases the likelihood of detecting a compromise.  Detection is based
on a time-dependent counter, which reveals a loss of synchronization
between the signer and the trusted register.  Acceptance of bogus
signatures is prevented by introducing a "cooling off" period: signed
messages are not accepted until this period has expired.  The technique
supports non-repudiation, since bogus messages would have been detected
and reported by the legitimate users.  One drawback of the presented
scheme is the requirement of an on-line trusted third party.  Mike
noted that the scheme is most applicable to high-value automated
transactions.
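A minimal sketch of the counter idea, assuming an HMAC stands in for
both the second-level authentication and the register's
countersignature (the real scheme uses a time-dependent counter, real
signatures, and a cooling-off period at the recipient):

    import hmac, hashlib

    SECOND_SECRET = b"secret shared by signer and register"   # the signer's second secret

    class TrustedRegister:
        def __init__(self):
            self.expected = 1            # next counter value expected from this signer

        def countersign(self, message, counter, tag):
            good = hmac.new(SECOND_SECRET, message + counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
            if not hmac.compare_digest(tag, good):
                return None, "second-level authentication failed"
            if counter != self.expected:
                return None, "counter out of sync: possible key compromise"
            self.expected += 1
            return hmac.new(b"register key", message, hashlib.sha256).digest(), "ok"

    register = TrustedRegister()
    msg, counter = b"pay 100 to Bob", 1
    tag = hmac.new(SECOND_SECRET, msg + counter.to_bytes(8, "big"), hashlib.sha256).digest()
    print(register.countersign(msg, counter, tag)[1])   # ok
    print(register.countersign(msg, counter, tag)[1])   # counter out of sync: possible compromise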

One participant asked: how do I know when the key has expired and I
should not use it any more, or whether someone else has started using
it?

Next was an interesting paper, "Practical Approach to Anonymity in
Large Scale Electronic Voting Schemes" by Andreu Riera and Joan
Borrell (Universitat Autonoma de Barcelona, Spain).  Andreu Riera
presented.  Their work considered how to implement a realistic
large-scale voting system.  The scheme is based on the cooperation of
multiple hierarchically arranged electoral authorities.  Its advantages
are a single non-anonymous voting session (a widely accepted
alternative is based on two sessions, one anonymous and one
non-anonymous) and no requirement for external mixes.  Anonymity is
provided by shuffling the ballot boxes a number of times.  There are
restrictions to this approach: the proposed scheme can meet all
commonly accepted security requirements except uncoercibility (the
inability of voters to prove which way they voted), which would require
hardware components to be added to the scheme.

A participant asked if the scheme had been implemented.  Andreu replied
that they are working on the protocol.  Someone asked: since
authentication of the voter is required, how is privacy maintained?
Andreu explained that authentication with the voter's private key is
required; to assure privacy, a blind signature mechanism is used.  One
questioner pointed out that in commercial voting systems all software
is proprietary and vendors do not allow inspection of the code, so
there are many ways to subvert an election, e.g. by means of covert
channels.
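Since blind signatures came up in the discussion, here is a textbook
RSA blind-signature sketch in Python (toy key, no padding, hypothetical
ballot encoding), just to show how an authority can sign a ballot
digest without ever seeing it:

    import hashlib, math, secrets

    # Toy RSA authority key (tiny primes, illustration only).
    p, q = 10007, 10009
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    ballot = b"candidate 42"
    m = int.from_bytes(hashlib.sha256(ballot).digest(), "big") % n

    # Voter blinds the ballot digest with a random factor r.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    s_blinded = pow(blinded, d, n)       # authority signs without learning m
    s = (s_blinded * pow(r, -1, n)) % n  # voter unblinds

    print(pow(s, e, n) == m)             # True: an ordinary signature on m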

Another question was: is this complexity practical for a real system?
Andreu: complexity is inevitable.
A member of the audience asked if it is possible to detect who voted
twice.  Andreu: yes.
Another question was about the association between a voter and his
vote.  Andreu pointed out that it is not possible to determine the
association between a voter and his vote.
                           
The last session of the day was a panel "SECURING THE INTERNET'S
EXTERIOR ROUTING INFRASTRUCTURE", hosted by Sue Hares (Merit, United
States).

Sandra Murphy (TIS Labs at Network Associates, United States) was the
first panelist.  Sorry, I missed this one.

Curtis Villamizar (ANS Communications, United States) talked about
improving Internet routing robustness.  Curtis pointed out that
incorrect routing, whether malicious or unintended, can cause traffic
misrouting and routing-related outages.  Routing attacks are rare
because they have little impact (a short-term denial of service) and a
high risk of being caught (many sites log routing activity).

Curtis discussed the authentication schemes in use:
- IGP protocols use peer-to-peer MD5 or password-based authentication
- IBGP uses MD5 or TCP/MD5 authentication
- some EBGP peers use peer-to-peer TCP/MD5
He discussed the following approaches to improving external routing
robustness:
- information storage (DNS, IRR)
- authorization
- verification of route announcements (sanity filters applied to EBGP
  peers, signatures on route origin, signatures at each BGP exchange)

At the end of his talk, Curtis compared the signature approaches
(origin and full path) with filtering: signatures offer a security
advantage over filtering, while filters offer better scalability.  Use
of either will improve routing robustness.

Someone mentioned replay attacks.
Curtis: time stamps can be incorporated into the signatures; this is a
fundamental change.
The next question was whether the registry had been implemented.
Curtis: it already has been.

The third panelist, Tony Li (Juniper Networks, Inc., United States),
talked about BGP origin authentication.

He identified the following problems:
- malicious or erroneous injection of prefixes by Autonomous Systems
- denial of service attacks
- masquerading as another AS
- tampering with advertisements
He noted that the emphasis of this work was on finding practical
solutions.  Tony outlined the approach:
1) encode prefixes in DNS (the hard part); he presented different
   encoding rules
2) use DNSSEC to provide authentication
3) have BGP look up each prefix in DNS (for performance, BGP speakers
   can cache the relevant RRs, and the cache can persist across
   reboots)

If there is a matching AS RR and the origin authenticates, the
authenticated path is preferred over an unauthenticated one, even when
the authenticated path is less specific, and only the authenticated
path is announced; expiration of the authentication information is also
checked.  If there is no authentication information, unauthenticated
paths are still usable.  If there is a matching AS RR but the origin
does not authenticate, a different path is selected, and if the path
was already advertised it is withdrawn.
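A compact way to restate those rules is as a selection function; the
Python sketch below is only a paraphrase of the stated behavior, with a
made-up table standing in for the (DNSSEC-protected) per-prefix origin
records.

    # Hypothetical table standing in for signed DNS records: prefix -> ASes
    # authorized to originate it.
    AUTHORIZED_ORIGINS = {"192.0.2.0/24": {65001}}

    def origin_status(prefix, as_path):
        authorized = AUTHORIZED_ORIGINS.get(prefix)
        if authorized is None:
            return "unknown"                      # no RR: path still usable
        return "authenticated" if as_path[-1] in authorized else "failed"

    def select_paths(candidates):
        usable = []
        for prefix, as_path in candidates:
            status = origin_status(prefix, as_path)
            if status == "failed":
                continue                          # choose another path / withdraw
            usable.append((prefix, as_path, status))
        # Authenticated paths are preferred over unauthenticated ones,
        # even when the authenticated prefix is less specific.
        usable.sort(key=lambda t: (t[2] != "authenticated",
                                   -int(t[0].split("/")[1])))
        return usable

    paths = [("192.0.2.0/25", [65099, 65002]),    # more specific, no matching RR
             ("192.0.2.0/24", [65010, 65001]),    # less specific, origin authenticates
             ("192.0.2.0/24", [65099, 65666])]    # matching RR, origin fails
    print(select_paths(paths))                    # the authenticated /24 comes first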


The last panelist was Charles Lynn (BBN Technologies, United States).
He talked about the Secure Border Gateway Protocol (S-BGP).  First he
outlined the goals: to overcome current BGP limitations and to design a
dynamic, scalable, and deployable protocol.  Advantages of S-BGP are
authentication of the participating entities (prefix owners, AS number
owners, and AS administrators) and authorization of an AS to advertise
a prefix or to use a route.

The design is based on:
1) IPsec, to provide authentication, integrity, and protection against
   replay attacks
2) a PKI, to support secure identification of BGP speakers, owners of
   ASes, and owners of address blocks
3) attestations:
   - address attestations validate that a destination address block is
     originated by an authorized AS
   - route attestations validate that an AS is authorized to use an AS
     path
4) certificates and attestations, used for validation of UPDATEs
5) each UPDATE includes one or more address attestations and a set of
   route attestations

Charles presented the format of the address, AS, and router
certificates and the encoding of the attestations.  Performance issues
were discussed and optimizations were considered: caching validated
routes, background validation of alternate routes, keeping only the
necessary certificate fields in the S-BGP databases, and offloading the
generation and signing of route attestations.  Charles concluded that a
prototype is under development.  The talk was quite long and no time
was left for questions.


The fifth session, "POLICY AND TRUST MANAGEMENT," opened the next day
and was run by Warwick Ford (Verisign, United States).

The first paper was "Distributed Policy Management for Java 1.2" by
Pekka Nikander and Jonna Partanen (Helsinki University of Technology,
Finland) Jonna presented. The main idea is to use SPKI certificates to
achieve better scalability and dynamic access control management as
alternative to static local permission configuration. Certificates are
attached to protection domains, as well as retrieved from distributed
certificate repository.  The improvements include: ability to
dynamically extend granted permissions and introduce new permission
types, which may by dynamically derived from SPKI certificates as
needed. Jonna presented the security architecture of JDK 1.2.  She
pointed out drawbacks: permissions associated with a domain must be
defined prior loading the classes and assigning protection domains to
classes is rigid.  She pointed out that this was a default
implementation, not the proposed architecture.  The prototype
implementation was discussed.
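To make the certificate-driven permission idea concrete, here is a
generic SPKI-style chain reduction sketched in Python; the field names
and permission strings are invented, and the paper's implementation
maps the resulting authorizations onto JDK 1.2 Permission objects.

    from dataclasses import dataclass

    @dataclass
    class SpkiCert:
        issuer: str          # hash of the issuer key
        subject: str         # hash of the subject key (e.g. a code signer)
        delegate: bool       # may the subject delegate further?
        tags: frozenset      # authorization tag, modelled as a set of permissions

    def reduce_chain(chain, trusted_root):
        # Walk the chain, check issuer/subject linkage and delegation,
        # and intersect the authorization tags: permissions only shrink.
        if not chain or chain[0].issuer != trusted_root:
            return frozenset()
        granted = chain[0].tags
        for prev, cert in zip(chain, chain[1:]):
            if cert.issuer != prev.subject or not prev.delegate:
                return frozenset()
            granted = granted & cert.tags
        return granted

    chain = [SpkiCert("root-key", "dept-key", True,
                      frozenset({"file:/tmp:read", "socket:*:connect"})),
             SpkiCert("dept-key", "applet-key", False,
                      frozenset({"file:/tmp:read"}))]
    print(reduce_chain(chain, "root-key"))        # frozenset({'file:/tmp:read'})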

Mary Zurko asked if the system was implemented.
Jonna: not finished yet.
Steve Kent asked about the certificate revocation problem.
Jonna: on-line validity tests, CRLs; not finished yet.

A participant asked if everyone is allowed to put certificates in DNS.
Jonna: it can be implemented so that you put the certificates in your
local DNS and do not show them to anyone, to ensure privacy.

Another question was: which chain of certificates do you select?
Jonna: we have to find a valid chain; our chains are short.

Someone asked if there is a way to establish one-time certificates.
Jonna: certificates are meant to be used many times; compromised
certificates are revoked.


The next paper was "Distributed Execution with Remote Audit" by Fabian
Monrose (New York University, United States), Peter Wyckoff (New York
University, United States), and Aviel Rubin (AT&T Labs Research,
United States).  Fabian Monrose presented. This work was concerned
with misbehavior of the hosts participating in coarse-grained parallel
computations in metacomputing environments.  He presented design and
Java-specific implementation of audit mechanism to detect such
misbehavior. The technique is based on transforming a task into
checkable units.  For a host to cheat it must corrupt at least one of
the units. This is more difficult then corrupting an entire
computation by returning an error. The limitation is that proposed
scheme detects misbehavior of only cheating hosts (ones who try to
minimize resource expendures) with high probability. This is done by
means of proof of execution, which is sent by the participating hosts
to the verifier. The verifier checks the prove to determine if the
component was correctly executed. The hosts that are trying to subvert
the computations are not caught. The technique is based on the
assumption that the task can be transformed into checkable units that
have the similar execution time, which is not always feasible. This
requirement limits a set of applications that may benefit from it.
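The flavor of the audit can be conveyed with a toy spot-checking sketch
in Python; the unit of work, the proof format, and the sampling policy
are all simplifications of the paper's construction.

    import hashlib, random

    def unit_work(unit):
        # Stand-in for one coarse-grained, checkable unit of the computation;
        # its result doubles as the proof of execution for that unit.
        return hashlib.sha256(unit).hexdigest()

    def run_task(units, skipped=frozenset()):
        # A lazy host fakes the units it skipped to save cycles.
        return {i: ("bogus" if i in skipped else unit_work(u))
                for i, u in enumerate(units)}

    def audit(units, proofs, checks=8):
        # The verifier re-executes a random sample of units and compares.
        for i in random.sample(range(len(units)), checks):
            if proofs[i] != unit_work(units[i]):
                return False
        return True

    units = [f"chunk-{i}".encode() for i in range(20)]
    honest = run_task(units)
    lazy = run_task(units, skipped={3, 7, 11, 15, 19})
    print(audit(units, honest))          # True
    print(audit(units, lazy))            # False with high probability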

A participant asked whether the system could be extended to audit
machines that do not do what they are supposed to do.
Fabian replied that a particular environment could support this.

Another participant asked whether the workers can trust the manager.
Fabian noted that the workers are being paid for the performed
computation, so some trust is required.


The next paper "An Algebra for Assessing Trust in Certification
Chains" by Audun Josang (Telenor R&D, Norway) ended the session.
Audun Josang presented an interesting work on algebra for determining
trust in certificate chains. It is based on subjective logic, which
defines logical operations (with some untraditional "recommendation"
and "consensus" operators) for manipulating opinions.  Opinion is
defined as a triplet consisting of belief, disbelief and uncertainty
components.  The motivation behind such metrics is belief that trust
is not binary.  Certificates are accompanied by opinions about key
authenticity and recommendation trustworthiness. Authenticity of the
public key is based on finding two valid chains: certificate chain and
recommendation chain.  To avoid undesirable dependencies, the algebra
requires recommendations to be based on first-hand evidence only. This
simplifies the problem of certificate revocation, since the
recommender has the information about every recipient, therefore he is
able to inform them about revoked certificate.
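For flavor, here is a small Python rendering of the two operators as
they are usually defined in Josang's subjective logic (the paper's
exact notation may differ); an opinion is a (belief, disbelief,
uncertainty) triplet summing to one.

    def recommend(op_ab, op_bx):
        # A's derived opinion about x, given A's opinion about recommender B
        # and B's first-hand opinion about x (the "recommendation" operator).
        b_ab, d_ab, u_ab = op_ab
        b_bx, d_bx, u_bx = op_bx
        return (b_ab * b_bx, b_ab * d_bx, d_ab + u_ab + b_ab * u_bx)

    def consensus(op1, op2):
        # Combine two independent opinions about the same statement.
        b1, d1, u1 = op1
        b2, d2, u2 = op2
        k = u1 + u2 - u1 * u2
        return ((b1 * u2 + b2 * u1) / k, (d1 * u2 + d2 * u1) / k, (u1 * u2) / k)

    a_trusts_b = (0.8, 0.1, 0.1)          # A's opinion of B as a recommender
    b_key_ok = (0.9, 0.0, 0.1)            # B's first-hand opinion of the key
    derived = recommend(a_trusts_b, b_key_ok)
    print(derived)                        # ~(0.72, 0.0, 0.28): trust dilutes along the chain
    print(consensus(derived, (0.6, 0.1, 0.3)))   # two independent chains reinforce each other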

The notorious VSR programming problem was brought up: how easy will it
be for end users to make use of this approach?
Audun agreed that it is not easy; there is no easy way to express
uncertainty.

Another question was: second-hand trust is a useful intuition, so why
prohibit it?
Audun pointed out that the restriction to first-hand trust enforces
certain ways of establishing certification paths.

The next (sixth) session was a panel, "A NETWORK SECURITY RESEARCH
AGENDA," run by Fred Schneider (Cornell University, United States).

Steven M. Bellovin (AT&T Labs-Research, United States) began his talk
by defining the problems that need to be solved.  First, Steven
described cryptography issues: the need for higher speed in public key
algorithms; the PKI scaling problem and revocation of expired
certificates; the fact that no one checks certificates; and the way
cryptography makes many things harder, e.g. compression, network
management tools, and QoS techniques.  Next Steven touched on buggy
software (the notorious buffer overflows), routing attacks, and
environmental problems (operational errors often translate into
security problems).  At the end of his talk Steven outlined the
challenges: learn how to use cryptography and write correct code,
secure the routing infrastructure, and make systems powerful but easy
to use.


The next two panel presentations were by Steve T. Kent (BBN
Technologies/GTE Internetworking, United States) and Roger Needham
(Microsoft, United States); sorry, I did not get these two.


Hilarie K. Orman (DARPA, United States) presented her talk
"Perspectives on Progress and Directions for Network Security
Research".  First, Hilarie outlined the progress network security has
made: commercial IPSEC, widespread SSL, PGP in products, secure key
exchange standards (IEEE, ANSI, ATM, IETF). Then she discussed
government concerns (manageable security, flexible policy, risk
assessment), industry concerns (performance impacts of security, the
clash between end-to-end confidentiality and network management, and
preservation of intellectual property), and new network security
concerns (the impact of embedded devices on the Internet, the
reliability of data received from wireless sensing devices, and access
control and authorization issues).  In conclusion, she gave an overview
of security
research directions: secure group communication and management, secure
multicast routing, mapping policy to mechanism across organizations,
high-speed networks, cryptography in the optical domain, practical
mobile security, integrity of autonomous devices, strong availability
guarantees, scientific/engineering basis for risk assessment, strong
redundancy guarantees and monitors, smart attack/corruption detection
and adaptive and automated response.

Questions for the panel included: Is there research on legislation?
Why would the American data collection model not work in Europe?

There was some discussion of legislation.  In Europe, an agency
collecting private data has to (1) notify everyone that it is
collecting data, (2) state what it is collecting the data for, and (3)
report how the data was used.  In America, private data (e.g. customer
e-mail) can be sold to someone else without asking or notifying the
customer.

Another question was, "Is it possible to reduce complexity to afford
what we are implementing?"
The answer was, "The problem is complex; the underlying complexity does
not lead to a simple solution."

The seventh session, "NETWORK INFRASTRUCTURE PROTECTION," was hosted
by Christoph Schuba (Sun Microsystems, United States).

The first paper was "PGRIP: PNNI Global Routing Infrastructure
Protection" by Sabrina De Capitani di Vimercati (Universita di Milano,
Italy), Patrick Lincoln (SRI International, United States) , Livio
Ricciulli (SRI International, United States), and Pierangela Samarati
(SRI International, United States). Patrick Lincoln presented.  The
paper was concerned with protecting the routing infrastructure from
malicious and unintentional faults by (1) replicating network
processing and communication resources and (2) using Byzantine fault
tolerant protocols to identify failures.  The routing protocols
operates in clear, ones failure is detected security enhanced
protocols are invoked to fix the problem. Thus the approach relies on
cryptography only when absolutely necessary, therefore treating common
case more efficiently.  PNNI uses a hierarchical organization: nodes
are grouped, each group has a leader.  The group leaders themselves
are grouped at a higher level of hierarchy. Only a subset of nodes,
including a group leader in each peer group is equipped with PGRIP.
These PGRIP enhanced nodes detect integrity compromises by evaluating
changes to the local databases and resolves anomalies.

Someone made an observation: if cryptography is optional then you do
not know who you are talking to.

Next paper was "Client Puzzles: A Cryptographic Countermeasure Against
Connection Depletion Attacks" by Ari Juels and John Brainard (RSA
Laboratories, United States). Ari Juels presented. This was a very
entertaining presentation.  The idea is: in the absence of attack the
server accepts request indiscriminately.  When a connection depletion
attack is suspected, the server starts accepting the connection
requests selectively. Each client wishing to get service is given a
unique puzzle, a cryptographic problem, which must be solved by the
client in order to get the requested resources. A client puzzle
incorporates time of request, server secret and client request
information. Server operates in a stateless fashion: it checks the
correctness of the solution, checks that the puzzle has not expired
and makes sure that an attacker can not use same solution for multiple
allocations.  The idea is "nothing comes for free". An attacker has to
have a large computational resources to mount an attack. The protocol
is very flexible: hardness of puzzles can be dependent on the severity
of the attack.  The proposed protocol can be used to defend protocols
such as TCP and SSL against connection depletion attacks.  A
disadvantage is that client has to have a software for solving the
puzzles.  He noted will be interesting for a server to pick up results
of the puzzles and do research topic.
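A minimal stateless puzzle along these lines can be sketched in a few
lines of Python (the construction, parameter names, and hardness are
illustrative, not the exact scheme from the paper):

    import hashlib, hmac, os, time

    SERVER_SECRET = os.urandom(16)
    HARDNESS = 16          # number of secret bits the client must brute-force
    TTL = 30               # puzzle lifetime in seconds

    def make_puzzle(client_request):
        # Derive the puzzle from the time, the server secret, and the request;
        # the server keeps no per-puzzle state.
        t = int(time.time())
        x = hmac.new(SERVER_SECRET, client_request + t.to_bytes(8, "big"),
                     hashlib.sha256).digest()
        partial = int.from_bytes(x, "big") >> HARDNESS      # reveal all but HARDNESS bits
        return t, partial, hashlib.sha256(x).digest()

    def solve(puzzle):
        t, partial, target = puzzle
        for guess in range(1 << HARDNESS):                  # brute-force the hidden bits
            x = ((partial << HARDNESS) | guess).to_bytes(32, "big")
            if hashlib.sha256(x).digest() == target:
                return x
        return None

    def verify(client_request, puzzle, solution):
        t = puzzle[0]
        if time.time() - t > TTL:
            return False                                    # puzzle expired
        x = hmac.new(SERVER_SECRET, client_request + t.to_bytes(8, "big"),
                     hashlib.sha256).digest()
        return hmac.compare_digest(x, solution)             # recompute, no stored table

    request = b"SYN from 203.0.113.5"
    puzzle = make_puzzle(request)
    print(verify(request, puzzle, solve(puzzle)))           # True, after ~2**15 hashes on average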

Someone asked if the server has to maintain state and remember the
puzzles.
Ari: no, the server just checks that the pre-image matches the answer.

Someone else raised the point that an attacker can mount slowing-down
attacks, causing frustration for legitimate users.
Ari: graceful degradation: the stronger the attack, the harder the
puzzles.

Another question was whether the puzzles are cryptographically
protected: how can one distinguish between a legitimately generated
puzzle and a modified one?
Ari: this issue was not dealt with in the paper.

The last (eighth) session was a panel "IPSEC: FRIEND OR FOE", held by
Dan Nessett (3Com Corporation, United States).

Rodney Thayer (EIS Corporation, United States) presented a "Benefits of
IPsec" talk.
IPsec was developed by an IETF working group whose members come from
different backgrounds.  It is based on modern technology and provides
platform and algorithm independence (cryptographic algorithms can
easily be added and deleted).  It is transparent to applications and
offers different privacy and authentication options.  IPsec is
implemented at the network layer, which provides protection against
network-layer attacks; all necessary IP packets are protected; and
deployment in gateways is possible, which in turn can provide scaled
management of security and network-wide security parameters.

Bob Braden (USC/ISI, United States), who is only a simulated foe of
IPsec, presented an "Arguments Against IPsec" talk.

1) Operation of IPsec at the network layer harms many things: when
used for encryption, IPsec hides the transport layer, which is bad for
network management (traffic flow and per-port usage information) and
for TCP performance enhancements (e.g. ACK snooping and ACK pacing);
when used for integrity, it prevents legitimate and useful rewriting of
protocol headers.

2) IPsec makes network security difficult: intrusion detection is more
limited, and the CPU cost of IPsec cryptography makes DoS attacks much
easier.

3) IPsec adds complexity at the IP protocol level.

4) A common IPsec service, while it has a good side, also has a
downside: it cannot be optimized for application-level requirements.

5) The decision to require IPsec in IPv6 may delay deployment of IPv6.

Conclusion: we do not have enough experience with IPsec to say whether
it is good or bad.
 
Steve Bellovin (AT&T Labs Research, United States) gave an overview of
the proposed transport-friendly ESP principles, such as including the
protocol number in the clear, specifying the size of the unencrypted
leading portion, and adding padding for boundary alignment and cipher
block-size matching.  He discussed the suggested alternatives of SSL
and SSL plus AH: the first requires changes in each application, is
vulnerable to active DoS attacks, and does not handle UDP; adding AH
only improves the DoS vulnerability, leaving the other two problems.

A participant expressed concern about possible configuration
difficulties.
The reply was that there are only three choices: (1) expose everything,
(2) expose nothing, or (3) something intermediate.

A participant asked whether we can fix the existing architecture.
Someone replied: we should make the administrative domain a part of the
architecture.
Steve Bellovin: technology has changed, so the Internet would be
designed differently today.
Bob Braden: the clientele has changed with the commercialization of the
Internet.
Rodney Thayer: the paradigm itself is changing.

Someone asked about the impact of IPsec on network speed and
processing time per packet.
Steve Bellovin answered: there is progress in this field; some day it
will be put on chips.

Another question was: IPsec is required for IPv6; will it be required
for IPv4?
Answer: NO!!!

Someone asked about multicast.
Answer: we do not know how to do key management for multicast.