USENIX Security Symposium VII
by Kevin Fu

Invited Talk: The Security Product Market: Trends and Influences Marcus J. Ranum (Network Flight Recorder, Inc.)

Clad in western-style boots and wearing just-out-of-bed hair, Marcus Ranum spoke about money and its effect on the security industry. Known for his work on firewalls, Marcus now serves as president and CEO of Network Flight Recorder, Inc. and chief scientist at V-ONE Corporation. Marcus gave a disclaimer that his opinions could be right or wrong; use them at your own risk.

One morning people just "woke up" as security experts. But apparently they never went back to sleep; Marcus calculates that the US security market has a 70% compound annual growth rate! Such enormous changes come from natural growth, IPOs, and the injection of venture capital (or energy, as he describes it). Several security companies announced IPOs in 1995-97. IPOs pressure companies into short-term development: investors expect *something* each quarter. Investors salivate over Internet-related IPOs coincidentally scheduled with interviews, articles, and industry initiatives. However, IPOs and venture capital often produce artificial growth.

As an aside, Marcus offered an exaggerated get-rich-quick scheme: go to security companies and read the guest register. Maybe you will see the names of a lawyer, some investment bankers, and a prominent CEO all on the same day. Then bet your kids' college tuition. I could almost hear the audience thinking aloud.

With new capital a company can either compete like mad or perish. It can outsell a competitor, claim to own most of the market share, buy the pieces it does not own, or watch its stock value sink. Investors don't care about technical details of security. They care about the right press. Of course, this produces lots of short-term fluff. Most public companies just want to get something out the door because they cannot develop long-term strategies while pleasing investors each quarter.

Marcus then gave his opinion on future industry trends. Instead of licensing technology, companies will acquire or merge with others. For instance, Security Dynamics (authentication) purchased RSADSI (encryption). Likely merger paths include the marriage of authentication with firewall technology. However, Marcus does not expect monster IPOs from encryption companies. And everyone knows the importance of controlling a public key infrastructure (PKI) -- that's the problem! Marcus vividly described wanna-be-CAs as "greedy pigs" who "knocked over [the PKI] while charging at the trough."

The next part of Marcus' talk concerned new growth areas. Intrusion detection will become popular. "Network grep" technology has a big market because it's easy to explain. You passively look for intrusion patterns. [See the summary on "Bro".] However, virtual private networks have their lines drawn; it's too late for startups.

By 2000, analysts expect a $14 billion/year market for complete, managed network systems. Customers want one box (count 'em, one) for security, firewalls, and pizza. The one-trick pony company will give way to one-stop shopping. As a side effect, bogus products and bogus consultants will proliferate, tarring the real security experts with bad press. Name recognition will become the metric of security quality. Customers will choose one of the five big name companies. Expect serious commoditization.

What is the effect on security? The industry gobbled up everyone who understands security. Moreover, security experts can charge top dollar -- attracting wanna-bes and diluting the market. However, Marcus offers some assurance that the joyride will soon slow down; astronomical fees will not last forever.

Throughout his talk, Marcus advised people to learn more about Windows NT. He explained that "Microsoft takes things the customer wants, then rolls it into an operating system." People want to use NT because they wrongly believe it is easier than UNIX. The UNIX community can partially blame itself for this perception: we bicker over flavors of window managers and GUIs; we "scare people" into using NT.

If you design a plug-in security module for NT, you will get money. Microsoft will either buy you or steal your idea, but either way you will get a red Ferrari. Otherwise Microsoft will drive you into the ground. "You have to be a total schmxxx to fail," said Marcus. In the end, everyone will use NT. "Cry, but don't laugh," he warns.

An audience member asked for justification that token-based authentication systems and smartcards will soon fade away. Marcus replied, "Why carry a token when you can carry a Pilot?" He further explained that tokens alone do not complete transactions and that smartcards need backing from computer giants for widespread acceptance.

I note that large growth reduces the accuracy of market predictions. If the security market has a compound growth rate of 70% per year, no one can accurately predict the future. For more information about Marcus' work, see

Intrusion Detection Refereed Papers Track Session Chair: Mike Reiter, AT&T Labs - Research

Bro: A System for Detecting Network Intruders in Real-Time Vern Paxson, Lawrence Berkeley National Laboratory

Named for its Orwellian potential for privacy violations, Bro detects intrusions by passively monitoring network traffic. Particular traffic patterns cause a monitor to make intrusion announcements in real time. Vern is a member of the LBNL Network Research Group and the author of flex, a program that generates lexical analyzers for compiler front ends. Vern has been developing Bro since 1995, and he easily earned the best paper award.

Bro's design considers seven goals: high-speed, large-volume monitoring; resistance to dropped packets; real-time notification; separation of mechanism from policy; extensibility; avoidance of simple policy mistakes; and tolerance for attacks on the monitor. Vern gave the example of the large LBNL network, in which security need not be airtight, but intrusions must be detectable. Offline intrusion detection has its own applications, but it is nowhere near as good as real-time notification. Vern separates mechanism from policy to easily allow policy changes in the future. Moreover, an explicit policy language helps avoid simple mistakes.

Bro consists of several layers. At the bottom, the network layer feeds a packet stream to the libpcap library (tcpdump uses this library). After some initial filtering, packets arrive at the event engine, which is controlled by the policy script interpreter. The event engine filters out unwanted traffic. For instance, you may only care about FTP or portmapper packets. Finally, the interpreter makes intrusion announcements in real-time. A figure in the paper better describes the interface between each layer.

Key to the design of Bro is a policy language. This strongly typed language aims to catch policy errors at compile time. One notable feature protects against malicious strings. Rather than terminating a string with a NUL character, Bro represents a string as a vector of bytes and a byte count. If Bro were to use NUL-terminated strings, an intruder could trick a monitor with a string such as "USER nice\0USER root". No explicit looping mechanism (besides recursive calls) exists because Bro cannot afford the execution time, and loops open up denial-of-service attacks. The single-threaded implementation offers timer management (easily 10,000 timers), interpretation instead of compilation, checkpointing, and the ability to perform offline analysis. The paper gives an overview of the language and offers several policy code snippets.
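The NUL-termination trick is easy to demonstrate. Here is a minimal Python sketch (not Bro code) showing how a C-style string view hides the second half of the payload from a naive monitor:

```python
payload = b"USER nice\x00USER root"

def c_style_view(data: bytes) -> bytes:
    """Return only the bytes before the first NUL, as a C string would."""
    return data.split(b"\x00", 1)[0]

# A monitor using NUL-terminated strings sees only the innocuous half...
assert b"root" not in c_style_view(payload)

# ...while a (byte count, byte vector) representation sees everything.
assert b"USER root" in payload
```

A monitor built on the second representation cannot be blinded by an embedded NUL, which is exactly why Bro carries the byte count explicitly.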

The mere existence of a monitor invites attacks. As such, Bro defends itself against three categories of attacks. An overload attack forces the monitor to drop packets so that an intruder can sneak in unnoticed. Bro defends against this attack by doubting the monitor's capabilities; the monitor will shed load if necessary. A second attack causes the monitor to crash. An intruder could cause fatal errors by exploiting bugs or making the monitor run out of resources. In defense, a monitor uses a watchdog timer and records a traffic audit trail. Vern notes that crashes are hard to prevent since a single bug in any program can open up attack avenues. A third attack involves subterfuge. A nasty intruder could mislead the monitor. For example, an intruder may fragment IP datagrams in hopes that the monitor will fail to reassemble the fragments. The paper devotes an entire page to this difficult problem. Vern also explained that security through obscurity alone is not sufficient, but you should not advertise a policy script since it could reveal local bottlenecks to an intruder.

Wrapping up his talk, Vern gave the current status of Bro. Currently Bro can analyze four specific applications: finger, ftp, portmapper, and telnet. As for performance, Bro easily handles a 25Mbps FDDI ring during busy hours (the statistic in the paper is incorrect; the paper claims 50Mbps). Generally the system drops no packets. However, the event engine processes about 200 packets/second after the filter discards unwanted packets. At LBNL the monitor generates about 40MB of connection logs and 20 real-time notifications each day. This resulted in several CERT reports and many account deactivations. The source is available upon request. See for more information.

A participant asked whether Bro could handle the capacity of ATM and switched technology. Vern answered that in such a case, you can deploy multiple coordinated boxes. He does not expect any problems arising from ATM packet assembly. Another audience member asked whether Bro would overload if placed on a gateway. Vern explained that such a system could be a reactive firewall or could talk to a firewall or router. You want to filter out the majority of traffic.

Cryptographic Support for Secure Logs on Untrusted Machines Bruce Schneier and John Kelsey, Counterpane Systems

Bruce Schneier, president of Counterpane Systems, presented a paper on secure logs. Essentially, the secure logging system detects modifications to logs on an untrusted machine. Drawing analogies to the legal system, Bruce explained that "we don't prevent crime in this country, we detect it." The same idea is put forth in secure logging. Bruce's work includes the authorship of "Applied Cryptography" and the creation of the Blowfish cipher. Instead of presenting slides in ascending order, Bruce counted downwards, since he claims the audience "really wants to know when they can get out." Incidentally, I saw Bruce writing his slides during the previous session. In any case, his talk went well and he made the audience roll with laughter.

In the general model, an untrusted machine, U, talks to a trusted machine, T. An ATM machine or smartcard qualifies as an untrusted machine. U maintains audit logs, occasionally commits them to T, and may eventually be compromised. If an intruder attacks U at time t, the intruder cannot undetectably alter log entries made before time t.

There are limits to possible solutions. First, no security measure can protect U after time t; an intruder can do anything after that point. Second, if T and U communicate online continuously, the problem no longer exists. Third, cryptography alone cannot prevent deletion; write-only hardware such as printouts or WORM drives can.

Bruce identified a few wrong ways to protect logs. The logger could encrypt log entries, but then the key is stored on U (ignoring performance-heavy public key crypto). MACs suffer from the same problem. A logger could use hash chains, but an intruder can fake them.

The approach in this paper combines MACs and encryption keys for an irreversible process. A complicated diagram depicts the addition of entries to a log file. Several variants of the logger exist (offline versions, phone transmission medium). You should see the paper for an outline of the protocol.
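The core idea, forward-secure key evolution with MACs, can be sketched in a few lines of Python. This illustrates the general technique, not the exact Schneier-Kelsey protocol; the key label and entry contents are invented for the example:

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    # One-way key update: once the old key is erased, an intruder who
    # steals the current key cannot recompute MACs for earlier entries.
    return hashlib.sha256(b"evolve" + key).digest()

def append(log: list, key: bytes, entry: bytes) -> bytes:
    """MAC the entry under the current key and return the evolved key.
    The untrusted machine U must discard the old key immediately."""
    tag = hmac.new(key, entry, hashlib.sha256).digest()
    log.append((entry, tag))
    return evolve(key)

def verify(log: list, initial_key: bytes) -> bool:
    """The trusted machine T, which knows the initial key, replays the
    key schedule and checks every MAC."""
    key = initial_key
    for entry, tag in log:
        expected = hmac.new(key, entry, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        key = evolve(key)
    return True

log, k0 = [], b"key-shared-with-T"       # invented shared secret
k = append(log, k0, b"login: alice")
k = append(log, k, b"su attempt failed")
assert verify(log, k0)

log[0] = (b"login: mallory", log[0][1])  # tamper with a pre-compromise entry
assert not verify(log, k0)
```

Because the intruder at time t holds only the evolved key, the MACs on entries written before t cannot be forged; deletion of the log's tail, as the talk noted, still requires write-only hardware.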

An audience member questioned whether this system can detect intrusions at the time of attack. Bruce explained that the secure logger does not exactly detect attacks; that's a "hardware problem." If an intruder can get into a system without generating log entries, then this system cannot help. Another participant pointed out that an electronic wallet can be attacked at time t=0; Bruce acknowledged that his scheme assumes the owner may be an attacker, and this remains a vulnerability. Questioned about the difference between irreversible hashing and hash chains, Bruce responded that an attacker could delete log entries in reversible hash chains; chains allow easy forward hashing. Bruce's logger uses hash chains but prepends extra information. Finally, an audience member asked why the system does not use public-key encryption: if you have small logs accessed infrequently, would not PK encryption help? Bruce replied that PK encryption is too slow and annoying for the goals of this secure logger.

In the future, Bruce hopes to provide better multi-party logging, distributed trust, and anonymization. His talk explained the problem of secure logging well, but the logger itself could not be fully explained in 25 minutes. Bruce has online information at

StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks Crispan Cowan, Oregon Graduate Institute

Leader of the IMMUNIX system survivability group and one of the few tie-wearing USENIX attendees, Crispan Cowan talked about a general method to prevent buffer-overflow attacks. System administrators typically patch vulnerable programs one at a time. Taking a pro-active approach, StackGuard modifies the compilation process to prevent most overflow attacks.

A buffer overflow can allow local accounts (or compromised accounts) to obtain root privileges. In the standard buffer overflow attack, kids use cookbook methods to feed a long string into a function that does not check array bounds. In doing so, an attacker can inject code into the stack, overwriting the return address. When the function returns, the system jumps to the memory location of choice, typically code to obtain a root shell.

StackGuard uses a compilation technique to prevent overflows. The method is so simple you will cry: detect changes to the return address by pushing one more word on the stack. StackGuard pushes a "canary" value (a direct descendant of the Welsh miner's canary) after the return address. When returning, a function checks whether the canary value has changed. The canary must be secret, randomly generated at execution time, and decidable. [The last two conditions sound contradictory to me, but I haven't looked under the hood.]
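The canary idea can be illustrated with a toy model. The following Python sketch simulates a stack frame as a byte buffer; it is a conceptual illustration only (the frame layout and names are invented), since the real StackGuard works inside gcc's generated function prologue and epilogue:

```python
import os

def call_with_canary(buf_size: int, user_input: bytes) -> bytes:
    # Toy frame layout: [local buffer][canary][return address].
    canary = os.urandom(4)  # secret, generated at "execution time"
    frame = bytearray(buf_size) + canary + b"RETADDR_"
    # An unchecked copy, like strcpy(): long input runs past the buffer.
    frame[:len(user_input)] = user_input
    # Function epilogue: verify the canary before "returning".
    if bytes(frame[buf_size:buf_size + 4]) != canary:
        raise RuntimeError("stack smashing detected")
    return bytes(frame[buf_size + 4:])

# Within bounds: the canary survives and the return address is intact.
assert call_with_canary(16, b"hello") == b"RETADDR_"

# Overflow: the copy clobbers the canary before the return address,
# so the epilogue check fires instead of jumping to attacker code.
try:
    call_with_canary(16, b"A" * 24)
    raise AssertionError("overflow went undetected")
except RuntimeError:
    pass  # detected and halted, as StackGuard would
```

The key property is that any linear overwrite reaching the return address must pass through the canary word first, so one comparison per function return suffices.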

MemGuard, a variant of StackGuard, has fine enough granularity to protect a single word in memory. Unfortunately, MemGuard has disappointing results; Crispan admitted that the "5400% to 8800% overhead probably is not worth it." On the other hand, StackGuard requires a simple patch to gcc, which emits a little more code in the function prologue and epilogue. StackGuard function calls are more expensive, but gcc compilation time does not appear affected by StackGuard, since gcc does not spend much time on function calls.

Experimental evaluation shows that StackGuard defends against *future* attacks. Crispan's group tried exploits from bugtraq, Linux security announcements, and In most cases, StackGuard detects an overflow, halts, then warns the user. For instance, the Samba overflow came out after the implementation of StackGuard, yet StackGuard detected the attack. However, StackGuard fails to detect attacks where the return address is not attacked. For instance, the SuperProbe overflow involves the rewrite of a function pointer. Crispan gave a wonderfully simple demonstration of StackGuard: typing just a few commands, an unaltered dip binary produced a root shell. When compiled with StackGuard, dip dumps core and warns of the overflow.

Crispan identified two remaining problems: restarting daemons and responding to an attack. Halting is not too important for daemons started by inetd, but persistent daemons such as sendmail or inetd need a restart mechanism; a watchdog program could restart such services. The second problem involves what to do after detecting an attack. One option is to restart a more restrictive version of the program (e.g., one compiled with MemGuard), but this could result in denial of service. StackGuard is not perfect; some buffer overflow attacks can get through. But StackGuard is effective against overflows you do not know about. This buys you time to apply patches, which you should still do anyway.

An audience member asked whether an attacker could guess every canary. Crispan explained that two versions of the pseudo-random number generator exist. One uses seeding from the time-of-day (cringe!) and another uses /dev/random. Luckily StackGuard will not give an attacker a chance to guess a canary twice. Given that the canary is a 32-bit word, attackers will have to try to get around the canary rather than attack it directly.

Overflow attacks usually overwrite data pushed earlier on the stack. To protect against such a downwards attack, return addresses could live on a separate stack. Crispan agreed this may be useful, but his group did not do it.

Another audience member asked why StackGuard detects overflows instead of preventing them in the first place. Crispan replied that prevention would require checking array bounds on every write, whereas StackGuard checks just once per function call. Since writes typically occur far more often than function calls, the per-call check has much lower overhead.

I note that buffer overflow attacks appear benign at first. But when combined with local account compromises, a single buffer overflow (hundreds are known to exist) can turn your system into Swiss cheese. The common attack works as follows: an intruder sniffs a password (we all use encrypted telnet, right?), exploits a cookbook overflow, installs a packet sniffer, then repeats the first step.

Related work includes Snarskii's FreeBSD libc fix and Solar Designer's non-executable stack. Crispan notes that StackGuard and other solutions can be combined for better protection. In the future, StackGuard hopes to protect non-stack data structures, integrate with intrusion detection software, and secure a Linux distribution. StackGuard is GPL'ed and currently works on the x86 architecture. A whole range of IMMUNIX projects can be found at

Data Mining Approaches for Intrusion Detection Wenke Lee and Salvatore J. Stolfo, Columbia University

Wenke Lee, a PhD student of Professor Stolfo, presented the paper on detecting intrusions by data mining. By applying data mining to intrusion detection, a systematic framework allows the construction of adaptive detection models. Unfortunately it was difficult to hear Wenke, but his paper explains most topics in the talk.

Wenke aims to avoid manual or ad hoc intrusion detection mechanisms. Such mechanisms fall into two categories: misuse detection and anomaly detection. The problem with misuse detection is that someone must manually code known intrusion patterns; moreover, new patterns go undetected. In anomaly detection, someone must select the right set of system features for measurement, and no solid guidelines exist, just experience and intuition.

Why use data mining? Intrusions usually leave trails of audit data. For example, cellular phone and credit card companies use data mining approaches to detect fraud. In data mining, you select appropriate features to audit. Wenke chose data from sendmail function call traces and tcpdump output. The rest of the system consists of a low-level learning agent, a base detection agent, and a meta detection agent. An agent extracts connection-level features such as the number of connection attempts, the method of termination, etc.

Experimental results show single-digit misclassification rates for local traffic, but misclassification rates as high as 22% for inter-LAN communication. However, the addition of temporal statistical features reduces misclassification. Specifically, small window sizes for longer sampling periods produce better classification. One problem involves the selection of an optimal window size. A classifier should learn trends and patterns. For instance, it should predict whether you will visit web site C after visiting web sites A and B.
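To make the temporal-feature idea concrete, here is a hypothetical Python sketch of windowed connection statistics. The record format, flag values, and feature names are my own for illustration, not those of the paper:

```python
# Hypothetical connection records: (timestamp_sec, dst_host, status_flag),
# where "S0" marks a failed connection attempt and "SF" a normal one.
connections = [(1, "A", "S0"), (2, "A", "SF"), (3, "B", "S0"),
               (4, "A", "S0"), (5, "C", "SF"), (6, "A", "S0")]

def window_features(conns, t, width):
    """Statistics over the last `width` seconds ending at time t -- the
    kind of temporal feature that reduced misclassification."""
    recent = [c for c in conns if t - width < c[0] <= t]
    if not recent:
        return {"count": 0, "same_host": 0, "error_rate": 0.0}
    latest_host = recent[-1][1]
    return {
        "count": len(recent),                                   # volume
        "same_host": sum(c[1] == latest_host for c in recent),  # focus
        "error_rate": sum(c[2] == "S0" for c in recent) / len(recent),
    }

feats = window_features(connections, t=6, width=3)
assert feats["count"] == 3          # connections at t = 4, 5, 6
```

A classifier trained on such per-window features (rather than raw events) can pick up bursts of failed connections, which matches the observation that temporal statistics improve classification.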

Wenke is just at the beginning of the project and hopes to pursue further work in data mining approaches to intrusion detection. Currently Wenke is testing the system's effectiveness against known intrusions. After the session, Vern Paxson and Wenke Lee discussed ideas as USENIX attendees flocked around them like ants. Maybe we will see a hybrid of data mining and network monitoring in the future. You can find more information at

Cryptography Refereed Papers Track Session Chair: Dan Boneh, Stanford University

Certificate Revocation and Certificate Update Moni Naor and Kobbi Nissim, Weizmann Institute of Science

Kobbi Nissim presented a paper on efficient certificate revocation. Kobbi analyzed existing certificate revocation schemes and presented a new scheme with better incremental efficiency. The Naor-Nissim (NN) certificate revocation scheme uses Certificate Revocation Lists in the form of authenticated search structures. In addition, Kobbi won the best student paper award.

Certification involves three parties: a trusted certification authority (CA), an untrusted directory, and untrusted users. A CA issues and revokes certificates offline and periodically updates a directory. Users query a directory for "proofs" of validity and receive personal certificates from a CA. This paper addresses the problem of revoking users' certificates while keeping communication costs and message sizes to a minimum.

Kobbi summarized three existing certificate revocation schemes: Certificate Revocation Lists (CRLs), Micali's Certificate Revocation System (CRS), and Kocher's Certificate Revocation Trees (CRTs). The CRL offers the simplest approach. A CA periodically sends a digitally signed message to a directory. This message lists all revoked certificates. An obvious drawback is that the size of the message can grow overwhelmingly large. In CRS, a CA periodically signs a message denoting the validity of each certificate (revoked or not revoked). CRS has excellent query communication costs, but suffers from an increase in CA-to-directory communication. CRTs use binary hash trees to authenticate statements about certificate validity. CRT has the advantage that the entire CRT is not necessary to verify a given certificate. Moreover, users can easily prove certificate validity to other users. On the other hand, updates cause the re-computation of the entire CRT.

In the NN certificate revocation scheme, authenticated directories allow efficient directory queries and certificate "proof" updates. NN actually consists of a family of solutions, but Kobbi demonstrated the 2-3 tree version. Leaves represent revoked certificates' serial numbers (in ascending order) while values of internal nodes result from collision-intractable hashes of their children. The paper describes other data structure variations.

To vouch for the validity of the authenticated data structure, a CA signs a message containing the root node and tree height. Checking for a revoked certificate involves the re-computation of the path (of hashes) from the root node to the leaf representing the revoked certificate. This computation requires knowledge of all nodes on the path from the root to the leaf, along with all the children of the nodes on this path. Then to prove the validity of a certificate, a user must demonstrate two paths from the root node to two neighboring leaves such that the serial number of the unrevoked certificate is sandwiched between the values of the neighboring leaves. Because a proof of validity is small and hashes can be easily verified by any user, the new scheme allows users to efficiently prove validity to each other, reducing the user-to-directory communication costs.
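The path-checking idea is the standard authenticated hash-tree computation. Here is a hedged Python sketch using a binary tree for simplicity (the paper uses 2-3 trees, but path verification works the same way); the serial numbers are invented:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

# Revoked serial numbers, sorted, as the tree's leaves.
revoked = [b"0007", b"0042", b"0100", b"0255"]
leaves = [h(s) for s in revoked]
root = h(h(leaves[0], leaves[1]), h(leaves[2], leaves[3]))
# The CA signs (root, height); the directory itself is untrusted.

# Directory's proof that serial 0042 is revoked: the sibling hashes
# along the path from that leaf up to the root.
proof_siblings = [leaves[0], h(leaves[2], leaves[3])]

# Any user recomputes the path and compares against the signed root;
# a forged proof would require a hash collision.
recomputed = h(h(proof_siblings[0], h(b"0042")), proof_siblings[1])
assert recomputed == root
```

Because a proof is only a logarithmic number of hashes, users can pass these small proofs to each other, which is exactly the reduction in directory-to-user cost the scheme targets.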

Kobbi exhibited a table which denotes the presence or absence of desirable qualities in CRLs, CRS, CRTs, and NN. Unfortunately the paper does not replicate the same table, but you can deduce the information by reading the itemized lists within the paper. Qualities include low CA-to-directory communication, low directory-to-user communication, high directory update rate, whether a user may hold a proof of validity, scalability, and the existence of an update mechanism. The NN scheme works well except that the amount of directory-to-user communication increases (and thus the overall amount of communication).

The new scheme can also update certificates incrementally. In traditional certification schemes, CAs may issue short-lifetime certificates, then reissue new certificates for *each* user. But the CA becomes a bottleneck. In the NN scheme, Kobbi suggests that the CA broadcast an update message to all users. Taking advantage of the tree structure, users can update their certificate "proofs" appropriately. An audience member asked what would happen if a directory or user were to miss an update. Kobbi replied that proxies can collect update messages. Furthermore, one can authenticate proxies by checking the time of an update, since the reissuing period is known.

The appropriate method of revocation depends on requirements of response-time (propagation of revocation) and efficiency (the amount of communication). I recognize two main reasons for certificate revocation. Casual revocations result from non-critical reasons such as job changes or college graduation. If risk is low and most revocations are casual, decreasing the amount of communication at the cost of more propagation delay may be acceptable. However, revocations due to a compromise need immediate revocation. In a high-risk system, immediate revocations may justify the cost of extra communication.

Kobbi's clever overlays helped introduce certificate revocation, but too many complicated ideas uprooted the end of his talk. Nevertheless, the NN certificate revocation scheme is elegant and worthy of the best student paper award.

Attack-Resistant Trust Metrics for Public Key Certification Raph Levien and Alexander Aiken, University of California at Berkeley

Raph Levien, a graduate student of Alexander Aiken, presented a paper on the role of trust metrics for attack-resistant public key certification. The authors characterize existing trust metrics, establish a theoretical best-case in their model, and offer a metric which meets the theoretical best-case.

Trust metrics address the problem of binding public keys to opaque names. For instance, an email address may bind to a trusted public key. Trust metrics aim to accept most good bindings, but reject forgeries. By requiring multiple certifications, bindings become more attack-resistant. In the general case, a certification authority asserts that a key k belongs to a name n. This model naturally leads to a certification graph where nodes are keys or name/key associations and edges represent certification. A [binary] trust metric evaluates such a graph and outputs either accept (trusted) or reject (untrusted).

Raph admitted that the analysis makes two bold assumptions: the metric will accept most of the good name/key bindings, and the name space is opaque (names reveal no information). Without these assumptions, the analysis becomes intractable. He suggests the metric is good for email, but maybe not for high-security applications.

The proposed model considers two types of attacks. In a node attack, an adversary steals keying material from a CA. The adversary could reuse the keying material over and over. In an edge attack, an adversary can convince a CA to falsely certify a node. The adversary does not actually have the keying material. Why distinguish between node and edge attacks? It is harder to protect against edge attacks.

Within the paper you can find a table of successful attack thresholds. The table summarizes the necessary number of compromised nodes and edges to invalidate a trust metric. Resistance to attack ranged from single points of failure (just a single node) to quadratic numbers of nodes.

The maxflow-edge metric achieves the theoretical best case. Each key is certified by exactly d other keys (for concreteness, d could be 10). This method is consistent with both the web-of-trust and the CA model. For a successful node attack, an adversary must compromise d nodes; approximately d^2 edges are necessary for a successful edge attack. Other attack goals include chosen node (breaking a certain key) and random node (breaking any key); both require d nodes for a successful node attack or about d^2 edges for a successful edge attack.
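The flow computation underlying a maxflow-style metric can be sketched with a standard Edmonds-Karp implementation. The toy certification graph below is invented for illustration; with unit capacities, the flow value counts edge-disjoint certification paths, i.e., the number of independent certifications an attacker would have to subvert:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on an adjacency-dict capacity graph.
    Mutates `cap` into its residual graph."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path, find its bottleneck, update residuals.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += bottleneck
        flow += bottleneck

# Toy graph: our key "s" reaches the target binding "t" through two
# independent certifiers, so two compromises are needed to fool us.
g = {"s": {"a": 1, "b": 1}, "a": {"t": 1}, "b": {"t": 1}}
assert max_flow(g, "s", "t") == 2
```

This is only the flow primitive, not the full maxflow-edge metric from the paper, which also distinguishes key nodes from name/key association nodes.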

Raph mentioned problems with metric evaluation. Trust metrics are limited: attack resistance grows only slowly with the cost of certification, and they will not scale up to the Internet. Raph calls the graph model somewhat simplistic; for instance, revocation is not part of the model. The paper could use more detailed captions so that the casual reader can see the main ideas. Breaking what must be a USENIX record, Raph delivered his talk with several minutes to spare. This paper offers good advice for designers of authentication metrics. You can find the paper and related information at

Software Generation of Practically Strong Random Numbers Peter Gutmann, University of Auckland

Peter Gutmann gave a review of poor pseudo-random number generator (PRNG) implementations and presented a practically (not pseudo) strong random number generator. Peter's accomplishments include the Secure File System and CryptLib. This self-proclaimed eternal graduate student warned the audience of his tendency to speak fast; to thwart this habit, an associate routinely shot Peter with a Texan rubber-band gun (three times, to be exact).

Peter first highlighted significant holes found in PRNG implementations, including Netscape, Kerberos V4, MIT_MAGIC_COOKIE, and SESAME. Because these implementations used predictable and/or public seeding, one could easily determine the seed. Once an adversary has the seed, most PRNG implementations become deterministic. In fact, Peter looked at the SSH source code just before giving his talk. Apparently the SSH generator does not use a one-way function; rather, it uses an LFSR and exposes the internal state of the pool.
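Why predictable seeding is fatal can be shown in a few lines. This Python sketch (my own illustration, not the code of any system named above) brute-forces a clock-based seed, much as the Netscape attack did:

```python
import random
import time

# Victim: seeds a non-cryptographic PRNG with the clock, then derives
# a 128-bit "secret key" from it.
victim_seed = int(time.time())
secret_key = random.Random(victim_seed).getrandbits(128)

# Attacker: knows only that the key was generated within the last hour,
# so it replays every candidate seed until the output stream matches.
now = int(time.time())
recovered = None
for guess in range(now - 3600, now + 1):
    if random.Random(guess).getrandbits(128) == secret_key:
        recovered = guess
        break

assert recovered == victim_seed  # the whole key falls out of the seed
```

One second of clock resolution over an hour gives only a few thousand candidate seeds; no amount of post-processing of the output helps once the seed space is that small.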

In designing a practically strong random number generator (PSRNG), Peter accumulated a list of general requirements. A PSRNG should protect against input analysis, manipulation of input, output analysis, and disclosure of internal state. Moreover, a PSRNG should use good coding practices; for example, Peter complained of the difficulty of understanding the spaghetti source code in PGP 2.x.

A PSRNG should tap several sources for randomness. Random sources include purely physical devices (lava lamps, radioactive decay); physical devices with post-processing (SG100); multi-source designs; single-source designs (PGP 2.x or /dev/random); a secret nonce plus a PRNG (Applied Cryptography); a fixed secret value plus a PRNG; or a known value plus a PRNG (Netscape, Kerberos V4, SESAME).

To gauge the effectiveness of entropy polling, Peter tried to compress successive samples of random data. Peter analyzed entropy polling on several platforms. DOS and OS/2 do not offer much entropy. In Windows 95 and Windows NT, you have some relatively non-deterministic system statistics to exploit. Surprisingly, Peter found that Win16/32 produces lots of entropy upon reboot (about 2.5 times as much as a long-running machine). This results from the startup of 16-bit Windows drivers in somewhat non-deterministic order. However, network statistics in Windows NT contain almost no entropy, and reboots cause little change in entropy. In evaluating UNIX randomness polling, Peter had to "handwave." BSD has more sources of randomness, but Peter did not test rebooted machines because 150 people use his only BSD machine.
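The compression test is easy to reproduce. Here is a small Python sketch in the spirit of Peter's measurement; the thresholds are illustrative, not from the paper:

```python
import os
import zlib

def entropy_estimate(sample: bytes) -> float:
    """Rough entropy gauge: compressed size over raw size. Data that
    compresses well (ratio near 0) carries little entropy; data that
    barely compresses (ratio near 1) is close to full entropy."""
    return len(zlib.compress(sample, 9)) / len(sample)

low = b"\x00" * 4096      # repetitive "system statistics": compresses away
high = os.urandom(4096)   # incompressible: nearly one bit per bit

assert entropy_estimate(low) < 0.05
assert entropy_estimate(high) > 0.9
```

Compression only gives an upper bound on structure the compressor can find, so a low ratio proves poor entropy, while a high ratio is merely encouraging, a caveat worth keeping in mind when polling OS statistics.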

Protecting the pool state is as important as protecting cryptographic keys. Under UNIX, root can use the mlock() call, and Macs can use the HoldMemory() call. In Windows NT, locking does not work as documented, and Windows 95 locking does not work at all (the function simply returns "true"). Since Windows has "methods" to read the pool, one can obfuscate things by spreading the pool all over the place; Peter found this method "pretty good."

Peter also presented a PSRNG implementation, which is better described by the diagrams in the paper. In his PSRNG, Peter used hard-coded paths to utilities such as netstat to acquire randomness. He recommends running the randomization process under the UID nobody (so as not to expose root-privileged information) and timing the harvest of random numbers in order to kill slow entropy sources. Peter's cryptlib library contains his practical random number generator.
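A minimal sketch of this harvesting style (the source list and names here are my own assumptions, not Peter's cryptlib code): run each statistics utility by absolute path, kill it if it is slow, and hash whatever output arrives into a pool.

```python
import hashlib
import subprocess
import time

# Hypothetical entropy sources, addressed by hard-coded absolute paths as in
# Peter's design; adjust for the local system. In the real design this
# process would run as the unprivileged UID nobody.
SOURCES = [["/bin/ps", "aux"], ["/usr/sbin/netstat", "-s"]]

def harvest(sources=SOURCES, timeout=2.0) -> bytes:
    """Mix the output of each source into a hash-based pool, skipping
    sources that are missing or too slow (the timing kill)."""
    pool = hashlib.sha256()
    for cmd in sources:
        start = time.monotonic()
        try:
            out = subprocess.run(cmd, capture_output=True,
                                 timeout=timeout).stdout
        except (OSError, subprocess.TimeoutExpired):
            continue  # absent or slow source: drop it and move on
        pool.update(out)
        # Mix in how long the harvest took; the low bits are hard to predict.
        pool.update(str(time.monotonic() - start).encode())
    return pool.digest()
```

Hashing the raw output means even partially predictable statistics still contribute whatever unpredictability they contain.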

A participant questioned Peter about blocking issues in the Linux implementation of /dev/random. If /dev/random is "drained" of its randomness, it may block to obtain more entropy. Peter explained that one cannot easily speed up /dev/random because it tries to estimate how long to stir the pool for the amount of randomness you request; if you empty the pool, /dev/random will block until the pool is refreshed. Trading some security for performance, programs may find the non-blocking /dev/urandom device sufficient for some applications.
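For readers weighing that trade-off, the non-blocking pool is trivially reachable from most languages; in Python, for example, os.urandom() draws from the kernel's non-blocking pool (the /dev/urandom behavior):

```python
import os

# os.urandom() reads the kernel's non-blocking randomness pool, like
# /dev/urandom: it never stalls the caller, at the cost of returning
# output even when little fresh entropy has been gathered.
session_key = os.urandom(16)   # 128 bits of practically random material
print(len(session_key))        # 16
```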

Throughout the talk Peter gave random bits of advice. For instance, generating a public key and then the corresponding private key from the same pool can expose the state of the pool used for the private key. You have to keep them separated, or the intruders will come out to play.
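One way to keep the two derivations separated (my own illustration, not Peter's design) is to derive independent sub-streams from the master pool through a one-way function with distinguishing labels, so material revealed for the public key says nothing about the stream that generates the private key:

```python
import hashlib

def subpool(master: bytes, label: bytes) -> bytes:
    """Derive an independent stream from the master pool state.
    Because SHA-256 is one-way, exposing one derived stream reveals
    neither the master pool nor the other stream."""
    return hashlib.sha256(label + b"|" + master).digest()

master = b"\x13\x37" * 16                    # stand-in for real pool state
public_stream = subpool(master, b"public-key")
private_stream = subpool(master, b"private-key")
```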

Peter hesitated to call his scheme pseudo-random in the pure sense. Pseudo-random usually refers to bit strings indistinguishable from truly random bit strings given a polynomial amount of computation time. However, most such PRNGs suffer from performance drawbacks. Instead, Peter calls his scheme practically random -- a balance between trial-by-fire security and performance.

I note that even the experts can overlook PRNG problems. For instance, Jeff Schiller, a well-established security expert, co-designed Kerberos and co-authored a 1994 Internet RFC titled "Randomness Recommendations for Security" (RFC 1750). Yet a flaw was found in the Kerberos V4 random number generation in 1996! Therefore, PSRNG designers should pay careful attention.

This entertaining talk addressed issues of practical security and good implementation practices. For more information, see

Invited Talk: Elliptic Curves -- Ready for Prime Time Alfred Menezes, University of Waterloo

In this invited talk, Alfred Menezes gave a quick introduction to elliptic curve cryptosystems and their advantages over other cryptosystems. Alfred charged into the most mathematical talk of the symposium. Alfred's work includes co-authorship of the "Handbook of Applied Cryptography" and "Elliptic Curve Public Key Cryptosystems." Once at Auburn University, he now works for the Advanced Cryptographic Research Center at the University of Waterloo.

Alfred began by defining multiplicative groups in a finite field. For instance, discrete log problem (DLP) cryptosystems usually operate in Z_p^*. Why use fancier groups than Z_p^*? The arithmetic and the security of protocols may be better; for instance, you could use smaller numbers, yet retain the same security.

Elliptic curve cryptosystems (ECC) are based on two variations of the DLP. An elliptic curve comes from the equation y^2 = x^3 + ax + b, where a and b are elements of Z_p such that 4a^3 + 27b^2 does not equal zero (mod p). The set E(Z_p) consists of all points (x,y), with x and y in Z_p, which satisfy the above equation, together with a special "point at infinity." Given a fixed prime p, there are many curves to choose from. This means everyone could share the same prime, allowing construction of cheap, special-purpose hardware. Because ECC appears to provide good security with small secrets and little energy, companies like Motorola will want to use ECC in small devices such as pagers.
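To make the group concrete, here is a toy sketch of elliptic curve point addition over Z_p. The tiny prime and curve are my own illustrative choices, far too small for real security:

```python
P = 97          # small prime; real systems use primes of ~160 bits or more
A, B = 2, 3     # the curve y^2 = x^3 + 2x + 3 over Z_97
O = None        # the special "point at infinity", the group identity

def ec_add(p1, p2):
    """Add two curve points using the standard chord/tangent rules."""
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                      # inverse points sum to O
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P)  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P)         # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

G = (0, 10)     # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)
```

Repeatedly adding G to itself gives the scalar multiplication on which the ECDLP rests: given G and k*G, recover k.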

Alfred took a defensive stance on ECC. Has the Integer Factorization Problem (IFP) been intensely studied? Yes, but Gauss could not even perform modular exponentiation, which limited how far he could go. Moreover, the Number Field Sieve (NFS) would have been useless to mathematicians of that era, since computers did not exist. The IFP has really been studied for only the last 20 years. The IFP is "sexier" than the DLP, but in Alfred's opinion it is not a better basis than the DLP. None of these systems is provably secure; they all make heavy-duty assumptions. ECC assumes the difficulty of an elliptic curve analog of the DLP (the ECDLP). There exist good breaks on specific instances of ECC, but no sub-exponential algorithm has been found for the general case.

Although the ECDLP was proposed only in 1985, the DLP and ECDLP abstractly concern the same problem -- just over different arithmetic structures. Many results on the DLP carry over to elliptic curves, so elliptic curves have effectively been studied about as much as the DLP. IFP-based systems appear more vulnerable to software attacks; for instance, the RSA-129 effort involved several hosts spread over the Internet. On the other hand, DLP-based cryptosystems are more vulnerable to hardware attacks (according to Wiener).

An audience member asked about patent issues with ECC. With RSA, patent issues come into play, but for ECC in general no patents exist (just on certain implementations), so with ECC you can avoid patents. However, government standards offer liability protection, and some companies prefer patented technology for liability and licensing reasons.

Currently there exist no sub-exponential attacks on ECC. However, cryptographers disagree on whether to assume no sub-exponential attack exists simply because no one has found one. Many cryptographers find the security comparisons between ECC and DLP hard to justify. Since describing Alfred's diagrams in words hardly does them justice, I suggest reading his book, "Elliptic Curve Public Key Cryptosystems," or the paper, "Elliptic Curve DSA (ECDSA): An Enhanced DSA." See for more information.

Invited Talk: Factoring, Facts and Fables Arjen K. Lenstra, Citibank, N.A. Summary by Kevin Fu

This talk offered a historical perspective of factoring and security assumptions. The world's foremost expert on factorization, Arjen Lenstra became widely known when his team cracked the RSA-129 challenge at Bellcore. Arjen currently works for Citibank and has fun as president of Digicrime. Arjen emphasized that this talk has no relation whatsoever to his employer. He also notes that this is his first time at a USENIX Security Symposium and that "you can't expect much else from a banker."

Factoring is a simple problem to state: given an odd non-prime n, find a non-trivial factor of n. The problem of testing primality, by contrast, is easy: in theory it runs in random polynomial time, while in practice it runs in deterministic polynomial time. However, people believe that factoring large numbers (with no small factors) is difficult. This is a religion, and there is no evidence to show it is true. Everyone seems to agree, but "I have no idea why."
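The "easy" direction can be illustrated with the Miller-Rabin test, a standard random polynomial-time primality test (a generic sketch, not anything Arjen presented):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: a random polynomial-time primality test.
    A composite n survives all rounds with probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Note the asymmetry Arjen points out: this tells you *that* an n is composite without exhibiting any factor of it.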

According to the recently deceased James Ellis, the British government had trouble maintaining lots of keys for the armed forces, so it sought a system that required no prior key. This account of Non-Secret Encryption (NSE) exists in a 1970 CESG report by J. H. Ellis, "The Possibility of Secure Non-Secret Digital Encryption." Then, in November of 1973, C. C. Cocks wrote "A Note on Non-Secret Encryption" in a CESG report. The algorithm proposed by Cocks (CCC) looks very similar to RSA; interestingly enough, CCC did not consider signature use. Of course, Diffie-Hellman appeared in 1976 and RSA in 1977. Another CESG scheme is based on the DLP in a ring rather than a multiplicative finite field; this report came months before Diffie-Hellman. Arjen asked: Were public key cryptosystems (PKCS), RSA, and DH first discovered in the UK? For dramatic effect, he left the rest of the slide empty.

Security of "accepted" PKCS rests on the supposed difficulty of factoring or taking discrete logs. Do we have non-trivial lower bounds on the difficulty of factoring or the DLP? No, except for some special cases of no practical use. There do exist non-polynomial upper bounds from algorithms such as the Quadratic Sieve (QS) of 1982 and the Number Field Sieve (NFS) of 1989. Conclusion: the perceived security of PKCS is based on "our credulity and mathematical incompetence."

Arjen explained that there are plenty of people talking about factoring who have never factored anything themselves. For instance, in his 1976 paper "How to factor a number," Richard K. Guy doubted that "anyone can factor 80 digits without special form in the present century." Martin Gardner wrote of "a new kind of cipher that would take millions of years to break." In Arjen's opinion, a laptop computer could soon factor an 80-digit number in a single day.

To argue his point, Arjen extrapolated current factoring capabilities. In 1994, the QS factored the RSA-129 modulus; this required 5000 MIPS years for stage 1 (sieving) and two days on a 16K MasPar for stage 2 (the matrix). Then in 1996, the NFS factored a 130-digit number in fewer than 700 MIPS years for stage 1 (68 hours and 700MB). However, stage 2 required much more computation time, even on a Cray C-90. Extrapolating these figures, Arjen believes factoring a 512-bit number with the QS would require 500,000 MIPS years for sieving and four days (and 1GB of space) on a Cray C-90 for the matrix. Substituting the NFS, sieving would take 20,000 MIPS years, and matrix computations would take three months (and 4GB of space). Therefore, 512-bit moduli are not long enough for current technology. Factoring 1024-bit moduli, on the other hand, seems hopeless: just to sieve, the QS would require 10^15 MIPS years, while the NFS would take 10^11 MIPS years. Arjen concludes that 512-bit QS factorization is feasible, 512-bit NFS factorization is hardly feasible, and 1024-bit factorization is hopeless.
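The standard heuristic run-time formulas behind such extrapolations can be sketched as follows. This is a generic illustration of the QS and NFS asymptotics, not Arjen's actual calculation; the anchoring data point (700 MIPS years for the 432-bit RSA-130 sieving) is taken from the figures above.

```python
import math

def l_qs(bits: int) -> float:
    """Heuristic QS cost: exp(sqrt(ln N * ln ln N)) for an N of `bits` bits."""
    ln_n = bits * math.log(2)
    return math.exp(math.sqrt(ln_n * math.log(ln_n)))

def l_nfs(bits: int) -> float:
    """Heuristic NFS cost: exp((64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3))."""
    ln_n = bits * math.log(2)
    c = (64.0 / 9.0) ** (1.0 / 3.0)
    return math.exp(c * ln_n ** (1.0 / 3.0) * math.log(ln_n) ** (2.0 / 3.0))

def mips_years_nfs(bits: int) -> float:
    """Scale the NFS curve through the known RSA-130 data point."""
    return 700.0 * l_nfs(bits) / l_nfs(432)
```

Plugging in 512 and 1024 bits reproduces the shape of Arjen's argument: both costs grow sub-exponentially yet still explosively with modulus size, and the NFS overtakes the QS as numbers grow.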

Looking to the future, Arjen addressed how faster processors and new theory affect factoring. Speeding up a processor will not significantly improve current sieving techniques: unless memory speeds increase, faster processors will not help. As for new theoretical insights, Arjen says, "Anything may happen."

Does difficulty imply security? Yes, if the entire production process is closely watched by people who understand all the issues involved. But many such people are either "laid off or promoted to security jobs." How, then, can you avoid "creative" products? You could use trusted software (but does it exist?). Reading and understanding lots of source code is impractical; trusting employees to write all your own code is unrealistic. You could select the right vendor, but how? Combining products from different vendors is expensive, slow, and may not work. These problems apply to everything (operating systems, compilers, etc.). Some experts say 160-bit elliptic curve cryptosystems (ECC) offer security comparable to 1024-bit RSA; Arjen believes breaking 160-bit ECC is merely very difficult, not impossible.

Arjen summarized that proving the difficulty of the IFP or DLP would not change anything, since the hardness of the IFP or DLP does not imply the hardness of systems such as RSA. On the other hand, fast algorithms to solve the IFP and DLP would topple most of the foundations of modern public key cryptography. Finally, Arjen concluded by asking, "Why won't business people leave the Internet alone?!"

An audience member asked what to do if one were to find a fast factoring algorithm. Arjen offered two suggestions. First, the discoverer should make the finding extremely public and publish the work as soon as possible -- in order to save his own skin. As an alternative, Arjen told the audience, "Just tell me about it." Some confusion erupted over key sizes. Do not confuse bits with digits: for instance, RSA-129 (digits) = 429 bits.
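The digits-to-bits conversion is just a change of logarithm base; as a quick sketch:

```python
import math

def digits_to_bits(digits: int) -> int:
    """Approximate bit length of a number with the given decimal digit count:
    each decimal digit carries log2(10) ~ 3.32 bits."""
    return math.ceil(digits * math.log2(10))

print(digits_to_bits(129))   # 429 -- matches RSA-129's 429 bits
print(digits_to_bits(154))   # 512 -- a "512-bit" modulus is ~154 digits
```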