Subject: Electronic CIPHER, Issue 34, November 03, 1999

 _/_/_/_/  _/_/_/  _/_/_/_/  _/    _/  _/_/_/_/  _/_/_/_/
_/           _/    _/    _/  _/    _/  _/        _/    _/
_/           _/    _/_/_/_/  _/_/_/_/  _/_/      _/_/_/_/
_/           _/    _/        _/    _/  _/        _/  _/
 _/_/_/_/  _/_/_/  _/        _/    _/  _/_/_/_/  _/    _/

====================================================================
Newsletter of the IEEE Computer Society's TC on Security and Privacy

Electronic Issue 34                               November 03, 1999

Paul Syverson, Editor                Bob Bruen, Book Review Editor
Hilarie Orman, Assoc. Editor         Mary Ellen Zurko, Assoc. Editor
                 Anish Mathuria, Reader's Guide
====================================================================
http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/

Contents: [4746 lines total]

Letter from the Editor

Thirteenth IEEE Computer Security Foundations Workshop
 o Initial Call for Papers

Security and Privacy News Briefs:
 o LISTWATCH: Items from security-related lists, by Mary Ellen Zurko
   Highlights are from risks, dcsb, cypherpunks, tbtf, and CRYPTO-GRAM.

Commentary and Opinion:
Book Reviews by Bob Bruen
 o Code Breaking: A History and Exploration by Rudolf Kippenhahn
 o The Code Book: The Evolution of Secrecy from Mary, Queen of Scots
   to Quantum Cryptography by Simon Singh
 o Cryptography and Network Security: Principles and Practice,
   2nd edition, by William Stallings

Conference Reports:
 o CRYPTO '99 by Richard Graveman
 o USENIX Security Symposium '99 by Kevin E. Fu
 o Selected Areas in Cryptography (SAC '99) by Serge Mister
 o New Security Paradigms Workshop (NSPW '99) by Mary Ellen Zurko

New Interesting Links on the Web: IFIP WG 1.7 Theoretical Foundations
of Security, ACSA Information Security Bookshelf, Electronic ID Fraud
Newsletter.

New Reports Available via FTP and WWW: UK ITSEC report on MULTOS

Who's Where: recent address changes

Calls for Papers

Reader's guide to recent security and privacy literature
 o Conference Papers
 o Journal and Newsletter articles
 o Books

Calendar

List of Computer Security Academic Positions, maintained by
Cynthia Irvine

Publications for Sale -- S&P and CSFW proceedings available

TC officers

Information for Subscribers and Contributors

____________________________________________________________________

Letter from the Editor
____________________________________________________________________

Dear Readers,

We are pleased to bring you another issue of Cipher. This issue contains reports on four conferences, three book reviews, news, recent publications, and more.

Papers for the official symposium of the Technical Committee on Security and Privacy have just been submitted. By now everyone should know that the symposium will be in its traditional location and hence retain its unofficial name---the Oakland Conference. To anyone who did not get a submission together in time, let me first say "Thank You": submissions are way up, and the program committee has its work cut out for it. Let me next say that, if your paper is appropriately foundational, you can submit it to the other TCSP venue, the Computer Security Foundations Workshop. CSFW will be at a first-time venue, although certainly a traditional site in its own right: Cambridge University in the UK. The initial call for papers is contained below.

Thanks as always to everyone who contributed. And, as always, if you have any contributions appropriate for the Cipher readership, please send them to us at cipher@itd.nrl.navy.mil. We could use someone to write up the ACM Computer and Communications Security Conference (at which many TC members find themselves right now: take some good notes, folks).
Volunteers to write up other meetings, such as next week's IETF meeting or next month's ACSAC conference, would also be welcome. Finally, in case we don't get another issue out before the new millennium, I want to wish you all a very happy and bug-free new year.

Paul Syverson
Editor, Cipher

____________________________________________________________________

Thirteenth IEEE Computer Security Foundations Workshop
____________________________________________________________________

                         Call For Papers
          13th IEEE Computer Security Foundations Workshop
                         July 3-5, 2000
                       Cambridge, England

     Sponsored by the Technical Committee on Security and Privacy
                  of the IEEE Computer Society

This workshop series brings together researchers in computer science to examine foundational issues in computer security. For background information about the workshop, see the CSFW home page. This year the workshop will be in Cambridge, UK.

We are interested both in new results in theories of computer security and also in more exploratory presentations that examine open questions and raise fundamental concerns about existing theories. Both papers and panel proposals are welcome. Possible topics include, but are not limited to:

   access control
   authentication
   data and system integrity
   database security
   network security
   distributed systems security
   anonymity
   privacy
   security for mobile computing
   security protocols
   security models
   formal methods for security
   information flow
   executable content

The proceedings are published by the IEEE Computer Society and will be available at the workshop. Selected papers will be invited for submission to the Journal of Computer Security.

Instructions for Participants

Submission is open to anyone. Workshop attendance is limited to about 40 participants. Submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal or a conference with a proceedings. Papers should be at most 20 pages excluding the bibliography and well-marked appendices (using 11-point font, single column format, and reasonable margins on 8.5"x11" paper), and at most 25 pages total. Committee members are not required to read the appendices, and so the paper should be intelligible without them. Proposals for panels should be no longer than five pages in length and should include possible panelists and an indication of which of those panelists have confirmed participation.

To submit a paper, send to syverson@itd.nrl.navy.mil a plain ASCII text email containing the title and abstract of your paper, the authors' names, email and postal addresses, phone and fax numbers, and identification of the contact author. To the same message, attach your submission (as a MIME attachment) in PDF or portable postscript format. Do not send files formatted for word processing packages (e.g., Microsoft Word or WordPerfect files). Submissions received after the submission deadline or failing to conform to the guidelines above risk rejection without consideration of their merits. Where possible all further communications to authors will be via email. If for some reason you cannot conform to these submission guidelines, please contact the program chair at syverson@itd.nrl.navy.mil.
Important Dates

   Submission deadline:          January 31, 2000
   Notification of acceptance:   March 13, 2000
   Camera-ready papers:          April 10, 2000

Program Committee
-----------------
 * Tuomas Aura, Helsinki University of Technology, Finland
 * Drew Dean, Xerox PARC, USA
 * Joan Feigenbaum, AT&T Labs--Research, USA
 * Simon Foley, University College Cork, Ireland
 * Matt Franklin, Xerox PARC, USA
 * Dieter Gollmann, Microsoft Research, UK
 * Roberto Gorrieri, University of Bologna, Italy
 * Pat Lincoln, SRI International, USA
 * Nancy Lynch, Massachusetts Institute of Technology, USA
 * Cathy Meadows, Naval Research Laboratory, USA
 * Sylvan Pinsky, National Security Agency, USA
 * Mike Reiter, Bell Labs, USA
 * Steve Schneider, Royal Holloway, University of London, UK
 * Geoff Smith, Florida International University, USA
 * Paul Syverson (chair), Naval Research Laboratory, USA

Workshop Location
-----------------

The workshop will be held at the University of Cambridge, UK. Cambridge is a world-renowned collegiate university about 100 kilometres (60 miles) north of London. Both the city and the university are small by modern standards; about 130,000 people live in Cambridge, and the university has about 9000 undergraduate and 6000 postgraduate students. Some name dropping: a remarkable number of eminent people have worked at Cambridge, including Isaac Newton, Charles Babbage, James Clerk Maxwell, Ernest Rutherford, J. J. Thomson, James Watson and Francis Crick, J. M. Keynes, and Stephen Hawking. Sixty-four people working at Cambridge have won Nobel prizes. The Cambridge colleges offer mediaeval architecture and a quiet, contemplative environment. King's College is particularly notable.

The accommodation and meals for the workshop will be in Pembroke College, founded in 1347. Accommodation will be in student rooms in a modern college block, just two years old. Meals will be in the Old Library, which was the College chapel before Christopher Wren designed the existing chapel, finished in 1665. The workshop meetings will be in the modern presentation room of Microsoft Research Limited, a five-minute walk from the college.

The countryside north of Cambridge is mostly the fens (swamps that were drained about 1750). In the fens, cities and towns are invariably on top of occasional small hills to keep the feet of their inhabitants dry. One such city, Ely on the Isle of Ely, includes the historic, enormous and elegant Ely Cathedral, started in 1108 on the remains of an earlier Christian shrine. Nearby is the town of Newmarket, the centre of the horseracing industry in the UK.

There is excellent train service to Cambridge from London's King's Cross. The schedule is available on the web. Coaches operate frequently from London and from the airports.

For further information contact:

General Chair:
  Prof. E. Stewart Lee, Director, CCSR
  University of Cambridge
  10 Downing Street
  Cambridge CB2 3DS
  United Kingdom
  +44 1223 740101
  E.S.Lee@ccsr.cam.ac.uk

Program Chair:
  Paul Syverson
  Naval Research Laboratory
  Code 5543
  Washington, DC 20375
  USA
  +1 202-404-7931
  syverson@itd.nrl.navy.mil

Publications Chair:
  Joshua Guttman
  The MITRE Corporation
  202 Burlington Road
  Bedford, MA 01730-1420
  USA
  +1 781-271-2654
  guttman@mitre.org

More online information at .
____________________________________________________________________

SECURITY AND PRIVACY NEWS BRIEFS
____________________________________________________________________
_______________________________________________________________________

LISTWATCH: items from security-related mailing lists (October 29, 1999)
by Mary Ellen Zurko (mzurko@iris.com)
_______________________________________________________________________

This issue's highlights are from cypherpunks, dcsb, crypto-gram, tbtf, and risks.

There's been a lot of discussion on cypherpunks about the file encryption in Mac OS 9. It's 56 bits, though they say they're working towards having a domestic version with a longer key length. It uses an Apple-patented algorithm called Apple Secure Compression, aka ComCryption, which has not been made public.

Adoption of W3C's privacy work (P3P) has been stalled lately due to Intermind's claim that its patent covers P3P. The results of an analysis by W3C's patent lawyer are in . The analysis states that Intermind's technology requires OO communications objects as control structures, which are not required by P3P. This level of detail does not come across in the claims of Intermind's patent; it's covered in the front matter.

Source code for the encryption algorithm for DVD movies (CSS) was released through an anonymous remailer. Beyond having just a 40-bit key, there is a claim that it is vulnerable to a 2^16 attack with as few as 6 bytes of known plaintext.

A pretty thorough-looking crypto law page is at .

The IETF is a topic of conversation in security circles again. This time they plan on taking up the issue of wiretap standards in their telephony-over-IP work, at the plenary session during their meetings the week of November 8. The key questions are: "should the IETF develop new protocols or modify existing protocols to support mechanisms whose primary purpose is to support wiretapping or other law enforcement activities" and "what should the IETF's position be on informational documents that explain how to perform message or data-stream interception without protocol modifications". An FBI spokesperson said it would be "wise and prudent". Representative Bob Barr (R-Georgia) says it would be "dangerous". Jeff Schiller, head of the security area, says "We should not be building surveillance technology into standards. Law enforcement was not supposed to be easy. Where it is easy, it's called a police state."

October 21 was "jam Echelon" day. I doubt most people noticed (and that includes Echelon). The idea was to lard email that day with terms likely to mark it as suspect to Echelon. However, words like "kill" are found in many computer contexts, and the use of "Active X" and "Bubba" left me mystified. And just what is CANSLO? This idea has been discussed several times on cypherpunks. The Emacs command "spook" will accomplish the same thing. More sophisticated suggestions included crafting messages with the same n-gram distribution as regular English text (perhaps using a Mad Libs style template), and not juxtaposing keywords that don't fit together (like Bill Gates and bio warfare).

A5/2, the "weaker" of the GSM voice privacy ciphers, along with A5/1, is now available for download by North American citizens only at http://cryptography.org/cgi-bin/crypto.cgi/libraries/a5_1_2.zip. The poster (Lucky Green) says "Should this code, against all precautions and despite our strongest hopes, become available on a site outside North America, I would like to hear about it."
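(As an aside for the technically curious: the n-gram suggestion in the Echelon item above amounts to generating cover text from a small Markov model of English. A toy sketch in Python, for illustration only; the corpus file name is a placeholder, and this is not any of the code posted to the lists:)

    import random
    from collections import defaultdict

    def build_model(text, n=2):
        # Map each n-character window to the letters that follow it.
        model = defaultdict(list)
        for i in range(len(text) - n):
            model[text[i:i+n]].append(text[i+n])
        return model

    def generate(model, seed, length=300):
        # Random walk over the model; the output mimics the corpus's
        # n-gram distribution without meaning anything at all.
        out = seed
        for _ in range(length):
            followers = model.get(out[-len(seed):])
            if not followers:
                break
            out += random.choice(followers)
        return out

    corpus = open("english_sample.txt").read()  # placeholder corpus
    print(generate(build_model(corpus), corpus[:2]))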
In an article about subpoenas to web sites being on the upswing: web sites that have been subpoenaed for user information aren't legally able to resist. They can warn customers, however -- and some do, but many don't. Silicon Investor, a popular stock site, says it simply doesn't have the resources to notify all the users whose information it is required to provide. "It's just not practical for us," said Ethan Caldwell, general counsel for the site's parent company, Go2Net, in the WSJ article. "We would need an entire subpoena staff to handle something like that." He said the site gets about one subpoena every day, and that they are able to notify users about half the time that their information is being given out. That does sound like a lot to me.

The FTC announced its rule (effective April 2000) for parental consent on commercial web sites aimed at children. They must have a privacy policy and obtain verifiable parental consent. Methods like postal mail or facsimile, use of a credit card or toll-free telephone number, digital signature, or e-mail accompanied by a PIN or password will be required only if the information is to be shared externally. For internal information, for the next two years, operators will be permitted to use e-mail with a confirmation solicited via email, regular mail, or phone.

California has authorized Verisign to issue digital signature certificates for use in communications involving state agencies.

The government asked the 9th Circuit en banc panel for further briefing and a delay in the oral argument date (currently set for December 16, 1999) based upon the fact that it will issue revised encryption regulations on December 15, 1999. They state that the changes may or may not affect the treatment of crypto source code.

The US House of Representatives is considering legislation supporting digital signatures. The White House and most Democrats support a bill that would make digital signatures legal only in those states that don't already have laws recognizing the validity of electronic contracts. Republican leaders want legislation that would cover all states and eliminate some of the paper-record-keeping and notification requirements that some states impose on financial institutions and insurance companies. Discussion on cypherpunks covers the concern that this is just the first step in mandating not only a particular certificate format, but particular certificate use policies.

Jane's is sponsoring a conference called CyberTerrorism: The Risks and Realities on 16-17 November, 1999 in Washington D.C., which includes participation in a mock cyberterrorism attack wargame. Sounds like fun to me!

Maclen Marvit from Disappearing Inc. spoke at a Cypherpunks meeting. Its product lets two or more willing, cooperative people have an email conversation with reasonable certainty that there won't be any persistent records kept for more than N days by any intervening servers, where N is set by the participants. All the other issues you can think of, it doesn't address. It sounds like you have to use a separate encryption package (like SSL or PGP) to ensure network protection.

Staples mailed a $20 coupon to some select customers, in the form of a five-digit code that could be used to order from its web site. Someone posted it on the Internet. By the time Staples discovered the problem, some unauthorized orders had been shipped.

An e-commerce campaign group in the UK (http://www.stand.org.uk) sent an encrypted message to the home secretary which contained the confession to a crime.
Then they threw away the keys, and claimed that if the legislation currently under consideration were to become law and the home secretary could not prove to the police that he did not have the keys, he could face two years in prison.

Karsten Sohr at the University of Marburg discovered another security flaw in Microsoft's Java Virtual Machine. A bug in the bytecode verifier allows illegal type casting. Dirk Balfanz and Ed Felten, at Princeton University, have constructed a demonstration applet that exploits this flaw to delete a file.

Yet another industry alliance, the Trusted Computing Platform Alliance (TCPA), is attempting to promote a trusted PC platform through standards. It includes Compaq, Hewlett-Packard, IBM, Intel, and Microsoft.

Wondering what to put in your privacy policy? You can use the "Privacy Policy Wizard" on the TRUSTe site.

The G8 nations are pushing for a multi-national convention to force ISPs to "freeze and preserve" data that an investigator suspects is criminal while a court order is obtained to gain access for evaluation.

There was a fair amount of discussion of Stefan Brands's book, "Rethinking Public Key Infrastructures and Digital Certificates - Building in Privacy". His methods, like Chaum's, are patented. The book has good coverage of revocation, smart cards, and privacy.

Three articles from TBTF on US crypto export rules, cracking the ECC challenge, and the key called NSA found in CAPI:

____________

..US cryptography export rules eased

  Declare defeat, but stay in

On 16 September the administration announced changes in the US cryptography export regime. Like numerous other changes in the past, this one was presented as a relaxation of the rules that will benefit consumers. It's far from clear that this is the case.

Once the new rules go into effect in December, after a one-time review any retail product featuring encryption of any strength will be exportable to individuals and companies -- but not to governments -- in all but 7 countries worldwide. This relaxation is tied to funding for a new FBI research lab and to a disturbing loosening of the rules of evidence in court cases that involve encryption.

The Electronic Privacy Information Center links the White House announcement, commentary, and analysis from this page [13]. EPIC remains agnostic on the proposals. General counsel David Sobel said, "It appears that the FBI and large computer companies have reached an agreement on encryption, but that is not necessarily in the interest of the average computer user."

The legislative vehicle for these new initiatives is the selfsame Cyberspace Electronic Security Act that, in an earlier draft, would have allowed secret police break-ins to alter computer equipment [14]. That provision is gone now; it was probably a trial balloon anyway.

A week after the latest proposals were announced, EPIC's Marc Rotenberg found himself sharing a conference panel with William Reinsch, the administration official tasked with carrying out US crypto export policy. Rotenberg later described his address to the politech mailing list:

> I opened by quoting Senator Aiken's line regarding Vietnam
> that the US should "declare victory and then get out." I
> suggested that with the crypto issue, the Administration
> has decided to "declare defeat, but stay in."
[13] http://www.epic.org/crypto/announce_9_16.html
[14] http://tbtf.com/archive/1999-08-23.html#s01

____________

..97 >> 512

  International group breaks the seventh Certicom challenge

Irish mathematician Robert Harley announced [15] that his team had cracked the seventh and most difficult Certicom ECC Challenge problem to date. Certicom has confirmed the correct result [16]. So far seven Certicom exercises and challenges have been cracked since December 1997; Harley's growing team has broken each one of them.

The solution required 16,000 MIPS-years -- twice the effort of the recently broken, 512-bit RSA-155 [17]. The team struck it lucky, finding the solution in less than a third of the expected time. The distributed computation was run by 195 volunteers, on a total of 740 computers, over 40 days.

While this result strengthens the case of those who have contended, on theoretical grounds, that a crypto key based on ECDL (Elliptic Curve Discrete Logarithms) is inherently harder to break than an RSA key, it does not prove that assertion. Rather, it indicates that at the current state of the art, the best mathematical tools and algorithms known for cracking ECDL take longer to run than the best tools known for cracking RSA.

On 12 September I posted as a Tasty Bit of the Day Harley's call for more machines to throw at the problem; others, including TechDirt, publicized it as well. This graph [18], adapted from Harley's site, rather dramatically shows the effect of the call for participants.

[15] http://cristal.inria.fr/~harley/ecdl/
[16] http://www.certicom.com/press/99/sept2899.htm
[17] archive/1999-08-30.html#s01
[18] http://tbtf.com/pics/ecdl-prog.gif

____________

..Sorting out the Microsoft _NSAkey flap

  Did Microsoft build a back door into Windows for the NSA? I'm doubting it

By now you've heard all about the extra signing key found in Microsoft's CryptoAPI in all Win95, 98, NT, and 2000 systems. Here's the posting by Andrew Fernandes that started all the fuss [6]. The BBC has an annotated screen shot [7] of a debugger session showing the variable named, portentously, _NSAkey. Microsoft's official response [8] to the flap makes a whole lot more sense than assuming that the National Security Agency had somehow weakened Microsoft's crypto and tagged the fix "_NSAkey." To put a few authoritative nails in this coffin, read the thoughts of Russ Cooper [9], proprietor of NTBugTraq, and of the noted cryptographer Bruce Schneier [10].

The investigations of Fernandes (building on work last year by Nicko van Someren and Adi Shamir) have publicized a way to disable crypto export control in Windows. Anyone outside the US can replace _NSAkey with their own key, use that key to sign a crypto module of any strength, and then use that strong crypto under the auspices of Windows. But note that this impotence of Microsoft's CryptoAPI to control what crypto gets run is not new news. Bruce Schneier pointed out this Windows weakness in his CRYPTO-GRAM newsletter last April [11], before anybody discovered the name of the replaceable second key.

Over the weekend Brian Gladman posted a note [12] to the UK Crypto list demonstrating that the Microsoft CryptoAPI had been a serious political issue in Britain 3-1/2 years ago. He worked with British authorities to make sure that Microsoft UK was able to sign cryptographic modules separately from the US authority.

The _NSAkey fiasco raises four separate issues, and little of the commentary I've read makes much effort to disentangle them.
The issues are:

 1. Did Microsoft collude with the NSA? (Answer: who knows? Probably not.)

 2. Will Microsoft's actions allow the NSA to penetrate the computers of Windows users? (Answer: almost certainly not.)

 3. Did the US government, represented by the NSA, work with Microsoft to assure that only weak crypto is exportable in the Windows framework? (Answer: absolutely.)

 4. Does Microsoft's CryptoAPI implementation allow anyone to circumvent the restrictions imposed by US crypto export rules? (Answer: yes, demonstrably.)

What will be the fallout of this tangle? Even more people will be made aware that Microsoft security is porous. Even more people will learn of the utter inability of US controls to stop the export of technology which truly escaped a decade ago. And even fewer people will believe what Microsoft says, even though in the matter of the _NSAkey the company is probably telling the truth. A few years back Nicholas Petreley, the IDG pundit, summed up the common perception this way: If you threw Microsoft into a room with truth, you'd risk a matter / anti-matter explosion.

[6] http://www.cryptonym.com/hottopics/msft-nsa.html
[7] http://news.bbc.co.uk/olmedia/435000/images/_437967_nsa300.gif
[8] http://www.microsoft.com/security/bulletins/backdoor.asp
[9] http://ntbugtraq.ntadvice.com/_nsakey.asp
[10] http://www.deja.com/getdoc.xp?AN=520853963
[11] http://www.counterpane.com/crypto-gram-9904.html#certificates
[12] http://jya.com/msnsa-not.htm

____________

Finally, the Open Group's Director of the Security program is leaving. If you're interested, contact m.lambert@opengroup.org.

____________________________________________________________________

COMMENTARY AND OPINION
____________________________________________________________________
____________________________________________________________________

Code Breaking: A History and Exploration by Rudolf Kippenhahn. The Overlook Press 1999. 283 pages. Index, bibliography, four short appendices. $29.95. ISBN 0-87951-919-3. LoC Z103.K56

Reviewed by Robert Bruen, Cipher Book Review Editor, bruen@mit.edu
____________________________________________________________________

The translation from the German by Ewald Osers to the English edition involved more work than usual because the many examples of code breaking required English-language examples. Breaking codes has certain underlying techniques that apply to all situations, but there are significant differences, for example, in the frequency of letters within each language. The extra work has paid off quite well; the examples work wonderfully.

Code Breaking, as the subtitle suggests, is both a history of code breaking and an exploration of how it is done. Although much shorter than the bible in the field (David Kahn's The Codebreakers, 1181 pages), it reads smoothly from start to finish. Modern topics, such as PGP (Pretty Good Privacy) and the use of computers, are integrated and not just added on at the end, with the result that the transition from the Enigma to prime numbers does not seem contrived. Not surprisingly, the majority of the book is spent around World War I and World War II, but there is plenty of material on Edgar Allan Poe, Sherlock Holmes, Caesar, and early European history. The impact of various players throughout history is brought out with the appropriate importance attached, such as the execution of Mary Stuart by Queen Elizabeth after the decipherment of her encoded message that proved her intentions.
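(For readers who want to experiment along with the book, here is a minimal Python sketch of the Caesar cipher and the letter-frequency count that breaks any monoalphabetic substitution. The sketch is an illustration of the general technique, not code from the book.)

    import string
    from collections import Counter

    def caesar(text, shift):
        # Monoalphabetic substitution: rotate every letter by a fixed shift.
        rotated = string.ascii_uppercase[shift:] + string.ascii_uppercase[:shift]
        return text.upper().translate(
            str.maketrans(string.ascii_uppercase, rotated))

    def letter_frequencies(ciphertext):
        # The code breaker's first step: tabulate letter frequencies and
        # compare them with the known distribution of the language.
        letters = [c for c in ciphertext.upper() if c.isalpha()]
        return {c: n / len(letters) for c, n in Counter(letters).items()}

    ct = caesar("ATTACK AT DAWN", 3)      # -> "DWWDFN DW GDZQ"
    print(ct, letter_frequencies(ct))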
Thomas Jefferson invented an encryption device that was used in a modified form by the US military as late as 1920. The explanations of how the various methods of encryption work are some of the best I have seen. The detail is presented in a step-by-step approach written for the nonprofessional, but without skipping over anything. Moreover, as the book progresses through history, the reader is given the modifications that result in new improvements. The progression from the monoalphabetic substitution of the Caesar cipher through polyalphabetic techniques and keys is handled in a well-connected style. The chapters on how to break these codes take the same step-by-step approach, and they serve as a good teaching model for anyone who wants to understand the basics of encryption and decryption.

The discussion of how DES works is limited when compared to the rest of the book. It seems that the book Cracking DES from O'Reilly was not taken into consideration when the DES chapter was prepared. There is some history of the development of DES that is interesting, but the difficulty of keeping up with current changes is apparent. A good explanation of how prime numbers work with public key encryption stands alone, without mention of the new algorithms or the search that is underway to find the successor to DES. I would suggest that a second edition ought to be in progress.

Code Breaking is recommended for those interested in the history of the field and for those who need a clear introduction to the breaking of codes. The math is simplified and the techniques well presented, preparing the novice for the more difficult modern world.

____________________________________________________________________

The Code Book: The Evolution of Secrecy from Mary, Queen of Scots to Quantum Cryptography by Simon Singh. Doubleday 1999. 402 pages. 10 appendices, glossary, index, bibliography, illustrations. $24.95. ISBN 0-385-49531-5. LoC Z103.S56

Reviewed by Robert Bruen, Cipher Book Review Editor, bruen@mit.edu
____________________________________________________________________

For many reasons, there is a real, growing interest in crypto. This latest book on the history of codes, ciphers, and crypto is a welcome addition. For most of this century developments were kept secret from the general public in spite of the critical roles played by cryptographers and cryptanalysts. Cryptology has been important in gaining and keeping power through both intrigue and war for millennia. Today, of course, the battle is being waged for individuals to wield the same power over personal communications, with the stakes as high as they ever were.

In the late 1500s, Queen Elizabeth was able to protect her throne from Mary, Queen of Scots because Mary's encryption was broken, revealing her plans to take the throne and thus influencing the course of British history. Who knows what would have happened if Mary had succeeded in gaining power. In World War II, the story of breaking the German Enigma is a fascinating one. The contribution of the Poles and the British to stopping the Nazis, in my opinion, made the difference in the outcome of the war.

A number of people were left out of history even though they invented techniques and methods (such as public-key cryptography), because they worked for government agencies that required secrecy. The Code Book helps to bring recognition to these people, some of whom will never know it because they have passed away. Others missed out on the economic benefit that would have been available if they had worked in the private sector.
We all owe them much. Singh does a masterful job telling the story through historical events as well as highlighting the developments in cryptology, marking the important breakthroughs for the code makers and the code breakers. It is easy to forget that at one time the Caesar cipher was considered secure.

One of my favorite detours in the book is a chapter on breaking languages and its relationship to code breaking. There are underlying similarities, explained through the decipherment of Egyptian hieroglyphics and Linear B, that make this chapter appropriate. Each of them is as cool a story as the breaking of the Enigma. The chapter on quantum cryptography, one of the few in crypto history books, provides an interesting look into our future. The struggle between code writers and code breakers is the dominant theme of the book throughout history. Now it appears that struggle will take a sharp turn into the Twilight Zone with the possibility of truly unbreakable crypto, although that is what people thought each time a new method was developed. The book has a series of ten challenges at the end to test your skills, with a monetary reward if you can beat them all.

The Code Book is well written and well researched, making for enjoyable reading. Some of the material is covered elsewhere and some is not, but the overall presentation stands on its own just fine. It is recommended as a book of history, as well as a book for learning how various ciphers work and how they can be broken, with clear, detailed explanations.

____________________________________________________________________

Cryptography and Network Security: Principles and Practice. 2nd edition by William Stallings. Prentice Hall. 1999. 569 pages. Appendix, glossary, bibliography, index. $73.00. ISBN 0-13-869017-0. LoC TK5105.59.S713

Reviewed by Robert Bruen, Cipher Book Review Editor, bruen@mit.edu
____________________________________________________________________

This textbook is a second edition of Network and Internetwork Security: Principles and Practice (1995), which was reviewed here just about three years ago. This edition is a substantial update of the first edition, with about 100 additional pages. The title change reflects the change in emphasis of the material presented. The bibliography has been increased in size with publications since the first edition, and a few references have been removed, for example those related to LUC, which has been dropped as a topic in the book. In my earlier review, I noted the inclusion of LUC as an exception to most books, but also noted that the book was a good introduction to it. Also dropped as a topic is SKIPJACK. Neither will be missed.

On the plus side, a welcome addition is the Introduction to Number Theory chapter covering the expected topics on prime numbers, modular arithmetic, tests for primality, etc., with lots of examples. Some comfort with mathematics is assumed, but since the book is aimed at college students it seems to be at the correct level. This chapter plugs a hole in the first edition. The new chapter on firewalls helps to make the book more comprehensive. Further additions come from changes in the field, such as IPSec and web security, citing the best sources in each. Simplified DES (developed by Professor E. Schaefer), an educational tool for understanding the principles of the DES algorithm, has been included as new introductory material for DES. It has similar properties and structure, but uses fewer bits.
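(To give a flavor of what such a teaching cipher looks like, here is a toy Feistel network in Python. This is an illustration of the general structure only, not Schaefer's actual S-DES; the round function and subkeys are made up. Note how decryption is just the same rounds with the subkeys applied in reverse.)

    def f(half, subkey):
        # Toy round function standing in for S-DES's expansion and
        # S-boxes; a Feistel network inverts no matter what f is.
        return (half * 31 + subkey) & 0xFF

    def encrypt(block, subkeys):
        # Split a 16-bit block into two 8-bit halves and mix per round.
        left, right = block >> 8, block & 0xFF
        for k in subkeys:
            left, right = right, left ^ f(right, k)
        return (left << 8) | right

    def decrypt(block, subkeys):
        # Undo the rounds in reverse order.
        left, right = block >> 8, block & 0xFF
        for k in reversed(subkeys):
            left, right = right ^ f(left, k), left
        return (left << 8) | right

    subkeys = [0x1A, 0x2B, 0x3C, 0x4D]
    assert decrypt(encrypt(0xBEEF, subkeys), subkeys) == 0xBEEF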
The DES presentation is similar to the first edition, but this chapter adds explanations of more algorithms: Blowfish, CAST-128, and RC5. The first edition had a chapter on SNMP which is not included in this edition, but of course Stallings has other textbooks just for SNMP which are far more complete. Other noticeable improvements are the problem sets at the end of the chapters, which have been updated with new problems and modifications of older ones. The old blue-ink diagrams have been replaced by black ink, with improvements for clarity. The mathematical proofs have been made more readable by simple things such as white space and indentation. I was also happy to see the addition of a section on elliptic curve cryptography, an important but somewhat neglected area. There are some papers and a couple of books on ECC, but it does not get the same level of attention as other areas.

The book was the winner of the 1999 Texty Award for the best Computer Science and Engineering textbook, awarded by the Text and Academic Authors Association, Inc. This is an excellent textbook, comprehensive with clear explanations and now up to date. I expect that the third edition will include material on the Advanced Encryption Standard that is under review at this time. The competition among the algorithms offers many lessons to us. I liked the first edition and I like this edition. It is a little pricey, but Cisco was giving away copies for free for the asking. If you missed that offer, then you will have to buy a copy.

______________________________________________________________________

Conference Reports
______________________________________________________________________
______________________________________________________________________

CRYPTO '99
Santa Barbara, CA, August 15-19, 1999
by Richard F. Graveman (Telcordia Technologies, Morristown, NJ, USA)
______________________________________________________________________

Crypto '99 is one of two annual conferences sponsored by the International Association for Cryptologic Research. (In 2000, there will be three.) There were over 500 attendees from all over the world. Thirty-eight of 167 submitted papers were accepted, and there were two invited talks as well as the traditional rump session. The agenda and additional information, in particular pointers to more details on some of the rump session talks, are available from the IACR Web site, http://www.iacr.org. Don Beaver was General Chair and Michael Wiener was Program Chair. The proceedings are available as Advances in Cryptology - Crypto '99, Michael Wiener, ed., Springer-Verlag LNCS #1666, 1999.

Session 1: Public Key Cryptanalysis I, Chair: Bart Preneel

On the Security of RSA Padding, Jean-Sébastien Coron, David Naccache, Julien P. Stern

This paper presented a new forgery scenario for RSA signatures. Previous attacks illustrated the importance of the padding function mu(m) applied to a message m. Without padding, existential forgery is easy because exponentiation preserves multiplication homomorphically. Davida, De Jonge and Chaum, and Bleichenbacher previously exploited this property in their attacks. On the other hand, several schemes with proofs of security against such forgeries date back to Goldwasser, Micali, and Rivest in the mid-1980s. This new work started by considering the padding function as a black box. Signatures consist of pairs of the form (r, r^d). If a new mu(m) can be expressed as a product of previously seen r's, its signature can be forged.
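(The homomorphic property is easy to see with toy numbers; a minimal sketch, with parameters far too small to mean anything cryptographically:)

    # Textbook RSA, no padding: n = 55 = 5 * 11, and e*d = 3*27 = 81,
    # which is 1 mod lcm(4, 10) = 20, so d is a valid private exponent.
    n, e, d = 55, 3, 27

    def sign(m):
        return pow(m, d, n)

    m1, m2 = 6, 7
    forged = (sign(m1) * sign(m2)) % n   # the signer never saw m1*m2 = 42
    assert forged == sign(42)            # (m1*m2)^d = m1^d * m2^d mod n
    assert pow(forged, e, n) == 42       # and the forgery verifies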
The approach is to factor mu(m) and most of the r's and to find triples (m, a, b) such that (a mu(m) - b n) is smooth. Similarly to the way factoring works with the MPQS or NFS, one tries to find enough relations over a factor base to be able to factor a new mu(m). The authors examined ISO-9796-1 and came within one bit of a successful attack for certain types of messages, and more recently Coppersmith, Halevi, and Jutla completed the attack. For ISO-9796-2 and a modified version of Ecash, this attack is better than all previously known ones. For PKCS #1, version 2.0, this attack is not feasible.

Cryptanalysis of the HFE Public Key Cryptosystem by Relinearization, Aviad Kipnis, Adi Shamir

RSA is based on a single univariate equation X^e = C mod n. This has the advantage of being based on factoring and the disadvantage of being slow for large n. Multivariate schemes, on the other hand, are based on an NP-complete problem and are fast for small fields, but many such systems have been broken. HFE, presented by Patarin at Crypto '96, uses a small field F with q elements and an extension field K/F of degree n. Many multivariate equations over F correspond to a single univariate equation over K. The trapdoor is the method of hiding the univariate polynomial in the multivariate ones. For |F| = 2 and |K| = 2^128, the univariate polynomial P is represented as a system of multivariate polynomials over F. One can write G(x) = T(P(S(x))), where G looks random but can be determined, because S and T can be represented as univariate polynomials. Then T^-1(G(x)) = P(S(x)), and to solve for T one obtains a system of quadratic equations. Therefore, they developed a new relinearization method that yields a parametric linear expression after a change of variables. Then the second stage of the process yields a unique solution. Relinearization proved to be a new, powerful technique that may have additional applications beyond attacking HFE for a given degree. It should be pointed out, though, that Patarin's published challenges are of sufficiently high degree to make this particular attack computationally infeasible.

The Hardness of the Hidden Subset Sum Problem and its Cryptographic Implications, Phong Nguyen, Jacques Stern

The subset sum problem, first used in cryptography by Merkle and Hellman, is NP-complete. Lattice techniques can solve low-density subset sums, however. Against the hidden subset sum problem, which additionally hides the weights, exhaustive search is the only known attack. It works by precomputing pairs of the form (k, g^k mod p). They gave a lattice-based attack for the low-density case and a security proof for higher densities. Their process used an incomplete lattice in Z^m, a transformation, and lattice basis reduction to recover the hidden coefficients, after which the weights can be calculated. This attack works when the density is less than n/log_2 M, and it was verified experimentally. But when the density is greater, the output of the generator becomes statistically random in a provable sense. This gives a precise formulation of the security level of the system.

Session 2: Invited Lecture, Information-Theoretic Cryptography, Ueli Maurer

Information theory underlies many different areas of cryptography. It is based on probability theory and allows one to argue about security. An event on a discrete probability space X is a random subset of the points. A random variable is a partition of the space. The entropy of a random variable is a function that is 0 if the probability of the event is 1, is upper bounded by log_2|X|, and is maximal when the events are close to uniform in probability.
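(These boundary cases are easy to check numerically; a small sketch of the definition, for illustration only:)

    from math import log2

    def entropy(dist):
        # H(X) = -sum of p*log2(p); zero-probability events contribute 0.
        return -sum(p * log2(p) for p in dist if p > 0)

    print(entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0: certainty, no uncertainty
    print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 = log2(4): uniform maximum
    print(entropy([0.7, 0.1, 0.1, 0.1]))      # about 1.36: in between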
Cryptography makes extensive use of conditional entropy and mutual information, whereby knowledge about one random variable can decrease the uncertainty about another if the events are correlated. A cipher achieves perfect secrecy if the ciphertext gives no information about the plaintext. Shannon showed that this implies that the entropy of the key must be as great as the entropy of the message. Based on these simple ideas, Maurer gave a simple proof that the one-time pad (OTP) is a perfect cipher.

A lot of cryptographic research aims to reduce the assumptions one needs: about computational intractability, an adversary's computing power, the honesty of the entities, and physical properties like noise and randomness. Cryptography without computational assumptions avoids the problems of not knowing the right model of computation and not having reasonable lower bounds. For information-theoretic security, key agreement must be based on some correlated mutual information, so public key (PK) systems cannot be information-theoretically secure. By using privacy amplification, however, the correlated mutual information and key agreement can be based on a noisy broadcast channel.

Information-theoretic cryptographic primitives are n-party devices that take inputs from all the parties and return an output to each based on some probability function of the inputs. Secure multi-party computation is a primitive that simulates a trusted party. A 2-party example is oblivious transfer (OT). Bit commitment (BC) is another example with retained state. Reductions among different primitives are important, e.g., reducing the binary symmetric channel to OT or to BC. Reductions should be useful, natural, efficient, demonstrable without additional assumptions, and also "beautiful." Reducing the noisy random source with eavesdropper to key agreement is a 3-party example. Wyner's wiretap channel and secure broadcast are further n-party examples.

Conditional independence is a concept that allows one to simplify a large number of results. Events are statistically independent if their probabilities are multiplicative. Conditional independence of A and B given C means A | C and B | C are statistically independent. Conditional independence holds for random variables if it holds for all events over the random variables. He gave a set of rules for calculating with conditional independence and applied them to security proofs using a pseudo-random function (PRF). Here, a real system is replaced by an idealized model, which is in turn proved to be a perfect system. The proof is based on indistinguishability. Adaptive and non-adaptive strategies can be handled by proving that a non-adaptive strategy is optimal. He illustrated this process for CBC MACs by showing that the success probability of the adversary is at most the probability of random guessing.

He stated in conclusion that computation and communication are physical processes that can be exploited by those building cryptosystems, that information-theoretic primitives can cover many useful cases, and that conditional independence is a powerful information-theoretic proof technique.

Session 3: Secure Communication and Computation, Chair: Moti Yung

Information Theoretically Secure Communication in the Limited Storage Space Model, Yonatan Aumann, Michael O. Rabin
The setting is a sender and receiver with a shared secret key and a passive eavesdropper. The key is, however, much smaller than the message. In classical cryptography, the adversary is assumed to be computationally bounded. In this case, the eavesdropper is space-bounded, but the bound is much larger than, say, logspace. The solutions should be simple, efficient, and have information-theoretic security. The protocol uses a large random string available to all players. The shared secret is used to obtain indices into the shared random string to produce a pseudo-OTP. Maurer did similar work in 1992 for the case where the eavesdropper can only access C bits. Cachin and Maurer extended this in 1997. The new result here is that for any strategy by the eavesdropper, the eavesdropper's advantage is exponentially small in terms of a security parameter k, when the eavesdropper is limited to storage space n = 5C. The proof starts by defining a game over one-bit messages.

The All-Or-Nothing Nature of Two-Party Secure Computation, Amos Beimel, Tal Malkin, Silvio Micali

Secure computation must be private and correct in spite of malicious parties. The goal is to model the behavior of protocols with a trusted party. Trivial functions can be computed securely without cryptography (projection, constant, XOR), but AND, OT, and MAX are non-trivial with respect to unbounded adversaries. With cryptographic assumptions like factoring or trapdoor functions, non-trivial functions, indeed any function, can be computed securely. Kilian showed that OT is complete for secure computation with respect to factoring. Others have worked with models such as an "honest but curious" as opposed to a malicious adversary. Their goal was to find non-trivial functions requiring weaker assumptions than OT. But they actually showed that there are no weaker assumptions and there is no hierarchy. Thus, AND and MAX are also complete, as are all non-trivial functions. Impagliazzo and Luby showed that one-way functions (OWFs) are necessary for OT, but they are likely not to be sufficient. This work generalized their result to all non-trivial functions. Finally, they gave a combinatorial characterization of the trivial functions based on truth tables that do not contain an "insecure" minor.

Session 4: Distributed Cryptography, Chair: Kazuo Ohta

Adaptive Security for Threshold Cryptosystems, Ran Canetti, Rosario Gennaro, Stanislaw Jarecki, Hugo Krawczyk, Tal Rabin

Their result is efficient threshold DSS and RSA secure against any adaptive adversary. Threshold cryptography is useful, e.g., for signature schemes, because it eliminates a single point of failure, and malicious faults that remain below a certain threshold cannot corrupt the system. With non-adaptive faults, the adversary has to decide whom to corrupt before the protocol starts. Adaptive adversaries can observe the protocol and coordinate the attack as the protocol runs. Efficient threshold schemes secure against non-adaptive adversaries were already known for DSS and RSA. Non-adaptiveness is, in the authors' view, a somewhat artificial assumption. Proving security against adaptive adversaries, however, is harder. He described their solution for DSS by reducing regular DSS security to threshold DSS security. That is, if an adaptive adversary can break the threshold system, forgery is possible in the basic scheme. The construction uses a simulator that calls a DSS oracle to produce a view of the threshold scheme.
The simulator must be prepared to handle the adversary's corruption of any single player. The zero-knowledge (ZK) proof that is needed is not efficient, so they overcame this problem by constructing a system with erasures.

Two Party RSA Key Generation, Niv Gilboa

The problem is for Alice and Bob to generate two shares of the private key, d_a and d_b, such that d = d_a + d_b. Boneh and Franklin showed how to do this for three or more parties. Cocks gave heuristic 2-party solutions. Also, J. P. Stern gave a 2-party solution based on OT. This solution is more efficient. The protocol works by selecting candidates P_a, P_b, Q_a, and Q_b, computing N = (P_a + P_b)(Q_a + Q_b), determining whether N is the product of exactly two primes, and sharing the private exponent. Three ways to compute N are given, based on OT, oblivious polynomial evaluation, and homomorphic encryption. The technique he showed shares a product x + y = ab in a ring R without revealing the factors a and b. It is a useful optimization to do an efficient, initial trial division before the full primality test.

Robust Distributed Multiplication without Interaction, Masayuki Abe

Previous work, both in the information-theoretic and computational settings, used interaction. The idea here is to design a protocol where each party transmits only once, so two-way interaction is not needed. As long as no disruption occurs, secure channels with broadcast are all that is required. The cost is more local computation and communications. He gave an example by examining the Cramer-Shoup protocol. The players receive shares of A and B and want to redistribute the products C of their shares. The problem of proving that this was done correctly was originally solved by Chaum, who used a four-pass protocol. The main technique presented in this work was to use non-interactive verifiable secret sharing (VSS) according to Pedersen to replace Chaum's ZK proof. This new solution has optimal resiliency (t < n/2). Its soundness depends on the discrete log (DL) problem.

A Simple Publicly Verifiable Secret Sharing Scheme and its Application to Electronic Voting, Berry Schoenmakers

He presented a simple PVSS scheme using standard intractability assumptions and optimal performance, followed by a sample application. In secret sharing, a dealer gives the shares to the participants, and later, some of the participants may try to reconstruct the secret. Shamir showed how to do perfect threshold secret sharing based on polynomial interpolation. If, however, some participant uses the wrong share, the wrong secret will be reconstructed. McEliece and Sarwate addressed this problem first with ECCs, later others considered cheating dealers, and then Feldman and Pedersen showed how to do VSS. Finally, Stadler gave a PVSS scheme. Public verifiability uses a public broadcast channel. He gave a non-interactive solution and proved computational security in the random oracle model based on the Decision Diffie-Hellman (DDH) assumption. Each participant registers a public key with the dealer, who encrypts the shares and publishes non-interactive ZK proofs for each participant that all of the shares lie on a polynomial of appropriate degree. Each share carries its own validity proof. At reconstruction, the share plus the proof are released. Other schemes produce a public key as a by-product, but at the cost of using a double-discrete-log assumption. He concluded by giving an application to electronic voting.
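(For readers unfamiliar with the Shamir construction underlying these schemes, here is a bare-bones Python sketch of (t, n) threshold sharing by polynomial interpolation. This is plain Shamir only, with none of the verifiability machinery the papers above add; the prime and parameters are arbitrary demo choices.)

    import random

    P = 2**61 - 1   # a prime; all arithmetic is in the field mod P

    def share(secret, t, n):
        # Random degree-(t-1) polynomial with constant term = secret;
        # share i is the point (i, poly(i)).
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 suffice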
Session 5: Secret-Key Cryptography, Chair: Andrew Klapper

Truncated Differentials and Skipjack, Lars R. Knudsen, M.J.B. Robshaw, David Wagner

Skipjack is a recently declassified 64-bit block cipher with an 80-bit key and 32 rounds. There are eight A rounds, eight B rounds, eight A rounds, and finally eight more B rounds. It has an elegant design. Each round computes a non-linear 16-bit function and XORs the result with another 16 bits of the text. This work applies truncated differentials to the analysis of Skipjack. There are 24-round truncated differentials with probability 1, which can be extended to attacks on up to 28 rounds. The boomerang attack (FSE '99) can be applied as well. They showed how differences in each of the four 16-bit words in the 64-bit block propagate through 16 rounds. Using a truncated differential of probability 2^-32, they were able to break 16-round Skipjack with 2^17 plaintexts and 2^34 to 2^49 steps. They exploited the interaction at the boundary between the A and B rounds and the bad forward diffusion in the B rounds. Next, they looked at the middle 16 rounds. After 12 of these rounds, both the first and second 16-bit words have the same differences. So if the two input texts differ only in the second words, they can break the middle 16 rounds with two plaintexts and 2^47 steps. Finally, they attacked the last 28 rounds using plaintexts that agree in the third 16-bit word. The attack takes 2^41 chosen plaintexts. Extending this attack to 32 rounds, however, looks difficult. They noted that Biham had previously shown an attack on 31-round Skipjack using impossible differentials.

Fast Correlation Attacks based on Turbo Code Techniques, Thomas Johansson, Fredrik Jonsson

This talk was about keystream generators that use one or more linear feedback shift registers (LFSRs), for instance with a non-linear combiner. The secret key is the initial state. Correlation attacks are possible if the output of the entire generator is not independent of one of the LFSRs. Because general linear codes are hard to decode, they looked for linear codes with embedded convolutional codes. The turbo code attack works by precomputing M such convolutional codes having the same information symbols and then applying each of the M decoders. They also showed that it is possible to decode in parallel. They simulated the performance and showed how a larger M results in better performance.

Highly Nonlinear Resilient Functions Optimizing Siegenthaler's Inequality, Subhamoy Maitra, Palash Sarkar

The motivation is to construct boolean functions, which can be represented by algebraic truth tables, with certain properties including high nonlinearity. He defined the degree as the number of variables in the highest-order product term and the nonlinearity as the minimum distance from the set of affine functions. Their goal was to construct balanced functions with maximum-order correlation immunity and algebraic degree. Bent functions, for example, are unsuitable because they are not balanced. However, they used a recursive construction that starts with a bent function to get maximum degree with minimum loss of nonlinearity. In the next step they constructed a balanced function with correlation immunity of order one. Finally, they showed how to increase this order. They proposed that this construction is more direct than Camion et al. from Crypto '91 or Seberry et al. from Eurocrypt '93.
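(Nonlinearity, as defined above, is easy to compute by brute force for small functions via the Walsh transform; a purely illustrative sketch, not from the paper:)

    def walsh(tt, n):
        # W_f(a) = sum over x of (-1)^(f(x) XOR <a, x>).
        return [sum((-1) ** (tt[x] ^ bin(a & x).count("1") % 2)
                    for x in range(2 ** n))
                for a in range(2 ** n)]

    def nonlinearity(tt, n):
        # Distance to the nearest affine function:
        # NL(f) = 2^(n-1) - max |W_f(a)| / 2.
        return 2 ** (n - 1) - max(abs(w) for w in walsh(tt, n)) // 2

    # f(x1, x2, x3) = x1*x2 XOR x3, as a truth table indexed by x.
    tt = [(((x >> 2) & (x >> 1)) ^ x) & 1 for x in range(8)]
    print(nonlinearity(tt, 3))   # 2, the maximum for three variables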
They compared the nonlinearity of the recursive and direct constructions and showed that their results have higher nonlinearity than the previous work.

Session 6: Message Authentication Codes, Chair: Xuejia Lai

UMAC: Fast and Secure Message Authentication, John Black, Shai Halevi, Hugo Krawczyk, Ted Krovetz, Phillip Rogaway

A MAC is used as a signature to verify data origin and integrity, but in the private key setting. It consists of generation and verification algorithms, both of which take a message and key as inputs. Security is defined in terms of existential forgery (Bellare, Kilian, and Rogaway). Two popular constructions are CBC-MAC and HMAC. Carter and Wegman constructed a MAC from a universal hash function followed by a cryptographic operation. A universal hash family is a family of hash functions. This work considered e-almost-universal hash functions. Several fast hash families are known, e.g., MMH and NMH by Halevi and Krawczyk, which are based on multiplication. This paper introduced UMAC, which is even faster. It is a fully specified MAC related to NMH. Their work consisted of theory, specification, and code. The key is used to select a hash function from the family. Then parts of the key are added to the message in 32-bit increments, and pairs are multiplied to get 64-bit values, which are in turn added, and so forth. Carries are discarded. The construction had to avoid several problems: high collision probability, need for long keys, parallelism, length constraints, and long hash outputs. They used a Toeplitz construction and several other tricks. Attention was given to efficient implementation and an appropriate security level. The performance is roughly five to ten times as fast as HMAC.

Square Hash: Fast Message Authentication via Optimized Universal Hash Functions, Mark Etzel, Sarvar Patel, Zulfikar Ramzan

This work also used the Wegman-Carter design and existential forgery paradigm to produce a fast MAC. They defined an unconditionally secure MAC, which was motivated by information-theoretic security concepts. They applied e-almost-d-universal hash functions over an additive group, whereby Pr[h(x) - h(y) = d] < e for all x, y, d. They started with the MMH family, which breaks the message into l-bit blocks, takes inner products, and applies modular reduction. Their scheme, instead, takes the square of the sum rather than the product at each step. By discarding carry bits, they got a more efficient construction with a measurable reduction in security. They also measured performance in comparison with HMAC and MMH and got a 27% to 33% speedup over MMH and about two to one over HMAC.

Constructing VIL-MACs from FIL-MACs: Message Authentication under Weakened Assumptions, Jee Hea An, Mihir Bellare

The model of a MAC and its security is as above. Previous MACs were constructed using block ciphers, HMAC, and universal hash functions. This new work, called NI-MAC, starts by considering FIL-MACs, i.e., "fixed-input-length" MACs. Block ciphers and compression functions operate as pseudorandom functions on fixed-length inputs, whereas CBC, HMAC, and UMAC can take variable-length inputs. The question is whether secure and practical MACs can be designed assuming only that the compression function itself is a MAC, not necessarily a pseudorandom function. That is, does the unforgeability of the FIL-MAC imply the unforgeability of the VIL-MAC?
Returning to the An-Bellare question: three popular transforms, CBC, Feistel, and a variant of Merkle-Damgård, were examined, and only for the last does the proof go through that unforgeability of the FIL-MAC implies the same for the VIL-MAC. Note that unforgeability is very different from collision resistance, because forgery of any message must be prevented. The resulting NI[ksha1] MAC is slightly slower than HMAC-SHA-1, but it has demonstrable security based on a weaker assumption.

Stateless Evaluation of Pseudorandom Functions: Security beyond the Birthday Barrier, Mihir Bellare, Oded Goldreich, Hugo Krawczyk

Pseudorandom functions (PRFs) are a keyed family from which a given key selects one member, and they are indistinguishable from truly random functions by an attacker who does not know the key. That is, the attacker cannot predict a single bit of f(b) for an unseen input b. PRFs can be used to build PR generators, encryption (block ciphers associated with PR permutations), message authentication, and challenge-response authentication. The two important properties of PRFs are consistency and independence. Users have to be careful not to use the same argument twice, so a counter can be made part of the input. Otherwise, security degrades quadratically (the so-called birthday paradox). Counters, however, require maintaining state, which may be inconvenient. One alternative approach is to double the input length, e.g., from 64 to 128 bits. Their approach, called the parity method, was to take an m-bit to m-bit PRF, apply it at several points, and XOR. For a 64-bit function, the chance of a birthday collision is reduced from 2^-32 to 2^-58 or 2^-125 when the function is applied twice or three times, respectively.

Session 7: Public-Key Cryptanalysis II, Chair: Arjen Lenstra

Cryptanalysis of the Goldreich-Goldwasser-Halevi Cryptosystem from Crypto '97, Phong Nguyen

This is a lattice cryptosystem based on the hardness of lattice problems. The idea goes back to Ajtai (STOC '96), who showed a relationship between worst case and average case difficulty. A lattice is defined to be all the Z-linear combinations of the rows (or alternatively, columns) of an invertible integer matrix. Multiplication by a unimodular matrix gives another basis for the same lattice. The goal is to find good Z-bases with short vectors. The closest vector problem (CVP) is hard, even with a good basis (Babai). In practice, the best technique is to compute an LLL or Schnorr reduction, which works best when the "gap" is large. The GGH cryptosystem is a lattice analog of McEliece's system. Using a dimension > 300, choose as public key a "bad" basis and as private key a "very good" basis. The ciphertext is the message plus a perturbation. Previous work by Schnorr and the author broke this system for dimensions up to 150 and 200, respectively. This work took advantage of the fact that GGH leaks the plaintext modulo twice the size of the perturbations, which lets the attacker find a larger gap and solve instances of size up to 350. The best way to fix GGH is to choose error vectors that are not disclosed modulo any number. Even so, the system would need dimension 400 or more, and the public key would be 1.8 Mbytes, so the system would be somewhat inefficient.

Weakness in Quaternion Signatures, Don Coppersmith

Andrew Odlyzko presented this paper. Several signature systems more efficient than RSA have been proposed, going back to OSS (Ong, Schnorr, Shamir) in 1984, Pollard in 1987, then Cubic OSS, broken by Pollard, Quartic OSS, again broken, and so forth.
OSS uses a large n of unknown factorization, secret key u, and public key k = -u^-2. The signature of m is (x, y) such that x^2 + ky^2 = m mod n. This is very efficient, and a chosen plaintext does not reveal the key. It was hoped that breaking this would require factoring n, but this was not the case. Pollard and Schnorr showed how to reduce k and m to get a sequence of equations that can be multiplied together and solved to forge signatures without knowing the secret key. The quaternions H are a non-commutative fourth degree division algebra over the reals with basis 1, i, j, k. The system works over quaternions with coefficients from Z_n. Satoh and Araki generalized OSS to this algebra, where u^T k y = -1 and x^T x + y^T k y = m in H/Z_n. The author first broke this assuming one has the public key and signatures of three random messages. If the three messages are independent, they have a non-zero determinant yielding a low degree polynomial, so the attack goes through as with Pollard-Schnorr. His second attack does not even need the random signatures.

Cryptanalysis of "2R" Schemes, Ye Ding-Feng, Lam Kwok-Yan, Dai Zong-Duo

The 2R-schemes proposed by Patarin and Goubin are PK systems based on the function decomposition problem, which is believed to be intractable. The private key is several invertible mappings, and the public key is their composition. In 1985, Matsumoto and Imai proposed B-schemes based on two secret linear maps. In 1988, they replaced one of these with a quadratic map, but Patarin broke this in 1995. For the 2R-schemes, the private key is three affine bijections and two quadratic mappings. The public key is the field, dimension, and composition mapping. The authors broke this new system too by reducing it to a linear algebra problem.

Factoring N = p^r q for Large r, Dan Boneh, Glenn Durfee, Nick Howgrave-Graham

Several proposed cryptosystems have a modulus of the form p^2 q or p^r q. This work presented a new approach to factoring the latter form based on lattices. In 1996, better methods of factoring p^2 q with elliptic curves were demonstrated. This lattice-based method works best when p and q are about the same size. If r = 2, it is O(N^(1/9)); if r = (log p)^(1/2), it is O(p^(1/sqrt(log p))); and if r = log p, it is polynomial. In all cases, neither p nor q has to be prime. The method works by guessing the high order bits of p, forming a lattice, reducing it with LLL, constructing a polynomial from a small lattice vector, and solving it; if successful, one of the roots will be the correction needed to the initial guess. If the lattice vector is sufficiently small, the polynomial will not be too large in an interval around 0, and if it is less than p^2, its root over the integers will be the needed correction. They succeeded in factoring a 1280-bit N with 80-bit p and r = 15 in 21 hours with a 72-dimensional lattice. A 768-bit N with 96-bit p and r = 7 was also factored in 7 hours with a 60-dimensional lattice.

Rump Session, Chair: Stuart Haber

Prize awards for Hasty Pudding analysis, Richard Schroeppel

Many challenge prizes have been offered in the past for breaking ciphers. He put up ten bottles of champagne, in total, and David Wagner was the first winner for finding equivalent keys. Bart Preneel et al. got the second for their Asiacrypt '99 paper [see next talk].

Belgian remarks on U.S. pudding, Carl D'Halluin, Gert Bijnens, Bart Preneel, Vincent Rijmen

They extended Wagner's observation on Hasty Pudding. For 128-bit keys, there are 2^30 equivalent keys; for 192-bit keys, there are 2^72.
It is easy to repair this.

An attack on ISO-9796-1, Don Coppersmith, Shai Halevi, Charanjit Jutla

This is an attack on RSA/Rabin signatures, which extends the result presented in Session 1. The attack has to be modified so the first word, "a," is different, then the pattern "bce" repeats, and finally there is a "bcd" at the end. For a 1024-bit modulus, the offline work is 2^30 precomputations plus 2^15 per signature. On-line, it takes 3000 signatures.

FPGA: A practical tool for cryptanalysis with running examples, F. Koeune, J.-J. Quisquater, R. Sebastien, J.-P. David, T. Gilmont, J.-D. Legat

They announced an FPGA defined in VHDL that does DES encryption in 8 clock cycles. A board of four chips can perform 200,000,000 DES operations per second. For Matsui's attack, 2^45 keys can be searched in 5 days. Only 130 such boards would be needed to replicate the machine that EFF built.

Quadruple DES, Paul Kocher (spoof)

DES keys are too short, and 2DES has a meet-in-the-middle attack. Triple DES has a 112-bit effective key but is not exportable. His 4DES (E_k1, D_k1, E_k2, D_k2) should be exportable, and reduced round versions may be quite secure.

Crypto law and export control update, John Gilmore

Dan Bernstein won his appeal in the U.S. Ninth Circuit. Source code was ruled free expression protected by the First Amendment, and export controls were ruled impermissible prior restraint. The majority decision goes further and brands export controls as a government attempt to control a science. It continues to state that we have lost an amazing amount of privacy, a situation which freely available encryption can partially balance. The separate concurrence recommended this case be heard in the Supreme Court. Meanwhile, SAFE is heading to the House Floor next month. French crypto law changed drastically towards liberalization. Germany considers strong domestic products without escrow indispensable. Wassenaar is being pursued vigorously by the U.S. Government. Most countries have not implemented the agreement, but pressure is increasing.

Fast precomputation for discrete logarithm cryptosystems, C.P. Schnorr

Several previous papers, such as (1) Brickell and (2) Lim and Lee, have given constant factor speedups of the binary method. Other proposals, e.g., precomputing an array of pairs, have been shown to be flawed. This talk was a proposal to repair such systems and a proof that the previous attack by Stern et al. no longer works.

Issues in tamper resistance, Benjamin Jun

He focused on inexpensive devices like smart cards and tokens. Protocol attacks have exploited RSA padding and MAC construction. Timing attacks and DPA cause leakage. Fault analysis is the third important area. Any one of these can be catastrophic.

Random number generator failure in provably secure encryption schemes, William Whyte, Burt Kaliski

RNGs are usually considered implicitly as secure primitives in the design of a scheme. ElGamal needs an ephemeral key pair, PKCS #1 version 1.5 needs random padding, OAEP-RSA needs a random seed, and ANSI X9.62 DH needs ephemeral keys. In RSA-OAEP, RNG failure reduces entropy and possibly allows either a dictionary attack on the message or Coppersmith's attack. For DH-AES, the adversary can recognize repeated messages.

Arms export: Blazonry for dummies and Oenocryptology, Don Beaver (spoof)

The first part explained the Crypto '99 coat of arms on the cover of the conference handout. The second talk was about the Echelon surveillance system.
A FOIA request revealed nothing, but a California winery provided more revealing information.

Fun with cryptography: How not to set a final exam question, G. Agnew

In the graduate-level class at Waterloo, he often poses a new crypto system on the final. He gave a simple example of a system and showed that breaking an OTP can be reduced to breaking this example.

Signature schemes based on the strong RSA assumption, Ronald Cramer, Victor Shoup

This is an alternative scheme to be presented later this year. The signer hashes and encrypts and then performs a calculation using a 161-bit prime. The complexity of signing is 1.5 to 2 times RSA; verifying is 2 times DSA. The security proof is a simulation, and in the random oracle model, the standard RSA assumption suffices. The same analogy applies to Cramer and Shoup, 1998, with respect to the DDH and CDH problems.

Construction of universal hash families from algebraic curves over finite fields, Chaoping Xing, Huaxiong Wang, Kwok Yan Lam

They derived a new class of authentication codes. He compared this result with Helleseth and Johansson's and also with Bierbrauer's and claimed better security for the same size parameters.

What is the PGP key of Bill Gates? A practical experiment with key servers, J.-J. Quisquater

Assume one receives a properly PGP signed message "I like apples" from Bill Gates and wants to check his signature. About 20 PGP key servers all over the Internet return various PGP keys for "Bill Gates."

How to solve a system of equations inside IDEA, E.G. Giessmann, G. Lassmann

For IDEA, multiplication is the most power-consuming step on a smart card. They showed how to observe differences in power usage when sparse factors show up in the multiplications to solve for the whole key.

Keeping secrets using .o files, Steve Meyer

This was a steganography talk. The problem is protecting digital designs. He claimed that certain object files are difficult to reverse engineer.

Computer license plates, Thomas Cusik

The Intel Pentium III processor serial number was announced with the usual jargon about improving security, but the public fury was unanticipated. He likened this event to the need for and introduction of license plates for motor vehicles early in the Twentieth Century.

A probabilistic poly-time framework for protocol analysis, P. Lincoln, J. Mitchell, M. Mitchell, A. Scedrov

Their proposal is to use a language-based approach and to express security in terms of protocol equivalence. The idea is to adapt the SPI calculus to a probabilistic polynomial time calculus. He gave a couple of examples.

Cross-encryption, Rosario Gennaro, Shai Halevi, Tal Rabin

Their model is on-line shopping with a clearinghouse, but the seller does not want to trust the clearinghouse. They showed two solutions based on the idea of distributing the clearinghouse functionality.

New forgeries with Rabin-like cryptosystems, J.-S. Coron, M. Joye, J.-J. Quisquater

One known attack assumes the attacker sees the public key, chooses messages, gets the corresponding signatures, and then chooses another message. They improved this attack for even exponents.

Cracking kryptos (well, almost), Jim Gillogly

Kryptos is a cipher on the wall of the CIA building. It is about 860 characters long; it looks like a substitution at the beginning, and later parts have the statistics of a transposition. The final part looks like a polyalphabetic cipher with eight alphabets. He managed to recover large parts of it.
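(The substitution-versus-transposition diagnosis rests on simple statistics: a transposition permutes positions but preserves single-letter counts, so the monogram profile still looks like English, while a substitution destroys that match. A crude Python illustration; the thresholds here are ours and arbitrary:)

    from collections import Counter

    def looks_like_transposition(ciphertext):
        # Do the ciphertext's most frequent letters overlap heavily
        # with English's most frequent letters ("ETAOINSH")?  A
        # transposition keeps the counts intact; a substitution does not.
        letters = [c for c in ciphertext.upper() if c.isalpha()]
        top8 = {c for c, _ in Counter(letters).most_common(8)}
        return len(top8 & set("ETAOINSH")) >= 5

    # Position-scrambled English keeps its letter profile:
    print(looks_like_transposition("DNEEFD HTE SATE LWAL FO HTE SCATLE"))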
Introducing the T-class of SOBER stream ciphers, Greg Rose, Philip Hawkes

A family of ciphers with different key sizes (8 to 32 bytes) and internal designs oriented to different size processors, but resembling the original SOBER, now exists. Software speeds up to 100 Mbit/sec have been measured.

Studying cycles in RC4, Chris Hall

Ignoring the keystream output, incrementing i can be viewed as rotating a ring, which eventually returns to the original state. He suggested studying these cycles further (see the short illustration below).

Next-generation mobile phone algorithms, Greg Rose

TIA TR45 defines North American cellular standards. An ad hoc Authentication Group is doing algorithm work for security. Collectively, the output is called IS-41; more information can be found at http://scitech.ctia.org/Security/index.html. Authentication includes key management, keys will be 128 bits, SHA-1 will be the hash function, and privacy algorithms will be air-interface specific. GSM is progressing similarly, but the process is not nearly so open.

The P-problem: a solved instance and its implications, Detlef Huehnlein (partial spoof)

For a mapping from W to P, elements in W are defined to be equivalent if they map to the same P-element. W is written words, P is pronounced sounds. "Secure" and "CQRE", for example, are equivalent. CQRE will take place in Germany in late November.

Player elimination in distributed protocols: Can resilience be for free? Martin Hirt, Ueli Maurer, Bartosz Przydatek

This talk was about the cost of resilience against an active adversary. They showed how to use a new technique, player elimination, to achieve resiliency.

Efficient separable fair contract signing schemes based on standard signatures, Jan Camenisch, Markus Michels

Fair contract signing implies either both parties get signed copies or neither does. Previous schemes without an on-line trusted party all lacked one or more useful properties. Their improvement on prior systems uses ZK proofs and resorts to an off-line trusted party only in the case of a dispute.

Consistency concerns for fair exchange, Paul Syverson

This talk illustrated how definitions of fairness are often inadequate by giving an example where the abort protocol can be run even after the exchange has been successful. The proofs worked because the definitions were flawed.

OEF using a successive extension, Kazumaro Aoki, Fumitaka Hoshino, Tetsutaro Kobayashi

Optimal Extension Fields are of size p^n where p is close to the machine's word size. They compared implementations with both affine and projective representations of elliptic curves in terms of the complexity of inversion.

Tricks for a better efficiency of authentication protocols, Marc Girault

The goal is to get fast public key message authentication in a smart card. He and Stern examined DL schemes with precomputation in 1991 and 1994. This work showed how to store about 50 precomputations in a kilobyte of memory.

Public-key cryptography and password protocols: The multi-user case, Maurizio Boyarsky (presented by Cynthia Dwork)

The problem is to optimize the use of humanly memorized passwords and to make on-line guessing the best attack. Halevi and Krawczyk gave a solution to the one-user case at ACM '98. Boyarsky extended their encrypted challenge and response protocol to multiple users and dynamic adversaries. Meanwhile, Halevi and Krawczyk pointed out that the full version of their paper considers insider attacks as well.
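(Returning to Chris Hall's RC4 item above: with the output ignored, the state update is an invertible map, so iterating it from any starting state must eventually come back to that state. A tiny Python experiment on a scaled-down n-element version; n = 5 is our toy choice, purely illustrative:)

    def rc4_state_cycle(n=5):
        # Iterate RC4's swap/update step (keystream output ignored) on an
        # n-element state until the full state (S, i, j) first recurs.
        S, i, j = list(range(n)), 0, 0
        start = (tuple(S), i, j)
        steps = 0
        while True:
            i = (i + 1) % n
            j = (j + S[i]) % n
            S[i], S[j] = S[j], S[i]
            steps += 1
            if (tuple(S), i, j) == start:
                return steps

    # The update is a bijection on states, so this always terminates.
    print(rc4_state_cycle(5))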
Correlated coins and applications to game theory, Yevgeniy Dodis, Shai Halevi, Tal Rabin

Given a public list of pairs (a_i, b_i), pick i and give Alice a_i and Bob b_i. If, for example, all the a_i and b_i are distinct, both learn i. Any 2-party probabilistic function in which the parties have no input can be simulated with this game. They showed that with this primitive certain games need no mediator and gave cryptographic constructions and a protocol for the primitive.

Complete characterization of security notions for probabilistic private-key encryption, Jonathan Katz, Moti Yung

Definitions of security for secret key systems are lacking, e.g., indistinguishability, non-malleability, and plaintext or ciphertext oracles. Randomization occurs in padding or choosing an IV. They have shown that adaptive and non-adaptive chosen plaintext security are equivalent, among other results.

Non-malleable non-interactive zero knowledge and adaptive chosen-ciphertext security, Amit Sahai

This was a pre-announcement of a paper at FOCS '99. The scheme of Naor and Yung does not work against dynamic adversaries because NIZK is malleable, which he fixed.

A secure e-mail voting scheme with trusted authorities for one occasion, Josef Lukovics

This is a four-phase scheme consisting of registration, voting, authentication, and evaluation. It has secrecy, fairness, and simplicity.

Riffle-shuffle and gray-code as block cipher components, Jayant Shukla

The AES finalists use a small number of primitive constructions. He presented two new operations that are fast and invertible and have potentially good security properties.

A new blind signature scheme, Pascal Paillier

Starting from his Eurocrypt '99 result, which used Benaloh's construction of a group over Z_n x Z_n^*, he now added an efficient method of blinding.

Efficient dynamic traitor tracing, Omer Berkman, Michal Parnas, Jiri Sgall

The threat is that a pirate may rebroadcast the content, so the goal is to locate and disconnect the traitors who give the pirate the content. In each round, the users are partitioned by "colors" until the traitors are all located. The goal is to bound the number of rounds and the number of colors simultaneously. Both upper bounds and lower bounds were given.

On full linear (k, n) traceability schemes, K. Kurosawa, M. Burmester, Y. Desmedt

In the same model as the previous talk, the goal is to identify at least one of the traitors. This talk was a modification of their work presented in Eurocrypt '98, which in all cases avoids the attack given by Stinson and Wei.

Andrew Fernandez

He pointed out a potential trapdoor named _NSAKEY in Microsoft's NT 4.0 Service Pack 5 and suggested how to exploit it to avoid the software enforcement of export controls.

The Real-Time Cryptanalysis of A5/2, David Wagner et al.

They reverse engineered and cracked this GSM cipher based on four LFSRs of sizes 17, 19, 22, and 23 bits. A non-linear function of the 17-bit LFSR is used to clock the others. Getting two frames of 114 bits each is enough to break the scheme. Once the 17-bit LFSR is found by exhaustive search, the rest falls out. Note that the attack can be done in real time.

Session 8: Traitor Tracing, Chair: Daniel Bleichenbacher

An Efficient Public Key Traitor Tracing Scheme, Dan Boneh, Matthew Franklin

The model is to force users to pay for content on a per usage basis. There is a single broadcaster. In their system a single public key corresponds to many private keys. Each content provider has a public key and encrypts content before distribution.
Consumers have their own decryption keys, and their systems enforce payment when they decrypt, which seems superior to having a global decryption key. When a pirate player is found, the individual private key gives some information about the traitor. The system consists of four algorithms: key generation, encryption, decryption, and tracing. Tracing must anticipate k-party collusions and work as a black-box oracle. Each public key has m private keys. The system is an ElGamal-like scheme, but there are m vectors of length 2k, which are public and used to produce different representations of the private key. Encryption works in a group of order q. The ciphertext has length expansion by a factor of 2k. The tracing algorithm uses invalid ciphertexts to expose the representation. An important open question is whether safety against arbitrary collusions is achievable.

Dynamic Traitor Tracing, Amos Fiat, Tamir Tassa

This is also a broadcast system with conditional access, but the new twist is dynamic piracy, whereby the pirate rebroadcasts keys or other sensitive data. The static model only considered fixed pirate decoder boxes. In this model, the data are divided into segments and personalized. A fixed number of fingerprints are assumed to be robust and are applied iteratively after piracy is detected to partition the users and to locate the traitors. Dynamic schemes can adapt to different numbers of traitors as the system is running. The overall system is probabilistic and has a small chance of error. Open problems include finding an optimal (polynomial) deterministic scheme and devising a probabilistic scheme with known or oblivious allocation of codewords.

Efficient Methods for Integrating Traceability and Broadcast Encryption, Eli Gafni, Jessica Staddon, Yiqun Lisa Yin

Again, the model is broadcast encryption and traceability in the secret key setting. The goal is to find the right intersection of users' key sets between the two extremes of sending separate transmissions to each user and having a unique key set for each potential set of authorized users. For example, if each user has r keys, we can exclude any set of m users by using a polynomial of degree at most (r-1)/m. At the same time, it should be risky for c users to pool keys and produce a pirate decoder, while the probability of false accusation should be small. Two solutions were given: broadcasting shares of the message and assigning sets of keys to each user. Two other issues are resiliency against excluded users and scalability as new users join the group.

Session 9: Invited Lecture, The Evolution of Public-Key Cryptography, Martin Hellman

In November 1976, his New Directions paper co-authored with Diffie stated, "We stand today on the brink of a revolution." Public key cryptography played a part in that revolution, which has also had its evolutionary aspects. In 1883, Kerckhoffs stated that the details of the system should be assumed to be public. He and Diffie identified one-way functions as a low-level primitive. John Gill deserves credit for identifying indices or discrete logs as a mechanism for creating one-way functions. The Pohlig-Hellman system essentially anticipated RSA but missed the composite modulus trapdoor. Diffie-Hellman actually was invented after Pohlig-Hellman, and Merkle also had the idea of public key distribution. Because we have only a few public key systems, alternatives not using number theory are important.
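(To see just how close Pohlig-Hellman came: it is the same exponentiation equation as RSA, but over a prime modulus, where anyone who knows the modulus can compute the decryption exponent. A toy Python sketch; the parameters are ours and purely illustrative:)

    # Pohlig-Hellman exponentiation cipher: c = m^e mod p, with p prime.
    # The decryption exponent d = e^-1 mod (p-1) is computable by anyone
    # who knows p, so e must stay secret: this is a symmetric scheme.
    # RSA uses the same equation over composite n = p*q, where finding d
    # requires the factors -- the trapdoor Pohlig-Hellman missed.
    p = 2**127 - 1                     # a Mersenne prime (toy parameter)
    e = 65537
    d = pow(e, -1, p - 1)              # modular inverse (Python 3.8+)

    m = 123456789
    c = pow(m, e, p)
    assert pow(c, d, p) == m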
When DES came out, the key size was an obvious problem, but other trapdoors were also suspected. Nine years before the quadratic sieve, Richard Schroeppel showed that the continued fraction method was subexponential and of the same asymptotic complexity as the quadratic sieve. He also introduced the idea of sieving and tried to factor F_8, but because the factors of F_8 are of very different sizes, Brent got there first with Pollard's rho algorithm. Hellman made some comments on GCHQ's claim to have invented public key cryptography in the early 1970s and on his rationale for his natural inclination to pursue what appears to others as foolish.

Session 10: Differential Power Analysis, Chair: Andrew Odlyzko

Differential Power Analysis, Paul Kocher, Joshua Jaffe, Benjamin Jun

This work involved both engineering and cryptography. The general idea was to measure the power usage of a cryptosystem, e.g., when implemented in a tamper-resistant package like a smart card. Response time, electromagnetic emissions, and physical tampering can be used to mount similar attacks. He showed how the power usage of DES reveals the structure of the cipher. Simple analysis can locate conditional branches, e.g., the difference between square and multiply in modular exponentiation. When the measurements become too noisy, the trick is to use statistical correlations. Applied against DES, the key can be guessed and verified one S-box, i.e., six bits, at a time. Not only the data bits, but also the instructions, timing, register contents, and so forth can be extracted. This allows complete reverse engineering. To protect against this, the possible approaches are signal reduction, noise generation, leak-tolerant protocols, and other ideas he declined to explain. Many patents on this technology are pending. However, full protection is infeasible today, so many systems are being built with faulty security assumptions.

Towards Sound Approaches to Counteract Power-Analysis Attacks, Suresh Chari, Charanjit Jutla, Josyula R. Rao, Pankaj Rohatgi

Most vendors have been trying to address the DPA attacks described above. Chip cards are typically clock-driven CMOS circuits. Power is consumed when logic states change. Usually a few bits in the overall state determine events and intra-cycle delays. The attacks attempt to distinguish states by separating distributions. Many ad hoc countermeasures do not work well. Among these are adding clock jitter or spurious clock cycles, reordering operations, adding noise, balancing data accesses, and inserting random delays. More reasonable approaches are randomizing, blinding, and secret sharing. Formal analysis can help with these designs and predict what the statistics will reveal, given a certain number of samples.

Session 11: Signature Schemes, Chair: Daniel Simon

Separability and Efficiency for Generic Group Signature Schemes, Jan Camenisch, Markus Michels

Typically, group members have to go through some public-private key setup, which may affect other users unless the system has the separability property. Separability may be perfect, strong, or weak depending on which keys are separable, and full or partial depending on the parties. Chaum and van Heyst defined group signature schemes. Membership, revocation, and "opening a signature" need to be provided along with the usual operations. Many follow-on schemes have been defined, but they all lack separability except Camenisch and Damgård, 1998, which has partial separability. This work demonstrated the first fully separable scheme.
It is built from a DL construction and has a ZK proof of knowledge.

A Forward-Secure Digital Signature Scheme, Mihir Bellare, Sara K. Miner

Using the standard public key framework, the problem is later exposure of the signing key, which allows forging. Secret sharing and tamper resistance are two approaches to protecting the key. If the key exposure is discovered, the key can be revoked, but prior signatures are now suspect. Timestamping is one solution, but it requires another entity. Their solution is for the secret key to evolve from one time period to the next with a one-way function, and to have the keyholder delete obsolete versions. Their construction was proved secure in the random oracle model based on the hardness of factoring. The public key is a Blum integer. The secret key consists of a base key and a list of values that are squared at each time interval. The proof of forward secrecy was first carried out for an identification scheme and then carried over to the signature scheme.

Abuse-free Optimistic Contract Signing, Juan A. Garay, Markus Jakobsson, Philip MacKenzie

Abuse freeness means that neither party can prove to a third party that she has the ability to complete or terminate the protocol. In effect, Alice proves to Bob, "Either A is true or I am Bob." All of the cryptography is based on the DDH problem, and the proofs hold in the random oracle model. Contract signing, introduced by Blum, is an example of a fair exchange problem; that is, it is "neither or both." The two approaches to contract signing are gradual release and trusted third parties. Their protocol has an off-line third party and is constructed from commitment, convertibility, and non-transferability. It is optimistic, complete, fair, and abuse free; there are two messages in each direction, together with abort and resolve protocols, which involve the third party.

Session 12: Zero Knowledge, Chair: Jean-Jacques Quisquater

Can Statistical Zero Knowledge be made Non-Interactive? or On the Relationship of SZK and NISZK, Oded Goldreich, Amit Sahai, Salil Vadhan

Blum, Feldman, and Micali defined non-interactive ZK (NIZK) using a shared random string model. Statistical ZK (SZK) is a clean model for many applications. They showed that (1) BPP ≠ SZK iff BPP ≠ NISZK, and (2) if NISZK is closed under complement, then NISZK = SZK. In 1985, Goldwasser, Micali, and Rackoff defined SZK as an interactive protocol with an unbounded prover and polynomial-bounded verifier, such that the verifier accepts or rejects correctly with high probability, and the verifier can simulate her view of the interaction on her own. In NISZK [BFM88, BDMP91], for a false statement the verifier rejects with high probability no matter what proof the prover sends. For a long time, quadratic residuosity was the only known SZK problem, but [DDPY98] showed that IMAGE DENSITY is complete for NISZK. On the other hand, STATISTICAL DIFFERENCE and ENTROPY DIFFERENCE (ED) are complete for SZK. ENTROPY APPROXIMATION (EA) is in NISZK, so they produced a reduction from ED to EA. Hence, any SZK proof can be made non-interactive. The harder result was showing that EA is complete for NISZK.

On Concurrent Zero-Knowledge with Pre-Processing, Giovanni Di Crescenzo, Rafail Ostrovsky

Classical ZK has a prover and a verifier, completeness, soundness, and the zero knowledge property. In modern networks, there are many provers and verifiers, and many proofs may be occurring at once. The danger is that verifiers can team up and coordinate their questions to learn more than they should.
There has been a lot of work on concurrent ZK: timing assumptions, pre-processing, number of rounds. Finally, Kilian, Petrank, and Rackoff showed there is no four-round concurrent ZK. The various parameters are number of rounds, efficiency, trust, and assumptions. Their new result is a three-round challenge-response pre-processing protocol followed by polynomially many three-round proofs with no limits on concurrency. The key technical problem they solved is that interleaving kills the simulator. Dwork and Sahai solved this by using a trusted center and straight-line simulation. The authors of this work introduced the idea of equivocal commitment, which is a stronger variant of chameleon commitment.

Session 13: Asymmetric Encryption, Chair: Serge Vaudenay

On the Security Properties of OAEP as an All-or-Nothing Transform, Victor Boyko

All-or-nothing transforms (AONTs) are randomized transforms that have no keys and the property that given all of the output, the input can be recovered, but given part of the output, none of the input can be recovered. The original idea was that AONTs could be used to defend against brute force attacks. Also, part of the output of the AONT can be encrypted with a public key system, which is especially useful with short key systems like elliptic curves. Another application is remotely keyed encryption [JSY99], whereby the smart card encrypts part of the output of the AONT. Since AONTs have no keys, they may actually be exportable. Heuristic constructions were given by [JMP96] and [R97]. In 1998, Stinson showed a system that leaks partial information. The construction in [JSY99] does not generalize. Bellare and Rogaway introduced OAEP for public key encryption. It consists of adding randomization and performing a single Feistel round. He showed in the random oracle model that OAEP has strong security based on indistinguishability with non-adaptive and adaptive adversaries. The adaptive model is needed to defend against brute force attacks. His proofs derived upper and lower bounds for OAEP as an AONT.

Non-Malleable Encryption: Equivalence between Two Notions, and an Indistinguishability-Based Characterization, Mihir Bellare, Amit Sahai

This is a paper about definitions of security, in particular, privacy under public key encryption. Much more effort goes into designing schemes than into defining what they do. Theory, beginning with Goldwasser and Micali, asks, "What is encryption?" One criterion is indistinguishability; another is semantic security. They showed these are equivalent. The formalization of semantic security involves a simulator. Non-malleability [Dolev, Dwork, and Naor] means that the adversary cannot modify the ciphertext meaningfully. The difficulty is getting a good formalization of what meaningful modification means. One model uses simulation and another uses comparison, which are different, e.g., with respect to chosen ciphertext attack. The authors presented several results, for example, that indistinguishability and non-malleability are equivalent with respect to full adaptive chosen ciphertext attack but not with respect to weaker attacks.

Secure Integration of Asymmetric and Symmetric Encryption Schemes, Eiichiro Fujisaki, Tatsuaki Okamoto

RSA is a one-way trapdoor permutation (OWTP), whereas ElGamal is a more general one-way encryption (OWE) system. To get provable security, OWE and OWTP systems are converted, e.g., OAEP can be applied to an OWTP but not to an OWE system. This is the first problem they addressed.
The second is hybrid use of symmetric and asymmetric encryption. Their result converts an OWE system to a chosen ciphertext secure system in the random oracle model. Simultaneously, they created a generic method to construct secure hybrid encryption systems.

Session 14: Electronic Cash, Chair: Serge Vaudenay

Auditable, Anonymous Electronic Cash, Tomas Sander, Amnon Ta-Shma

In 1982, Chaum introduced blind signatures, which have been used in many systems. They can provide unconditional anonymity, but several problems have been pointed out, e.g., blackmailing and compromising the bank's signing key. The goal is to get the best of both worlds: anonymity and defense against these and other threats. There has been a long sequence of work based on "trustees," "early detection of key compromise," and so forth. In their new system, the bank keeps a list of outstanding coins, and withdrawal and payment are essentially list membership operations, which allows auditing. What they built, therefore, are blind, auditable membership proofs based on hash trees, all of which can be made public, but the bank must maintain data integrity. The constructions are based on discrete logs and proved secure in the random oracle model. As an example of the efficiency, he showed the algorithm for root updates.

Session 15: Protocols and Broadcasting, Chair: Phillip Rogaway

Oblivious Transfer with Adaptive Queries, Moni Naor, Benny Pinkas

The subject is private information retrieval over an adaptive list of queries, where each query depends on the previous ones. One-out-of-n OT is an inefficient solution. They used an n-dimensional generalization of pseudorandom synthesizers [Naor and Reingold], where sum consistency does not conflict with pseudorandomness. This work gave two adaptive k-out-of-n constructions, one based on DDH and one general. In the process, they also achieved verifiable oblivious evaluation of a pseudo-random function.

Compressing Cryptographic Resources, Niv Gilboa, Yuval Ishai

Given n players, they defined two-phase protocols. First, the dealer securely distributes "resources" to each player, and then the players communicate over insecure channels. This model can achieve, for example, private communications, private modular addition, or proactive secret sharing. However, long pads may be required for perfectly secure solutions. By using pseudorandom generators, arbitrarily long pseudo-pads can be created. Still, the system has to defend against malicious coalitions. So each player gets a subset of the replicated seeds, which are used in turn to form the pseudo-pads. They solved a combinatorial problem for general linear correlation by finding the patterns for the replication; the solution is based on graph theory and linear algebra.

Coding Constructions for Blacklisting Problems without Computational Assumptions, Ravi Kumar, Sridhar Rajagopalan, Amit Sahai

The model is broadcast with a large number of users, but occasionally a few users need to be excluded (e.g., surprise party announcements). Each user has an ID and decryption mechanism. The trivial solution of different mechanisms for each user is enormously inefficient with respect to communications and user storage. Also, the constructions should use the encryption and decryption functions in a black box model. Their main result is based on algebraic-geometric codes, and the communications overhead is independent of the total number of users.
It is the first feasible solution, insofar as the storage requirements are poly-logarithmic in the number of users.

An Information Theoretic Analysis of Rooted-Tree Based Secure Multicast Key Distribution Schemes, Radha Poovendran, John S. Baras

Multicast communication has many commercial applications, e.g., stock quotes, news, and sporting events. Security considerations include integrity, confidentiality, routing, failure recovery, and membership additions and deletions. Several methods of member revocation based on a rooted tree have been proposed. The authors used information theory to study and optimize this data structure for the member revocation problem. The optimal average number of keys per member in the tree depends on the entropy of the member revocation event, which should be maximized for the currently proposed strategies, regardless of the revocation probabilities. They analyzed both the compromises and collusions needed to break the system and gave a simple rule for group size.

______________________________________________________________________

Eighth USENIX Security Symposium
Washington, DC, USA, August 23-26, 1999
by Kevin E. Fu (fubob@mit.edu)
______________________________________________________________________

Key Note Speech: Peter G. Neumann

Peter Neumann of SRI International substituted for Taher Elgamal as the keynote speaker. Most of the talk dealt with issues such as security, survivability, reliability, and predictable behavior. Neumann has many claims to fame, including the moderation of comp.risks and a 1965 publication on file system access control in Multics. Neumann used stories, quotations, and ever-so-clever puns to discuss principles for good software engineering. Past efforts fundamental to software engineering include Multics, the T.H.E. system, domains of protection, principles, confined execution, PSOS (a provably secure OS by PGN), and isolated kernels. But most existing commercial systems are fundamentally inadequate with respect to reliability and security. Survivability is also not well understood. Unfortunately, not much good research is finding its way into products. Developers ignore the risks, and the same old flaws appear over and over. For instance, 8 of 13 CERT reports this past year resulted from buffer overflows. Programming languages do little to prevent security problems. Few systems are easy to evolve. Neumann challenged the old adage of KISS (Keep It Simple, Stupid). He argued that such advice does not work for extremely complex systems. He also disagreed with the "build one to throw away" principle of Brooks, because this encourages companies to perform beta testing with normal users. There remains much to be learned from past mistakes; these problems are highly multidimensional. However, Neumann reasoned that we do not learn much from failures because the builders tend to move on to something else or build another system with the same problems. The insider problem is largely ignored. This is a difficult problem to solve because the insider is already authorized. In defense of Wen Ho Lee, Neumann referenced the Los Alamos National Laboratory incident as an example. Merely shutting out outsiders will not solve the problem. You have to worry about insiders.
Neumann recommended that developers specify general requirements, create stronger protocols, use good cryptographic infrastructures, design for interoperability, force commercial developers to do the right thing (maybe with open source), perform network monitoring and analysis, and find a good way to integrate systems. The Q/A section mainly consisted of additional stories from the audience. Matt Blaze asked the rhetorical question, "Why in poorly designed systems does everything become part of the trusted computing base? Why limit [this characterization] to only poorly designed systems?" A punny John Ioannidis explained that Neumann is not "preaching to the choir" but to "lots of soloists." People do not listen to the soloists. Ioannidis argued that education of users is just as important as proper engineering practice. Otherwise uneducated users will simply bring in the Trojan horse. The discussion led to an insane number of analogies about singing and security. Causing the crowd to erupt with laughter, Greg Rose said it's amazing we can build great flight simulators and all, but we can't build good air traffic control systems. A philosophical Dan Geer quizzed Neumann on the difference between brittle and fragile. Neumann responded that brittle means an object breaks when tapped a bit; fragile means the object will probably fall off the shelf. In Multics, a failure in ring 0 would cause the whole system to collapse. A failure in ring 1 would only cause the process to crash. In ring 2, a failure might only generate an error message. The rings contained the error. What is Neumann's final prognosis? All reports conclude that the state of practice is not good. Risks abound. For related information, including a pointer to the Risks archive, see http://www.csl.sri.com/neumann/

PDAs Refereed Papers Track
Session Chair: Jim Duncan, Cisco Systems, Inc.

----

The Design and Analysis of Graphical Passwords
Ian Jermyn, New York University; Alain Mayer, Fabian Monrose, Michael K. Reiter, Bell Labs, Lucent Technologies; Aviel Rubin, AT&T Labs - Research

Fabian Monrose analyzed the security of graphical passwords and proposed two graphical password schemes which could achieve better security than textual passwords. This paper not only won the best student paper award, but it also won the best overall paper award. Although textual passwords are vulnerable to dictionary attacks, passwords still serve as the dominant authentication mechanism today. On the other hand, there is convincing evidence that graphical passwords offer better memorability and reveal less information about their distribution. Results from cognitive science show that people can remember pictures much better than words. Combining this with the commonly found Personal Digital Assistant (PDA) allows new graphical input capabilities. Graphical schemes can decouple position from the temporal order in which a password is entered. Monrose listed three desired properties of graphical passwords. First, it must be at least as difficult to exhaustively search as traditional passwords. Second, keys must not be stored in the clear. Third, graphical passwords need to be repeatable and memorable. After building a strawman scheme where graphical passwords can emulate textual passwords, Monrose described a second scheme, dubbed Draw-a-Secret. It takes as input a picture and the order of its drawing. This scheme keeps track of boundary crossings and pen lifts from the screen.
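(A minimal sketch of how such a drawing might be serialized, with the encoding and names being our illustration rather than the paper's exact scheme:)

    def encode_drawing(strokes):
        # strokes: one list of (col, row) grid cells per pen-down period,
        # e.g. on a 5x5 grid; a pen lift ends each stroke.
        seq = []
        for stroke in strokes:
            seq.extend(stroke)
            seq.append("UP")            # record the pen lift
        return seq

    # a two-stroke secret: a horizontal line, then a short vertical one
    secret = encode_drawing([[(1, 1), (2, 1), (3, 1)], [(2, 3), (2, 4)]])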
This information then passes through a hash function to produce a raw bit string as a password. Monrose pointed out that there is as yet no characterization of the distribution of graphical passwords. This means one picture is not known to be more likely than another as a password. On the other hand, textual passwords have well-known distributions related to dictionaries. Rather than focus on the unrealistic upper bound of graphical password choices, Monrose analyzed a lower bound by creating a grammar (similar to LOGO) which describes common ways to enter a graphical password. By further assigning complexities to the terminals, they found evidence that graphical passwords with short complexities (pen strokes) can surpass the power of textual passwords. Peter Honeyman went into an evil thesis committee mode by shaking the earth with many difficult and antagonizing questions. Assuming at least 60 bits of entropy are necessary to protect confidential information, Honeyman questioned whether 5 pen strokes in a 5x5 grid is enough. Monrose did not answer the question directly. Another audience member asked what would be the typical graphical password (a smiley face, picture of a dog, etc.). Suggesting that attacks similar to dictionary attacks may exist, the audience member asked how to classify what is memorable. Monrose explained that his team did not have enough experience to characterize distributions. Another audience member asked for anecdotal evidence about memorability. Monrose explained that in practice, a 6x6 grid results in poor memorability. It's simply too fine-grained for the average user to repeatedly enter a graphical password correctly. The 5x5 grid creates a good balance between security and memorability. Peter Neumann pointed out that written Chinese has a known key stroke order that may impose a distribution. Graphical passwords may pose a challenge for writers of Chinese. While a native writer of Chinese confirmed Neumann's point, she believed that graphical passwords have good merit. Asked about incorporating timing and pressure for entropy, Monrose replied that they have not been considered. Neumann added that his group found pen pressure useless as a source of entropy. While there is no concrete proof that graphical passwords are stronger than textual passwords, Monrose gave convincing evidence that graphical passwords stand a good chance and at least deserve further analysis. In the future, Monrose hopes to better classify memorability of graphical passwords. For source code and further information, see http://cs.nyu.edu/fabian/pilot/

----

Hand-Held Computers Can Be Better Smart Cards
Dirk Balfanz, Edward W. Felten, Princeton University

Dirk Balfanz explained how a PDA can serve as a better smart card. By implementing a PKCS#11 plugin for Netscape Communicator and the 3Com PalmPilot, Balfanz reduced the number of components in the Trusted Computing Base (TCB). The trusted part of the system remains only on the Pilot. Smart cards usually do not have a trusted authentication path to the user. The user often communicates a secret PIN to the smart card via a PC and smart card reader. This places the PC and smart card reader in the TCB! Had any malicious software existed on the PC, it could have easily obtained digital signatures of bogus data. Of course, a user interface or warning light on the smart card could prevent such malicious use. Unfortunately, most smart cards do not have a user interface.
The implementation of PilotKey currently works for Linux, Win9X, Solaris, and Windows NT in concert with Netscape Communicator and the Pilot. Preliminary results show a 512-bit RSA signature takes about 5 seconds, and key generation about 2-3 minutes. However, a 1024-bit RSA signature takes 25 seconds and a whopping 30 minutes for key generation. Balfanz pointed out that the Pilot uses something like a 16MHz processor. He expects the processor to speed up in the future. The PilotKey program implements the PKCS#11 functions for the Pilot. The program enables a user to provide randomness via the user interface. Because PilotKey works directly with the user, the PC does not see the PIN. Finally, the program lets the user choose whether to send decrypted messages back to the PC. Incidentally, PilotKey cannot securely handle email attachments unless you can perform base64 decoding in your head. Alas, the Pilot does exhibit some problems acting as a smart card. First, the Pilot is not tamper resistant, and the operating system does not provide memory isolation among programs. Hence, one should not use untrusted software on the Pilot in conjunction with PilotKey. Second, it is important not to lose the Pilot. While the secret information is password-protected, this offers only limited protection. Third, PilotKey is not appropriate for "owner-is-enemy" applications. For instance, a program keeping track of a cash balance is inappropriate. The Pilot owner could easily change the balance. Finally, HotSyncing could reveal secrets to the host computer. Balfanz claimed these problems are not show stoppers. Inciting a few chuckles, he explained that you "just need a secure OS on the Pilot." In the paper, Balfanz makes some good suggestions on how to secure the Pilot OS. Peter Honeyman began the inquisition with a hail of questions. Another audience participant reminded the rest of the crowd to turn off the Pilot's IR when not in use. One questioner asked why not fix the OS on the PC if we can just fix the OS on the Pilot as suggested. Balfanz admitted this is the same problem, but fixing the OS on the PC is not any easier. At this point, a punny Peter Neumann sought attention. He pointed out that this was the first talk where Bob does not appear with Alice. Alles in Ordnung ("everything is in order"). You would have had to be there to understand.... A participant suggested that splitting trust seems to simply push the problem down the line. People will want more features (e.g., signed Excel spreadsheets). Balfanz explained that shifting trust to the PDA comes at the expense of usability. Another participant argued that removing all the software from a Pilot results in a relatively expensive smart card. Questioned about Europe's desire for autonomous smart card readers and smart cards with displays, Balfanz responded that he would not trust the ATM or the reader. Bruce Schneier wrote a similar paper on splitting trust for the first USENIX Symposium on Smart Cards last May. For more information on PilotKey, follow up with balfanz@cs.princeton.edu.

----

Offline Delegation in the File Repository
Arne Helme, Tage Stabell-Kulø, University of Tromsø, Norway

Arne Helme explained mechanisms for offline delegation of access rights to files in the context of a distributed File Repository (FR). In this model each user has her own file repository but would like to share files. Offline delegation refers to delegating authority without network connectivity. For instance, one could use verbal delegation.
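(Verbal delegation becomes plausible once a credential is small: a 32-byte value reads aloud as sixteen 4-digit hexadecimal groups, which matches the amount described in the next paragraph. A minimal sketch; the encoding shown is our illustration, not the paper's format:)

    import os

    def spoken_groups(blob):
        # Render a short credential (here 32 random bytes stand in for a
        # compact ECC signature) as 4-digit hex groups to read by phone.
        hexstr = blob.hex().upper()
        return [hexstr[i:i+4] for i in range(0, len(hexstr), 4)]

    print(spoken_groups(os.urandom(32)))   # sixteen groups, e.g. '3FA9'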
PDAs challenge centralized security models, provide users with personal TCBs, and support the construction of "delegation certificates." This meshes well with the design criteria for offline delegation: a delegation should not allow impersonation, credentials must form valid and meaningful access rights, and the authority granted in a certificate should not be transferable or valid for multiple use. The implementation consists of an access request protocol and a prototype for the 3Com PalmPilot. The software contains a parser/generator for SDSI-like certificates and a short digital signature scheme using elliptic curve cryptography. The access request protocol (analyzed with BAN logic in the paper) describes the process of creating, delegating, and using offline certificates. The delegator must read 16 4-digit hexadecimal numbers to convey an offline certificate (e.g., by phone). Helme's Pilot software allows the delegatee to quickly receive these numbers via a convenient user interface. Helme clarified that certificates in this scheme are valid for only one use. Servers keep a replay cache. One problem with the current signature scheme is that the FR cannot distinguish between two different files which had the same file name at different times. Helme concluded that offline delegation is a natural extension of services to the File Repository, PDAs support verbal delegation, and performance is satisfactory. For more information, contact arne@acm.org.

Keys Refereed Papers Track
Session Chair: Carl Ellison, Intel Corporation

----

Building Intrusion-Tolerant Applications with ITTC
Thomas Wu, Michael Malkin, and Dan Boneh, Stanford University

Tom Wu discussed how applications can store and use private keys in an intrusion-tolerant manner. As a side benefit, Wu's methods also provide high availability of private keys. Existing public key architectures rely on non-shared keys or split-key storage in which the private key is reconstructed for use, creating a single point of failure. Methods in Intrusion Tolerance via Threshold Cryptography (ITTC) create no single point of attack because private keys are never reconstructed. In Shamir's threshold scheme, a dealer generates a key and splits it into several shares for trustees. The trustees can later reconstruct the original key at a central location. In this scheme the dealer sees the original key and all the shares, creating a single point of failure. Compromise of the dealer's memory results in total disclosure of the key. Wu uses an intrusion-tolerant scheme which is not vulnerable to such a single point of failure. He employs methods due to Boneh and Franklin to generate a private key already in a shared form. To create shares, Wu uses an idea due to Frankel. Share servers apply the share to an operation (e.g., sign, decrypt, encrypt) rather than give the share to the application as if it were a dealer. Without a threshold number of results from the share servers, an adversary cannot reconstruct the results of a private key operation. SSL protects the communication between the application and share servers. In order to break the ITTC scheme, an attacker must compromise multiple systems in a short period of time. One can also identify malicious share servers. Wu's implementation adds intrusion tolerance to a Certificate Authority and an Apache web server. Integration of ITTC was trivial with the OpenSSL code.
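(For contrast with ITTC's dealerless approach, the classical dealer-based Shamir split and reconstruction fit in a few lines of Python; a toy sketch with illustrative parameters, where real systems would use far larger fields:)

    import secrets

    P = 2**127 - 1                        # prime field (toy parameter)

    def deal(secret, k, n):
        # Shamir: hide the secret in a random degree-(k-1) polynomial
        # and hand out n points; any k of them reconstruct it.  Note
        # that the dealer sees everything, the weakness ITTC avoids.
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret.
        total = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    shares = deal(secret=42, k=3, n=5)
    assert reconstruct(shares[:3]) == 42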
By relying on load balancing and multiple threads for parallelism, ITTC costs only a 17% drop in throughput and a 24% increase in latency when compared to non-ITTC RSA. Wu pointed out that this tolerable slowdown is insignificant considering the overhead of SSL session establishment. Wu concluded that ITTC improves security and reliability for servers and CAs, while the performance impact is minimal. An audience member asked about the security consequences of an intruder obtaining a core file from such an intrusion-tolerant web server. Wu happily replied that the intruder could only retrieve the result of a single decryption, not the private key of the web server. At no single point is the key entirely reconstructed. For more information, see http://www.stanford.edu/~dabo/ITTC/.

----

Brute Force Cryptanalysis on UNIX Passwords with SIMD Computer
Gershon Kedem and Yuriko Ishihara, Duke University

Gershon Kedem gave an overview of brute force cryptanalysis. He listed the primary barriers to brute force: expertise, non-recurring time and cost, access to tools and technology, replication costs, reusability, and performance. Based on work with a SIMD computer, he proposed a design of a SIMD computer dedicated to brute force cryptanalysis. Kedem spent a long time on a table comparing the experience, cost, and time necessary for brute force when using software, FPGAs, ASICs, and custom chips. See the paper for the table. Because the talk spent so much time summarizing past research, the audience mostly lost touch with the contributions of this paper. A Single Instruction Multiple Data (SIMD) machine is made of a large array of small processors. This configuration makes it possible to get close to the theoretical limits of the processor. PixelFlow is a SIMD machine made of many flow units. Each flow unit includes an array of 8192 processing elements. Using 147,456 PixelFlow SIMD processors, Kedem was able to perform brute force cryptanalysis of 40-bit RC4 (38,804,210 key checks/second) and the UNIX crypt scheme (24,576,000 UNIX password checks/second). PixelFlow could try all 40-bit key combinations for a particular RC4 ciphertext in about 7.87 hours. Because UNC Chapel Hill created the PixelFlow machine for image generation, it does have some limitations when used for cryptanalysis. It has few registers, no dedicated shift unit, no pipelining, and no memory indexing. The lack of memory indexing prevents fast table lookups. Had PixelFlow used memory indexing, Kedem explained, there would have been a 64X speedup for RC4 and a 32X speedup for DES (but the speedups from Kedem's talk are twice those of the figures in the paper). These limitations are specific to PixelFlow, not SIMD machines in general. Kedem then proposed a SIMD design for brute force cryptanalysis and compared this to FPGA-based machines. Adi Shamir's TWINKLE project was one buzzword mentioned in this talk. However, an audience participant pointed out that TWINKLE does not take into account that LEDs fade over time. For information related to brute force cryptanalysis, see http://theory.lcs.mit.edu/~rivest/bsa-final-report.ps or contact Kedem at Duke University.

----

Antigone: A Flexible Framework for Secure Group Communication
Patrick McDaniel, Atul Prakash, and Peter Honeyman, University of Michigan

Patrick McDaniel introduced Antigone, a flexible framework for defining and implementing secure group policies. To demonstrate the usefulness of Antigone, McDaniel integrated it into the vic secure video conferencing system.
Antigone is unique in that it focuses on the following goals: * Applications can flexibly use a wide range of security policies * The system supports a range of threat models * It is independent of specific security infrastructures * It does not depend on the availability of a specific transport protocol * The performance overheads are low Taking into account that security policies vary from application to application, Antigone provides a basic set of mechanisms to implement a range of security policies. First, a session rekeying policy allows sensitivity to certain membership changes including join, leave, process_failure, and member_eject. Second, an application message policy guarantees types of security such as confidentiality, integrity, group authenticity, and sender authenticity. A third policy specifies what kind of membership information other members can obtain. The fourth policy determines under what circumstances Antigone can recover from failures. One participant asked how to demonstrate that all policies are complete. McDaniel explained that his current work does not have a complete answer. Another participant asked for comparisons between Antigone and other software to implement security policies. McDaniel responded that Antigone currently does not take into account peer groups, voting, negotiating protocols, or ciphers. However, he sees no fundamental reason preventing Antigone from addressing these issues. In the future, McDaniel hopes to investigate adaptive and role-based policies, implement new mechanisms, benchmark Antigone, and integrate the software with real applications. For more information, see http://antigone.citi.umich.edu/ or email pdmcdan@eecs.umich.edu. Security Practicum Refereed Papers Track Session Chair: Wolfgang Ley, DFN-CERT ---- The Design of a Cryptographic Security Architecture Peter Gutmann, University of Auckland, New Zealand With longer hair than last year, Peter Gutmann gave a fast-paced talk on how to design a versatile, multi-platform cryptographic architecture. The implementation works on many platforms, ranging from 16-bit microcontrollers to supercomputers and ATMs. Most security toolkits specify an API, not an architecture. In contrast to the traditional outside-in approach, Gutmann's architecture takes a general cryptographic architecture, then wraps an interface around it. Gutmann built his architecture on two concepts: objects encapsulate the architecture's functionality, while a security kernel enforces a consistent security policy. Each object has tightly controlled I/O for security reasons. Objects are either action objects (e.g., encrypt, decrypt, sign) or container objects. The containers further decompose into three object types: data containers, key and certificate containers, and security attributes. Gutmann found that C did not work well for implementing this architecture. The implementation comes in several languages ranging from C/C++ to Perl and VisualBasic. Gutmann also wrote a formal specification and used the Assertion Definition Language (ADL) to verify his code. An object can be in one of two states, low or high. In the low state, one can perform all allowed operations on the object. In the high state, one can only perform a limited, safe subset of those operations. An audience member asked whether Gutmann's design prevented an object from selecting from more than two states (i.e., whether it was implemented as something like a single bit flag). In Gutmann's experience, two states are sufficient.
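As a rough illustration of the two-state rule, consider the following sketch (illustrative Python, not Gutmann's code; the operation names and the transition trigger are assumptions):

    # Sketch of the low/high object state model. In the low state all
    # allowed operations are available; in the high state only a limited,
    # safe subset is. The one-way transition is enforced by the kernel.
    LOW_OPS  = {"set_attribute", "generate_key", "load_key", "encrypt", "sign"}
    HIGH_OPS = {"encrypt", "sign"}  # the limited, safe subset

    class ActionObject:
        def __init__(self):
            self.state = "low"

        def perform(self, op):
            allowed = LOW_OPS if self.state == "low" else HIGH_OPS
            if op not in allowed:
                raise PermissionError(op + " not allowed in " + self.state + " state")
            if op in ("generate_key", "load_key"):
                self.state = "high"  # assumed trigger: keying the object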
The security kernel supports an infinite number of states, but expecting the user to manage them all is very complex (they would have to track a complex FSM as an object moves from one state to another) and so far there has not been any real need to use more than two. Questioned about flexibility of message passing, Gutmann replied that his architecture supplies a fixed set of standard messages. Another participant felt that the cryptography seemed incidental to the talk. Finally, someone asked about performance issues with message passing through the security kernel. Gutmann found surprisingly little overhead from the message passing. He believes that the extreme multi-threading mitigated such performance hits. Everything is implemented in Gutmann's cryptlib library, which lives at http://www.cs.auckland.ac.nz/~pgut001/cryptlib/. ---- Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0 Alma Whitten, Carnegie Mellon University; J. D. Tygar, University of California at Berkeley Giving a breath of fresh air to the USENIX community, Alma Whitten raised serious issues about deficiencies in cryptographic user interfaces. The audience asked lots of questions and received the talk extremely well. After explaining user interfaces (UI), she spoke about the critical results of a usability evaluation of PGP 5.0. Recognizing that security is fundamentally different from traditional software, Whitten defined usability for security as: * A user can tell what needs to be done * A user can figure out how to do it * A user does not make dangerous errors * A user does not get annoyed and give up Whitten noted that applications such as word processors would focus on the second point. But one cannot assume the second point in security. For instance, in security the undo function cannot reverse the accidental mailing of plaintext. Whitten explained that "security is not a word processor" because of: * Unmotivated users (security is a secondary goal) * The barn door problem (the undo problem) * The abstraction problem * The software is only as strong as the weakest link * Lack of feedback Whitten chose PGP 5.0 with the Eudora plugin on a Macintosh for a case study because the interface was reasonably well designed by general consumer software standards. Her team used two methods to evaluate PGP: laboratory testing (objective, limited, and expensive) and cognitive walkthroughs (subjective, comprehensive, and cheap). The cognitive walkthrough showed that visual metaphors needed more thought. For instance, the quill head can confuse users. One user mixed up a .sig file with signatures. There is also information overload with too much visual data such as displaying the whole key ring and metadata. With respect to barn door and feedback problems, we need more protection against irreversible errors such as sending plaintext. In the laboratory test, users had 90 minutes to perform a series of actions. Users worked in the scenario of a political campaign. The user had to securely coordinate five virtual campaign members. The 12 participants were of a variety of ages from 20-49 years, education from some college to a doctoral degree, and backgrounds from fine arts to computer technology. Half of the participants were male. Everyone pretty much could generate key pairs (after a few users misunderstood by generating keys for each campaign member). 25% of the users managed to send plaintext instead of ciphertext. 2 of these 3 people realized the mistake. 
Several users encrypted the message with their own public key instead of the recipient's key. Nearly everyone fell into this trap. Eventually, after receiving error messages from the virtual campaign members, the participants were able to send encrypted mail correctly. Half of the participants eventually encrypted a message; 25% did it without much help. By the end of the tests, only 5 people had gotten far enough to try decrypting a message. Most of those could decrypt. However, some mixed up ASCII-armoured keys with PGP messages, since both look the same. The study concluded that PGP 5.0 with Eudora has a nice UI, but competent people in the target group could not handle it. Whitten suggests that to fix these deficiencies, one should simplify the UI, minimize what is less important, and add the right kind of help. A participant suggested that maybe the problem is the PGP model. Can we get a reasonable interface with PGP? Whitten responded that her group looked at where the PGP model did not match users' expectations. Avi Rubin asked how much documentation and training the test subjects received. Whitten gave each subject a printed, bound copy of the PGP and Eudora manuals in addition to a quick tutorial on how to send email with Eudora. In other words, the test subjects had more resources than most users. Moreover, the test subjects read the manuals. Another audience member asked how many users did an out-of-band check to see if the encrypted messages worked. These details are in the paper, but Whitten noted that one technical person noticed the keys were signed and understood that trust in a key had to do with the signatures on it. The majority of users did not understand the trust metric. Derek Atkins humorously defended himself by saying he designed much of the core API, but not the UI, for PGP. He asked about problems relating to confusion between various key types. Whitten said that one virtual campaign member had an RSA key and would complain to the test subjects about problems decrypting email. Only one subject figured out that the email had to be encrypted once with RSA and once with DSA. Greg Rose added his anecdote that USENIX used to run a heavily scripted keysigning. It worked well, but about 2/3 of messages would require human interaction. Another participant asked about the importance of interruptions (e.g., dialog box warnings) to the user about security. Whitten explained that there was no time to look at user tolerance levels. Ideally one would make the UI sufficiently obvious to prevent such dangers in the first place. Questioned about how much of the problem resulted from poor interaction with the Eudora plugin versus the core PGP interface, Whitten explained that it is hard to divide. The UI needs a display to make it obvious what is encrypted and what is not. For further information, email alma@cs.cmu.edu. Good quotation: "Security is not a word processor" - Whitten ---- Jonah: Experience Implementing PKIX Reference Freeware Mary Ellen Zurko, John Wray, Iris Associates; Ian Morrison, IBM; Mike Shanzer, Iris Associates; Mike Crane, IBM; Pat Booth, Lotus; Ellen McDermott, IBM; Warren Macek, Iris Associates; Ann Graham, Jim Wade, Tom Sandlin, IBM John Wray described the reference implementation of the Internet Engineering Task Force's (IETF) Public Key Infrastructure (PKIX). Wray explained the motivation behind the project: IBM needed a pervasive, open PKI for business; major vendors had pledged support for PKIX; and there was a need for reference code to exercise the PKIX specifications.
The team implemented the PKIX specifications, which consist of several RFCs covering X.509, Certificate Management Protocols (CMP), Certificate Request Message Format (CRMF), and LDAP V2, as a C++ class library. Wray highlighted what the team learned from this experience. What did they do right? They used: * A single data structure for protocols and persistent data * C++ destructors to minimize memory leaks * A single backend, good for promoting code reuse * An unfamiliar platform (NT) for development However, Wray also noted what they did wrong: * No proper error architecture (but Wray has never seen a good one) * Interoperability testing came too late * Sloppiness with respect to case sensitivity: NT is case insensitive, but UNIX is case sensitive * Standard Template Library (STL) problems Asked how easy it is to use the CDSA architecture, Wray replied that indeed CDSA 1.2 presented difficulties. Questioned about certificate revocation, Wray replied that the implementation publishes certificate revocation lists. For more information, see http://web.mit.edu/pfl/ or read the archives of imc-pfl@imc.org on http://www.imc.org/imc-pfl/. ______________________________________________________________________ Selected Areas in Cryptography (SAC '99) Kingston, Ontario, Canada August 9-10, 1999 by Serge Mister (Entrust Technologies) ______________________________________________________________________ The Sixth Annual Workshop on Selected Areas in Cryptography (SAC '99) was held at Queen's University in Kingston, Ontario, Canada. 52 people from 11 countries attended the workshop. After opening remarks, the first session, "Cryptosystems and Pseudorandom Number Generators", began with a presentation by Helena Handschuh on a "Universal Encryption Standard", an idea she developed with Serge Vaudenay. The idea of the research was to design a symmetric cipher which can be set up to be compatible with DES and triple DES, but which also supports the 128-bit block size and the key lengths of 128, 192, and 256 bits required by the Advanced Encryption Standard (AES) criteria. The cipher design uses a pair of triple DES chains, with some bits of the output of each DES block swapped with those of the other chain. Pre- and post-whitening is used to defend against collision attacks. The key schedule is the same as that used in DEAL. John Kelsey presented the second paper, titled "Yarrow-160: Notes on the Design and Analysis of the Yarrow Cryptographic Pseudorandom Number Generator" and co-authored with Bruce Schneier and Niels Ferguson. The talk began with a review of how pseudorandom number generators (PRNGs) are often compromised: - the sources of random data are not as nondeterministic as the designer assumed, - the PRNG is not re-seeded often enough, - when the PRNG is re-seeded, not enough new entropy is provided. Yarrow addresses the first issue by allowing multiple sources of entropy and designing the system to ensure that even if one of the sources of entropy were to fail, the system would remain secure. Yarrow also makes use of two entropy pools: a fast entropy pool and a slow entropy pool. Readings from each entropy source alternate between the two pools. The PRNG is re-seeded whenever either the fast entropy pool contains entropy from a single source of at least 100 bits, or the slow entropy pool contains entropy from two sources of at least 160 bits each.
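The reseed test itself is simple enough to state in a few lines of Python (a sketch of the rule just described; the data structures and names are illustrative, not from the paper):

    # Yarrow-style two-pool reseed test. Each pool tracks an estimate of
    # the entropy (in bits) collected from each source.
    FAST_THRESHOLD = 100   # bits, from any single source
    SLOW_THRESHOLD = 160   # bits, from each of at least two sources

    def should_reseed(fast_pool, slow_pool):
        # fast_pool, slow_pool: dicts mapping source name -> entropy bits
        if any(bits >= FAST_THRESHOLD for bits in fast_pool.values()):
            return True
        strong = [s for s, bits in slow_pool.items() if bits >= SLOW_THRESHOLD]
        return len(strong) >= 2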
The idea here is that the fast pool provides frequent re-seeding to reduce the amount of time for which the PRNG sequence can be deduced after a compromise, while the slow pool tries to ensure that complete knowledge of the PRNG state cannot be maintained indefinitely. Three-key triple DES is used to generate output bits. The last talk of the session was given by Guang Gong on the paper "Elliptic Curve Pseudorandom Sequence Generators", joint work with Thomas Berson and Douglas Stinson. Many PRNGs are composed of one or more linear feedback shift registers, combined in such a way as to destroy the linearity of the system. These and all other periodic binary sequences can always be considered as sequences arising from applying a trace function to a Reed-Solomon codeword. In this paper, the authors suggest replacing the Reed-Solomon code with a sequence generated from an elliptic curve. More precisely, a supersingular curve over GF(2^n) is chosen at random, a point P of maximum order is chosen at random on the curve, and the output sequence is derived from the sequence of point multiples iP, with i running from 0. ______________________________________________________________________ New Security Paradigms Workshop (NSPW '99) by Mary Ellen Zurko ______________________________________________________________________ The first session, Protection, was chaired by Bob Blakley. The first paper was "Secure Dynamic Adaptive Traffic Masking" by Brenda Timmerman, California State University, Northridge. She dedicated her presentation to the memory of Jon Postel. He felt the Internet was in danger and wanted to protect it. He went against doctors' orders to avoid antagonistic situations. And he gave her a lot of encouragement. Some Internet users need protection from traffic analysis, including government agencies and industrial and financial institutions. Traffic Flow Confidentiality (TFC) provides this protection. It can protect the length, source, and destination of a message (though her work does not yet cover source and destination confidentiality). But TFC can consume network resources. The use of padding and delays can impact performance. Adaptable approaches can reduce the overhead. She used the classic "pizza to the Pentagon" example. You need a model of what you want to hide. Adaptable transport layer traffic masking may show some of the higher peaks, but can save packets. She quoted Jon Postel as saying, "if you're not getting the system administrator mad, you're not really doing research." With Secure Dynamic Adaptive Traffic Masking (S-DATM), if you know what your peaks will look like, you can program intermittent peaks. She implemented a prototype using peer modules on the sender and receiver. The security model formally represents the system security policy. It includes ranges of acceptable behavior and rules for dynamic adjustments, using statistical techniques. It can provide precision, reduce processing and storage, address statistical anomalies, and satisfy security policy with critical values. It is scalable to internetworks. The matrix math assumes all nodes are fully connected and have global knowledge (but there are other approaches). You track past behavior (what you want the traffic to look like) and current behavior. Throughput, inter-arrival delay, and burst size are the characteristics studied. Her current assumption is that each message has the same size. An attendee observed that the technique might also be used to lead the adversary to an incorrect conclusion. During questioning Brenda emphasized the need for enough phony peaks to disguise the real ones. There was also consideration of whether the technique was powerful enough to disguise second or third order statistics.
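To make the masking idea concrete, here is a sketch of a fixed-size, rate-shaped sender (illustrative Python only, not S-DATM itself; the channel interface and constants are assumptions):

    # Shape outgoing traffic toward a target profile: every message is
    # padded to one size, and cover traffic is emitted when there is no
    # real data, so an observer sees the profile rather than the payload.
    import time

    CELL = 512           # fixed message size (the talk assumes equal sizes)
    TARGET_RATE = 10.0   # messages per second the observer is allowed to see

    def masked_send(channel, queue):
        interval = 1.0 / TARGET_RATE
        while True:
            msg = queue.pop(0) if queue else b""    # real data, or cover
            channel.send(msg.ljust(CELL, b"\x00"))  # equal-size messages
            time.sleep(interval)                    # fixed emission schedule

An adaptive scheme such as S-DATM would instead vary the target profile within policy-approved bounds, including programming phony peaks, to cut the padding and delay overhead that a fixed profile incurs.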
She compared the performance, efficiency, and protection of three approaches: nonadaptive random, adaptive random, and S-DATM. The performance is the average delay per message. The efficiency is the amount of padding. The adaptive random approach gave the best performance and the same efficiency as S-DATM. Protection is given by correlation coefficients; below .3, the correlation is fuzzy. Adaptive random had a correlation of .7; S-DATM had one of .028. She tested with real mail traffic patterns collected from ISI (a 20 minute window). S-DATM provides improved performance over nonadaptive random and is tunable with no loss of protection. There was some discussion of using information theory as a basis for such techniques. Brenda had tried but couldn't make it work. She is having students act as protectors and the enemy, and will look at the impact on performance. The second paper was "Security Architecture-Based System Design" by Edward A. Schneider, Institute for Defense Analyses. The Defense Goal Security Architecture (DGSA) is a little known paradigm. It can be applied to system design. Based on a threat space, it produces a protection profile and helps with the selection of mechanisms. It is applicable beyond defense. It deals with a mix of independent concerns from different information domains, which partition the information space such that each domain has homogeneous protection requirements. The domains include security associations, policy specification/enforcement separations, and specific strengths of mechanisms. You don't have a piece of information in multiple domains. Transfers between domains are limited by policy. There is a policy for each user that is consistent throughout an entire domain. Domains are similar to, but different from, protection domains. Changes don't affect polyinstantiated copies between domains. A domain is a set of information objects, users, and policy. You make the partitioning fine enough to cover all aspects that need to be different. There was much discussion about how doing that might impact functionality. An attendee said that the policy for releasability may be independent of classification. It would need to be replicated for each domain. The Secure Management Information Base represents the policy. They have implementation experience with such concepts in Tmach and virtual machines. But maybe you don't want a direct implementation. There is a multi-dimensional information space. The object dimension lies in the intersection of the information domain, the network location, and the application/manager. The application often has to interpret policy, and it defines the threat space. An attendee pointed out that if you need meta-policies for applications that span domains, then you have a covert channel problem. Thinking this way has you determine what pieces of information you are trying to protect, what type of protection you need, and what partitions or walls are needed. First you need to take an enterprise view of how to protect information, then think through its distribution. The information (type), application, and end system define the information space, and define which threats we're worried about. The threat model might take into consideration location in the network, application, risk tolerance, policy, and importance of the information. The protection profile in the Common Criteria is a statement of requirements. Organizational policy and threats lead to assumptions/objectives, which lead to requirements.
The Common Criteria is oriented towards components; there are composability issues. You can use the multi-dimensional space to move from system to component requirements. When you have policies that don't compose, you need to decide whether or not to share certain information. Session 2 on Availability was chaired by Anil B. Somayaji. The first paper in that session was "Survivability - A New Technical and Business Perspective on Security" by Howard F. Lipson and David A. Fisher, CERT Coordination Center, Software Engineering Institute. David presented. In the technical and business context for security, confidentiality is seen as most important. There are closed, physically isolated systems with a well defined periphery and central control and administration. Complete knowledge and visibility of the system is assumed, and insiders are trusted. Systems run dependable software (correct, reliable, trusted) and there is a bounded value of potential loss. In the technical and business context for survivability, business risks are most important. There are open, highly distributed systems with an unknown or ill defined periphery under distributed control and administration. There is incomplete and imprecise knowledge of the system, with visibility limited to the local neighborhood. There are untrusted and unknown insiders and unknown software components (COTS, Java applets, CGIs, etc.), and business competitiveness and survival are at risk. Survivability is the ability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents. The security premise states that it is always possible to increase the cost of compromise beyond its value to an intruder. It includes the notion of binary success and focuses on the confidentiality of information. Integrity and accessibility are often also of concern, but typically viewed as technical issues. The criterion for the use of anything is industry standard practice. An attendee pointed out that in Europe, dependability spans correlated and uncorrelated events. The survivability premise is that individual components of a system cannot be made immune to all attacks, accidents, and design errors. There are degrees of success. It focuses on mission specific risks. Mission objectives often include information accessibility, integrity, or confidentiality. Mission objectives/business risks are primary. The criterion for the use of anything is mission specific risk management. An attendee pointed out that the two premises are not comparable, as phrased. Another suggested the comparison would be "you can always do more" vs. "you can never do enough". The problem space is unbounded systems, with an absence of central control (no one has administrative control), an absence of global visibility (no one knows the extent or participants), with mission objectives/a purpose of the system (critical functionality and essential quality attributes) and emergent properties (global system properties, often not present in individual components). Examples include the Internet and national infrastructures like the electric power generation and distribution grid, financial systems, transportation systems, oil and gas distribution, social, economic and biological systems, neural nets, and (non open source) COTS software. A security/survivability architecture supports critical mission requirements, functionality, and non-functional global properties.
They look for emergent algorithms, such as distributed computations that fulfill mission requirements by exploiting the characteristics of unbounded systems. Emergent algorithms produce emergent properties that are self-stabilizing. There is cooperation without coordination; all parts contribute wherever they are needed. No individual part is essential. EASEL is an automated tool they are working on for simulating unbounded systems. The next paper was "Optimistic Security: A New Access Control Paradigm" by Dean Povey, Queensland University of Technology, Australia. He took the zen approach of doing slides by hand. He claimed it was liberating not to use PowerPoint. Dean started with a scenario: there is a hospital, it is a dark and stormy night, and the only access, by bridge, is down. The night staff need access to information they don't have normal access to. Traditional security disallows it, and the patient dies. Having the system aware of changing circumstances is virtually impossible. There is a gap between what organizations need and what access control mechanisms can enforce. Systems don't understand context; they can't directly use a policy based on things outside the system. There are two sets of actions: legitimate actions and dangerous actions. The overlap between them is questionable actions. It is sometimes legitimate to read another user's mail, but more often than not, it isn't. With traditional security we assume unknown actions are dangerous. With optimistic security we assume unknown actions are legitimate. An attendee asked if this was the same as audit. Dean replied that they want to be optimistic but still have some constraints. It's a fit-for-purpose thing. You can explicitly state what actions are questionable. There are requirements on these actions. They must be accountable, auditable, and recoverable. A compensating transaction recovers the system. An attendee pointed out that you can prohibit things that you haven't analyzed, to avoid complication. Another mentioned that in some systems, once you're inside the firewall, it's wide open, to get the job done. This can tighten things up in that situation. But you need audit in that case. The formal model uses well-formed transactions from Clark & Wilson. There are certification rules, enforcement rules, and partially formed transactions. He dropped the notion of CDIs (data with known properties), since he needed a way to verify that the data is valid. An attendee pointed out that the risk analysis and thresholds are important. This technique assumes a large population of benign cooperating users. There was much discussion about whether this could be applied to hostile automata, such as content filtering, sandboxing 'somewhat trusted' code, and applying intrusion detection strategies to the transition information. He concluded with: systems can't always enforce security policies; optimistic security provides a means to allow questionable actions while preserving integrity; the model can be related to Clark & Wilson well-formed transactions; and it has useful applications. The final session of the first day was a discussion, chaired by Marv Schaefer. It was based on the paper "Strike Back: Offensive Actions in Information Warfare" by Donald J. Welch, Nathan Buchheit, and Anthony Ruocco, United States Military Academy, West Point. Don presented. He pointed out that they were speaking as private citizens and do not represent the US Army, DoD, or government. They are not advocating illegal actions.
To the best of their knowledge these ideas do not represent current policy. They state that the policy should be reexamined. Information war has national security implications. The information infrastructure is a vital national resource. The US has fought to protect vital resources before (War of 1812, Boxer Rebellion, Persian Gulf War). The information infrastructure is owned by private corporations, many of which are multi-national. Who is responsible for defending the national information infrastructure? They consider cyberspace the 4th dimension we can fight in. Next they asked, what is information war? The DoD says it is information operations during war. Denning says it involves every high schooler with a modem. Schwartau categorizes it as criminal activity and up. The authors state that we are presently fighting many information wars at many levels. An attendee asked if these should be classed as acts of war. Don answered yes. Another attendee drew the analogy to a war on a disease like ebola, or the war on drugs, indicating the same strategy could not work effectively in those arenas. Another pointed out that an actual war declared by Congress authorizes us to do several things, like use the military. Don pointed out that we've fought a lot since the last time we actually declared war. Someone asked what a treaty to end the war would look like. Don maintains that if the information war is a war and we treat it as a criminal action, we will lose. We cannot fight an information war today legally except in very limited circumstances. Someone asked if we need to win or if a truce is good enough. Don said that Saddam Hussein is still in power, George Bush is out. He hopes we never have to fight a total war like WWII. Someone asked whom we would attack. We try to identify the proper adversary and take steps to neutralize them. An attendee pointed out we need to worry about automated agents. History shows that defensive wars are not winnable (the French at the beginning of WWII, the Vietnam War). The adversaries found the center of gravity and won. In defending through offense, we identify our adversaries, identify our adversaries' center of gravity, and strike their attack capabilities (example: the Libya air strike). Someone asked: if we authorize formerly unauthorized things, someone might feel threatened, so would we be a less or more likely target of an information war? Don said that there is a similar situation in that some feel there is already an open season on Arab groups. Someone asked if US citizens are enemies. They could be, though that is not necessarily a good thing. The Melissa virus is a smart munition. Who is the enemy who created it? Someone replied, Microsoft. Don then presented an adversary taxonomy. There are individuals (crackers, criminals), non-governmental organizations (organized crime, terrorists, corporations), and governmental organizations (friendly governments, enemy governments). Someone asked if we should commit military forces against the Chaos Computer Club. Don suggests measured force. It was pointed out that they tend to be members of friendly nations, so that would be highly provocative. Don said that if we took offensive action, we could stop them before they get the autonomous agents out. If we had penetrated their systems and had some idea of what they were doing, we could tell they were going to launch a massive virus. It could become a huge resource drain, but what's the alternative? Someone pointed out that Pakistani attacks could be made to look as if they came from India, to turn us on them.
Don then went on to discuss the neglected principles of war - offensive, maneuver, security, surprise, and economy of force. Offensive - seize, retain, and exploit the initiative. Maneuver - place the enemy in a position of disadvantage through the flexible application of combat power. Security - never permit the enemy to acquire an unexpected advantage. Surprise - strike the enemy at a time or place, and in a manner, for which he is unprepared. Economy of force - employ all combat power available in the most effective way possible; allocate minimum essential combat power to secondary efforts. Participants suggested that we might better use these resources to close our vulnerabilities, and that, as with biological warfare, governments might pledge not to engage in information warfare. Don said that a decentralized way to fight such a war is needed. One attendee asked if we have the right to bear a laptop. The conclusions of the presentation were: the best outcome of a defensive war is a stalemate; current policy and law do not support offensive defense; policy and law must change to win information wars; and national security is at stake. Discussion at the end of the talk touched on the issue that accountability is a classic necessity in security systems, while its lack is a problem in the government. One of the side effects Don sees is that to be effective the military has to know what's going on in all systems; we'd have to give up some privacy. Someone suggested that we should instead create a law saying that everyone has to have decent security in their systems; another that we should legislate that critical infrastructures can't rely on computers. The first session Thursday morning, Caveat Emptor, was chaired by Cristina Serban. The first paper that day was "Security Architecture Development and its Role in the Seat Management and Total Cost of Ownership Environments" by Ronda R. Henning, Harris Corporation. As best she can tell right now, security doesn't fit into the title issues. Total Cost of Ownership (TCO) asks: how much money does a user cost? This includes desktop hardware, software, network support, help desk, and people. Seat Management (SM) is about how to keep the cost per seat low. Techniques include running on a single platform, standard software, "rightsizing", and reengineering the infrastructure for efficiency. As you drop cost by standardizing, you decrease the probability of survivability enhancement. Diversity goes down, there are fewer targets, and it takes less intelligence to attack. Someone pointed out that if devices were more self-managing and self-configuring, diversity would not be such a drain. What's missing from this picture? Security management (virus scanning, firewalls, web monitoring). It's an unspecified component, out of scope, with no mapping of policy to architecture and mechanisms. Should you also cost out security? What it's not is a one-time benchmark, quantitative, or an "outsourced" infrastructure. What it may be is a tradeoff between spending more money and having more staff, the ability to use my network, and the consolidation of services. How much is that security policy in the window? They looked at having a service level agreement (SLA), with contractual accounting for "quality of security". There could be gradients of service - assurance, frequency, responsiveness. Something like gold, silver, or platinum practices that map back to an organization's policy. Policy begets requirements, which answer the question: how much security can you stand?
There are tradeoffs between cost, vulnerability, and countermeasures. They decided to experiment and derive a trial SLA in 13 areas such as documentation, testing, and recovery. Vendor review produced nothing substantive, which meant the bar was too low, or they didn't understand it (yet). The customer wanted "measurable stuff" like minutes, frequency, and numbers. In discussion it was noted that everything here was process; there is a feeling that good process will produce good product, but no evidence that this happens. Certainly bad process gives bad product. It could be a CYA thing from the point of view of a bureaucracy. These things contribute to security, but they aren't security. It's adversarial; war isn't a science either. Ronda responded that the timeliness of security incident reporting has potential, and that they're off trying to find other numerical measures. The next steps are trying to refine this to numbers and do a trial pilot. Someone pointed out as an analogy that health care doesn't guarantee life, just a suite of treatments. This could be managed care for security. SLAs could include intrusion detection, incident response, tracking down the interlopers, some number of hours of service, and some number of scans per year for clients like E-trade and Yahoo. Small companies need insurance. The next paper was a "Cursory Examination of Market Forces Driving the Use of Protection Profiles" by Kenneth G. Olthoff, from Maryland, USA (he refrained from officially giving his work affiliation, as his employers did not want to be associated with this work in any way :). Ken started off with a rousing preacher-like call to consider security, using the Orange Book instead of the Good Book, and calling "Can I get an A1?" (instead of an Amen). There was also an injunction to beware the Gates and Windows of Hell. Getting down to the detailed content of his position, he restated the assumption behind protection profiles: if the user writes a profile for what he wants, a vendor will build the device with those security properties. Market size dictates vendor eagerness to respond. As an example, he said that Marv doesn't represent a large market, and John has a slightly larger market. Is John's profile close enough for Marv? What portions of the environment are left uncovered? Everyone is taking some amount of grief to get something. Someone suggested it was demand side economics. The vendor gets a few standards, and the testers like standardization. Espousers of the Common Criteria think it will happen differently. Several other aspects of commercial models then came up, such as that a steady revenue that grows a company can produce COTS, or that a vendor tries to meet several markets and the customer evaluates. Someone pointed out that the market and DoD have gotten behind NT and Solaris, which would be hard to overcome. If NT is not evaluated to a profile, who wins? Ken is questioning the existing paradigm of Common Criteria (CC) economics: "if you profile it, they will build it". Attendees pointed out that many users don't know their security needs, that they may say what they want as opposed to what they need, and that there is an opportunity for open source. One person had written a protection profile. They looked through the CC and asked how each bullet affected them. This was helpful, as they hadn't even thought about traffic analysis, for example. Then a lot of time was wasted writing it up in CC-eze (though the writing in English was useful). None of the vendors want to read the CC-eze.
Someone commented that it's a way to facilitate customers and vendors getting together. Why won't the market just work out using the useful bits? You can always try to create demand by, say, releasing an appropriate virus. The Orange Book was the result of 3 years of criticism and 10 years of work. Protection profile turnaround time is 1 year. There's a question of whether it's a meaningful process. The CC decouples mechanism from assurance. European evaluations were cheaper and faster than those for the Orange Book. There was an economic motivation. Some people were not in a high threat environment and wanted to spend less on evaluations. The ISO 9000 drivers are the required compliance of large vendors, who in turn need compliance from their suppliers. The victims must enforce. There's also a European government mandate; it is needed to sell to the government. The DoD tried to do something similar in the US, and it didn't work. It violated its own rule: only Multics was B2, so the rule was determined to be non-competitive, and the first procurement was deemed illegal by the authorities. Companies may find a large market group to write/adopt a profile that matches their products. What's the useful half-life of a profile? Is it really that much different from vendor declarations? The next session, Authorization, was chaired by Cathy Meadows. The first paper in that session was "Paradigm Shifts in Protocol Analysis" by Susan Pancho, University of Cambridge. Her subtitle was "Needham and Schroeder again?". The Needham-Schroeder (N&S) protocols were published in 1978 and are used as an example in many discussions. There are wide variations in the conclusions of analyses. Analysts have found weaknesses not detected earlier. Why were there gaps earlier? So her main question is: why are there wide variations in the results of different analyses of the Needham-Schroeder protocols? Maybe we have better, newer tools now? In today's situation, authentication protocols are believed to be error-prone. What if it's a gap in the analysis? Her conclusion is that differences in modeling the protocol led to differences in analyses. They led to different "flaws" or claims of attacks. There were new weaknesses from different interpretations and environments. The question is whether we can find all flaws in all protocols if we build better tools. We interpret protocols to produce a model. We may forget the initial goals and assumptions. For example, Denning and Sacco state "If communication keys and private keys are never compromised (as Needham and Schroeder assume), the protocol is secure (i.e. can be used to establish a secure channel)." They go on to say "We will show that the protocol is not secure when communication keys are compromised, and propose a solution using timestamps." In Lowe's New Attack, he states "There is a well known attack upon the full protocol [DS81]... The attack we consider in this paper is newer, and more subtle ... we show that it fails to ensure authentication... (impersonation by participant)." A sends to I; I pretends to be A to B. Someone joked that this could be a feature - delegation. If you just wanted to find out if A is alive, this is not considered an attack. In Lowe's model, an intruder can be a protocol participant. What if someone in the protocol misbehaves? The N&S model was trusted participants on a wicked network. Lowe adopted a new computer security paradigm: we don't trust each other anymore. N&S explicitly state that they "provide authentication services to principals that choose to communicate securely".
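For readers who have not seen it, Lowe's attack interleaves two runs of the public-key protocol; in standard notation ({X}_pk(P) is X encrypted under P's public key, and I(A) is the intruder I posing as A):

    1. A -> I    : {Na, A}_pk(I)    A starts an honest run with I
    2. I(A) -> B : {Na, A}_pk(B)    I re-encrypts A's message for B
    3. B -> I(A) : {Na, Nb}_pk(A)   B replies to the apparent A
    4. I -> A    : {Na, Nb}_pk(A)   I forwards B's reply unchanged
    5. A -> I    : {Nb}_pk(I)       A decrypts it and completes its run
    6. I(A) -> B : {Nb}_pk(B)       I completes B's run; B now believes
                                    it is sharing Na and Nb with A

The attack requires I to be a legitimate participant that A is willing to talk to - exactly the modeling difference Pancho highlights.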
Someone noted that we need to look at the original sources to see if it's an attack on the assumptions or on the protocol. It's like stating p implies q when you've got not p. Someone else stated that Newton said "I don't do hypotheses", except in Latin, so it sounded cooler. How do we capture the intent of the developer, to categorize misuse or abuse rather than failure? An example is the attack on smart cards using the oven and the microwave. Stating the limits indicates forms of attack. Stating the warranty gives a toehold to attackers. Someone related a tale where a tiger team used fire safety to defeat security; they pushed the system outside its normal mode of operation. Pushing the threshold gives you the attack. For example, retransmissions can give known plaintext. Susan went on to consider Meadows's analysis. It allowed an attacker to be a protocol participant (like Lowe's assumption), but it also considered type confusion (unlike Lowe and others). It was again a modification of the protocol environment, and a new attack was discovered. A nonce should not be revealed in the run. A treated a nonce as a name, and sent it in the clear. She relaxed the assumption of how smart the protocol implementer is. It was hard to tell the difference between a message with a nonce and a name and one with two nonces. In summary, the differences in modeling the protocol could reflect paradigm shifts in the security community. Concluding discussion noted that one can expect vulnerabilities in any protocol used outside its original environment. One question is, can you find all your assumptions? That's the difficult part. Vulnerabilities change in magnitude as experience changes. Have you found a class of attacks or found a flaw in the protocol? Many protocols rely on unprovable assumptions. We have the deductive system, but aren't sure the axioms are valid. The next paper was "Secure Group Management in Large Distributed Systems" by J. Bret Michael, Naval Postgraduate School, and John McHugh, CERT Coordination Center. Bret and John co-presented. They asked: what is a group, and what does it do? A "Darpian" view of the environment of the future for secure group management includes large groups (in the 100,000s) whose memberships both change continuously and morph rapidly, many-to-many relationships regarding communication among members of groups, and reliance, in part, on cryptography for authentication and access control. Which new security paradigm should be explored? The authors suggest: probably none. Are we (and DARPA) searching for a technology-based solution to a problem of a social nature? They could not think of any examples from today's communication structure. Examples of few-to-many and many-to-few may be composed. They are looking for examples where all messages are delivered to all participants, it is equally likely that any member sends, information should be retained in the group, and there are immediate changes with membership changes. They know of no examples where secrets are kept by thousands of members of a group. Automata, yes, but not people. Group membership does not change as rapidly and continuously as imagined. There are dynamic coalitions. A DARPA white paper suggests large dynamic group management as an area of research. Retention of confidentiality to the current group is required as well. A suggested example was an auction, but the auctioneer responds to the many. In the minefield with intelligent agents, mines broadcast status. The mines are a centralized few.
Someone pointed out that in publish/subscribe systems where participants are interested in messages matching a pattern, there is very dynamic membership. What is the risk of someone getting a message if they're already cleared to self-subscribe to anything? Someone else suggested troops with intelligent weapons, full sensors sending information: full, perfect information. In the military, information flow is based on the chain of command. Information is disseminated few-to-few. The sensors could know who your friends are and display them. In that case you can forget about confidentiality; you want to ensure availability. The hard problem is figuring out the policy for a dynamic coalition. In this case, you only want to know about local events. If you fall into enemy hands, you want to be removed from the group. It could also be useful for anarchists. Distributed network topology applications break the problem down into few-to-many or many-to-few. Routing is done by neighbors. In IP multicast there is dynamic binding, membership is not persistent, there is authentication of the hardware address and group address, and access/membership lists are maintained at routers. In ATM, each switch has a list of peer groups and peer group members. Group management by key change may be forcing a solution. In the maneuver warfare doctrine, you want to move faster and better, not more secretly, but you still try to minimize the information the enemy gains. There can be a trade-off with getting information to your own forces, and they can be overwhelmed with information. Decisions are made in crisis using partial information; it's most important to know the accuracy of what you know. Maybe they need local aggregators. Knowing everything all the time may just boggle the mind. You could optimize the protocol for local information in the many-to-many case. What would you do with these semantics if you had them? The next session, Policy, was chaired by Erland Jonsson. The first paper of that session was "SASI Enforcement of Security Policies: A Retrospective" by Ulfar Erlingsson and Fred B. Schneider, Cornell University. Ulfar presented. Reference monitors (RMs) monitor execution to prevent bad behavior. They are able to enforce many interesting policies. Other mechanisms help to implement RMs. They can capture policy-relevant events and protect the RM from subversion. They range from kernel-supported to interpreter to modified application. SASI stands for Security Automata SFI (Software Fault Isolation) Implementation. It implements RMs by program modification, generalizing SFI. SFI guarantees hardware-like memory protection. Several issues are raised. Does the application behave the same (not if it is turned into the stop program)? You can only input programs generated with a high-level language, so that the changes are invisible. You can also write one on purpose that behaves differently. An attendee pointed out you want the malicious ones to behave differently. Ulfar says you want to truncate their behavior. Another issue is: can the application subvert the inserted RM? There are advantages to SASI enforcement. The kernel is unaware of the security enforcement. There is no enforcement overhead from context switches. Enforcement overhead is determined by the policy. Each application can have a customized policy. It can enforce policies on application abstractions (e.g., restrict MSWord macros and documents). An attendee pointed out there is still a composition problem with nested objects, RMs, and policies.
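A sketch of the kind of application-specific policy such an inlined RM can enforce, using the "no network sends after a file system read" example that comes up below (illustrative Python; SASI actually inlines these checks into the program as machine-level code):

    # A two-state checker for "no network send after a file system read".
    # The inlined RM calls step() before each security-relevant operation
    # and halts the program on a violation, as SASI does.
    class NoSendAfterRead:
        def __init__(self):
            self.read_seen = False

        def step(self, event):
            if event == "file_read":
                self.read_seen = True
            elif event == "net_send" and self.read_seen:
                raise SystemExit("policy violation: send after read")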
SASI policies are security automata, a notation for specifying security policies that can specify any Execution Monitoring (EM) enforceable policy. They are easy to write, analyze, emulate, and compile. The language is a textual version of the diagrams. SASI halts a program when it is about to violate the policy. Liveness and information flow properties are not covered. It can enforce any safety property, which is what we've been doing with RMs in the past. After modifying the application, SASI checks its policy before every machine instruction. It can enforce traditional RM policies, restrict access to system abstractions (the filesystem), enforce memory protection within an address space (SFI), disallow division by zero, and enforce application-specific policies like "no network sends after a file system read", "don't allow too many open Javascript windows", and "MSWord macros may only modify their own document". There was a question about who would write these policies. Someone pointed out that checking for division by zero could be more expensive this way if you otherwise have a hardware trap. Your policy also has to prevent circumvention. Some noted that colluding objects might be able to remove each others' policies. Checking at every machine instruction is slow, and often there is no need to check at an instruction. For example, if you have a policy that forbids divide-by-zero, you only need to check before divide instructions. SASI simplifies checks by partial evaluation. They have prototype SASI implementations on X86 and JVML. You input a SASI security policy and a target application, and it outputs an application modified according to the policy. X86 SASI modifies gcc assembly applications. You can use X86 SASI to enforce SFI, which in turn can protect the RM. JVML SASI modifies JVML class files. They reimplemented the JDK 1.1 security manager. This provided a more flexible implementation and produced runtime performance the same or better. The JVML verifier provides safety (you can't subvert the inserted RM) and knowledge. "Higher-level" JVML semantics identify accesses to methods and data and make it easier to restrict operations like "network send". An attendee pointed out you can use it to retrofit Netscape to implement the Sun JVM policy. It can support stack inspection in JVMs that don't support it yet. The Security Automaton Language state notation is not expressive enough. It must encode all security-relevant information. It is not enough to check machine instructions. Applications use abstractions (methods). An attendee pointed out this relies on a lower-level RM: the JVM and JVML. Another mentioned that Gypsy dealt with language-based guarantees. The final paper of the day was "Security Modeling in the Commercial Off The Shelf Environment" by Tom Markham, Secure Computing Corporation; Mary Denz, Air Force Research Laboratory; and Dwight Colby, Secure Computing Corporation. Tom presented. We build distributed systems today by test and patch; that wouldn't work for bridges. Someone mentioned we also have bridge inspectors. In the past, we would design and validate the security first, then develop the software, then field it. We want to specify inputs and outputs and policies, get COTS hardware and software, and build it. Electrical engineering students have training for design and component composition. An attendee pointed out that EE students are not usually asked to provide properties that no one knows how to define or implement.
Tom wants theory and models that allow computer science students to build COTS systems in a week. Today, the adversary is getting smarter, components are purchased, systems change frequently, and there is a lack of tools for expressing distributed systems security. What would 98% integrity under some conditions sound like? Someone pointed out that the EE students have appropriate properties of the components. Tom stated that we have created empirical models for the human body. Someone commented that we need to reduce software evolution to a biological pace. Tom said there is perpetual change: tactical reconfigurations and new strategic capabilities. Attackers and vendors are making changes. We can take a game view, considering covert/overt vs. chance/strategy. Modeling in COTS is something like poker. You can have some strategy, but you use the cards you're dealt. Comments on this included that you can assert what you have and see if the other side folds, that with poker you at least know what's in your hand, and that all the betting is in the ante. Engineering models reduce the system to the fundamentals. Theory destroys facts; the periodic table replaces volumes of alchemy books. What are the fundamental elements of a security engineering model? Tom suggests confidentiality, availability, and integrity. A layered security model is a useful tool. It should be scalable, flexible, and heterogeneous. It needs to deal with the mission, humans, applications, middleware, OS services, node hardware, the environment, delivery, development, network services, and data transport, with relationships between applications and services. The file system might require storage integrity from the hard disk. Layers are connected by the confidentiality, integrity, and availability services. What about non-repudiation? Should it be a base service? Someone commented that revocation involves jurisdiction, revocation policies, and so on. It is not technical; it is legal with a technical underpinning. Someone else noted that there is a component of time under all this, and of place (like a notary in France). Someone asked: what are the attributes of something that make it fundamental? Perhaps that it's measurable and has composition laws. The act of making assumptions introduces a vulnerability, but it's necessary for a model. It's hard to measure likelihoods of vulnerabilities. In a nuclear power plant, you can plan for sequences of correlated events with probabilities of failure modes to get reliability; you can model both independent and dependent probabilities. We can measure existing systems; we don't even have the facts for the theory to subsume. Comparing individual systems may be OK. Value, loss, and time are possible fundamentals. Tom wants to create equations to talk about file system security. Failures disrupt service, and they are propagated. Failures can be inserted for known vulnerabilities and probable vulnerabilities to perform worst case analysis. Someone commented that this treats confidentiality, integrity, and availability as booleans. It could be throwing good bits after bad; you may know something about one application, but nothing about others. There is no monotonicity property, at least in any way we understand. What worked depended on history and specifics. No two NT boxes are configured identically. You get a security argument instead of a security metric. The final session, Integrity, was chaired by John Michael Williams.
The first paper of the session was "On the Functional Relation between Security and Dependability Impairments" by Lars Stromberg, Erland Jonsson, and Stefan Lindskog, Chalmers University of Technology. Lars presented. They are trying to unify the concepts of security and dependability. This can help get security addressed earlier in program development. Dependability is reliability, availability, safety, and security. Security is confidentiality, integrity, and availability. Introducing a threat into an object system is an environmental influence. They differentiate between delivery of service to a user (authorized) and a non-user (unauthorized). There is delivery of service to the user and denial of service to the non-user. Someone commented that an attribute of the object system is to protect itself from the environment. There is integrity protection for the environment, correctness of the object system, and trustability in the resulting behavior (output as anticipated or specified). Faults in the environment may occur. The object system is exposed to a threat which could exploit a vulnerability in the system, which could introduce an external fault. This causes an error state in the system. Someone called it a correctness failure. Someone else remarked that in one ISO standard reference, you can find 4 different definitions of error. Radiation in the environment could cause a bit error, resulting in a crash or an unencrypted message. A writeable .login file would be a vulnerability, the attacker is a threat, and the reliability failure is files being deleted. Someone suggested that you never view users as human beings; there's always a proxy. Internal system states are correct/error and non-vulnerable/vulnerable. Non-vulnerable is not very common; it's something to aim for. External states are correct/failure. A threat, via an attack, creates an internal fault or breach that causes the system to go from a correct to an error state. The error state may take a system from correct to failed with a failure. A fault is an event leading to an error or vulnerability. A threat is an environmental subsystem that can possibly introduce a fault in the system. A vulnerability is a place where it is possible to introduce a fault. An error is a system state that may lead to a system failure during normal system operation. A failure is an event at which a deviation first occurs between the service delivered and the expected service. Someone pointed out that the expected service can be different from the specified service. In the future, they would like to have a hierarchical model, where a failure in one sense is a fault in a larger sense. Also, they want to model gradual intrusions, which exploit a vulnerability, introduce something else, and the system eventually fails (perhaps gradually, slowing down). The final paper of the workshop was "Securing Information Transmission by Redundancy" by Jun Li, Peter Reiher, and Gerald Popek, University of California, Los Angeles. Jun presented. Interruption threats are hard to counter. Redundant transmission makes interruptions harder. But redundant transmission is not as easy as using redundancy in other systems. There is path interruption and data interruption. This includes link overload and dropping traffic. An encrypted message can still be interrupted. The acknowledgment itself is also subject to interruption. Retransmission means possibly failing again. They suggest using redundancy. Don't use a single path. Any point on a single path is a point of failure.
Only parallel redundancy is considered here. You are successful if at least one copy of the message is received. Redundancy is used in other areas, like highly available storage. Transmission redundancy is not easy because discovering disjoint paths is difficult. Routing is transparent to applications. Disjoint paths may not exist at all. You can try to be as disjoint as possible. An attacker then has to find a choke point or break multiple points. Attendees wondered if this was related to existing QoS work. Someone recounted a story where a company had leased lines from both MCI and Sprint to gain redundancy, but Sprint had leased its line from MCI. Another stated that information hiding is considered harmful here. Another pointed out that getting the specification of a channel to include path or path-disjointness information is entirely a political problem; we have lost the ability to ask the provider for that capability. Another noted that an attacker can cut some lines and force the message onto a particular backup line. The authors are interested in redundancy for large-scale situations. Large scale adds further problems. The structure for redundant transmission can only be built in a distributed fashion. The routing infrastructure itself is not secure enough; a router on one path can be fooled by a router on another path. There is a tradeoff between resource usage and redundancy degree. Each node may have different requirements on security assurance, different transmission characteristics, a different platform, and so on. Their work in this area is called Revere. Their goal is to disseminate security updates to a large number of machines. They assume a trusted dissemination center. The security updates are small but critical pieces of information, like a new virus signature, a new intrusion detection signature, a CRL, or the offending characteristics for a firewall to monitor. Revere cannot use an acknowledgement or negative acknowledgement. It employs redundancy to have multiple copies sent to each node. Each node can also forward security updates to others. A node can contact multiple repository nodes for missed updates. You may receive multiple copies, some authentic, some not, and you can forward them to other nodes. They assume that over 70% of the nodes are OK. Someone pointed out the similarity with the Byzantine generals problem. Each node maintains several path vectors, each an ordered list of nodes to go through to reach this node from the center, plus a latency. When a new node joins, it contacts several existing nodes and compares the path vectors to find parent nodes (see the sketch below). Efficiency and resiliency are the two most significant factors. They maintain the structure with heartbeat messages. Not all nodes can communicate with each other. The joining algorithm can detect cycles. There was some discussion of how to evaluate the resiliency of the structure, including the minimum number of edges to cut to interrupt communications. The structure is more resilient if you have more disjoint paths, and more efficient if you have lower latency. Maybe you can eliminate the least performant redundant path. Future research includes developing a further understanding of large-scale redundancy, the security of the transmission structure itself, theoretical aspects such as resiliency, and deployment in the Internet.
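As a rough illustration of the path-vector bookkeeping just described (a sketch under assumptions of mine; Revere's actual data structures, join protocol, and thresholds are not specified in this summary), a joining node might screen candidate parents for cycles and prefer low latency and low overlap:

    # Hedged sketch of a Revere-like join: compare candidate parents'
    # path vectors for cycle-freedom, latency, and mutual disjointness.
    # All names and the overlap threshold are illustrative assumptions.

    def creates_cycle(my_id, parent_path):
        # Attaching under this parent creates a cycle if we already
        # appear on the path from the dissemination center to it.
        return my_id in parent_path

    def overlap(path_a, path_b):
        # Shared nodes between two paths; fewer shared nodes means
        # more disjointness, hence better resiliency to a single cut.
        return len(set(path_a) & set(path_b))

    def choose_parents(my_id, candidates, want=2):
        # candidates: list of (path_from_center, latency) pairs.
        # Greedy: prefer low latency, then low overlap with parents
        # already chosen. The half-length threshold is arbitrary.
        chosen = []
        for path, latency in sorted(candidates, key=lambda c: c[1]):
            if creates_cycle(my_id, path):
                continue
            if all(overlap(path, p) <= len(path) // 2 for p, _ in chosen):
                chosen.append((path, latency))
            if len(chosen) == want:
                break
        return chosen

    print(choose_parents("n9", [(["center", "n1", "n4"], 40.0),
                                (["center", "n2", "n5"], 55.0),
                                (["center", "n1", "n9"], 30.0)]))

The efficiency/resiliency tradeoff mentioned above shows up directly: the sort term favors latency while the overlap test favors disjointness, and a real deployment would weight the two however its requirements dictate.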
________________________________________________________________________ New Interesting Links on the Web ________________________________________________________________________ o http://www.dsi.unive.it/~focardi/IFIPWG1_7/ IFIP WG 1.7 Theoretical Foundations of Security Analysis and Design. Home Page of the newly approved IFIP Working Group with the main aims (among many others) of investigating the theoretical foundations of security, discovering and promoting new areas of application of theoretical techniques in computer security, and supporting the systematic use of formal techniques in the development of security-related applications. The main research topics relevant for the Working Group include: * formal definition and verification of the various aspects of security: confidentiality, integrity, authentication and availability; * new theoretically-based techniques for the formal analysis and design of cryptographic protocols and their manifold applications (e.g., electronic commerce); * information flow modelling and its application to the theory of confidentiality policies, composition of systems, and covert channel analysis; * formal techniques for the analysis and verification of mobile code; * formal analysis and design for prevention of denial of service. o http://www.acsac.org/secshelf/secshelf.html The Applied Computer Security Associates (ACSA), the sponsor of the Annual Computer Security Applications Conference, has established an Information Security Bookshelf at the above URL. ACSA sees this bookshelf being used as a source of readings for self-study and for courses. You're invited to take a look at the bookshelf and to suggest additional books, papers, and reports. It would be most helpful if you could provide the source files or a URL pointer to them. Suggestions and contributions should be sent to bookshelf@acsac.org. Don't be bashful about suggesting your own work. The project needs a permanent editor (or chair; the title is negotiable). If you know someone who might be interested, send that information to the same address. o http://www.e-dentification.com/htm/newsletter.asp Home page of the ELECTRONIC IDENTITY FRAUD NEWSLETTER, a free service to the electronic security community from e-DENTIFICATION, Inc. The objective of this newsletter is to stimulate awareness and create a forum for the exchange of information in an effort to combat the emergence and rampant growth of identity fraud in electronic commerce and electronic data interchange. If you are involved in online security, online privacy issues, electronic commerce, electronic data interchange, credit management, banking, or internet business, you and your associates are invited to subscribe to this free monthly newsletter. [Note: Subscriptions appear to come as ascii text. Downloads of back issues require the ability to read MS-Word documents. -editor] ________________________________________________________________________ New Reports available via FTP and WWW ________________________________________________________________________ o http://www.itsec.gov.uk/products UK government reports on recent evaluated products, including the recent E6 evaluation of the MULTOS operating system.
________________________________________________________________________ Who's Where: recent address changes ________________________________________________________________________ o Martin Abadi Bell Labs - Lucent Technologies 3180 Porter Drive Palo Alto, California 94304, USA Fax: +1 650 565 7676 o Yvo Desmedt Department of Computer Science PO Box 4530 206 Love Building Florida State University Tallahassee, FL 32306-4530 USA Tel. +1 (850) 644-9298 Fax: +1 (850) 644-0058 e-mail: desmedt@cs.fsu.edu http://www.cs.fsu.edu/~desmedt o David Goldschlag CTO USinternetworking, Inc. One USi Plaza Annapolis, MD 21401 +1 410-897-4491 david@goldschlag.com o Alec Yasinsac Department of Computer Science Florida State University 214A James Jay Love Building Tallahassee, FL 32312 USA tel: 850.644.6407 fax: 850.644.0058 email: yasinsac@cs.fsu.edu o Jianying Zhou Kent Ridge Digital Labs 21 Heng Mui Keng Terrace Singapore 119613 Tel: +65-8742585 Fax: +65-7744990 Email: jyzhou@krdl.org.sg WWW: http://homex.s1.net.sg/user/jyzhou/ _______________________________________________________________________ Calls for Papers ________________________________________________________________________ CONFERENCES Listed earliest deadline first. See also Cipher Calendar. * FIRST'2000, The 12th Annual FIRST Conference on Computer Security and Incident Handling, Chicago, Illinois, USA, June 25-30, 2000. (Submissions due: November 15, 1999) The Forum of Incident Response and Security Teams (FIRST, www.first.org) brings together security incident response teams from government, commercial, and academic organizations. The conference is a five-day event: two days of tutorials and three days of technical sessions including refereed paper presentations, invited talks, and panel discussions. The focus of the FIRST'2000 conference is on the most recent practical advances in computer security in all its aspects. The Program Committee is soliciting original papers on, among other topics, methodologies for drafting security policies, recent intrusion techniques, experiences in building incident response capabilities, working security architectures, and the pros and cons of both commercial and experimental pro-active security tools. The deadline for submissions is NOVEMBER 15, 1999. The full call for papers is at www.first.org/conference/2000. * WWW9, 9th International World Wide Web Conference (WWW9), Amsterdam, The Netherlands, May 15-19, 2000. (Submissions due: November 22, 1999; Workshop and Tutorial proposals due: Sept. 10, 1999) Topics: E-Commerce, XML, Multimedia, Web Server Performance, Searching and Querying, Protocols, Web Document Management, Java, Web Site Design, Web Security, RDF, Database and Directory Services, Collaboration, Accessibility, Metadata, New Languages. Submitted papers should present original reports of substantive new work in areas that can be theoretical (models, analyses, techniques, semantics), empirical (experiments, case studies), or implementation-oriented (new systems, tools, methodologies, user interfaces). Tutorial proposals are desired for both half-day and full-day sessions on topics of current relevance to Web design, services, operation, and use. Subjects of interest include XML, DOM, Multimedia, E-commerce, Java, Dynamic HTML, Security, Accessibility, Graphics and the Web, and other areas expected to be of special interest in spring 2000. WWW9 workshops are intended to provide a forum for highly interactive discussion on focused topics.
Workshop proposals should address current web-related issues which can benefit from small-group information exchange and discussion. Attendance at workshops will be limited. Submission details are available at www9.org. * PODC'2000, Nineteenth Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, Portland, Oregon, USA, July 16-19, 2000. (Submissions due: January 14, 2000) Research contributions on the theory, design, specification, implementation or application of distributed systems are solicited. This year PODC will be held in conjunction with a workshop on middleware (information concerning the workshop will be posted on the PODC web site once it is available). In light of this, PODC especially encourages papers addressing distributed computing issues in building and using middleware. Topics of interest include, but are not limited to: o distributed algorithms and their complexity, o specification, semantics and verification of distributed systems, o issues relating to the design and use of middleware platforms, o fault tolerance of distributed systems, o cryptographic and security protocols for distributed systems, o mobile computing, o distributed computing issues in the Internet, including the Web, o communication network protocols and architectures, o multiprocessor/cluster architectures and algorithms, o distributed operating systems and databases, o consistency conditions, concurrency control and synchronization, o distributed object-oriented computing. Conference presentations will have two formats: "Regular presentations" of approximately 25 minutes accompanied by papers of up to 10 pages in the proceedings, and "Brief announcements" of approximately 10 minutes accompanied by one-page abstracts in the proceedings. Details on the conference and submission procedure can be found on the conference web site at www.podc.org/podc2000/, or contact the program chair, Jim Anderson, by email, anderson@cs.unc.edu, or phone, 1-919-962-1757. * AES3, Third Advanced Encryption Standard (AES) Candidate Conference, New York, New York, USA, April 13-14, 2000. (Submissions due: January 15, 2000) In the summer of 1999, NIST began Round 2 of the technical analysis of five candidate algorithms that have been selected as finalists for the AES development effort. Near the end of Round 2, the 3rd AES Candidate Conference (AES3) will focus on discussion of the technical results of Round 2 and views on candidates for Round 3. A complete call-for-papers is given on the conference web page at csrc.nist.gov/encryption/aes/round2/conf3/aes3conf.htm. * ISSSTA 2000, IEEE Sixth International Symposium on Spread Spectrum Techniques and Applications, Sheraton Tara, Parsippany, NJ, USA, September 6-8, 2000. (Submissions due: January 15, 2000) Prospective authors are cordially invited to submit papers in particular but not exclusively on the following topics: o THEORY: Spreading codes and sequences, waveform design, spectral shaping; synchronization, acquisition, tracking; coding and modulation for SS; direct sequencing, frequency hopping, hybrid concepts; digital and analog SS signal processing, estimation theory; CDMA, SSMA, interference cancellation, joint (multiuser) detection, capacity; information security, ECM, ECCM, LPI; antennas for SS, propagation effects, channel modelling, anti-fading techniques, RAKE; coexistence SS/other systems, overlay systems, EMC; networking; power control, AGC, amplifier nonlinearities.
o SYSTEM DESIGN: Tools for SS system design, modeling and simulation, application of AI; frequency allocation; networking aspects, handover, dynamic channel allocation; SS techniques in education. o COMMUNICATIONS: Mobile & cellular, CDMA, SSMA, satellite; digital broadcasting; power line communications; radio relay; optical SS communications; wireless LANs; SS bus systems, consumer applications, remote control; packetized data & voice networks. o NAVIGATION, RANGING, CHANNEL SOUNDING: GPS, GLONASS, radar, lidar, pulse compression; wideband channel sounding; deep-space applications; correlation techniques to measure flow and speed; SS time domain reflectometry. o DEVICES AND CIRCUITS: ASICs for SS, chip sets, digital correlators, frequency synthesizers, all-digital transmitter and receiver implementations, SAW, CCD, neural networks. Please submit five double-spaced copies of original papers to the ISSSTA 2000 Technical Program Committee Chairman as per schedule. Detailed submission instructions and other information can be found on the conference web page at: www.ISSSTA2000.org. * CSFW, 13th IEEE Computer Security Foundations Workshop, Cambridge, UK, July 3-5, 2000. (Submissions due January 31, 2000) Details above. * USENIX, 9th USENIX Security Symposium, Denver, Colorado, USA, August 14-17, 2000. (Submissions due: February 10, 2000) The USENIX Security Symposium brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in security and applications of cryptography. Please see the conference web site at www.usenix.org/events/sec2000 for more information on the symposium, a detailed list of topics of interest, and the procedure for submitting a paper. * CARDIS 2000, IFIP CARDIS 2000, Fourth Smart Card Research and Advanced Application Conference, HP Labs, Bristol, UK, September 20-22, 2000. (Submissions due: February 14, 2000) Smart cards or IC cards offer a huge potential for information processing purposes. The portability and processing power of IC cards allow for highly secure conditional access and reliable distributed information systems. IC cards are already available that can perform highly sophisticated cryptographic computations. The applicability of IC cards is currently limited mainly by our imagination; the information processing power that can be gained by using IC cards remains as yet mostly untapped and is not well understood. Here lies a vast uncovered research area which we are only beginning to assess, and which will have great impact on the eventual success of the technology. The research challenges range from electrical engineering on the hardware side to tailor-made cryptographic applications on the software side, and their synergies. Many currently existing events are mainly devoted to commercial and application aspects of IC cards. In contrast, the CARDIS conferences aim to bring together researchers who are active in all aspects of the design of IC cards and related devices and environments, so as to stimulate synergy between different research communities and to offer a platform for presenting the latest research advances. Additional information at www.cardis.org. * ACISP'2000, Fifth Australasian Conference on Information Security and Privacy, Brisbane, Australia, July 10-12, 2000. (Submissions due: February 20, 2000) Papers pertaining to all aspects of information security and privacy are solicited.
Papers may present theory, techniques, applications and practical experiences on any relevant topic including: authentication and identification, database security, mobile communications security, secure operating systems, security and cryptography policy, security management, commercial applications, key management and auditing, secure electronic commerce, security architectures and models, distributed system security, evaluation and certification, cryptology, access control, network security, smart cards, risk assessment and copyright protection. Please see the conference web page at www.isrc.qut.edu.au/acisp2K for details. * ESORICS 2000, 6th European Symposium on Research in Computer Security, Toulouse, France, October 4-6, 2000. (Submissions due: March 15, 2000) The aim of the European Symposium on Research in Computer Security (ESORICS) is to further the progress of research in computer security by establishing a European forum for bringing together researchers in this area, by promoting the exchange of ideas with system developers and users, and by encouraging links with researchers in related areas. We solicit papers describing original ideas and new results on the foundations and applications of computer security. The primary focus is on high-quality original unpublished research, case studies and implementation experiences. We encourage submissions of papers discussing industrial research and development. Suggested topics include but are not limited to: * Theoretical Foundations of Security * Operating Systems Security * Distributed Systems * Network Security * Telecommunications and High Speed Network Security * Internet Security * Security and Mobile Systems * Security in Data and Knowledge Bases * Development of Secure Systems * Management of Secure Systems * Electronic Commerce * Security of Small Systems * Intellectual Property Protection * Multimedia and Digital Libraries * New Applications of Cryptography * Security versus Other Requirements * Security Evaluation. Details on submissions of papers and panel proposals and other information available at www.cert.fr/esorics2000/ and from Frederic.Cuppens@cert.fr. * NSPW'2000, 2000 New Security Paradigms Workshop, Ballycotton, Co. Cork, Ireland, September 18-21, 2000. (Submissions due: March 24, 2000) The 2000 New Security Paradigms Workshop will take place September 18-21, 2000 at the Bayview Hotel in Ballycotton, Co. Cork, Ireland. In order to preserve the small, intimate nature of the workshop, participation is necessarily by invitation only. However, anyone can obtain an invitation simply by having a research paper, discussion topic, or position paper accepted. New authors are welcomed. Over the last few years, 40% of the attendees have been first-time participants. Although the formal call for papers has not yet been released, any topic that represents a significant change in thinking about difficult security issues will be welcomed. We also encourage submissions that draw from fields other than those that have become part of traditional computer security. Paper submissions are due March 24, 2000. The formal Call For Papers will be posted at our web site at www.nspw.org. If you wish to be notified when the CFP is released, please send e-mail to our Publicity Chair, Crispin Cowan, at crispin@cse.ogi. * MFPS, The Sixteenth Workshop on the Mathematical Foundations of Programming Semantics, Stevens Institute of Technology, Hoboken, NJ, April 13 to April 16, 2000.
Stevens is located directly across the Hudson River from Manhattan. The MFPS conferences are devoted to those areas of mathematics, logic and computer science which are related to the semantics of programming languages. The series particularly has stressed providing a forum where both mathematicians and computer scientists can meet and exchange ideas about problems of common interest. We also encourage participation by researchers in neighboring areas, since we strive to maintain breadth in the scope of the series. The invited speakers for MFPS 16 are Samson Abramsky (University of Edinburgh), Rance Cleaveland (Stony Brook), Andy Gordon (Microsoft Cambridge), Robin Milner (University of Cambridge), Peter O'Hearn (Queen Mary - Westfield), and Dana Scott (CMU). In addition to the invited talks, there will be two special sessions at the meeting. The first will be devoted to security, and the second will be devoted to model checking. The remainder of the program will consist of talks contributed by the participants of the meeting. Those interested in contributing a talk at the meeting should send a title and short abstract to mfps@math.tulane.edu. The available slots will be allocated on a first come, first served basis. As with other MFPS workshops, the Proceedings for MFPS 16 will consist of a special issue of the journal Theoretical Computer Science. All participants at the meeting (whether they present a talk or not) will be invited to submit a paper for the Proceedings; these submissions will be refereed to the usual TCS standards. Detailed information about the Proceedings will be available shortly after the meeting. Additional information available at www.math.tulane.edu/mfps16.html or from mfps@math.tulane.edu. JOURNALS Special Issues of Journals and Handbooks: listed earliest deadline first. * Journal of Theoretical Computer Science, special issue on Dependable Computing. Guest Editor: Gilles Motet. (Submissions due: December 20, 1999) Papers should be sent as attached rtf, postscript or pdf files to the Guest Editor: Gilles Motet / LESIA DGEI, INSA, 135, avenue de Rangueil / 31077 Toulouse cedex 4 / France. Email: Gilles.Motet@insa-tlse.fr. More information can be found at: wwwdge.insa-tlse.fr/~lesia/tcs-call-for-paper.html. * IEEE Software Call for Articles & Reviewers, Malicious Information Technology: The Software vs. The People. Publication: Sept./Oct. 2000. (Submissions due: April 1, 2000) Because of the increased danger that malicious software now poses, we seek original articles on the following specific issues: + Intrusion detection + Information survivability + Federal critical infrastructure protection plans + Federal laws prohibiting encryption exports vs. US corporations + State-of-the-practice in security testing + The Internet's "hacker underground" + Corporate information insurance + Penalties for those convicted of creating viruses + Case studies in information security and survivability. Guest Editors: Nancy Mead, Carnegie Mellon University, nrm@sei.cmu.edu, and Jeffrey Voas, Reliable Software Technologies, jmvoas@rstcorp.com. Authors: Submit one electronic copy in RTF interchange or MS-Word format and one PostScript or PDF version to the magazine assistant at software@computer.org. Articles must not exceed 5,400 words including tables and figures, which count for 200 words each. For detailed author guidelines, see www.computer.org/software/edguide.htm. Reviewers: Please e-mail your contact information and areas of interest to a guest editor.
________________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 1: Conference Papers by Anish Mathuria ________________________________________________________________________ * The 8th USENIX Security Symposium, August 23-26, 1999, Washington, USA: o The Design and Analysis of Graphical Passwords. I. Jermyn, A. Mayer, F. Monrose, M. Reiter and A. Rubin o Hand-Held Computers Can Be Better Smart Cards. D. Balfanz and E. Felten o Offline Delegation. A. Helme and T. Stabell-Kulø o Vaulted VPN: Compartmented Virtual Private Networks on Trusted Operating Systems. T.-H. Choo o Enforcing Well-Formed and Partially Formed Transactions for UNIX. D. Povey o Synthesizing Fast Intrusion Prevention/Detection Systems from High-Level Specifications. R. Sekar and P. Uppuluri o Building Intrusion-Tolerant Applications. T. Wu, M. Malkin and D. Boneh o Brute Force Attack on UNIX Passwords with SIMD Computer. G. Kedem and Y. Ishihara o Antigone: A Flexible Framework for Secure Group Communication. P. McDaniel, A. Prakash and P. Honeyman o A Secure Station for Network Monitoring and Control. V. Prevelakis o The Flask Security Architecture: System Support for Diverse Security Policies. R. Spencer, S. Smalley, P. Loscocco, M. Hibler, D. Andersen and J. Lepreau o A Study in Using Neural Networks for Anomaly and Misuse Detection. A. Ghosh and A. Schwartzbard o The Design of a Cryptographic Security Architecture. P. Gutmann o Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0. A. Whitten and J. Tygar o Jonah: Experience Implementing PKIX Reference Freeware. M. Zurko, J. Wray, I. Morrison, M. Shanzer, M. Crane, P. Booth, E. McDermott, W. Macek, A. Graham, J. Wade and T. Sandlin o Scalable Access Control for Distributed Object Systems. D. Sterne, G. Tally, C. McDonell, D. Sherman, D. Sames, P. Pasturel and E. John Sebes o Certificate-based Access Control for Widely Distributed Resources. M. Thompson, W. Johnston, S. Mudumbai, G. Hoo, K. Jackson and A. Essiari o Digital-Ticket-Controlled Digital Ticket Circulation. K. Fujimura, H. Kuno, M. Terada, K. Matsuyama, Y. Mizuno and J. Sekine * DEXA'99 Workshop on Electronic Commerce and Security, September 2, 1999, Florence, Italy: [Security-related papers only] o IT-Security in Electronic Commerce: From Cost to Value Driver. R. Holbein and T. Gaugle o A Secure Payment System for Electronic Commerce. I. Mavridis, G. Pangalos and S. Muftic o Coordination between security levels for Internet architectures. E. Fernandez o Batching Proofs of Knowledge and its Applications. K. Nguyen, V. Varadharajan and Y. Mu o Efficient Detection of Failure Modes in Electronic Commerce Protocols. S. Gurgens, J. Lopez-Munoz and R. Peralta o Multi-Party Fair Exchange with an Off-Line Trusted Neutral Party. V. Varadharajan o Non-repudiation in An Agent-Based E-Commerce System. C.-C. Liew, W.-K. Ng, E.-P. Lim, B.-S. Tan and K.-L. Ong o Controlling the Dissemination of Electronic Documents. V. Prevelakis, D. Konstantas and J.-H. Morin o NIGE Log - A method of protecting logging information which takes account of its deletion. T. Takada and H. Koike * New Security Paradigms Workshop 1999, September 22-24, 1999, Ontario, Canada: o Secure Dynamic Adaptive Traffic Masking. B. Timmerman o Security Architecture-Based System Design. E. Schneider o Survivability --- A New Technical and Business Perspective on Security. H. Lipson and D. Fisher o Optimistic Security: A New Access Control Paradigm. D. Povey
o Security Architecture Development and its Role in the Seat Management and Total Cost of Ownership Environments. R. Henning o Cursory Examination of Market Forces Driving the Use of Protection Profiles. K. Olthoff o Paradigm Shifts in Protocol Analysis. S. Pancho o Secure Group Management in Large Distributed Systems. J. Michael and J. McHugh o SASI Enforcement of Security Policies: A Retrospective. U. Erlingsson and F. Schneider o Security Modeling in the Commercial Off The Shelf Environment. T. Markham, M. Denz and D. Colby o On the Functional Relation between Security and Dependability Impairments. L. Stromberg, E. Jonsson and S. Lindskog o Securing Information Transmission by Redundancy. J. Li, P. Reiher and G. Popek * SAFECOMP'99 - 18th International Conference on Computer Safety, Reliability and Security, September 27-29, Toulouse, France: [Security-related papers only] o On Formal Languages for Sequences of Authorization Transformations. Y. Bai and V. Varadharajan o Dependability Requirements and Security Architectures for the Healthcare/Medical Sector. G. Trouessin o Three-Pass Hybrid Key Establishment Protocol based on ESIGN Signature. S. Lee and T. Kim o Integration of Safety and Security Requirements. D. Eames and J. Moffett * 7th Annual Working Conference on Information Security Management and Small Systems Security, September 30-October 1, 1999, Amsterdam, Netherlands: o A protocol improvement for high-bandwidth encryption using non-encrypting smart cards. R. Weis o Real-time risk analysis on the Internet: a prototype. H. Venter, L. Labuschagne and J. Eloff o A practical approach to manage data communication security. P. Samwel and M. Spruit o The future of Australian & New Zealand security standard AS/NZS 4444? M. Warren and B. Hutchison o The effective utilization of audit logs in information security management. W. Olivier and R. von Solms o An approach to standardizing security analysis methods for virtual systems. A. Frisinger and L. Yngstrom o Information security at top level - Securometer streamlines management information. A. Buren, B. van der Meer, A. Shahim, W. Barnhoorn and E. Lindgreen o Risk analysis on Internet connection. P. Samwel and M. Spruit o A secure station for network monitoring and control. V. Prevelakis o Security aspects of a Java-servlet-based web-hosted e-mail system. E. Hepworth and U. Ultes-Nitsche o Time and security in smart cards. V. Cordonnier, S. Nemchenko and A. Watson o The Intranet authorization paradigm. M. Vandenwauver, P. Ashley and G. Gaskell * MASCOTS'99 - 7th International Symposium on Modeling, Analysis and Simulation, October 24-28, 1999, College Park, Maryland, USA: [Security-related papers only] o An Experimental Analysis of Cryptographic Overhead in Embedded Systems. W. Freeman and E. Miller o Model Checking the Secure Electronic Transaction (SET) Protocol. S. Lu and S. Smolka * COMPSAC'99 - 23rd Annual International Computer Software and Applications Conference, October 27-29, 1999, Phoenix, Arizona, USA: [Security-related papers only] o A Protocol and Simulation for Distributed Communicating Firewalls. R. Smith and S. Bhattacharya o Mobile Agents protection in the Internet Environment. A. Corradi, R. Montanari and C. Stefanelli o Computer Network Intrusion Detection, Assessment and Prevention Based on Security Dependency Relation. S. Yau and X. Zhang
* WELCOM'99 - International Workshop on Electronic Commerce (in conjunction with the 18th IEEE Symposium on Reliable Distributed Systems), October 19, 1999, Lausanne, Switzerland: [Security-related papers only] o Approaching a Formal Definition of Fairness in Electronic Commerce. F. Gartner, H. Pagnia and H. Vogt o Authorization Methods for e-Commerce Applications. R. Oppliger o Gateways to Overcome Incompatibilities of Security Mechanisms. J. Zollner o Security Mechanisms for Using Mobile Agents in Electronic Commerce. P. Marques, L. Silva and J. Silva o Accountable Anonymous Service Usage in Mobile Communication Systems. L. Buttyan and J.-P. Hubaux _______________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 2: Journal and Newsletter Articles, Book Chapters by Anish Mathuria _______________________________________________________________________ * Computer Communications, Vol. 22, No. 10 (June 1999): o B. Harris and R. Hunt. TCP/IP security threats and attack methods. pp. 885-897. * Computer Communications, Vol. 22, No. 12 (July 1999): o T. Klobucar and B. Jerman-Blazic. A formalisation and evaluation of certificate policies. pp. 1104-1110. o C.-C. Chang, P.-C. Huang and W.-B. Lee. Conference key distribution schemes for portable communication systems. pp. 1160-1164. o R. Oppliger. Security issues related to mobile code and agent-based systems. pp. 1165-1170. * Journal of Cluster Computing, Vol. 2, No. 1 (1999): o A. Ganz, S. Park and Z. Ganz. Experimental measurements and design guidelines for real-time software encryption in multimedia wireless LANs. pp. 35-43. * Information Processing Letters, Vol. 71, No. 1 (July 1999): o Y.-M. Tseng and J.-K. Jan. Attacks on threshold signature schemes with traceable signers. pp. 1-4. * Computer Communication Review, Vol. 29, No. 3 (July 1999): o C. Mitchell and K. Rantos. A Fair Certification Protocol. pp. 47-49. * Computer Communications, Vol. 22, No. 13 (August 1999): o M. Peyravian, S. Matyas and N. Zunic. Decentralized group key management for secure multicast communications. pp. 1183-1187. * IEEE Journal on Selected Areas in Communications, Vol. 17, No. 9 (September 1999): o M. Waldvogel, G. Caronni, D. Sun, N. Weiler and B. Plattner. The VersaKey Framework: Versatile Group Key Management. pp. 1614-1631. * Software Practice and Experience, Vol. 29, No. 12 (October 1999): o I. Brown and C. Snow. A Proxy Approach to E-mail Security. pp. 1049-1060. * Operating Systems Review, Vol. 33, No. 4 (October 1999): o D. Patiyoot and S. Shepherd. Security Issues in ATM Networks. pp. 22-35. o D. Patiyoot and S. Shepherd. WASS: A Security Service for Wireless ATM Networks. pp. 36-41. o J. Bull and D. Otway. A Nested Mutual Authentication Protocol. pp. 42-47. o Y. Zhang, Z. Li and G. Xiao. An Approach to the Formal Verification of the Two-Party Cryptographic Protocols. pp. 48-51. * Communications of the ACM, Vol. 42, No. 10 (October 1999): o T. Lau, O. Etzioni and D. Weld. Privacy Interfaces for Information Management. pp. 88-94. * Computer Communication Review, Vol. 29, No. 4 (October 1999): o C. Shields and J. Garcia-Luna-Aceves. KHIP - A Scalable Protocol for Secure Multicast Routing. pp. 53-64. * Journal of Cluster Computing, Vol. 2, No. 2 (1999): o T. Ryutov, G. Gheorghiu and B. Neuman. An authorization framework for metacomputing applications. pp. 165-175. * IEEE Computer, Vol. 32, No. 8 (August 1999): o T. Ritter. Cryptography: Is Staying with the Herd Really Best? pp. 94-95.
* IEEE Transactions on Information Theory, Vol. 45, No. 6 (September 1999): o N. Merhav and E. Arikan. The Shannon Cipher System with a Guessing Wiretapper. pp. 1860-1866. * IEEE Communications Magazine, Vol. 37, No. 9 (September 1999): o P. Wing and B. O'Higgins. Using Public-Key Infrastructures for Security and Risk Management. pp. 71-73. o S. Wakid, J. Barkley and M. Stall. Object Retrieval and Access Management in Electronic Commerce. pp. 74-77. o E. Maxwell, S. Wakid and J. Moline. A Policy Perspective on Electronic Commerce. pp. 87-91. * IEEE Communications Magazine, Vol. 36, No. 7 (July 1998): o M. Greenberg, J. Byington and D. Harper. Mobile Agents and Security. pp. 76-85. _______________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 3: Books _______________________________________________________________________ * Stefan Brands. Rethinking public key infrastructures and digital certificates - building in privacy. ISBN 90-901-3059-4. 287 pages. (Contact brands@cs4all.nl) * Rudolf Kippenhahn. Code Breaking: A History and Exploration. The Overlook Press, 1999. ISBN 0-87951-919-3. LoC Z103.K56. 283 pages. Index, bibliography, four short appendices. $29.95. (Review by Bob Bruen above.) * Simon Singh. The Code Book: The Evolution of Secrecy from Mary, Queen of Scots to Quantum Cryptography. Doubleday, 1999. ISBN 0-385-49531-5. LoC Z103.S56. 402 pages. 10 appendices, glossary, index, bibliography, illustrations. $24.95. (Review by Bob Bruen above.) * William Stallings. Cryptography and Network Security: Principles and Practice, 2nd edition. Prentice Hall, 1999. ISBN 0-13-869017-0. LoC TK5105.59.S713. 569 pages. Appendix, glossary, bibliography, index. $73.00. (Review by Bob Bruen above.) * Jan Vitek and Christian D. Jensen (Editors). Secure Internet Programming: Security Issues for Mobile and Distributed Objects. Springer-Verlag (Lecture Notes in Computer Science, 1603), 1999. ISBN 3540661301. $56.00. ________________________________________________________________________ Calendar ________________________________________________________________________ ==================================================================== See Calls for Papers section for details on many of these listings. ==================================================================== "Conf Web page" indicates there is a hyperlink to a conference web page on the Cipher Web pages. (In many cases there is such a link even though mention is not made of it here, to save space.) Dates Event, Location ----- --------------- * 11/ 6/99-11/ 7/99: ISW '99. Kuala Lumpur, Malaysia Conf Web page * 11/ 9/99-11/12/99: IETF, Washington DC * 11/15/99: FIRST '2000, Chicago, Illinois; submissions due * 11/17/99-11/19/99: HASE '99. Washington, DC Conf Web page * 11/18/99-11/19/99: IICIS '99. Amsterdam, The Netherlands Conf Web page * 11/22/99: WWW9. Conf Web page; submissions due * 11/30/99-12/ 2/99: CQRE '99, Duesseldorf, Germany; Conf Web page * 12/ 6/99-12/10/99: 15th ACSAC; Phoenix, Arizona. Conf Web page * 12/13/99-12/15/99: FSTTCS '99. Chennai, India * 12/16/99-12/17/99: FMC '99. Chennai, India * 1/15/00: AES 3, New York, New York; Submissions due * 1/15/00: ISSSTA, Parsippany, New Jersey; Submissions due * 1/15/00: USENIX Sec Symp, Denver, Colorado; Submissions due * 1/18/00- 1/20/00: PKC 2000.
Melbourne, Australia Conf Web page * 1/25/00- 1/27/00: DISCEX 2000. Hilton Head, South Carolina * 1/31/00: CSFW 13, Cambridge, UK; Submissions due * 2/ 2/00- 2/ 4/00: NDSS '00. San Diego, California; Conf Web page * 2/20/00: ACISP '2000, Brisbane, Australia; Submissions due * 2/21/00- 2/24/00: FC '00. Anguilla, British West Indies Conf Web page * 3/15/00: ESORICS 2000, Toulouse, France; Submissions due * 3/24/00: NSPW '2000, Ballycotton, Ireland; Submissions due * 3/26/00- 3/27/00: OPENARCH '00. Tel Aviv, Israel Conf Web page * 3/27/00- 3/31/00: IETF, Adelaide, Australia * 4/ 4/00- 4/ 7/00: CFP '00, Toronto, Canada Conf Web page * 4/13/00- 4/14/00: AES 3, New York, New York * 5/14/00- 5/17/00: IEEE S&P 00, Oakland, CA * 5/15/00- 5/19/00: WWW9. Conf Web page * 5/16/00- 5/19/00: 12th CITSS, Ottawa; no e-mail address available * 6/25/00- 6/30/00: FIRST '2000, Chicago, Illinois * 7/ 3/00- 7/ 5/00: CSFW 13, Cambridge, UK * 7/10/00- 7/12/00: ACISP '2000, Brisbane, Australia * 8/14/00- 8/17/00: USENIX Sec Symp, Denver, Colorado * 9/ 6/00- 9/ 8/00: ISSSTA, Parsippany, New Jersey * 9/18/00- 9/20/00: NSPW '2000, Ballycotton, Ireland * 10/ 4/00-10/ 6/00: ESORICS 2000, Toulouse, France * 5/14/01- 5/16/01: IEEE S&P 2001, Oakland, California * 5/13/02- 5/15/02: IEEE S&P 2002, Oakland, California Key: * ACISP = Australasian Conference on Information Security and Privacy * ACSAC = Annual Computer Security Applications Conference * AES = Advanced Encryption Standard Candidate Conference * CCSS = Annual Canadian Computer Security Symposium (see CITSS) * CFP = Computers, Freedom, and Privacy * CITSS = Canadian Information Technology Security Symposium * CQRE = [Secure] Exhibition and Congress * CSFW = Computer Security Foundations Workshop * DISCEX = DARPA Information Survivability Conference and Exposition * ESORICS = European Symposium on Research in Computer Security * FC = IFCA Annual Financial Cryptography Conference * FIRST = Computer Security Incident Handling and Response * FMC = Foundations of Mobile Computation * FSTTCS = Foundations of Software Technology and Theoretical Computer Science * HASE = IEEE Symposium on High Assurance Systems Engineering * IEEE S&P = IEEE Symposium on Security and Privacy * IETF = Internet Engineering Task Force * IICIS = IFIP WG 11.5 working conference on Integrity and Internal Control in Information Systems * ISSSTA = International Symposium on Spread Spectrum Techniques and Applications * ISW = Information Security Workshop * NDSS = ISOC Network and Distributed System Security Symposium * NSPW = New Security Paradigms Workshop * OPENARCH = Open Architectures and Network Programming * PKC = Practice and Theory in Public Key Cryptography * USENIX Sec Symp = USENIX UNIX Security Symposium * WWW = World Wide Web Conference ________________________________________________________________________ Listing of Academic (Teaching and Research) Positions in Computer Security maintained by Cynthia Irvine (irvine@cs.nps.navy.mil) ________________________________________________________________________ Swiss Federal Institute of Technology, Lausanne (EPFL), Switzerland/Eurecom/Telecom Paris General Director Areas of particular interest: Education and research in telecommunications. Applications begin immediately.
http://admwww.epfl.ch/pres/dir_eurecom.html Department of Computer Science, Naval Postgraduate School, Monterey, CA Junior and Senior Tenure-Track Professorships Areas of particular interest: Computer Security, but applicants from all areas of Computer Science will be considered. Applications begin immediately and are open until filled. http://www.cs.nps.navy.mil/people/faculty/chairman.html Department of Computer Science, Purdue University, West Lafayette, IN Assistant, Associate or Full Professor in Computer Science Areas of particular interest: Computer graphics and scientific visualization, database systems, information security, operating systems and networking, and software engineering. Positions beginning August 1999, interviews beginning October 1998; open until filled. http://www.cs.purdue.edu/facAnnounce/ Swiss Federal Institute of Technology, Lausanne (EPFL) Communications System Section, Switzerland Assistant, Associate or Full Professor in Security of Communication and Information Systems Areas of particular interest: Cryptography, security protocols and systems (e.g., authentication, confidentiality, protection of software resources, security aspects of electronic commerce). Closing date for applications: January 9, 1999. http://sscwww.epfl.ch Department of Computer Science, Florida State University, Tallahassee, FL Tenure-track positions. (6/99) Areas of particular interest: Trusted Systems, software engineering, provability and verification, real-time and safety-critical systems, system software, databases, fault tolerance, and computational/simulation-based design. Emphasis on issues of certainty, reliability, and security. http://www.cs.fsu.edu/~lacher/jobs.html Department of Electrical and Computer Engineering, Iowa State University, Ames, Iowa Assistant, Associate, or Full Professor in Computer Engineering Areas of particular interest: Distributed and parallel computing, computer networking, security, software engineering, computer architecture, VLSI CAD, computer graphics, and human/computer interface design. Date closed: December 19, 1998, or until filled. http://vulcan.ee.iastate.edu/~davis/job-ad.html Naval Postgraduate School Center for INFOSEC Studies and Research, Monterey, CA, Visiting Professor (Assistant, Associate, or Full Professor levels) (9/98) Areas of particular interest: Computer and information systems security. http://cisr.nps.navy.mil/jobs/npscisr_prof_ad.html This job listing is maintained as a service to the academic community. If you have an academic position in computer security and would like to have it included on the Cipher web page and e-mail issues, send the following information: Institution, City, State, Position title, date position announcement closes, and URL of position description to: irvine@cs.nps.navy.mil ________________________________________________________________________ How to become a member of the IEEE Computer Society's TC on Security and Privacy ________________________________________________________________________ You do NOT have to join either IEEE or the IEEE Computer Society to join the TC, and there is no cost to join the TC. All you need to do is fill out an application form and mail or fax it to the IEEE Computer Society. A copy of the form is included below (to simplify things, only the TC on Security and Privacy is included, and is marked for you). Members of the IEEE Computer Society may join the TC via an https link.
The full and complete form is available on the IEEE Computer Society's Web Server by following the application form hyperlink at the URL: http://computer.org/tcsignup/ IF YOU USE THE FORM BELOW, PLEASE NOTE THAT IT IS TO BE RETURNED (BY MAIL OR FAX) TO THE IEEE COMPUTER SOCIETY, >>NOT<< TO CIPHER. --------- IEEE Computer Society Technical Committee Membership Application ----------------------------------------------------------- Please print clearly or type. ----------------------------------------------------------- Last Name First Name Middle Initial ___________________________________________________________ Company/Organization ___________________________________________________________ Office Street Address (Please use street addresses over P.O.) ___________________________________________________________ City State ___________________________________________________________ Country Postal Code ___________________________________________________________ Office Phone Fax ___________________________________________________________ Email Address (Internet accessible) ___________________________________________________________ Home Address (optional) ___________________________________________________________ Home Phone ___________________________________________________________ [ ] I am a member of the Computer Society IMPORTANT: IEEE Member/Affiliate/Computer Society Number: ____________________ [ ] I am not a member of the Computer Society* Please Note: In some TCs only current Computer Society members are eligible to receive Technical Committee newsletters. Please select up to four Technical Committees/Technical Councils of interest. TECHNICAL COMMITTEES [ X ] T27 Security and Privacy Please Return Form To: IEEE Computer Society 1730 Massachusetts Ave, NW Washington, DC 20036-1992 Phone: (202) 371-0101 FAX: (202) 728-9614 ________________________________________________________________________ TC Publications for Sale ________________________________________________________________________ 1. Proceedings of the IEEE CS Symposium on Security and Privacy The Technical Committee on Security and Privacy has copies of its publications available for sale directly to you. Proceedings of the IEEE Symposium on Security and Privacy -------------------------------------- 1999 $25.00 1998 $20.00 (Sorry, the TCSP has sold out of the 20 year CD. It may be available from the Computer Society. Check the URL below.) For domestic shipping and handling, add $3.00. For overseas delivery: -- by surface mail, please add $5 per order (3 volumes or fewer) -- by air mail, please add $10 per volume If you would like to place an order, please specify * how many issues you would like, and * where to send them, and * the shipping method (air or surface) for overseas orders. For mail orders, please send a check in US dollars, payable to the IEEE Symposium on Security and Privacy, to: Brian J. Loe Treasurer, IEEE TC on Security and Privacy Secure Computing Corp. 2675 Long Lake Rd. Roseville, MN 55113 U S A For electronic orders, in addition to the information above, please send the following credit card information to loe@securecomputing.com: - the name of the cardholder, - type of card (VISA, Mastercard, American Express, and Diner's Club are accepted), - credit card number, and - the expiration date. You may use the following PGP public key to encrypt any information that you're not comfortable sending as cleartext.
-----BEGIN PGP PUBLIC KEY BLOCK----- Version: 4.0 Business Edition mQCNAy+T6TkAAAEEAN/fnVu7VCPtcmBQhXFhJbejSoZJkEmWNUYvx13yRwl/gyir 61ae+GUjgWjWs9O06C6dugRGrjFZpBhMosu7sgGJMz54hvKbBNrYBSHpH0yex6e/ +c2mzbCbh40naARgPAaAki2rCkV2ryETj2Z6w98/k5fMgOZDnEy6WVOs56vlAAUR tBtCcmlhbiBKLiBMb2UgPGxvZUBzY3RjLmNvbT6JAHUDBRAvlQ8qNU4dUKmt/G0B Aba2AwCu48Oq1DPElV16DNQb7SvQAwQPGYYM3zg9RT0AyFeXajBHb9O2GkOmai8y ryJt4t3Q8aQ2BckWUsck29TT2M/U7hOrC+hJPMbziqbw5juR906pjs9OzPSR5Pta AW66CUqJAJUDBRAvlQ56enbk/HH5npkBAfkwA/9zVKeAJh/X4qzUzYJt/w9Hi3mF AAzm0YUcDwnNLkv/c1k3Kg0APh+BGbrbGvy2sVa1PgFKZluheCqSVO/BtApaf3QS ygoS118k20mzBU2QsX9KMvJ6z8nocSCWU9RopUirk8zwAisqwAq8dmgNwNsMfxDK mdCx3FiE46FrSnEKlokAlQMFEC+UKJdMullTrOer5QEB2aID/16rqeJkcfKRH/bs /1yGSqFgu6r8TUKKsD5pg/vc51t9d5X6/APGv1nO/aJOtr8NQ3InNTsl6VZEWWi/ 6TvKI7o+vuNtZ6qazRZixBXfSMh6UGzrDfgDgILVue4fG3qArF3rcRkKqFWxlX4Y 3ekZ8vYJAFyatphhFvhDX6BKhywAtCVCcmlhbiBKLiBMb2UgPGJyaWFuLmxvZUBj b21wdXRlci5vcmc+tCZCcmlhbiBKLiBMb2UgPGxvZUBzZWN1cmVjb21wdXRpbmcu Y29tPg== =jEJA -----END PGP PUBLIC KEY BLOCK----- You may also order some back issues from IEEE CS Press at http://www.computer.org/cspress/catalog/proc9.htm. 2. Proceedings of the IEEE CS Computer Security Foundations Workshop (CSFW 1, 5 through 12) The most recent Computer Security Foundations Workshop (CSFW12) took place the 28th through 30th of June in Mordano, Italy. Topics included formal specification of security protocols, protocol engineering, distributed systems, information flow, and security policies. Copies of the proceedings are available from the publications chair for $25. Copies of earlier proceedings starting with year 5 are available at $10. Photocopy versions of year 1 are also $10. Checks payable to "Joshua Guttman for CSFW" may be sent to: Joshua Guttman, MS A150 The MITRE Corporation 202 Burlington Rd. Bedford, MA 01730-1420 USA guttman@mitre.org ________________________________________________________________________ TC Officer Roster ________________________________________________________________________ Chair: Thomas A. Berson, Anagram Laboratories, P.O. Box 791, Palo Alto, CA 94301; (650) 324-0100 (voice); berson@anagram.com Past Chair: Charles P. Pfleeger, Arca Systems, Inc., 8229 Boone Blvd, Suite 750, Vienna VA 22182-2623; (703) 734-5611 (voice); (703) 790-0385 (fax); c.pfleeger@computer.org Vice Chair: Michael Reiter, Bell Laboratories, 600 Mountain Ave., Room 2A-342, Murray Hill, NJ 07974 USA; (908) 582-4328 (voice); (908) 582-1239 (fax); reiter@research.bell-labs.com Chair, Subcommittee on Academic Affairs: Prof. Cynthia Irvine, U.S. Naval Postgraduate School, Computer Science Department, Code CS/IC, Monterey CA 93943-5118; (408) 656-2461 (voice); irvine@cs.nps.navy.mil Newsletter Editor: Paul Syverson, Code 5543, Naval Research Laboratory, Washington, DC 20375-5337; (202) 404-7931 (voice); (202) 404-7942 (fax); syverson@itd.nrl.navy.mil Chair, Subcommittee on Standards: David Aucsmith, Intel Corporation, JF2-74, 2111 N.E. 25th Ave, Hillsboro OR 97124; (503) 264-5562 (voice); (503) 264-6225 (fax); awk@ibeam.intel.com Chair, Subcomm. on Security Conferences: Jonathan Millen, SRI International EL233, Computer Science Laboratory, 333 Ravenswood Ave., Menlo Park, CA 94025; (650) 859-2358 (voice); (650) 859-2844 (fax); millen@csl.sri.com ________________________________________________________________________ Information for Subscribers and Contributors ________________________________________________________________________ SUBSCRIPTIONS: Two options:
1. To receive the full ascii CIPHER issues as e-mail, send e-mail to cipher-request@itd.nrl.navy.mil (which is NOT automated) with subject line "subscribe". 2. To receive a short e-mail note announcing when a new issue of CIPHER is available for Web browsing, send e-mail to cipher-request@itd.nrl.navy.mil (which is NOT automated) with subject line "subscribe postcard". To remove yourself from the subscription list, send e-mail to cipher-request@itd.nrl.navy.mil with subject line "unsubscribe". Those with access to hypertext browsers may prefer to read Cipher that way. It can be found at URL http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher CONTRIBUTIONS: to cipher@itd.nrl.navy.mil are invited. Cipher is a NEWSletter, not a bulletin board or forum. It has a fixed set of departments, defined by the Table of Contents. Please indicate in the subject line for which department your contribution is intended. For Calendar entries, please include a URL and/or e-mail address for the point-of-contact. For Calls for Papers, please submit a one-paragraph summary. See this and past issues for examples. ALL CONTRIBUTIONS ARE CONSIDERED PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY. All reuses of Cipher material should respect stated copyright notices, and should cite the sources explicitly; as a courtesy, publications using Cipher material should obtain permission from the contributors. BACK ISSUES: There is an archive that includes each copy distributed so far, in ascii, in files you can download at URL http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/cipher-archive.html ========end of Electronic Cipher Issue #34, 03 November 1999============