Subject: Electronic CIPHER, Issue 32, June 07, 1999

 _/_/_/_/ _/_/_/ _/_/_/_/ _/ _/ _/_/_/_/ _/_/_/_/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/_/_/_/ _/_/_/_/ _/_/ _/_/_/_/ _/ _/ _/ _/ _/ _/ _/ _/ _/_/_/_/ _/_/_/ _/ _/ _/ _/_/_/_/ _/ _/

====================================================================
Newsletter of the IEEE Computer Society's TC on Security and Privacy
Electronic Issue 32                                    June 07, 1999
Paul Syverson, Editor
Bob Bruen, Book Review Editor
Hilarie Orman, Assoc. Editor
Mary Ellen Zurko, Assoc. Editor
Anish Mathuria, Reader's Guide
====================================================================
http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/

Contents: [4061 lines total]

Letter from the Editor and Newsbriefs
Twentieth Anniversary IEEE Symposium on Security and Privacy
 o Pointers to program with URLs plus additional information
Preliminary Program of the 1999 IEEE Computer Security Foundations Workshop
Commentary and Opinion: Book Reviews by Bob Bruen
 o Removing The Spam. Email Processing and Filtering by Geoff Mulligan
 o The Global Internet Trust Register, 1999 Edition by Ross Anderson et al.
Conference Reports:
 o Computers, Freedom, and Privacy Conference (CFP '99) by Danielle Gallo
 o 1999 IEEE Security and Privacy Symposium Program by Mary Ellen Zurko
New Interesting Links on the Web: IACR Newsletter, Iowa State Security Lab
New Reports Available via FTP and WWW: Net User Attitudes on Online Privacy
Who's Where: recent address changes
Calls for Papers
Reader's guide to recent security and privacy literature
 o Conference Papers
 o Journal and Newsletter articles
Calendar
List of Computer Security Academic Positions, maintained by Cynthia Irvine
Publications for Sale -- S&P and CSFW proceedings available
TC officers
Information for Subscribers and Contributors

____________________________________________________________________

Letter from the Editor
____________________________________________________________________

Dear Readers,

We are pleased to bring you another issue of Cipher. A few months have passed since our last issue: Melissa has come and gone, leaving behind the usual mess plus technical and moral debates about GUIDs. Fighting continues in Kosovo as of this writing, and there have been interesting associated developments on the Internet. Radio B92, the only independent broadcaster in Yugoslavia, was finally shut down completely. They had been broadcasting via RealAudio on the Internet for the last few years after their transmitter was shut off by Yugoslavian officials. Meanwhile, Anonymizer.com set up a gateway to assist people in Yugoslavia in hiding the destination and content of their email. NATO's Web site was struck by a saturation denial-of-service attack. A few weeks later, Newsweek reported that President Clinton authorized the use of computer hackers to destabilize Slobodan Milosevic by attacking his foreign bank accounts, amongst other things.

In other security and privacy news, Adi Shamir announced a new machine at EUROCRYPT that makes the factoring of 512-bit keys feasible. TWINKLE (The Weizmann INstitute Key Locating Engine) increases by at least a few orders of magnitude the speed at which factoring can be done. Shamir's paper on this can be found at http://jya.com/twinkle.eps

Finally, in legal/political news: the US Ninth Circuit Court of Appeals found current export controls on cryptography to be unconstitutional on free speech grounds, because source code can sometimes be expressive and therefore considered speech.
This ruling is expected to be appealed. More recently, the White House has said that recent congressional reports on Chinese spying will make any changes to US rules on crypto export unlikely. At the same time, citing espionage concerns, Germany has encouraged the use of strong crypto by German businesses and individual citizens. (The above stories were culled from wired.com/news, zdnet.com/zdnn, and various personal communications.)

I have provided you with this very brief summary of some recent news because, in lieu of her usual insightful ListWatch column, Mary Ellen Zurko has provided us with an insightful summary of the 1999 IEEE Security and Privacy Symposium. This year's gathering was a special 20th anniversary Symposium. In addition to the presentation of research papers, there were three panels looking at well-known and less well-known accomplishments of the last twenty years, and predictions for the next twenty years. Mez has given a thorough description of all the presentations and panels. This symposium was also special because it was expected to be the last one held at the Claremont---where the "Oakland Conference" has been since its inception. However, recent developments have made it at least unclear whether we will be back at the Claremont next year or not. On the Cipher Web page, besides Mez's summary, you will find URLs for almost all of the papers presented at Oakland, and a couple of other relevant pointers as well: Where did that new proceedings cover come from? And, be sure to check out the proceedings for sale. Now available is a CD with all the papers from all of the Symposia to date.

We also have contributions from our regular writers and reviewers plus a lively summary of Computers, Freedom, and Privacy by Danielle Gallo, who is regular in that she wrote this conference up for us last year as well.

Hilarie Orman, who manages our calendar, has noticed a drop-off in calendar items received. If you have a call for papers or other calendar item relevant to the Cipher audience, please send it to us at cipher@itd.nrl.navy.mil. Likewise, if you have any conference writeups, news items, address changes, new reports available online, etc., please send them in. Thanks as always to everyone who contributed.

Paul Syverson
Editor, Cipher

____________________________________________________________________

Twentieth Anniversary IEEE Symposium on Security and Privacy
____________________________________________________________________

o A program with hyperlinks to nearly all of the papers presented at the Symposium, along with slides for many of them, can be found at the Technical Committee on Security and Privacy home page: www.itd.nrl.navy.mil/ITD/5540/ieee/SP99-Program.html

o A writeup of the Symposium by Mary Ellen Zurko is below.

o George Davida was Chair of the First Symposium in 1980. He was unable to attend this year's Symposium, but he sent the following statement.

20 years ago the IEEE Computer Society launched the TC on Security and Privacy, and with it the field of data security was born. The resistance from some government branches, supported by their allies in academia, made it a difficult and unpleasant experience for some of us. We can now be certain that those who supported the new discoveries and fields of research were right: that the country, and indeed the world, needed this discipline in more ways than could be foreseen. We have witnessed its use in defending democratic movements as well as its applications to lowering the barriers to commerce and economic development.
The nay-sayers were wrong. Those who wanted to put me in jail and fine me thousands of dollars for merely writing an application for a patent have been left, as a recent President said, on the ash heap of history.

o The photograph of the Trojan Horse that appears on the cover of the 1999 S&P Symposium Proceedings and the cover of the 20 year CD was taken by Carl Landwehr. To get an idea of what it looked like before doctoring, and to see where one finds a life-size Trojan Horse to photograph nowadays, look at www.dells-delton.com/bigchief.html

o If you did not pick up your group photograph of the Security & Privacy '99 attendees at the Symposium and would still like to get it, you can contact Jon Millen, the vice chair, at millen@csl.sri.com to request your copy.

____________________________________________________________________

1999 IEEE Computer Security Foundations Workshop (CSFW12)
June 28--30, 1999, Mordano, Italy
Preliminary Program
____________________________________________________________________

MONDAY June 28, 1999

8:45 - 9:00 Welcome (Roberto Gorrieri and Paul Syverson)

9:00 - 10:00 Formal Models
  A Formal Framework and Evaluation Method for Network Denial of Service
    by Catherine Meadows
  I/O Automaton Models and Proofs of Shared-Key Communications Systems
    by Nancy Lynch

10:00 - 10:30 Break

10:30 - 12:00 Security Protocol Analysis: Notation, Transformation, and Simplification
  Safe Simplifying Transformations for Security Protocols
    by Mei Lin Hui and Gavin Lowe
  Decision procedures for the analysis of cryptographic protocols by logics of belief
    by David Monniaux
  A meta-notation for protocol analysis
    by I. Cervesato, N.A. Durgin, P.D. Lincoln, J.C. Mitchell, and A. Scedrov

12:00 - 2:00 Lunch

2:00 - 3:00 Strand Spaces
  Mixing Protocols
    by F. Javier THAYER Fabrega, Jonathan C. Herzog, Joshua D. Guttman
  Honest Functions and their Application to the Analysis of Cryptographic Protocols
    by Al Maneki

3:00 - 3:30 Break

3:30 - 5:00 Panel: Formalization and Proof of Secrecy Properties
  Chair: Dennis Volpano
  Panelists: Martin Abadi, Riccardo Focardi, Cathy Meadows, Jon Millen

TUESDAY June 29, 1999

9:00 - 10:00 Local Names
  Authentication via localized names
    by C. Bodei, P. Degano, R. Focardi and C. Priami
  A Logic for SDSI's Linked Local Name Spaces
    by J.Y. Halpern and R. van der Meyden

10:00 - 10:30 Break

10:30 - 12:00 Interaction and Composition
  Trusted System Construction
    by Colin O'Halloran
  Secure Composition of Insecure Components
    by Peter Sewell and Jan Vitek
  Security Function Interactions
    by Pierre Bieber

12:00 - 2:00 Lunch

2:00 - 3:00 Logics for Authorization and Access Control
  A Logic-based Knowledge Representation for Authorization with Delegation
    by N. Li, J. Feigenbaum, and B. Grosof
  Logical Framework for Reasoning on Data Access Control
    by E. Bertino, F. Buccafurri, E. Ferrari, and P. Rullo

3:00 - 3:30 Break

3:30 - 4:30 CSFW Business Meeting

WEDNESDAY June 30, 1999

9:00 - 10:00 Advances in Automated Security Protocol Analysis
  Athena, a new automatic checker for analysis of security protocols
    by Dawn Xiaodong Song
  CVS: A Compiler for the Analysis of Cryptographic Protocols
    by Antonio Durante, Riccardo Focardi and Roberto Gorrieri

10:30 - 11:30 Noninterference using process algebras
  Process Algebra and Non-interference
    by P Y A Ryan and S A Schneider
  What is intransitive noninterference?
    by A.W. Roscoe and M.H. Goldsmith

11:30 - 12:00 Closing remarks.
Presentation of Bocce Awards

____________________________________________________________________

COMMENTARY AND OPINION
____________________________________________________________________

____________________________________________________________________

Removing The Spam. Email Processing and Filtering
by Geoff Mulligan. Addison-Wesley 1999.
ISBN 0-201-37957-0 LoC TK5105.73.M85
190 pages. Index. Appendix. $19.95.

Reviewed by Robert Bruen, Cipher Book Review Editor bruen@mit.edu
____________________________________________________________________

Removing The Spam assumes that spam is something undesirable, so there is little effort made to convince the reader to conform to this point of view. Instead Mr. Mulligan provides a straightforward, in-depth explanation of how electronic mail works. Armed with this information one can take action to keep spam out of the bit stream. There are only four chapters in the book: 1) The Dawn of Electronic Mail, 2) Sendmail, 3) Procmail and 4) Mailing Lists. The appendix is fifteen pages of web sites, mailing lists, RFCs and other useful resources. The book is really more about the intricacies of email and how to administer mail and less about spam, which makes the book a valuable resource, for example, to get Sendmail up and running. If you are new to managing email on a Unix system, this book is well worth $20 because it gives a clear introduction to filtering using Procmail and managing mailing lists using Majordomo, Listproc, Smartmail and even Sendmail. The pros and cons of each are presented without obvious bias, making your choice a bit easier. The source sites for each of them, the steps to build them, and the day-to-day management issues are included. In the end, of course, the procedures for removing spam with each are highlighted.

Sendmail is a fairly complex piece of software that many sysadmins can successfully ignore. This chapter gets you past the mystique with a simple explanation of how to get it, set it up, and make elementary configuration changes. Since no experienced sysadmin can really get by without mastering Sendmail, reading this chapter will serve as a good introduction before you tackle the real Sendmail book. Procmail as a filtering agent gets its own chapter with examples of the recipes and rc files to filter mail. There is certainly enough detail here to get you actually filtering email even if you have never heard of Procmail. The focus of the filtering is spam, but the examples include steps for making the mail you do want easier to get.

Because the approaches and coverage are different, I see this book as more of a companion than a competitor to Stopping Spam by Schwartz and Garfinkel. Each book takes a small topic, spam, and makes a book out of it by adding related material. S & G chose to add lots of text about how bad spam is, accompanied by activist resources and methods to stop it, including legal information. Mulligan has chosen to spend most of his attention on mastering the software that controls all aspects of email so that the tools are available to protect your own site. The resources listed in each book overlap, but they are different lists. Fortunately each costs less than $20. Buy both. The value added by each is worth it and the goal is the same.

____________________________________________________________________

The Global Internet Trust Register, 1999 Edition
by Ross Anderson et al. MIT Press 1999. ISBN 0-262-51105-3.
Appendix. Bibliography. $24.95. 174 pages.
Reviewed by Robert Bruen, Cipher Book Review Editor bruen@mit.edu
____________________________________________________________________

Firing another volley in the struggle to keep the net free and open, Anderson and company have published the second version of a list of over a thousand PGP keys and their owners. Printing this information on paper ensures that freedom of speech/press protects the contents even as governmental bodies seek to put a muzzle on these rights on the net. The Global Trust web site is at: http://www.cl.cam.ac.uk/Research/Security/Trust-Register

The key listings are broken up into several chapters:
 - certification authorities in chapter 2
 - EDI Keys (only two) in chapter 3
 - CERTs in chapter 4
 - other institutional keys in chapter 5
 - individual keys (the vast majority) in chapter 6
 - medical keys in chapter 7

Levels of trust are represented by a letter designation (A, B, C, D):
 - (A) based on whether one of the authors knows the person
 - (B) if the information has been verified with passports or certificates, or a recommendation from an (A) level.
 - (C) if the email address can be bound to the key, or a recommendation from a (B) level.
 - (D) no reason exists not to believe the key is associated with the person, but no other verification exists either.

Chapter 1 is a brief statement on what the Global Trust Register is and why crypto is important for people using the net. The Appendix is a more detailed look at the issues, with explanations of some of the serious problems such as revocation. The Global Internet Trust Register represents an important measure to keep freedom alive as the world gets connected, rather than to cede the new technology to the old ways of oppression. The nature of the net, along with powerful PCs and Open Source software, provides basic building blocks for the way we will all live in the next century. The game is afoot. Everything we do either helps to open up or close down the freedom to think and communicate with others. I recommend supporting efforts like this one.

______________________________________________________________________

Conference Reports
______________________________________________________________________

______________________________________________________________________

ACM Computers, Freedom and Privacy (CFP99)
Washington, DC USA, April 6 - April 8, 1999.
by Danielle Gallo (dgallo@research.att.com)
______________________________________________________________________

[N.B. In general, square braces in Cipher indicate a comment from the editor. In the current article, text in square braces is the work of the author, not the editor (other than around this comment). One personal observation on the conference I could not resist making: a concrete metaphor for the difficult and complicated road to freedom was a sparrow that spent the three days of the conference flitting about the auditorium looking for a way out.---Paul Syverson]

Computers, Freedom and Privacy 1999 was held in the Omni Shoreham Hotel in Washington, DC from April 6-8, 1999. This was my second CFP; you can read the report I wrote from last year's conference here. The theme, "the Global Internet," was achieved by including many international panelists. It was refreshing to see Asian and South American panelists on the program, as they provide different experiences with privacy and censorship than most; it is also enlightening to hear their views and successes, as well as the problems they are still having.
The following summary is based on my notes and the newsletters distributed at the conference, which are available on the CFP site (Reports 1, 2, and 3). CFP is guaranteed to be chock full of information, and this year did not disappoint. I am only presenting a summary based on the panels I attended, so this report will not cover each session. I often refer to Roger Clarke's notes on CFP, which I found quite useful. Note: links to many papers can be found on the cfp99 site. Now, let's get down to it, shall we?

------------------------------------------------------------------------
Tuesday General Session (April 6, 1999)

The General Session began on Tuesday, April 6, 1999, with the panel "Freedom and Privacy and the Global Internet I," moderated by Deborah Hurley (http://www.ksg.harvard.edu/iip/Biographies/hurley_bio.html), Director of the Harvard Information Infrastructure Project. Hurley began the panel with a series of statements about the human need to communicate, and how this essentially extends to the present information explosion. This explosion is coupled with the need to limit audiences. Presently, there are more ways to record communication; however, we lose the ability to limit the audience. As moderator, Hurley limited each speaker to five minutes. This limit had advantages and disadvantages, and as Roger Clarke (in his CFP 99 report) states, "successfully avoided any one person dominating the agenda; but it also limited speakers who wished to structure an argument."

President of the Open Society Institute Aryeh Neier offered the idea that "information is power and communication enhances that power." He states that we must use this power to gain freedom and also to inform others. Neier used Sarajevo under siege as an example of how information is publicized and transferred both into and out of the area. In regards to privacy, Neier offered that in some respects, the danger is as great as the power of technology. Moreover, even certain accurate information may be limiting, and this has a subsequent effect on freedom.

The US government viewpoint was presented by Paula Breuning, who remarked that for the Internet to grow, it is essential that people be comfortable using the medium. This means limiting the distribution of personal data so as to restore privacy. The NTIA approach is to rely on self-regulation and sector-specific legislation. Breuning places importance on posting privacy preferences and notifying customers of policies regarding personal data. Industry efforts include TRUSTe and the Online Privacy Alliance. The action suggested to facilitate this effort is performing a sweep of Web sites for posted privacy policies. I personally find industry regulation a process that is excruciatingly time-consuming and hard to gain majority acceptance for.

Simon Davies of the London School of Economics was next to speak. Understandably, Davies spent time enthusiastically responding to Scott McNealy's remark, "There is no privacy. Get over it." Davies made a solid point in stating that personal privacy should not be approached with such a laissez-faire attitude; he suggested a more aggressive attitude towards privacy protection. Davies offers the belief that privacy is one of the great features of culture, but as with any law, it is hard to maintain. The people causing the problem should be identified and forced to recognize their wrongdoing.

Hong Kong's Data Protection Commissioner Stephen Lau spoke about privacy in Hong Kong.
Surprisingly (maybe not so surprisingly), there was not a word for privacy in Chinese until recently; the word now is composed of "self" and "hide." Lau outlines the difficulties in protecting privacy through legislation. The results of a survey of approximately 6000 sites in Hong Kong showed that many local Web sites do not post privacy statements or conform to regulations found in the Data Protection law. Lau also offers the credo "whatever is illegal offline is illegal online," and has produced guidelines for Internet privacy.

AOL (http://www.aol.com) Senior Vice President George Vradenburg began by stating that AOL has a publicly visible privacy statement posted on aol.com. Vradenburg focuses on the pace of the Internet, and how companies like his are governed by it. He offers that traditional models don't always apply to the situation, and the government may not be the ideal intermediary. Industry can't look to regulatory models of the past in dealing with a medium that combines media, advocacy, and communication. Finally, Vradenburg supports the current self-regulation efforts in the Privacy Alliance and TRUSTe, and notes the significant market pressure on companies like AOL and Microsoft.

The last speaker was Barbara Simons of the ACM (http://www.acm.org), who posed this question to the audience: do we value privacy so little that we depend on chance to reveal breaks in it? Simons used the example of the existence of GUIDs in Microsoft Word and how they related to the recent spread of the Melissa macro virus. Simons relates the protection of intellectual property online to the problem of privacy; she states the same principles apply. Simons also predicts that instead of raising security, the general response to the dilemma will incorrectly be to increase surveillance.

The Creation of a Global Surveillance Network

This session was moderated by Barry Steinhardt of the American Civil Liberties Union (http://www.aclu.org). The focus is on surveillance, what level of it is necessary, and the resultant effects on privacy. It is an interesting issue to consider the degree to which global surveillance networks adhere to the responsibility of tracking malice, and to what degree they are invading the privacy of law-abiding citizens. The panelists offer a range of viewpoints on this ever-debated issue.

Representative Bob Barr focused on the efforts in Congress in regards to privacy. Barr states that the protection of privacy is currently being achieved by subverting federal laws. He argues that the 21st century's biggest asset is information and the manipulation, gathering, communication, and passing of it. Barr also argues that difficulties in forging ahead with the privacy issue don't break down within party lines.

Steve Wright of the Omega Foundation described the Echelon system, a satellite interception system that isn't only being used for surveillance of terrorists, but for economic surveillance as well. The Echelon system captures a large amount of European traffic. Wright states that surveillance systems should ensure democratic accountability and only survey targeted parties. Statewatch, the independent human rights organization, questioned the government about this system; however, their questions were suppressed.

Ken Cukier of CommunicationsWeek International in France compared the French Echelon system to the UK's; however, he states it's on a smaller scale.
An interesting point made by Cukier here was that the desire to widen the system to other European countries may be the beginning of a Euro-wide effort for surveillance. He states, however, that a Trans-Atlantic effort may be difficult because the existence of two surveillance systems may trigger negative attention from privacy advocates.

USDOJ Computer Crime Unit head Scott Charney offered the viewpoint that the creation of a global surveillance network will give rise to a plethora of views on surveillance vs. privacy. Charney outlines the debate by pitting the advocates of tight surveillance against the "privacy-centric," who tend to support no surveillance at all. The middle ground is covered by the Fourth Amendment, under which the Constitution allows an invasion of privacy to protect the public good. Does the Fourth Amendment still exist in Cyberspace? Charney believes it does. He states that although most people are law-abiding, there are some who are malicious and want to cause the community danger. As in the physical world, Charney believes that law enforcement must have the "tools to be able to investigate effectively." To achieve the necessary balance, Charney offers the Electronic Communications Privacy Act. The courts are involved in the process when any surveillance is necessary, and it is limited to serious crimes. Although the act will not make everyone happy, Charney believes it is a step in the right direction, and serves as a "pretty good model." Lastly, Charney also believes that we must strive to focus on and protect the values rather than specific technologies.

In the Question and Answer portion, Patrick Ball of the AAAS (http://www.aaas.org) asked if a few dead human rights activists was an acceptable price to pay for strong encryption. USDOJ rep Scott Charney offered the predictable response that the exclusionary rule must be used; policy is not designed to solve all problems and there must be a balance. He states, though, that strong encryption does have positive uses.

Anonymity and Identity in Cyberspace

The last panel of Tuesday's general session was moderated by my colleague at AT&T Research, Lorrie Cranor (http://www.research.att.com/~lorrie).

Lance Cottrell of Anonymizer, Inc. (http://www.anonymizer.com/3.0/index.shtml) began his portion of the discussion with the example of information flowing out of Kosovo in the clear. Postings through email attach an identity to the text; therefore, the user may be subject to abuse. The problem lies in being able to relate information securely and privately. According to Cottrell, the limitations include:
 * bandwidth (constrained communications in these areas)
 * unreliable connections
 * no apparent form of training available
 * lack of advanced technology
 * rapid deployment
 * users' desire to remove incriminating software from machines

Cottrell forces the audience to view the problem of anonymity through the Kosovo example. It becomes apparent that achieving anonymity in countries like this can be a serious challenge; however, he does not downplay the challenge of achieving it in technologically advanced countries like the US. Cottrell's reply to a question posed in the Q&A session furthers this idea. He states that anonymity in the physical world is taken for granted. For example, one does not have to identify oneself when buying groceries (if paying in cash) or sending a letter (neglecting to add a return address). This type of anonymity is harder to achieve on the Internet, where tighter surveillance is enabled.
Mike Reiter of Lucent Technologies (http://www.bell-labs.com) outlined technology that can be used to hide information that may identify you. He uses the LPWA (Lucent Personal Web Assistant) proxy server as an example, explaining that users can redirect a Web request through the proxy. LPWA offers support for personalized browsing by issuing an account and password; a control code is entered into any Web form, and the LPWA provides the site the account/password so they are identified without really being identified. The problem with technology like this is the level of trust given to the administrators. The following question arises: do products like these make the issue worse? It seems to be a problem of scope; the more popular and advanced products like this become, the more we will have to trust admins. Reiter is also the co-creator of Crowds (http://www.research.att.com/projects/crowds), an anonymous Web surfing technology.

Paul Syverson of the Naval Research Laboratory spoke about the Onion Routing project (http://www.onion-router.net). The idea is based on a network of nodes scattered around the Internet. The current request only knows the previous and next nodes, while raw TCP/IP sockets route all traffic through the Onion. The technology can exist as a proxy and on a firewall. In addition, the Onion can exist at the desktop; it is interesting to note that, depending on how the system is configured, local system administrators may or may not be privy to whom employees are communicating with or what protocols they use. Essentially, this takes some emphasis off the admins.

The USDOJ's Phillip Reitinger was the gratuitous government representative for this panel. What's a panel on anonymity and privacy without a spoiler? Reitinger outlined law enforcement's concerns regarding anonymity. Reitinger states, "we can't put a pseudonym in jail." He concedes that anonymity is constitutionally protected in some forms; however, networks allow for anonymous crimes to be committed with distance, in regards to both location and identity. A communication trace can be easily circumvented, states Reitinger, by the use of fake email or IP addresses. Content can also be covered if encryption is used, and due to the lack of biometrics the crime may go unsolved. The core argument by the government rep is that anonymity services cause serious headaches for law enforcement (surprise).

The Q&A produced some interesting banter between audience members and the panelists. An interesting question posed was, how does the amount of anonymity in the physical world compare to the amount in Cyberspace? Reitinger offers that the Internet ties anonymity and traceability together, while the physical world keeps them separate. He also states that the amount of Internet privacy depends on the level of sophistication [in technology] utilized. Cottrell gives the answer seen above regarding the groceries and letter with no return address. Another question was, do anonymity service providers get approached by law enforcement to gain the identity of users? Austin Hill, president of Zero Knowledge (http://www.zeroknowledge.com), replies with a yes, but they remain true to their name. Crowds co-creator Mike Reiter states that since it's a distributed technology, there isn't a focal point for requests. Paul Syverson offers the same response as Reiter in regards to Onion Routing. Squash anyone?

The day closed with the EFF Pioneer Awards and the opening reception, followed by the evening working group.
I didn't attend the 9:00-11:00 PM working group because I was enjoying the Washington DC nightlife.

------------------------------------------------------------------------
Wednesday General Session (April 7, 1999)

I'm sure the continental breakfast (that began at 7 am) was chock full of industrious attendees who were eager to exchange opinions and ideas on the previous day's session while nibbling on fresh fruit. I wouldn't know, though, as I was the one who ran down at around 8:05 and grabbed a bagel just as the first session was beginning. You can always network at lunch, you know, and it's better to be conscious for it.

Keynote Address: Mozelle Thompson, FTC

The FTC's (http://www.ftc.gov/) mission is to create an environment of consumer protection so that markets will flourish and consumers will benefit from the abundance of choice. The FTC has been protecting consumers across all media, including the Internet. Thompson suggests that E-commerce has had a growth rate of 200% annually, amounting to roughly $13 billion in 1998. According to Thompson, the opportunities for Internet fraud are abundant due to the low startup costs, real-time payments, and the ability to mimic a legitimate business. He adds that there are infinite places to hide from law enforcement. Thompson examines the issues in preventing Internet fraud. He cites the need for "real, effective, and timely" self-regulation. Essentially, he asks if industry can take the lead in solving consumer public policy issues; he also asserts that consumers have a right to expect government and business to create a safe environment for them to conduct online business in. The FTC has been pressing industry to post privacy statements on their Web sites. In my opinion, the idea of self-regulation is still a fantasy; Roger Clarke proposes an interesting co-regulatory scheme in his paper "Internet Privacy Concerns Confirm the Case for Intervention." Read it.

Keynote Address: Congressman Ed Markey

Markey asserts that privacy protection comes with exercising basic civil freedom, and he would like to see strong pro-consumer encryption policy and support for privacy policies. A posted privacy policy isn't always a good one, he states; it must be clear, conspicuous, and concise. Markey places importance on technological solutions such as P3P (http://www.w3.org/P3P/), as well as a government-enforced set of basic privacy rules. Lastly, he promoted industry self-regulation. Markey seems enthusiastic about technological solutions, and supports necessary actions to ensure efforts by the private sector.

Copyright on the Line: Blame it on Rio? Or Title 17?

Jonathan Zittrain of Harvard Law School (http://www.law.harvard.edu/) began this panel by speculating on numerous issues raised by the use of compressed audio (mp3). Zittrain offers the use of digital watermarking to determine ownership and questions if "fair use" should be built into copyright law.

Henry Cross, Artist/producer, plays the true spoiler to the music industry in this panel. He emphatically asserts that the industry is attempting to crush MP3. His main points include:
 * MP3 gives rise to independent artists who would not otherwise have a chance to distribute their music
 * The issue is control (the music industry's need for it); by empowering people with MP3, they have a choice
 * Labels are worried because MP3 allows artists to directly access fans/audience

Cross relayed his points in a dynamic and emphatic manner and was quite influential.
Michael Robertson, President of Mp3.com (http://mp3.com), placed importance on the need for competition, which will enhance democracy and expose more artists simultaneously. His belief that "legislation shouldn't throttle technology" was apparent in the assertion that MP3 serves as a litmus test for other digital media and distribution.

Scott Moskowitz (Blue Spike, http://www.bluespike.com) spoke about the use of digital watermarking and encryption for ensuring content uniqueness. He would like to see artists/publishers evolve from packaged media to a more dynamic distribution of content. Moskowitz argues that artists should be empowered to be their own PR and publishing force.

Carol Risher, Vice President of the Association of American Publishers, defended the segmented supply chain. She argued that each part of the chain adds a certain amount of value, and the emergence of digital distribution destroys potential opportunities and income for these parts. Roger Clarke points out, "she signally failed to address the key question about whether the industry value-chain could be greatly trimmed, and could provide a larger proportion of the revenue-stream to the originator."

Cary Sherman from the RIAA (http://www.riaa.com) predictably asserted the music industry's opinions of MP3; they are not concerned with MP3 so much as the piracy of it. According to Sherman, artists should have the right to put material on the Internet (it will benefit them), but protection should be in place so that they get paid. Sherman also discussed SDMI (Secure Digital Music Initiative), the movement to override MP3 by creating a standard; SDMI is intended to be an infrastructure for an infinite variety of ways music can be sold (such as subscription services, rent to own, or per number of listens). SDMI will attempt to control piracy while pleasing the RIAA. Michael Robertson later stated that "Cary [Sherman] is not for the artists. He's for his constituency, which pays his salary."

Unfortunately, I cannot do the panel/audience interaction following the statements justice, but Declan McCullagh can. Read his Wired article on the panel: http://www.wired.com/news/news/politics/story/19007.html Roger Clarke also made some interesting comments on this panel: http://www.anu.edu.au/people/Roger.Clarke/DV/NotesCFP99.html

Chemical Databases on the Internet: Risk to Public Safety or Government Accountability?

This panel focused on the possibility that a published, electronically searchable database of facilities and worst-case scenarios could give criminals an advantage. It was argued that critical pieces of chemical information could be used to plan an attack through the Internet. The National Security Council is opposed to this type of database because of the threat of terrorism it poses. One panelist argued that the technological environment today is vastly different from what it was in 1990; we can't control what's on the Internet but we need some sort of safety net. There has been a growth in information technology but not a proportional growth in information protection technology; this is an important point in regards to this issue. Industry has been more positive about informing society by describing what the worst case is, and why it's impossible.

Free Speech and Cyber-Censorship II

The panel in this discussion consisted of a diverse group of individuals that spanned the globe, in keeping with the conference theme.
Richard Swetenham from the European Commission DG XIII concentrated on the European Union viewpoints on free speech and censorship. Since the EU doesn't have a federal constitution, it promotes cooperation between law enforcement and citizens; for example, they have a "tip line" where citizens may provide leads to crimes, etc. In regards to content harmful to minors, Swetenham states that it is illegal to give minors access to such content. The method of determining what is harmful to minors is subjective, however; if parents decide their children cannot see certain content, it's considered harmful. EU efforts are currently directed at providing funding for self-rating schemes.

The next speaker [Sobel] outlined the universal aspects of the censorship issue, and the relationship between free speech and privacy. The government needs a way to identify and locate the person who breaks the "harmful content" regulation, which will lead to more ways of locating "posters" and identity-location mechanisms. One solution is to utilize online age verification; individuals would identify themselves through a credit card. Obviously, this has an effect on anonymity. Sobel goes on to evaluate the options by stating that parental responsibility and education is a viable option, but neither law nor technology will protect children from harmful content. Technology offers commercial software, which Sobel notes is clumsy on average. In essence, Sobel notes that no system can keep up with the growth of the Internet, and the use of technology may not always be voluntary (it may be mandated by government). In my opinion, Sobel seems to provide a middle-of-the-road account of the problem without tackling any of the issues underneath the surface.

Professor Zehao Zhou of York College (http://www.york.edu) spoke about China's situation regarding privacy and censorship. Zhou states that China has made significant strides in these areas but still has a way to go. For example, the 1998 Starr Report was banned in print format but was available online. The government controls all Chinese Web sites, and only occupational and social information is allowed. In regards to censorship, Zhou explains how Web site access and use are monitored. One interesting thing to note is Zhou's statement that the Internet is a status symbol; therefore, the desire to get online is increasing rapidly.

Fadi al-Qadi offers the viewpoint that the Internet has no code of ethics and therefore no possibility of regulation. The lack of a philosophic backbone to the Internet poses the true challenge of finding ways to utilize information technology; we must use the Internet as a true censorship-free vehicle for information communication. Not surprisingly, he also asserts that the Internet is no longer a US-based entity. The challenge offered by al-Qadi is insurmountable for the fundamental reason that we can never create a unified view of what constitutes censorship.

Lastly, Margarita Lacabe of Derechos Human Rights offered the Latin American viewpoint to the panel. She stated that censorship in Latin America is indirect; for example, journalists and human rights activists receive threats with little prosecution of the offenders. In addition, the government prohibits the publication of insults to [government] officials. Of course, the Internet allows everyone to have a voice without filters; however, people who don't share the correct views are in danger.
Anonymity isn't always guaranteed, and the tools available to track the anonymous are becoming more advanced. On the other hand, Lacabe offers the terrorist law in Argentina as a success.

The day closed with the Privacy International Big Brother Awards, followed by the banquet dinner. Recipients of the Big Brother Awards included:
 * US Senate candidate Bill McCollum (Overall Worst Public Official)
 * FDIC's Know Your Customer Initiative (Most Invasive Government Proposal)
 * pharmacy data purchaser ELENSYS (Worst Corporate Invader)
 * the FBI as Orwell Lifetime Menace
 * Microsoft Corporation (outstanding nominee in the Peoples' Choice category).

The Brandeis privacy awards went to PGP developer Phil Zimmermann and Diana Mey, the Virginia housewife who fought and won against a relentless telemarketing firm. Congratulations to the Brandeis recipients!

------------------------------------------------------------------------
Thursday General Session (April 8, 1999)

Unfortunately, I did not attend all of Thursday's session because I was traveling in the late afternoon and sightseeing for part of the morning. I attended the Mock Trial and Tim Berners-Lee's keynote address. I attempted to attend the Point-Counterpoint session, "Are There Limits to Privacy?" but I spilled hot chocolate on my pants on my way into the ballroom. By the time I changed and got back down, the session was over. So, blame missing that one on my lack of coordination.

Keynote Address: Tim Berners-Lee, W3C Director

The talk is available at http://www.w3.org/Talks/1999/0408-cfp-tbl/. Berners-Lee outlined the Web itself, focusing on the major points:
 * It is a completely universal tool, independent of hardware, software, and OS
 * The idea of learning through shared knowledge
 * It furthers human communication and fosters reading/writing/mediating

He also outlined the World Wide Web Consortium and its mission:
 * Scope - common Web architecture
 * Responsible for specification, lead in implementation, and interaction between interest groups
 * Industry members and invited experts

Roger Clarke's comments on the current concerns:
 o bias in information. But that risk can be coped with if people understand that bias exists, what its nature is, that they have choice, and how to exercise that choice.
 o the right to link. But a link does not imply an a priori endorsement. Meta data should be free of constraints.
 o communications protocols. They are technical matters that do not change laws or create or destroy rights. They need a legal framework around them. In particular, Platform for Privacy Preferences (P3P) creates a technical framework that enables automated negotiation by agents; but it also needs consumer protection law around it.

Berners-Lee asserts that there is great fluidity on the Web, and technology and policy must continue to interact and make progress. We must work to protect the consumer and make the Web experience more enjoyable and secure.

Finally, I'd like to add comments from Roger Clarke's report on his conversation with Tim Berners-Lee following the talk. I tackled Tim after the session on the question of whether W3C should establish a standard for state-maintenance, to replace the flawed cookies design (which is a Netscape add-on adopted also by Microsoft, not a web standard). Tim didn't realise that the IETF Draft had expired in January this year, and that there is therefore no current proposal to define a suitable solution to state-maintenance.
He said someone could propose that it be a work-item, and that at the very least W3C could mirror the now-lapsed draft; but someone (presumably meaning a paid-up member) would have to bring forward a proposal.

As for the mock trial, I attended it, but confess my attention was waning a little. Roger Clarke gave an informed summary, however, and I would suggest reading his take. It's about 3/4 of the way down on his site (URL listed at the top of this document). Read more from the CNet News article on this panel.

------------------------------------------------------------------------

I found this year's CFP to be enlightening and informative. Last year was my first CFP so I was a little overwhelmed by the amount of information that springs out of the panels. Not that I wasn't overwhelmed this year, as CFP packs a lot of panels, working groups, and keynotes into each day's session. I learned more about tools to achieve anonymity, something I didn't feel I knew enough about. It was also refreshing to hear the global views on censorship and content monitoring (especially for Asia and South America); being a netizen based in the US, I often don't consider the principles and procedures in regards to these issues (how selfish).

Random Thoughts... Like everyone else, I still think self-regulation needs work...this year's CFP made a valiant effort at giving a global perspective... I think SDMI is going to spark a huge music industry/independent explosion in the future (more than it currently has)...I don't think MP3 will die any time soon...still haven't gotten a pair of tie-dyed socks like the ones I saw John Gilmore wearing at last year's CFP...that's about it. See you next year. By the way, the chair for CFP2000 is Lorrie Cranor.

Disclaimer: The views presented in this document are entirely my own and do not reflect the views of my employer, AT&T. Any complaints, rants, or even compliments should come directly to me (dgallo@research.att.com).

______________________________________________________________________

1999 IEEE Symposium on Security and Privacy
Oakland, CA USA, May 9 - May 12, 1999.
by Mary Ellen Zurko (Mary_Ellen_Zurko@iris.com)
______________________________________________________________________

The 1999 IEEE Symposium on Security and Privacy was held in Berkeley, CA, May 9 - 12 (at the traditional Oakland venue, The Claremont, which changed its mailing address from its front door in Oakland to its back door in Berkeley several years ago). It was the 20th anniversary of the symposium, and was particularly well attended and lively. I apologize for not recording the names of the majority of questioners, and for any names that I got wrong.

The first session, Monday, May 10, was on Systems, chaired by Roger Needham (Microsoft Research). The first paper presented was "Hardening COTS software with generic software wrappers" by Timothy Fraser, Lee Badger and Mark Feldman (TIS Labs at Network Associates, Inc.). Timothy presented. The paper addresses the problem of adding security to Commercial Off The Shelf software that was not designed with security in mind, particularly in the face of the limited security support in widely deployed OSes like Linux and WNT. In addition, they want to add security functionality, not just limit what applications can do, and on a per-application basis. All this must be done at an acceptable cost.
They chose the approach of developing and deploying generic software wrappers, which intercept all interactions with the OS, including inter-process communications. The wrapper is a small state machine that examines the parameters and may take any action, transform the call, and so on. They have wrappers for Solaris and FreeBSD, and one with less functionality for WNT, available for downloading. They can perform Mandatory Access Control, do intrusion detection, provide micro firewalls around processes, provide sandbox execution environments, participate in new protocols, and increase assurance by restricting behaviors. However, they cannot provide information flow control on covert channels, as they can only monitor explicit events. They can't observe behavior in inverted system calls, where the request loop runs inside the kernel. They do not provide an Orange Book-style layered assurance argument. They express rich functionality with a Wrapper Definition Language, provide platform independence with abstraction, manage and ensure wrapper state with an SQL-like interface to share and store it, automate deployment with Boolean expressions, and provide non-bypassable control with a loadable kernel module architecture. A wrapper references an interface and says which events it wants to intercept from that interface. Timothy presented some performance numbers, indicating about 1.5% slower end-user application performance, a 3% penalty for an HTTP Webstone benchmark, and a 6% penalty for a kernel build (all unoptimized). Wrappers are duplicated for forked processes. In wrapper composition, the first wrapper gets to intercept a call out first, the last works on a call back in first. Several questioners brought up the issue that this still provides no protection from a flawed OS, though Timothy pointed out they do run in kernel space. One questioner asked about the relationship between a parent and forked child database. There are three levels of scope in the WDL: global database tables, a middle level for all instances of a wrapper, and one wrapper on one process. The child wrapper can see the parent's. Another questioner asked how programs are identified. It's by the name of the binary.

The second paper was "Firmato: A novel firewall management toolkit" by Yair Bartal, Alain Mayer (Lucent Bell Labs), Kobbi Nissim (Weizmann Institute), and Avishai Wool (Lucent Bell Labs). Avishai presented. Their paper addressed the problem of telling firewalls what to do. This generally requires lengthy and complicated lists in a large environment. Firewalls are configured separately, and they may be from different vendors. Globally, security managers hope that they get the policy they want. Their work tries to separate the design of the global security policy from the firewall vendor specifics, and separate the design of that policy from the network topology, by automatically generating the configuration files. This can also allow high-level debugging of the rule bases and policies. A model definition language describes what the security manager wants to achieve. It goes through a compiler and generates the rule bases for the firewalls. The rule illustrator takes a rule base and displays it graphically, in a hopefully easy to understand manner. They use the term "role" for a property which may be assumed by hosts on the network. Roles define positive capabilities between peers. You need a pair of roles to specify that capability. Whatever is not specifically allowed is disallowed. They also have inheritance.
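To make the role-pair, default-deny idea concrete, here is a toy sketch in Python (an illustration only, not Firmato's model definition language; the host groups, addresses, and service names below are invented):

    # Toy sketch of the role-pair idea: positive capabilities are granted
    # between pairs of roles held by host groups, and anything not derived
    # from a capability is dropped.  Invented names throughout.

    host_groups = {
        "corp_net": {"range": "10.1.0.0/16", "roles": {"web_client"}},
        "dmz_www":  {"range": "10.9.9.0/24", "roles": {"web_server"}},
    }

    # (source role, destination role, service) capabilities.
    capabilities = [
        ("web_client", "web_server", "http"),
    ]

    def compile_rules(host_groups, capabilities):
        """Expand role-pair capabilities into concrete allow rules, ending
        with a default deny (whatever is not specifically allowed is
        disallowed)."""
        rules = []
        for src_role, dst_role, service in capabilities:
            for src in host_groups.values():
                for dst in host_groups.values():
                    if src_role in src["roles"] and dst_role in dst["roles"]:
                        rules.append((src["range"], dst["range"], service, "allow"))
        rules.append(("any", "any", "any", "deny"))
        return rules

    for rule in compile_rules(host_groups, capabilities):
        print(rule)

A real compiler would then split such a rule base per firewall and translate it into each vendor's syntax, which is the part of the job Firmato automates.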
They support host groups in an IP range, and any role for a group is inherited by its member subgroups. The notion of a "closed role" is used for roles that should not be inherited. Policy and network topology are also input to the parser. This generates rules in a central rule base, which can be optimized. The rule distributor cuts the rule base into pieces and puts the pieces on firewalls. A rule illustrator helps with debugging the compiler. You can look at the rules generated. It also allows backwards engineering. If a customer has hand-written rules, it can help the transition to Firmato. The graphical display shows host groups, location with respect to firewalls, containment trees, and services. The Firmato compiler is a working prototype that currently supports the Lucent firewall product. Questioners asked if they had shown the rule illustrator output to any security managers. They tried it out on their home system and showed it to the people running it. They found rules that were left over, and there were some surprises in the structure of host groups. Another questioner asked about correctness-preserving properties between the compiler and illustrator. They were implemented by a small group of people. Avishai stated that "It's a good idea to get both right." Another questioner noted that the visualizer does some decompilation, and asked if they had thought about decompiling the rule base into policies. Avishai stated you can lose information on the way. For example, some firewalls don't support what their language supports. Another questioner noted that it was disconcerting to bind topology at compile time, since it changes frequently and unexpectedly. Avishai replied that they are modeling a limited part of the topology intentionally. It does not include routing information. In response to questions, Avishai noted that it can't help with deciding what goes on each side of the firewall, and that detecting policy conflict was future work. They are not showing application proxy firewalls, just packet firewalls.

The third paper was "Flexible-policy directed code safety" by David Evans and Andrew Twyman (MIT). David presented. Their goal is to prevent programs from doing harmful things while allowing useful work. They want to protect against malicious attacks, as patching code to prevent them after the fact is a losing battle. They are also concerned about buggy programs (which are more common) and user mistakes (from bad interfaces). Each of these may be represented by different kinds of policies. For example, you can be more precise about buggy programs. Their work takes a program and a safety policy and outputs a safe program. They want to reuse their descriptions of safety policies across multiple platforms. Examples of different policies include access constraints (including JDK policies), resource use limits, application-specific policies (such as one for tar), behavior-modifying policies, and soft bandwidth limits (that slow down the application). The user view is of abstract resources like files, not system calls and disks. They want to express the policies at the user level but enforce them at the system level. They enforce them at the level of the system library. A safety policy definition includes a resource description, a platform interface, and a resource use policy. The resource description is a list of operations with parameters and a comment. The platform interface gives it meaning. A policy is a constraint on the resource manipulations.
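The flavor of enforcing such a policy at the library boundary can be sketched in a few lines of Python (a loose illustration only, not the authors' JavaVM/Win32 tools; the write-limit policy, the limit, and all names below are invented):

    # Toy sketch: the application is forced to call a checked library routine
    # instead of the real one, and the policy is consulted on each abstract
    # resource operation (here, writing to a file).

    import builtins

    class PolicyViolation(Exception):
        pass

    class WriteLimitPolicy:
        """Toy resource-use policy: cap the total amount written."""
        def __init__(self, max_bytes):
            self.max_bytes = max_bytes
            self.written = 0
        def check_write(self, data):
            self.written += len(data)
            if self.written > self.max_bytes:
                raise PolicyViolation("write limit exceeded")

    class GuardedFile:
        """Wraps a real file object and consults the policy before each write."""
        def __init__(self, real_file, policy):
            self._f, self._policy = real_file, policy
        def write(self, data):
            self._policy.check_write(data)
            return self._f.write(data)
        def __getattr__(self, name):      # everything else passes through unchecked
            return getattr(self._f, name)

    POLICY = WriteLimitPolicy(max_bytes=1024)
    _real_open = builtins.open

    def checked_open(path, mode="r", *args, **kwargs):
        """Policy-enforcing stand-in for open(): writes go through the guard."""
        f = _real_open(path, mode, *args, **kwargs)
        return GuardedFile(f, POLICY) if ("w" in mode or "a" in mode) else f

    builtins.open = checked_open   # substituting the checked routine stands in,
                                   # very roughly, for their application transformer

The substituted open() stands in loosely for the policy-enforcing system library that their policy compiler generates and their application transformer forces the program to use.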
They can enforce any policy that can be defined by checks that can occur on resource operations. They cannot constrain CPU usage. To enforce policies, they run them through a policy compiler to produce a policy-enforcing system library. An application transformer takes a program and makes it policy enforcing by forcing it to use that library. They have code working on JavaVM and Win32. It scans for kernel traps, so certain good programs won't run. It uses software fault isolation to prevent jumps around the wrappers. They replace DLL names in the import table. "The most surprising result is that it works." They can support policies on Microsoft Word. Their testing showed that PKZip performance degraded 12% using a policy that limits writes. The overhead will depend on the policy and application. It's faster than JDK because it generates policies statically. They can optimize out unnecessary checking. Their JavaVM needs to be attacked. It's available on the web. You can upload a program, select a policy, and have it execute on their machine, though "you don't actually have to bring down our network." One questioner said he had had problems with certain programs when replacing DLL names in import sections. The authors had not yet run into difficulties. Questioning brought out that they don't allow self-modifying code, which precludes Lisp. They also have to worry about reflection being used in Java. When asked about covert channels, they stated that while their web site will send out the violation message, in practice, no information goes back to the code provider. Only the user sees the policy violation.

The second session was on Policy, chaired by Ravi Sandhu (George Mason University). The first paper in the session was "Local reconfiguration policies" by Jonathan K. Millen (SRI International). His work covers the intersection of multilevel database security (the aggregation problem) and system survivability (fault tolerance and reconfiguration). A reconfiguration is a state change, usually forced by a component failure. Services should continue to be supported if possible. A reconfiguration policy says what state changes are supported. For example, multiple services may be supported by different combinations of resources. A state is a set of services and a composition of resources. A configuration has to map the services to the components. For example, a router may be shared for both printing and dialin. Components can be non-sharable. In a multi-level secure database, records have sensitivity labels. A low-level collection can be more sensitive than might be expected. An example is the NSA phone book. So, sensitivity levels apply to aggregates as well as datasets. Information flow is related to an aggregation. It can be allowed or not between certain aggregates. Flow is permitted if the sensitivity of the recipient dominates (a safe flow). You also need to make sure the union of datasets is allowed to have information flow if both have flows to a single recipient, since it's a new aggregation. Meadows showed that there is a unique, maximal safe flow policy that can be computed. Jon then drew an analogy between aggregation and reconfiguration. Aggregates are like compositions, and datasets are like components. He needed a special lambda to correspond to sensitivity level. Flow policies are like reconfiguration policies. An induced reconfiguration policy is a flow on states. If the flow is allowed and realizable, it is allowed in the reconfiguration policy.
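The aggregation condition on the database side can be rendered as a toy check (an illustration only, not Millen's formalism; the datasets and numeric levels below are invented):

    # Toy sketch: a flow is safe only if the recipient's sensitivity dominates
    # the dataset's, and any aggregate formed by datasets reaching the same
    # recipient must be dominated as well.

    from itertools import combinations

    levels = {"phone_records": 1, "billing_records": 1, "analyst": 2}

    # An aggregate can be more sensitive than its parts (the NSA phone book effect).
    aggregate_levels = {frozenset({"phone_records", "billing_records"}): 3}

    def level_of(datasets):
        datasets = frozenset(datasets)
        if len(datasets) == 1:
            (d,) = datasets
            return levels[d]
        return aggregate_levels.get(datasets, max(levels[d] for d in datasets))

    def flows_safe(flows):
        """flows: list of (dataset, recipient) pairs; True if every flow and
        every induced aggregate is dominated by its recipient."""
        reaching = {}
        for dataset, recipient in flows:
            if levels[recipient] < level_of({dataset}):
                return False
            reaching.setdefault(recipient, set()).add(dataset)
        for recipient, datasets in reaching.items():
            for r in range(2, len(datasets) + 1):
                for combo in combinations(datasets, r):
                    if levels[recipient] < level_of(combo):
                        return False
        return True

    print(flows_safe([("phone_records", "analyst")]))                                  # True
    print(flows_safe([("phone_records", "analyst"), ("billing_records", "analyst")]))  # False

The second call fails because the two datasets reaching the same recipient form a new aggregate whose sensitivity the recipient does not dominate.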
He viewed a system as a dataset and examined the consequences of maximality. First he needed to invent a notion like sensitivity. It is based on the notion that the more services a component supports, the more sensitive it is. The sensitivity level of a composition is the set of service-sets of realizable states supported by that composition. He uses this formal analogy to discuss safe flow policies and maximal flow, and to determine if an induced reconfiguration policy is service preserving. Determining the maximal safe flow gives a localizable policy (component replacements). A questioner asked about future generalizations. Jon wants to extend it to hierarchies of support. He would like to apply it to mission critical systems. It can minimize the amount of information needed to make rational decisions about substitutions.

The next paper was "A user-centered, modular authorization service built on an RBAC foundation" by Mary Ellen Zurko (Iris Associates), Richard T. Simon (EMC), and Tom Sanfilippo (Groove Networks). I presented. (This part of the writeup is based on my presentation notes, which explains its length.) Our work is called Adage (Authorization for Distributed Applications and Groups). The goals were to provide user-centered authorization support for security administrators and applications, to be policy neutral, to be a modular part of an evolving infrastructure and to take advantage of new services as they were needed, and to use a role-based (RBAC) backend. Our GUI provided affordances for new and intermittent users, while our authorization language provided extensibility and programmability for power users. Our APIs were as simple as possible for application developers. We support a variety of policies, including Bell and LaPadula, Biba, Chinese Wall, ACL granularity, roles, separation of duty, and ad hoc policies observed in nature. Our RBAC backend provided excellent support for user queries and allowed us to experiment with deployment issues for RBAC. The user interface supports three basic entities: actor, application action, and target. Actors can hold all the principals associated with a single person, and are needed for separation of duty policies. Targets use application-specific namespaces. It supports heterogeneous and homogeneous groupings of those entities. The modular architecture takes inputs from authentication and attribute authorities and responds to authorization requests from applications. The front end has been integrated into the Apache web server, Tcl, CORBA, and third-party administration clients written in Java. The backend can use a variety of database and communications technologies. Two databases represent the user view of authorization information and a simplified engine view. The authorization language is a declarative language for defining and performing operations on authorization objects. It's built on a more general purpose language (currently Tcl). The management APIs are Tcl pipes which don't need to be altered when new management commands are added to the system. The RBAC authorization engine uses roles derived from UI groups to determine the relevant rules. Attributes on entities and groups determine their levels, and constrain when actions may be performed. Rules are made up of actors, actions and constraints, and targets. Policies group rules and form the context for queries. Only one policy is active at any time.
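As a rough sketch of this structure (not the Adage language or engine, just an illustration assuming simple string-named entities), rules built from actors, actions with a constraint, and targets could be grouped into a policy that answers authorization queries like this:

    # Illustrative sketch: rules are made up of actors, actions and constraints,
    # and targets; a policy groups rules and forms the context for queries.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Rule:
        actors: set
        actions: set
        targets: set
        constraint: Callable = lambda request: True  # e.g. time of day, separation of duty

    @dataclass
    class Policy:
        name: str
        rules: List[Rule] = field(default_factory=list)

        def permits(self, actor, action, target, request=None):
            applicable = [r for r in self.rules
                          if actor in r.actors and action in r.actions and target in r.targets]
            # Grant only if some rule applies and every applicable rule's constraint
            # passes (the talk noted that all rules must be passed).
            return bool(applicable) and all(r.constraint(request) for r in applicable)

    # Only one policy is active at any time; queries are evaluated in its context.
    active_policy = Policy("demo", [Rule({"alice"}, {"read"}, {"payroll"})])
    print(active_policy.permits("alice", "read", "payroll"))  # True
    print(active_policy.permits("bob", "read", "payroll"))    # False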
We conducted two types of usability testing, in addition to our initial user-centered design techniques such as scenario-based design, to verify our claim that the system is easy to use. We performed contextual usability interviews, which capture a rich view of the problem domain and are good at finding unknown conceptual problems. Our unstructured interviews included the user's background, job responsibilities, organizational culture, work habits and processes, and security policies (if they existed). Our formal usability testing elicited detailed feedback on a specific GUI prototype. Users performed typical tasks while thinking out loud, and an exit questionnaire provided a measurement of the user's experience. This testing verifies that simple tasks are easily executed, gauges the satisfaction of novice users with the initial GUI implementation, documents novice user errors and confusions, and provides fodder for future changes. We asked the user to accomplish four tasks in one hour: remove a user from the system, add a new user with appropriate controls, service a complaint about lack of access to a resource, and design a policy to control the release of information. We used five subjects from two organizations who had experience doing security administration with distributed systems. Our contextual interviews showed that a security administrator's job is unobtrusive, dynamic, cooperative, and learning oriented. No subject had a documented, detailed authorization/security policy. They preferred GUIs, but required command interfaces as well for situations when they couldn't use a GUI. The formal usability testing verified that novice users can perform meaningful tasks in under an hour. They asked how rules combined, and were satisfied with the answer that all must be passed. They liked the notion of flexible, named groups for targets but underutilized them. We had hoped they would use those groupings for the fourth task, but most relied on namespace groupings instead. All of the subjects were unfamiliar with static separation of duty. The third task was constructed so that the obvious solutions would run afoul of separation of duty. The error message made the problem clear to most of the users. Several administrators suggested simply circumventing the constraint. On a 6 point scale (1 being best), they rated ease of viewing the database at 1.4, ease of finding things at 2.6, and overall satisfaction at 2.8. One questioner asked about sharing of information across administrative domains. We had hoped to incorporate Hosmer and Bell's work on multiple policies, but lacked time. Another asked how difficult it was for users to translate their policy into the GUI for task 4. Our GUI supported the notions they wanted and expected to use, like wildcarding. Another asked about changing constraints and checking backward compatibility. TIS had a paper on this issue at last year's Oakland. Another pointed out that testing on military system administrators would likely produce different results.

The first afternoon session was on Verification, chaired by John Mitchell (Stanford University). The first paper was "Secure communications processing for distributed languages" by Martin Abadi (Compaq Systems Research Center), Cedric Fournet (Microsoft Research) and Georges Gonthier (INRIA). Cedric presented. They want to reason about security properties, while still shielding high level code from both network security concerns and the complexity of the network.
They aim for a high level model with strong implicit security and network transparency. Their work is based on distributed implementations of secure RPC and RMI that do serialization and call cryptographic APIs. The high level model represents programs in the join calculus. This ensures correct protocols and is translated to their distributed implementation, programs in the sjoin-calculus. One correctness theorem is that the translation does not enable attacks. This means that distribution does not enable attacks (of the servers, between machines). It is simpler to reason about security in the high level model than the low level cryptographic calls. Join-calculus is a simple model of distributed programming. Processes can send messages on channels. Processes can also define new channels. Channel definitions can express synchronization patterns. Two processes are equivalent when they have the same behavior in every context of the join-calculus. The goal is that, no matter what the attacker does, it won't get information by interacting with a process. Their work is based on capability-based control. It also deals with integrity and anonymity. Correctness ensures that various program transformations are sound. At the implementation level, such properties are hard to achieve. Processes cannot communicate through private channels. Contexts can express many realistic low level attacks. Sjoin-calculus uses more details and is more explicit. It supplements join-calculus with public key cryptography. It adds keys and ciphertexts, and encryption and decryption constructs. Two channels represent the public network, which is available to any process. Processes use these two channels only to emit and receive. Messages can be observed/intercepted on the network. The anonymity of a sender is not guaranteed, even if the key is kept secret. A protocol is correct when its processes satisfy a few properties in the sjoin-calculus (e.g., communication occurs silently). They set up countermeasures for message interception, replay, session interaction, and traffic analysis. The protocols combine several standard techniques (message replication, challenge responses, nonces, confounders, noise). For a given process, they program a filtering context that intercepts every message that escapes this process and goes to another. Whenever a message crosses the filter, new filters and processes may be unfolded, to ensure that all further messages are filtered. The translation depends only on the interface of the process. The translation reflects the actual communications processing in the implementation of the distributed language. Protocols can be uniformly applied to obtain a secure distributed implementation of a programming language with a precise definition (as a translation), with correctness theorems, and "with considerable expense!" Their results do not depend on the particular program being considered, its runtime distribution, or its interaction with untrusted machines; the equations are preserved regardless. A questioner asked what they mean by trusted. They mean running on a machine implementing their language.

The next paper was "Verification of control flow based security policies" by T. Jensen, D. Le Metayer, and T. Thorn (IRISA). Thomas Jensen presented. They discuss how to use techniques from the semantics of programming languages to verify security properties. They produced a formal program model and specification language for verifying global security properties of code that may contain local security checks.
They model security properties related to the flow of control in the program. This can model various Java security architectures, such as sandboxing (code can call methods from its own site/sandbox or (certain) local methods), resource protection (A can only call C via B, no other way), and stack inspection (a method can be called only if all code participating in the call on the control stack has permission to do so). Verification is automatic based on static program analysis and checking. The program model is a control flow graph. Call edges indicate control, nodes mark that methods have finished. A program is modeled by a set of call stack traces. Properties on nodes include origin, endorsement, read/write permissions, and so on. Properties of traces hold if all stacks satisfy a given property. If there are loops, the stacks can grow arbitrarily high, so the property can't be checked by enumerating them. For a given property and program, they find a bound on the size of the stacks on which the property has to be checked. Intuitively, the number of next nodes in the formula decides how many times to unfold the loop. They apply this work to Java. In related work, Schneider uses a finite state automaton to describe secure behavior and enforce security in a system by parallel composition with a secure automaton. Wallach and Felten formalized stack inspection as a belief logic and proposed secure passing style as a generalization. Programs are transformed so that procedures/methods take an extra argument, their permissions. This work produced a formalism for specifying security properties based on temporal logic, and a semantics-based method for automatic verification of security properties. There is no GUI; no one but experts can use it. The framework can model the stack inspection from the new Java security architecture. In the future, they would like to take data flow into account, deal with mutually recursive calls across protection domains, and specify dynamic segregation of duty-style properties such as the Chinese Wall. A questioner pointed out that the control flow information is not necessarily perfect. If the code calls a base method, it could go many places. In that case, you have to say it can go anywhere.

A panel called "Brief History of Twenty Years of Computer Security Research" closed the day. The chair was Teresa Lunt (Xerox PARC). G. R. Blakley (Texas A&M University) started with 20 years of cryptography in the open literature. Engineers like to make problems, not solve them. The Wright Brothers didn't solve any existing problems; people were institutionalized if they wanted to fly, or burned at the stake if they did fly. He started with the Paleolog era. In 1974 George Purdy had a high-security log-in scheme based on sparse polynomials, publicizing the idea of one-way functions. In 1976, Diffie and Hellman introduced the general idea of one-way functions, trapdoor one way functions, and digital signing. In 1977 the US National Bureau of Standards adopted DES. There was a lot of worry then about trapdoors in the algorithm. In 1978 RSA was published. Merkle and Hellman published their knapsack public key cryptosystem. In 1979 Shamir and Blakley published threshold schemes. In 1980 people wanted to show that cryptographic systems are secure. Even and Yacobi published a cryptosystem that was NP-hard to break but almost always easy to break. This seemed to support Bob Morris' assertion that crypto systems have lifetimes.
"We are just leaving a period of relative sanity in cryptography that began shortly after the 1st World War. During that time, people spoke of crypto systems that we secure for hours, ... sometime years. Before and after they spoke of crypto systems that were unbreakable". In the Mesolog, in 1980, the IEEE Symposium on Security and Privacy began. Davida pointed out that threshold schemes could exhibit security very far from Shannon perfect security (merely cryptographic security). In 1984, Meadows and Blakley introduced ramp schemes that were more economical than threshold schemes, but exhibit only Shannon relative security. In 1985 Miller introduced the elliptic curve PKC. In 1994, NIST adopted DSS. That brings us to the Neolog (today). There is still no proof of DES security, knapsack or RSA. Trapdoor knapsack is basically broken. Secrecy systems, authentication systems, integrity systems; few have proofs, all have lifetimes. Electronic signing and sealing are replacements for inked signatures. Virgil Gligor (U Maryland) then spoke on 20 years of OS Security (Unix as one focus), moving us to the inflammatory. 35 years ago, we had Operating System security. Multics, PSOS, Hydra, CAP, KVM, KSOS, PDP-11, capabilities, and so on. Much of what followed was based on this early work. The reference monitor is the way of the future and always will be. It encapsulated all the properties of the security policy; you could look at it, prove it, and were done. Then we made some concessions. Some properties were embedded in outside trusted processes. Privileged processes could circumvent isolation. Then, we went to many reference monitors, used recursively at application level. Untrusted subjects became smaller and smaller, more had to be trusted and verified. The reference monitor may be necessary, but not sufficient for policies we're interested in. We can't encapsulate, for example, denial of service. Information flow property is not a safety property. Fred Schneider showed we can encapsulate safety properties in a reference monitor. Otherwise, have to look at the untrusted code itself. We want to make sure no Trojan horse exploits information flow. We prove information flow of approximations of systems (not abstractions). We do no prove the unwinding theorem because the OS is a large covert channel, so non-interference proofs will always fail. Penetrate and patch wins in the short run. It requires only a small investment, assuming penetrations are rare. Open OS is useful; everyone can take a crack and post the fixes. Open may be more secure in the long run. In Linux, problems get fixed. He's not sure about proprietary operating systems. Cookbooks of OS security assurance have regrettably had a limited role so far. There are no metrics to measure and they offer largely theoretical advice. Practical advice assumed that security was a central. It really is after both functionality and performance. Intrusion detection is of fundamental concern. We still have to find out what insiders have done. We need anomaly detect ion and event signatures for accuracy. Intrusion detect will fail unless we can improve performance and filter the events that we really want to audit. The difference between security and privacy is that we need privacy guarantees. Steve Lipner (MITRETEK) spoke on 20 years of criteria development and commercial technology. He said Virgil gave about half his talk. He chose a political theme. He asked, Are you better off than you were 20 years ago? 
Are you more secure than you were 20 years ago? Well, maybe. We have laptops and constant electronic connection. We might be holding even. In 1980, we had some identification and authentication. Most security was in the OS. We had DAC, auditing, and basic assurance from the vendors. IBM was committed to penetrate and patch. We wanted MAC, higher assurance by neutral third parties (beyond the author saying "yep, it's good alright"; we thought the Orange Book and Evaluated Products List were the answer). We wanted to process classified information in environments with users not all cleared to that level. We didn't want information to leak out over the Internet. Many sessions early in this conference focused on evaluation and criteria. We got the Orange Book and evaluations, and we got products into evaluation. Every major vendor through the '80s was working on B and A level systems. There was an initiative to populate the EPL; C2 by '92! C2 was supposed to be easy. By 1992, we got identification and authentication, DAC, auditing, and basic assurance by neutral third parties. But users bought what they bought. The C2 evaluation was after the product shipped. In 1999, it's the Internet, stupid! We have firewalls, intrusion detection, cryptography and public key infrastructures, and strong authentication. You can buy lots of security products; products and technologies that actually improve security, are compatible with the technology that users want, perform well, and are not evaluated. Steve sends encrypted email to his colleagues pretty easily. Everyone wanted to connect up. Everyone was spooked by it and went looking for products. Money was an amazing motivator. They're not perfect, but they improve. For evaluation for the new century we have the Common Criteria, which Virgil said won't work. We have commercial evaluation labs that evaluate any feature set. The user must understand the evaluation and integrate products. They have explicit responsibility. Government users still need MAC and higher assurance by neutral third parties. Government users insist on COTS products and features. Will the market demand mandatory security and assurance?

Jonathan K. Millen (SRI International) spoke next on 20 years of covert channel modeling and analysis. A covert channel is a form of communication. He likened it to a game of volleyball. There are rules. Some restriction on communication is needed (access control). You can communicate in spite of that by affecting a shared resource, but it might not be the usual form of communication. With information hiding, you're allowed to communicate, but the referee isn't supposed to notice an additional message. There are several branches of study of covert channels: modeling, theory of information flow, searching in the code and specifications, bandwidth estimation, bandwidth reduction, audit, and so on. Jon also mentioned some basic papers, from Lampson's note on the confinement problem in '73 to Hu reducing timing channels with fuzzy time in '91: the Denning lattice on information flow, Gypsy, the Goguen and Meseguer paper that led to noninterference work, Kemmerer's practical approach (the shared resource matrix), Millen's finite state estimate of the capacity of covert channels based on Shannon's work, and Hu's work on fuzzy time (the notion was that the defense against timing channels was to perturb the system realtime clocks). It was the first published paper on hardware channels. Early work assumed the channels were noiseless as a conservative, simplifying assumption.
This produced an upper bound. Moskowitz and Miller looked at the effect of random delays from an interfering process. When Jon saw the function they were using, he decided to change his focus to protocol analysis. In database systems, they looked at polyinstantiation and transaction processing. In cryptosystems, they researched subliminal channels and protocol attacks. Jon then shared some questions and answers from the field. Have covert channels ever been exploited? Yes. Do they require a Trojan horse? Yes, and a nondiscretionary security policy. How fast can they get? Thousands of bits per second. What's the difference between storage and timing channels? No one knows. Can you detect all potential storage channels? Yes, and then some, by analysis of the OS and trusted software. Can you eliminate all covert channels? No, unless you're willing to redesign the system in a fairly drastic way.

John McLean (NRL) spoke about 20 years of formal methods, and why some may be of limited use. He had several evocative soundbites, such as, "What's the use of being formal if you don't know what you're talking about?" and "Algebraic symbols are what you use when you don't know what you're talking about." Formal methods are a system of symbols together with rules for employing them. The rules must be recursive and computer processable. He searched the CD of all the past Oakland proceedings for "formal methods" and started counting the papers. There were papers on formal models, flow analysis, verification, and application to A1 OSes in the first few years. '84 had a whole session on formal methods: verification methods, security models, and flow. He then showed us the results he got with Excel, the 2nd great program he learned to use after becoming a manager (Powerpoint being the first). In '84 there were 14 formal methods papers. That began the heyday of formal methods, which lasted through '96. In '84 there was work on Ada and an A1 OS. Millen's Interrogator foreshadowed cryptographic work in '90. In '89, after System Z, he learned not to take on the NSA lightly. Most of the Clark and Wilson impact happened outside of this conference, though the paper itself was published here. John had several lessons learned. If you're going to say something, make a mistake, or no one will write about it. Security is not simply good software engineering, and security verification is not simply an issue of software correctness. Correctness is not preserved by refinement or composition, and testing doesn't work either. Proofs can be flawed or they can prove the wrong thing. A large verified OS is too costly and stagnant. Formal methods are not appropriate for lightweight security. Some security properties are not treated by formal methods (integrity). It's good for small, complex specifications or programs. Formal methods can shed insight into properties they have been used to analyze. There are lots of algebras, and so on, and all have different uses. Formal methods require supporting technologies. To be used by practitioners, formal methods must be built around the practitioners' established ways of doing things. They need to show added benefit when the developers don't do anything differently. Formal methods aren't all we have to think about.

Steve Kent (BBN/GTE), the last presenter in the panel, spoke on Network Security: Then and Now or 20 years in 10 minutes. He showed a comparison on a variety of topics between '79 (though it probably should have been '80) and '99.
In key management, we had Key Distribution Centers (KDCs) in '79, we have Public Key Infrastructures now. We're still worried about what happens when crossing administrative boundaries. Net encryption for the DoD has the same goals as then, to interpose between trusted and definitely not trusted parts of the network. Net encryption for the public used proprietary link devices, and now we have IPsec. There was little of it back in '80. Some people used DES. In crypto modules, custom chips today implement part of the protocols. For public algorithms, we had and still have DES and RSA. DES was new back then; RSA was becoming known. Today we have DES, RSA, DH, and the Advanced Encryption Standard. In access control and authentication, we had MAC and DAC then and now, and now we have role based access control too. For access control models we had and have ACLs and capabilities. For perimeter access controls we had guards and now we have firewalls and guards. Firewalls are now the major form of perimeter control, even sometimes in the military. For user authentication we had and have passwords. We also now have one time password systems, fancy tokens, increased use of certificates, and more biometrics than is good for us. Now we send passwords over SSL. In net intrusion detection we used to say "huh?" and now there are many alternatives. For network protocols we need to secure, we used to have telnet, ftp, email. We still have all those, and the list goes on, including HTTP, BGP, SNMP, and so on. In '79, for security protocols, "yeah, we made 'em up!" Then Jon Millen started analyzing them and took all the fun out of 'em. Now we have IPSEC, SSL, GSSAPI, SOCKS, and so on. The IETF still has many cowboy properties. For assurance we had SFA and code reviews. Now we have FIPS 140-1, TNI, ITSEC, and Common Criteria, and "we are so, so much better for it." "Every good talk should end with assurance, though we do it up front."

Then came the question and answer period. Snow said that the emphasis on penetrate and patch saddens him very much. It assumes the best penetrators will post a patch. Virgil said it was economic justification. There's zero up front cost so no large investment, no extension of time to market, and they're betting that there will be few intrusions. Lipner said that sadly, Virgil is exactly right. Time to market is important. Companies start collecting revenue and beat their competitor and get market share. Mike Reiter pointed out that penetrate and patch doesn't hold in avionics; they don't let a plane crash and fix it later. Lipner said that software is soft, and the criticality of applications isn't acknowledged. They can patch after it's fielded. Dead people get more lasting attention than flurries around a visible computer flaw. VLSI folks do assurance much more than OS vendors, because a flaw in the field (any flaw) is a big deal. McLean said it's good enough for the markets; they'll respond with insurance, lawyers, interest rates, and so on. "In the Navy, we classify our mistakes." (so that outsiders don't find out about them.) McHugh suggested we fight efforts to absolve developers of liability. Olin Seibert said we focused on OSes to stop people from doing things. Melissa shows that programs are designed with flaws that allow these bad things. We can't stop a program from sending mail when a user can. Where should we go? Lipner said you want to get away from programming generality. It's a matter of design.
Drew Dean asked for reactions to open systems, where problems are found and fixed. Also, what about the folks who don't update because they have a system that works (or almost works); how do we deal with legacy? Virgil said it was the same as other problems like choosing good passwords. Users and system administrators have a responsibility, and must live with the consequences. Kent pointed out it's human nature. Home burglar alarms are mostly installed after the client or a neighbor was burgled. Prevention is not high on the list. Peter Neumann said that the Web paradigm takes care of upgrades. You download just what you need, do dynamic linking, reload, all in a secure fashion. On CERT, Steve Bellovin went back and looked at advisories for the previous year. 13.8 of them are buffer overflows. We've been talking about buffer overflows for 30 years. We're talking about software engineering; good practice is necessary. Hilarie Orman said that OpenBSD is committed to eliminating buffer overflows from source. Kent pointed out that people got excited about using SSL to protect credit cards for all the wrong reasons. To steal credit cards, all you have to do is set up a site and ask for them, and be sure to use SSL. A questioner asked, except for cryptography, has anything else in the security community changed the outside world? Virgil said we have had an impact, but it's not measurable. Vendors do know more, know what it costs. McLean said we have graduate students now. Bob Blakley said that he made an economic argument 2 years ago about why security isn't going to work. He suggests something deeper, like an adaptive response to complexity. We can get 5 billion people to do half the work for free. Is there any other adaptive response? McLean said we rely very much on cryptography. Virgil said that investing in up front costs is one response, another is penetrate and patch, and the third approach is to educate students and future designers. He asked how many don't use structured programming? We were taught how to do that in universities. A questioner asked about approaches to the problem of trusted insiders doing harm. Kent said that network security says perimeter security. Commercial clients realize it's inadequate. They deploy network security in a more layered fashion, separating parts of the network. Lipner said we have been beating on technology, technologists and suppliers. End users have to take some level of responsibility. Organizations have to establish policies, and put controls in place. One of the most valuable employees he ever had was a great system administrator. He read the audit trails, and knew who was doing what.

Tuesday started with a session on Intrusion Detection, chaired by Cynthia Irvine (Naval Postgraduate School). The first paper was "A data mining framework for building intrusion detection models" by Wenke Lee, Sal Stolfo, and Kui Mok (Columbia University). Wenke presented. After outlining problems with traditional approaches, he described their approach based on data mining. Their work automatically learns detection rules, selecting features using patterns from audit data. A set of data mining algorithms computes frequent patterns. The performance of their approach was tested in a DARPA evaluation. Traditional approaches consist of pure knowledge engineering. They handcode patterns for known intrusions or select features based on experience or intuition. There is a push for an automated approach. They call their work MADAM ID. It learns detection models automatically.
It selects features using frequent patterns from audit data. A frequent pattern is like a statistical summary of system activities. It combines knowledge engineering with knowledge discovery. The raw audit data (such as tcpdump packet data) is not suitable for building models. They process it into packet-based ASCII data and summarize it into connection records. The tool then mines patterns and suggests additional features, then builds detection models. They build classification models, such as intrusion vs normal, determine relations between attributes, and find sequential patterns. They use various algorithms to find each. The key is to include useful features into the model. They automate feature selection by mining frequent sequential patterns from the audit data. They identify intrusion only patterns to construct temporal and statistical features. The basic algorithms are based on association rules and frequent episodes. Extensions exploit the characteristics of audit data. This approach is efficient and finds only relevant and useful patterns. Examples include "30% of the time when a user sends email it is from a particular host in the morning; this accounts for 10% of the user's overall activity". They have extended the basic data mining algorithms. An axis attribute is the service used, like HTTP. Reference attributes include the subject of a sequence of related actions. They construct features from parsed patterns. The DARPA Intrusion Detection evaluation was in '98. They got 7 weeks of labeled training data (tcpdump and BSM output). It was labeled normal or intrusion. They then got 2 weeks of unlabeled test data. They were to label each connection and send it back. There were 38 attack types in 4 categories (denial of service (dos), probing, remote gaining access to local (r2l), and user gaining root privilege (u2r)). Some of the attacks were new. They used features constructed from mined patterns. There were temporal and statistical traffic features that describe connections within a time window, such as % of rejected connections. They had a content model for TCP connections. For example, sometimes a user gets root going through su root, sometimes through a buffer overflow. They had a traffic model for all connections. They had a very good detection rate for probing and an acceptable rate for u2r and dos. They did poorly for r2l. There are many different ways to mount r2l attacks, and there was a lack of representative data in the training. In the future, they want to do anomaly detection for network traffic. They can currently classify only at the end of the connection; they also want to get evidence during it. One questioner asked what types of probes were detected. Some were slow moving; one per half hour, for instance. They can sort by destination host. It's an expensive computation to sort that way. A questioner asked why they believe that some subspaces are more dense than others. Wenke replied that it is the nature of intrusion; you have to do bad things quickly. A questioner pointed out that denial of service can be one packet. Wenke said that the model is not 100% accurate. A questioner asked about the right time threshold level for detecting attacks. They don't know. They would like to incorporate a sense of cost into intrusion detection activities; how much time or resources to commit. Some people do and some don't care about slow probing.
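For flavor, here is a small illustration (with invented field names, not the MADAM ID code) of the kind of temporal and statistical traffic feature mentioned above: for each connection record, the fraction of connections to the same destination host in the preceding time window that were rejected.

    # Illustrative feature construction over connection records sorted by time.
    # Each record is assumed to have 'time', 'dst_host', and 'flag' fields,
    # with 'REJ' marking a rejected connection.
    from collections import deque

    WINDOW_SECONDS = 2.0

    def add_traffic_features(connections):
        recent = deque()
        for conn in connections:
            # Drop records that fall outside the time window.
            while recent and conn["time"] - recent[0]["time"] > WINDOW_SECONDS:
                recent.popleft()
            same_host = [c for c in recent if c["dst_host"] == conn["dst_host"]]
            rejected = [c for c in same_host if c["flag"] == "REJ"]
            conn["same_host_count"] = len(same_host)
            conn["rej_rate"] = len(rejected) / len(same_host) if same_host else 0.0
            recent.append(conn)
        return connections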
The next paper was "Detecting intrusions using system calls: Alternative data models" by Christina Warrender, Stephanie Forrest, and Barak Pearlmutter (University of New Mexico). Christina presented. They did a comparison of several different analysis methods for intrusion detection. UNM showed that you can monitor program behavior to detect intrusions by watching just the system calls, not any arguments. While the traces of calls can vary widely, small patterns within them are characteristic of normalness. They use simple pattern matching techniques which are, well, simple. They asked, can we do better? New and more sophisticated analysis techniques can be applied to the same data. They chose some and tried them on lots of different data sets. Their original technique was to use fixed length windows, say of 3 or 6 calls, and extract all the fixed length sequences from the trace. In testing they look for anything not in the database of normal sequences. A single mismatch may just be a new normal. But clumps o f mismatches in a short portion of the trace may indicate a problem. There are also frequency-based methods, like EMERALD, that look at which sequences appear and how often. There are data mining methods that identify the most important sets of system calls. Finite state machines describe the process that produces the traces in their entirety. There are numerous language induction methods, neural networks, time-series prediction. Traces have relatively large alphabet sizes (20 - 60 is typical) and varying lengths. History matters locally but not globally. They are generated by deterministic computer programs, so stochastic methods might not help. Their goals were to provide precise discrimination, efficient on-line testing (catch it while its happening) and reasonable off-line training. The representatives they chose to test were their sequence time-delay embedding (stide), where patterns never seen before are suspicious, stide with a frequency threshold (rare patterns are suspicious), rule induction (RIPPER) where sequences violating prediction rules are suspicious (it has a smaller data representation and better generalization), and Hidden Markov Model, where a trace that is difficult for HMM to produce is suspicious (this allows for nonstationary behavior). HMM can run for hours or days for training; they ran it for 8 weeks. They used a variety of data sets to do an exploratory study. They collected data while were users going about their daily business. Some were constructed through explicit test runs as well. Most of the detection methods have sensitivity thresholds changing how many intrusions are detected and how many false alarms are triggered. HMM has less sensitivity and RIPPER has a very narrow range in the true positive vs. false tradeoff. Their evaluation of these was not symmetric. For true intrusions, either it detected it or it didn't. However, they count each alarm for false positives. They compared the results to a random classifier; all did better. Their simple frequency based methods was not a s good as the others. All the others have points that are near ideal, but none are exactly ideal. With average data, HMM seems closest to ideal. There were widely variant results across the different data sets. They posit that if they add another data set, the average and median results for each will change. 
Their conclusions were that most methods perform well, the computational effort is not directly related to the results, large differences in performance are due to the individual data sets, and finding data appropriate to the problem is as important as using sophisticated analysis methods. One questioner asked, if I was an attacker outside trying to get in, what fingerprint could I see external to the system that would allow me to adjust my strategy. A coworker answered you could try to figure out which programs are traced. Whether a fingerprint is left depends on how it is implemented. Another asked if they had looked at how quickly they can detect and identify an intrusion before it begins altering the system. Most are currently off-line tests of what they hope to be online. A questioner asked where their window size came from. It was a tradeoff. They found 6 - 10 worked well. With smaller, there was less information. With larger, the larger database made it more time consuming. Another asked how you know that normal is normal. There are no guarantees. The method they use didn't really address that. Another asked if you'd need training sets for each part of an organization. Yes, the definition is very sensitive to the system and users.

The final paper of the session was "Detecting computer and network misuse through the production-based expert system toolset (P-BEST)" by Ulf Lindqvist (Chalmers University of Technology) and Phillip A. Porras (SRI International). Ulf presented. The talk outline covered expert systems for misuse detection, P-BEST integrated into EMERALD, a P-BEST language introduction, rule examples for P-BEST for the audit trail and network, and student testing of the usability of the tool. The desired characteristics for a misuse detection system are that it is fast, with low resource consumption, has the ability to describe all types of intrusions, known and not, has a language that is easy to understand and use, and supports manual and automatic update of rules. Rule based expert systems have rules and facts, with a forward chaining system, which is data-driven by facts and uses rules to produce new facts. There is a separation of rules, facts, and the inference engine. The rules are production rules (if situation then action (antecedent -> consequent)). There are traditional reasons against expert systems as intrusion detection systems. The performance is too low, they are designed for user interaction, not for automatic event analysis, they are difficult to integrate with other systems, and the language complexity puts up a learning threshold for new users. P-BEST is a general purpose forward chaining inference engine. Rules are translated into C code, not interpreted, which produces a significant performance advantage. It's ready for extension and integration; you just write another C function and call it. You can call the engine from other C programs. The language is small and thus easy to comprehend for beginners. P-BEST was first used in MIDAS, which monitored Dockmaster and did analysis on a separate Lisp machine. P-BEST was used in IDES and NIDES. It has 39 rule sets and 69 rules in NIDES. An EMERALD monitor is a generic architectural building block. A monitor instantiation watches over a particular target. A resource object describes the target. The profiler engine performs statistical anomaly detection. The signature engine does misuse detection. The resolver compiles reports from the engines and responds with warning messages, and so on.
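To illustrate the production-rule style (if situation then action) in the most generic way, and emphatically not P-BEST's actual syntax or engine, a toy forward-chaining rule for repeated authentication failures might look like the following; the threshold, window, and field names are invented.

    # Toy forward-chaining sketch: facts drive rules, rules produce new facts
    # (here, alerts) and delete the events they consume so the fact base does
    # not grow without bound.
    facts = []    # e.g. {"type": "bad_login", "user": "root", "time": 30}
    alerts = []

    def rule_repeated_failures(threshold=3, window=60):
        bad = [f for f in facts if f["type"] == "bad_login"]
        by_user = {}
        for f in bad:
            by_user.setdefault(f["user"], []).append(f)
        for user, events in by_user.items():
            recent = [e for e in events if events[-1]["time"] - e["time"] <= window]
            if len(recent) >= threshold:
                alerts.append({"type": "auth_failure_alert", "user": user})
                for e in recent:          # consume the matched events
                    facts.remove(e)

    def assert_fact(fact):
        facts.append(fact)
        rule_repeated_failures()          # data-driven: new facts trigger the rules

    for t in (10, 20, 30):
        assert_fact({"type": "bad_login", "user": "root", "time": t})
    print(alerts)                         # one alert for user "root"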
There is distributed monitoring with communication between the monitors. In the P-BEST language, a fact is declared as a pattern type (ptype). P-BEST rules have an antecedent and a consequent. The antecedent is a bit like a template over attributes that need to match. The consequent can extract event information and produce side effects, and manipulate the event as well. It makes sure a rule is not eligible to fire again by deleting the event. This also keeps the fact base from running out of space. It can keep state between rules, as in repeated authentication failures. For example, if bad logins are older than a time period, it can remove the facts and decrement a counter. Ulf showed an example of network monitoring for an FTP server. It tries to detect the result of the attack, instead of the method of attack, such as an anonymous login occurring. He also gave a SYN flooding example. The SYN detection "pool guard" shows an alert when too many people are in the pool too long. At a threshold, it sends an alert, and deletes old facts/events. His student exercise was to monitor an FTP server. They were evaluated against recorded real data. There were 46 groups with 87 students. 25 were completely correct; 8 misinterpreted the vaguely formulated requests. Some suggested improvements to the usability of the tool. In conclusion, Ulf said that P-BEST is powerful in performance and functionality, flexible in the problem domain, easy to use for beginners and experts, and qualified for student exercises and deployment in modern intrusion detection systems. I asked for more information about student reaction. Ulf said that they said it was fun to work with P-BEST, and that they questioned everything. A questioner asked if the level of abstraction is too low. It can be difficult to implement even a simple state machine, but after implementing the first, it's easier. Another questioner asked what happened if 2 rules can require the same fact. You can mark the facts; each rule has its own unique mark. The lowest priority rule deletes the fact.

The Tuesday panel was "Near Misses and Hidden Treasures in Early Computer Security Research." It was chaired by Stan Ames (MITRE). Stan introduced his panelists (Tom Berson of Anagram Labs and Xerox PARC, Dick Kemmerer of UC Santa Barbara, and Marv Schaefer of Arca) as Tom, Dick and Hairy (you had to be there, or know Marv, to see the humor). They allocated chunks of the past conference proceedings to the panelists chronologically.

Tom began with the first several years of the conference, calling it "Mining the Codex Oaklandius '80 - '82." In '80, the hotel was in Oakland. The near miss was that it could increase its rates if it moved to Berkeley, so it changed its postal address to the back instead of the front. Tom's criteria for selection were papers that not enough people know about or that were almost right but didn't quite make it. Carl Landwehr thought this panel up. Tom looked for papers that were prescient, excellent and/or fun. He looked over 60-odd papers; it was like being on a program committee. Some he remembered with a certain thrill, but they were perhaps forgotten. It was fun to reread them knowing what we know now. This all happened before the web. He noted that there is currently sloppy scholarship, here and elsewhere, and that faculty is largely to blame, while the program committee also bears some of the responsibility. For this presentation, his own blinders leave out things that happened but didn't get recorded here and things that happened before this conference.
For each year he presented papers that were Notable (still have a message), Quotable (fun), and Hidden Treasures. In '80, the Notable papers were by Liu, Tucker Withington, and Simmons. Liu's "On Security Flow Analysis in Computer Systems" discussed expression flows to certify systems and static authorization functions. Tucker Withington's "The Trusted Function in Secure Decentralized Processing" said what Virgil said yesterday. Simmons never double spaced his papers. His "Secure Communications in the Presence of Pervasive Deceit" discussed how symmetric and asymmetric techniques differ only in the secure exchange, and the problem of authenticating a public key directory. The '80 Quotable paper was by Ames and Keeton-Williams, "Demonstrating Security for Trusted Applications on a Security Kernel Base". "The problem with engineers is that they tend to cheat in order to get results. The problem with mathematicians is that they tend to work on toy problems in order to get results. The problem with program verifiers is that they tend to cheat at toy problems in order to get results." The '80 Hidden Treasure was by Merkle, "Protocols for Public Key Cryptosystems". It anticipates the use of signed mobile code distributed over communications networks, timestamping, and witnessed digital signatures. It discusses digital signatures with no disputes signed by a network administrator for distributing code.

In '81 the Notable papers were by Millen and Simmons. Millen's "Information Flow Analysis of Formal Specifications" built a tool to analyze systems. Simmons' "Half a Loaf is Better Than None: Some Novel Message Integrity Problems" shows that signatures can be disavowed by publishing or claiming to have published the secret key. He had three Quotable '81 papers. Tasker's "Trusted Computer Systems" was a PR piece where the government decided to get commercial vendors to submit designs for evaluation. "Involvement of computer manufacturers is, of course, essential. The DOD does not want to be in business of building and maintaining its own line of computers and operating systems." Ames's paper, "Security Kernels: A Solution or a Problem?": "Why then is there such a growing concern about computer security? I believe that the answer lies in two areas of computers that are somewhat unique. The first is that management has little, if any, idea of what is being done in their computer centers, or how vulnerable they are. This could be referred to as the "Ostrich" problem. The second problem is that the computer industry has grown out of a technology based upon program hacking. The general rule of thumb has been to build it now, design it later, and only be concerned with making it work after delivery while on a maintenance contract." The government was paying for the patching; now no one is. In a panel on "Cryptography" chaired by Davida, Wilk said "The main message is that government policies which apply to cryptography are a few decades old - formed at a time when nondefense interests were trivial. These policies may need to be updated to accommodate evolving private sector needs." And Morris said "Another observation I can't help making from being at this meeting and from previous experience is this: I talk to all of you, I look at your name tags and see the organizations you represent, and I see that with very few exceptions you represent suppliers of cryptography or security systems and I keep asking the question, Where are the users? Which of you represent the potential users of cryptographic services?
I've met 3 or 4. Crocker bank is represented, for example, and there are some others. Where is the market? How are the people doing selling DES chips? There's a warning here." The '81 Hidden Treasure was by Miller and Resnick, "Military Message Systems: Applying a Security Model." It said that the Bell & LaPadula model is neither adequate nor appropriate as the basis of a military message system. It anticipates object oriented security policies, and defines policy from the user's point of view of the system. Applying the Orange Book outside of its appropriate scope was a tragedy for security research.

'82 had three Notable papers. Lipner, while at DEC, wrote "Non-Discretionary Controls for Commercial Applications". He said that the lattice model may be applicable to commercial processing, but it requires different ways of looking at it. Goguen and Meseguer wrote "Security Policies and Security Models", the foundation of information flow. It was impossible to read then, and it's still impossible to read. They are not modest and they don't mince words. Kemmerer wrote "A Practical Approach to Identifying Storage and Timing Channels", which had a tremendous impact. It put forth a shared resource matrix methodology for all phases of the software lifecycle. Four papers were Quotable in '82. Millen's "Kernel Isolation for the PDP-11/70" includes a two paragraph review of the state of play of security kernels and the current state of research on them. Turn's "Privacy Protection in the 1980s" foreshadowed the current state of privacy, including the recent Economist cover story that concluded privacy has eroded in the last 20 years, and will continue to do so. The introduction to Anderson's "Accelerating Computer Security Innovations" neatly lays out the tensions between defense needs, commercial needs, defense deployments and commercial deployments. Purdy, Simmons, and Studier's "A Software Protection Scheme" is a plea for tamper resistance. The '82 Hidden Treasure was Simmons and Holdridge's "Privacy Channel". It might be a near miss. It warns against the small-codebook problem in public-key cryptography. It prefigures the PKCS #1 padding attack. Tom's final message was that beauty is altogether in the eye of the beholder; search for your own hidden treasures.

Dick had '83 - '85, though they all looked through all 10 years too. He looked at the cryptography vs. non-cryptography papers. There were 10 non-cryptography and 8 cryptography papers in the first year. All the cryptography papers were done in a row, and neither camp listened to the other. He looked for papers that he would have students go back to. Rushby and Randell's "A Distributed Secure System" discussed standard Unix systems and small trustworthy security mechanisms. It was based on the separation kernel idea. The best work is always based on previous work. Millen's "The Interrogator: A Tool For Cryptographic Security" was the first published paper on tool-based analysis of encryption protocols using formal methods. It was Prolog based. Goguen and Meseguer's "Unwinding and Inference Control" was based on their '82 paper. Unwinding reduces the proof of satisfaction of a policy to simpler conditions, which by inductive argument guarantee that the policy holds. It was the foundation for the non-interference, deducibility, and non-deducibility work. It made things more practical. In "Fingerprinting", Wagner gives a taxonomy for fingerprints. He suggests a subtle encoding for computer software. This work is mostly unknown in the fingerprinting community.
Boneh in '95 does refer to it; otherwise there are no references to it. The taxonomy includes logical (machine processable) vs. physical (on a drinking glass), degree of integration (from perfect, where if it is altered, the object is unusable, to statistical, where to some level of confidence, if there are n altered objects, you can determine the source), association method, and granularity (discrete or continuous). These were Dick's terms. Dick also noticed that nobody's affiliation seems to be constant. He asked how many others are still working in the same place as 19 years ago (a few raised their hands). He closed by asking, Whatever happened to Ada? SDC? Don Good? The details are all in the papers; enjoy reading them.

Marv remembers when hacking was a job qualification, not a federal offense. He had '86 - '89. The times were very trying. We were trying to do things, trying to get things evaluated and couldn't, trying to build practical systems, trying to build Seaview. He did a breakdown on the papers in those years. Approximately 18 were dedicated to modeling, 17.5 on database management (the .5 was on encrypted database management), 15 on formal verification and methods, 14 on secure systems and architecture, 11 on operating systems by themselves, 7 on networks, 5 on cryptography and key management (a key generating and encrypting system was on the cover for the first two of those years), 5 on information flow and covert channels, 5 on viruses (all in the same issue), 3 on authentication, 1 on graph theory applications, 2 on intrusion detection, and 1 on A2 criteria. He read all the papers twice each for hidden gems. There were fads that came and went quickly. There were 3 papers in one session on capability based systems, and 3 in a row on Compartmented Mode Workstations. The Q&A destroyed the continuity and a special session was called for that evening. It created quite a flurry. The covert channel papers were fun in the beginning. There was 1 in '86, 4 in '87, 1 in '88, and 1 in '89. There were designer channels in the megabit per second range. Papers on the philosophy and foundations of computer security appeared in '87 - '88. McLean and Bell wrote papers producing a tremendous amount of insight based on the Bell & LaPadula model. The Clark & Wilson model was published in '87. Seaview had 1 paper in '86, 1 in '87, and 2 in '88. Virgil has changed his position on something. In '86, he said of Secure Xenix that you cannot have compatibility, performance and security. This year he said you can't have functionality, performance and security. There were important papers and a lot of very good papers that have not been followed up on today. In '86 Chalmers wrote about military and commercial security policies. Also in '86 Dobson and Randell wrote "Building Reliable Secure Computing Systems Out of Unreliable, Insecure Components", which anticipated the systems we have today. Birrell, Lampson, Needham, and Schroeder wrote "A Global Authentication System Without Global Trust." Dan Nesset wrote on distributed system security, crystallizing the understood issues for building a distributed secure system. Gligor wrote about a new security testing method, defining black, white, and grey box testing. '87 was the year of CMW and the Clark and Wilson paper. McLean wrote on reasoning about security models. It was the most important paper that year. A year later he wrote on the algebra of security models. McCullough wrote about the hookup property, showing that two secure systems together could be not secure.
Berenson and Lunt modeled the Berlin subway. Akl and Denning wrote on checking for consistency and completeness. Kemmerer upset the NSA by using formal method techniques to analyze encryption protocols. The Trojan horse made the cover in '88. There was McLean's algebra paper and Bell's rebuttal on modeling techniques for B&L. Go back and read the papers. Clark Weissman's paper didn't get published that year. It was on Blacker A1 security for the DDN. It won the excellent paper award and was published in '92 as an appendix. Jackson Wilson's paper on views of secure objects in MLS is still cited. Gligor wrote on a bandwidth computation of covert channels. McCullough wrote on noninterference and composability. He showed how you can build a very insecure network only from secure components. Gligor's paper on the prevention of denial of service was a hidden gem until the Morris worm manifested. We got a virus cover in '89. Lunt's paper on aggregation and inference was a capstone on 15 years of research. Williams and Dinolt wrote on a trusted file server model; there was much discussion. Gong published his secure identity based capability system and Karger wrote on new methods for immediate revocation. Brewer and Nash published the Chinese Wall and Crocker and Pozzo wrote on a virus filter. "With Microscope and Tweezers" was a good panel on the Morris worm. Badger published multi-granularity integrity policies, and Estrin wrote on security issues and policy writing. Schaefer et al. showed that a system meeting the Trusted Network Interpretation could fail in 3 interesting ways: leaking information, being remotely modifiable, and falling to denial of service attacks.

It was then time for the question and answer period. A questioner asked if we had grown in strength or weakened. A panelist answered that in '80 it was fascinating, and he expects to leave wide-eyed today. There are fads or fashions, particularly fashions aligned with one's own interest. The prejudices in the program committee react to those fads. Another questioner asked when "research" came and went from the symposium title. In the beginning, it was applied work, real examples of systems being developed. Later, verification, modeling, and equations became more prevalent. The selection process determined what should be here. It's been an awfully good conference with awfully good people. In year 1 there were 40 people. In year 2, everyone came at the last minute. A questioner asked if you can correlate fashions with DoD money in those areas. Someone answered "You know we're all above that." Someone else answered "You know we're all below that." A questioner asked what about ideas that were bad and not forgotten fast enough. Early on there was a wide variety of wild ideas. Funding monocropped the conference to what was relevant to the DoD. Someone asked which papers made a difference. Answers included McLean's work, non-interference by Goguen and Meseguer, Millen's tool based work, and Firetag and Berenson on log-in and challenge-response authentication. Formal protocol analysis affected how cryptographic protocols were developed. In early conversation, we asked what would happen if we really had a penetration. Intrusion detection and insider attack papers were looked down on, as beyond A1 would keep the TCB small and safe. A questioner asked if there were any rejected papers that were fantastic. Dick answered, yeah, he had a couple.

The next session on Information Flow was chaired by John McHugh (Portland State University).
The first paper was "A multi-threading architecture for mulitlevel secure transaction processing" by Haruna Isa (US Navy), William R. Shockley (Cyberscape Computer Services) and Cynthia El Irvine (Naval Postgraduate School). Bill presented, as Haruna called up to Europe the night before. Bill started by saying that the counter reformation is in full swing. This may be last paper on MAC and security kernels accepted here. Why did they target distributed transaction processing? That's where the money is. His talk outline included discussion of how their architecture differs from earlier approaches, technical issues, and current status. The Palm Pilot is a good place for high assurance, as it has a trusted path. How good security is depends on how good the keys are. The master keys to the world should not used very often. You need to store those someplace really safe. That's also a good target for high-assurance security. Systems that implement critical functions for very large outfits are very high value target s, as are systems that support multiple users who demand isolation but need to be hooked up to the web. The value throughput is huge. Traditional Transaction Processing (TP) involves a DBMS, TP Monitor, and TP programs coupled with queues. Queues make it simple to couple programs and make them easy to rearrange. There are known show stoppers; an unacceptable performance penalty, a security architecture that supports the wrong processing style, an architecture that does not support incremental migration with a low barrier to trial, an architecture that does not accommodate heterogeneous levels of assurance. Most PCs will not be secure. You want to add high security for certain critical functions. The paper is a piece of this. They adapt an existing security kernel design (SASS). It's a paper design that does static allocation at initialization time. They want to support multi-threaded processes and multiple execution points in the same server space, queue-driven server processes, and a mandatory policy with a gazillion potential categories (one per dot.com), so you can protect your own stuff. They retained the minimized TCB B3 style architecture. Their first unsuccessful idea was to use one server process per access class, with an isolated address space. The gazillion access classes made this a performance show stopper. That suggested the need to re-use pre-initialized processes. The second unsuccessful idea was a single trusted server process, which is the current de facto approach. The one trusted server creates a thread per request. The performance good, but the security is poor. The per thread context held in a single address space is shared by all threads. The ideas set up a false dichotomy. The first incorrect assumption: there is only one address space per process. The Intel CPU architecture provides two. They associated one with the process, and one with the thread. No one else uses that second address space. The second incorrect assumption was that access classes of incoming requests are random. They postulate that there may well be exploitable locality of access class. They have a three tier process architecture. Ring 0 is SASS. Ring 1 is the trusted minimized per process task manager with invariant TCB code. Ring 2 holds the untrusted per access class tasks. All tasks in same process share a read-only Global Descriptor Table (GDT) tailored to their query. Ring 3 holds the per query threads. 
They have a multilevel secure transaction processing architecture with isolation of tasks by process. The process in ring 1 is multi-level, in GDT space. The tasks are in Local Descriptor Table (LDT) space. They are not yet at the demonstration phase. They have much of the framework. They can start a process and switch LDTs and GDTs. They would like a segment descriptor table per ring in the CPU. There were required kernel modifications including functions to manage an additional table and modifications to the scheduling algorithm. "And yet it moves." The next paper was "Specification and enforcement of classification and inference constraints" by Steven Dawson (SRI International), Sabrina De Capitani di Vimercati (University of Milan) and Pierangela Samarati (SRI International). Pierangela presented. They address the existence of internal databases that have to undergo external release. Not all the information has to be disclosed. Information release is to be controlled by a mandatory policy. There are classification requirements that explicitly assign access classes to data and establish constraints between classifications of some data with respect to the classification of other data. There are simple classification constraints that explicitly assign access classes to attributes, possibly depending on conditions. Association constraints put constraints on the classification of combinations of attributes. Inference constraints are constraints due to inference relationships existing among attributes. Classification integrity constraints are constraints required by the underlying multilevel DBMS. Previous related work has been done on view based classification, classification upgrading to block inference channels, database design, and data upgrading and controlled release. In their approach, they input an unprotected database, a classification lattice, and a set of constraints (classification requirements). The desired output is a classified DB that is secure (satisfies all the constraints) and minimizes information loss (does not classify data more than needed to provide protection). They use a set of classification assignments that satisfy all the constraints. A constraint is a labeling expression and a selection condition. A labeling expression is a level, or a relationship to a level or another classification. A selection condition is a conjunction with operations on attributes (like a template). They want a correct and minimal classification. One possible approach is to evaluate all constraints against the DB until a fixed point is reached. There may be multiple solutions, and it may find a non-minimal solution. Also, the DB may be large. Their approach is to evaluate the constraints once, against the schema. They derive a set of classification assignments that can be enforced with one pass over the DB to produce a correct and minimal solution (possibly satisfying some preference criteria). Least upper bound (lub) expressions give rise to different solutions to a set of constraints, depending on the different combinations (strategies) of ways each lub constraint is solved. As a simplification, each lub expression can be solved by requiring dominance of one of the attributes in the expression. They decompose each complex constraint into a set of simple constraints. They construct all possible combinations of simple decomposed constraints plus simple constraints. They define a set of satisfiable combinations of conditions appearing in the classification constraints.
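As a toy illustration only (the lattice, attributes, and constraints below are made up, and this is not the authors' decomposition algorithm), simple classification constraints of the form "rows matching this condition must be labeled at least this high" can be enforced in a single pass by taking the least upper bound of all applicable levels:

    # Toy illustration: enforce simple classification constraints in one pass
    # over the data by taking the lub of every applicable minimum level.
    # The lattice and constraints are hypothetical, not from the paper.

    from functools import reduce

    LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}   # a small, totally ordered lattice

    def lub(a, b):
        return a if LEVELS[a] >= LEVELS[b] else b

    # (condition over a row, minimum level the row's label must dominate)
    constraints = [
        (lambda row: True,                      "U"),   # default label
        (lambda row: row["dept"] == "weapons",  "S"),   # explicit assignment
        (lambda row: row["salary"] > 100000,    "C"),   # condition-dependent
    ]

    def classify(row):
        applicable = [level for cond, level in constraints if cond(row)]
        return reduce(lub, applicable, "U")             # minimal satisfying label

    db = [
        {"name": "alice", "dept": "weapons", "salary": 90000},
        {"name": "bob",   "dept": "payroll", "salary": 120000},
    ]
    for row in db:
        print(row["name"], "->", classify(row))         # alice -> S, bob -> C

The point of the paper is that the hard work of choosing a good strategy for the complex (lub, association, inference) constraints is done once against the schema, after which a one-pass labeling like the above suffices.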
For each condition pattern there is at least one strategy that produces a minimal solution. The space of strategies is independent of complicated patterns. The solution to a set of simple constraints (strategy) is straightforward. They achieve correctness, minimality, completeness, and consistency. Complete means everything gets a classification. Consistent means no two classification assignments specify different security levels for the same DB element. This supports explicit classification as well as inference and association constraints and ensures minimal information loss. In the future, they'd like to work on efficient approaches to produce a solution, relaxation of restrictions due to the application of the approach at schema level, partially ordered lattices, enrichment of constraints, derivation of constraints, modifications to the DB, and insertion of cover stories. A questioner asked if their approach was NP-complete. The optimal solution is NP-complete. You pay an exponential price. A questioner noted that this is good for DBs of low dimensionality that are densely packed but bad for CAD or web DBs. The final paper of the session was "A test for non-disclosure in security level translations" by David Rosenthal and Francis Fung (Odyssey Research Associates). David presented. Sometimes there is a need to translate MAC security levels between domains that have different levels. For example, sending email in MISSI. Ideally a complete non-disclosure analysis should occur when a common domain is designed. This common domain information can be used to properly build translation functions. If a level is named the same but has slightly different qualifications, what do you do? They developed and analyzed properties that could be used to check for non-disclosure based on more limited information that may be operationally available - the orderings of the levels of the two domains and the translation functions. The non-disclosure property that they would like is the B&L Simple Security Property. A relabelling should not result in lower level users being able to read higher level information. They want a level increasing property: relabelling can only further restrict access. They asked what property should be used for analyzing just the security orderings of the domains and the translation functions. They introduced the Security Level Translation Property (SLTP). Start at a level, map across to the other domain, pick any level higher, map back to the original domain, and make sure you're higher than when you started. Why is this the best possible condition? SLTP holds precisely when there is some potential common domain in which the translation functions are secure (level increasing). They call these potential common domains comparison domains. The existence of comparison domains means that it is possible that the mappings really are secure, given only the limited information being analyzed. Note that this best possible non-disclosure check is not a sufficient condition. The potential comparison domain may not be the real common domain. It's just the best they can do; it's not for sure. The construction of the comparison domain is based on extending the level ordering to obtain the right transitivity conditions with the translation functions. They introduce another property called order compatibility. It is not necessary for non-disclosure but it is usually desired. They don't want to unnecessarily translate levels too high.
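As a rough illustration of the SLTP check as stated above (the two domains, their orderings, and the translation maps below are invented for this sketch; this is not the authors' formulation or code):

    # Minimal sketch of the SLTP check over two hypothetical finite domains.
    # An ordering is given as a set of (higher, lower) dominance pairs.

    def dominates(order, x, y):
        """True if x dominates y in the given ordering (reflexivity assumed)."""
        return x == y or (x, y) in order

    def sltp_holds(levels_a, levels_b, order_a, order_b, a_to_b, b_to_a):
        """Start at a level in A, map to B, pick any B level that dominates it,
        map back to A, and check you end up dominating where you started."""
        for a in levels_a:
            for b in levels_b:
                if dominates(order_b, b, a_to_b[a]):
                    if not dominates(order_a, b_to_a[b], a):
                        return False
        return True

    # Toy domains: A has LOW < HIGH, B has PUBLIC < SECRET.
    levels_a, levels_b = {"LOW", "HIGH"}, {"PUBLIC", "SECRET"}
    order_a, order_b = {("HIGH", "LOW")}, {("SECRET", "PUBLIC")}
    a_to_b = {"LOW": "PUBLIC", "HIGH": "SECRET"}
    b_to_a = {"PUBLIC": "LOW", "SECRET": "HIGH"}

    print(sltp_holds(levels_a, levels_b, order_a, order_b, a_to_b, b_to_a))  # True

    # A bad reverse translation that maps SECRET back down to LOW fails the check.
    print(sltp_holds(levels_a, levels_b, order_a, order_b,
                     a_to_b, {"PUBLIC": "LOW", "SECRET": "LOW"}))            # False

As the speakers stressed, passing such a check only means the translations are consistent with some comparison domain; it is a best-possible test on the limited information, not a guarantee.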
If the translations are total and order compatible, SLTP can be expressed in a simpler form. They applied their analysis to a particular representation of military messages and allowable translation functions. The levels consisted of a hierarchical part, restrictive categories, and permissive categories. They provide a restatement of SLTP in terms of this representation. They don't want to talk about all possible subsets. For a restrictive category, if you apply SLTP, and have a subset ordering, the result must include the one you started with. For permissive categories, after you map, you have to end back where you started, because of reversed subset ordering. SLTP could be used in automated checks to see if the representation of translation information would lead to insecure translations. A questioner asked if it works with more than 2 domains. For multiple hops, it works. For multicast, they have to handle each case separately. I asked if it was good enough for MISSI use. They could use it as a check on the translation functions.

The final session of Tuesday was the Work-In-Progress (5-minute Presentations), chaired by Heather Hinton (Ryerson Polytechnic University). The talks were:
"Information Power Grid: Security Challenges for the 21st Century" by Thomas H. Hinke (University of Alabama in Huntsville)
"The History and Future of Computer Attack Taxonomies" by Daniel L. Lough, Nathaniel J. Davis IV, and Randy C. Marchany (Virginia Polytechnic Institute and State University). Dan presented.
"Simulating Cyber Attacks, Defenses, and Consequences" by Fred Cohen (Sandia National Laboratories and Fred Cohen & Associates).
"Developing a database of vulnerabilities to support the study of denial of service attacks" by Tom Richardson, Jim Davis, Doug Jacobson, John Dickerson, and Laura Elkin (Iowa State University). Tom presented.
"System Assurance Methodology Using Attack Trees to Drive Design Improvements" by Dale M. Johnson (TIS Labs at Network Associates).
"NetHose: A Tool for Finding Vulnerabilities in Network Stacks" by Anup Ghosh, Frank Hill, and Matt Schmid (Reliable Software Technologies). Anup presented.
"High Assurance Smart Card Operating System" by Paul A. Karger, David Toll, Vernon Austel, Elaine Palmer (IBM T.J. Watson Research Center), Jonathan W. Edwards (IBM Global Services), and Guenter Karjoth and James Riordan (IBM Zurich Research Laboratory). Paul presented.
"NCI Security Architecture" by Peter Xiao, Garret Swart, and Steve Weinstein (Network Computer Inc.). Peter presented.
"A Scalable Approach to Access Control in Distributed Object Systems" by Gregg Tally, Daniel Sterne, Durward McDonell, David Sherman, David Sames, Pierre Pasturel (TIS Labs at Network Associates). Lee Badger presented.
"Using Software Fault Isolation to Enforce Non-Bypassability" by Mark Feldman (TIS Labs at Network Associates).
"Porting Wrappers from UNIX to Windows NT: Lessons Learned" by Larry Spector and Lee Badger (TIS Labs at Network Associates). Larry presented.
"Sequence-Based Intrusion Detection using Generic Software Wrappers" by Douglas Kilpatrick and Lee Badger (TIS Labs at Network Associates). Lee presented.
"Implementing Specification-based Intrusion Detection using Wrappers" by Calvin Ko (TIS Labs at Network Associates).
"JSIMS Security Architecture Development Process" by Richard B. Neely (SAIC) and Bonnie Danner (TRW). Rich presented.
"Security Approach for a Resource Management System" by Cynthia Irvine and Tim Levin (Naval Postgraduate School). Tim presented.
"High Assurance Multilevel Secure Mail Service: Session Server and IMAP Server" by Steven Balmer, Susan Bryer-Joyner, Brad Eads, Scott D. Heller and Cynthia E. Irvine (Naval Postgraduate School). Steven presented. "Integration of Natural Language Understanding and Automated Reasoning Tools within a Policy Workbench" by James Bret Michael (Naval Postgraduate School). "Data Damming in Proprietary/Public Databases" by Ira S. Moskowitz and Li Wu Chang (Naval Research Laboratory). Li Wu presented. "Interactive Zero Knowledge Proofs: Identification Tools Based on Graph Theory Problems" by C. Hernandez-Goya and P. Caballero-Gil (University of La Laguna, Spain). Hernandez-Goya presented. "A Note on the Role of Deception in Information Protection" by Fred Cohen (Sandia National Laboratories and Fred Cohen & Associates). Fred gets the quotable quote of the session: "All conferences should do these 5 minute sessions - they're the way to go." "Sequential and Parallel Attacks on Certain Known Digital Signature Schemes" by M. I. Dorta-Gonzalez, P. Caballero-Gil and C. Hernandez-Goya (University of La Laguna, Spain). Dorta-Gonzalez presented. "Athena, an Automatic Checker for Security Protocol Analysis" by Xiaodong Dawn Song (Carnegie-Mellon University). The first session on Wednesday was on Authentication and Key Exchange, chaired by Dieter Gollmann (Microsoft Research). The first paper was "Software smart cards via cryptographic camouflage" by D. Hoover and B. N. Kausik (Arcot Systems). Doug presented. They use cryptographic camouflage to protect private keys in public key infrastructure. Putting the protection in software makes it cheaper and easier to manage. Public keys are scalable, provide distributed authentication for entities that don't need to be known in advance, and you can add users easily. Their work resists dictionary attacks like a smart card does. The most common form of authentication is login passwords. A password is easy to remember, but not very strong (small and portable, but vulnerable like a house key). A software key container of the conventional kind is a file somewhere encrypted with a password containing a private key. This provides a strong key, but users can't remember it, so they leave it under the equivalent of a door mart that someone could break into. You also can protect keys with a smartcard (like a credit card or PCMCIA card). It costs mor e to have smart card readers and everyone thinks it's a pain to manage. Their notion is to put a lot of keys under the doormat. If the burglar picks the wrong one, the alarm goes off. The key container must be ultimately protected by something like a password. Smart card avoids dictionary attacks by locking up after some number of tries. They can't use your key if it's stolen. Dictionary attacks work because the password space is small enough to search and you can tell when you've guessed the right one, by recognizing some structure or context. If you make the password space large, people can't remember their passwords and put notes on their machines with the passwords. Another traditional approach is to try to make decryption slow (with a salt), such as PBE with iterated hashes. This is vulnerable to increases in computer power. Camouflage makes the plaintext unidentifiable. They only encrypt random-looking plaintext. Symmetric keys are usually random. Private keys are either random (DSA X) or are essential ly random (RSA private exponent). If there is a test by contextual evidence they make sure many false decrypts will pass it. 
Somebody has to tell you whether you've got the right private key. With camouflage you can't distinguish the right key until you try to authenticate with a server, who can take defensive action. This prevents dictionary attacks. Camouflage requires a number of variations from usual practice. It denies even probabilistic tests. With RSA, the private key is determined by the modulus and private exponent and the public key is determined by the modulus and public exponent. This is bad practice with camouflage. You don't encrypt a structured encoding like BER. You don't encrypt several objects with known relations (like n, p, q, d). You don't encrypt something with known mathematical properties, like the modulus n. Therefore, you can only encrypt the private exponent d. They allow a d of a fixed length longer than n. It is not obvious how to test whether a candidate could be a private exponent for n. Prime factors should be included in the private key. They provide a test for a valid private exponent. They permit the private exponent to be computed from the public exponent. Then you cannot use a short public exponent to make server side operations more efficient. If the public exponent is known, you test a candidate modulus. With camouflage, only trusted entities have access to the public exponent. A full hash serves as a test for the correct password. A partial hash can be kept to catch typos by the user. The security of the system comes from the number of passwords that can pass the hash test. A known signature attack is possible. If the attacker has both a message and that message encrypted with a private key, she can see if she gets the same result. They randomize signatures by padding randomly as in PKCS #1 instead of deterministically. Theirs is a closed PKI; public keys are only revealed to trusted parties. They encrypt the camouflaged public key with the agent's public key before embedding it in a certificate. The servers don't have to know you, but they must be known at certification time. Thus, they have two-factor authentication (what you know and what you have), but the key container can be copied, so having the key container is a bit weaker than usual. It's as strong as storage smart cards, but weaker than cryptocards. They can use this for user authentication/login, document signatures for verification by trusted parties, and electronic payment. It's not useful for stranger to stranger authentication such as email. Extra computing power does not help to break it until it breaks the cryptography (e.g., factors the modulus). The security is comparable to a somewhat weaker than usual storage smart card. A questioner asked how this is different from symmetric key encryption. The server doesn't have to know you in advance. You could use symmetric encryption and put the key in a certificate encrypted the same way. The security of the server is slightly more sensitive. If someone breaks into a server and observes your public key, they still need to steal your key container to break into your account. Another questioner asked, if the signature is also a secret, how do you protect the key. For RSA, they just randomize the padding, they don't keep the signature secret. They would have to with DSA, so it would only be good for authentication where it's thrown away immediately.
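A toy sketch of the camouflage idea only, under the assumption that the protected secret is indistinguishable from random bytes: every candidate PIN decrypts to something equally plausible, so there is no local test for a guesser. The PIN-derived keystream and the sizes here are invented and far weaker than anything a real product would use; this is not Arcot's scheme.

    # Toy sketch of camouflage: the stored blob decrypts under *any* PIN to a
    # plausible-looking "private exponent", so a guesser gets no confirmation
    # offline. Hypothetical construction, not real cryptography.

    import hashlib, secrets

    def keystream(pin, n_bytes):
        """Derive a pseudorandom keystream from a PIN (illustration only)."""
        out, counter = b"", 0
        while len(out) < n_bytes:
            out += hashlib.sha256(("%s:%d" % (pin, counter)).encode()).digest()
            counter += 1
        return out[:n_bytes]

    def camouflage(secret_exponent, pin):
        ks = keystream(pin, len(secret_exponent))
        return bytes(a ^ b for a, b in zip(secret_exponent, ks))

    uncamouflage = camouflage   # XOR with the same keystream is its own inverse

    # The "private exponent" is just random bytes here, so every decryption
    # looks equally plausible; only a server holding the (closely held) public
    # key could tell a correct guess from a wrong one.
    d = secrets.token_bytes(16)
    blob = camouflage(d, "1234")

    print(uncamouflage(blob, "1234") == d)    # True: right PIN recovers d
    print(uncamouflage(blob, "9999").hex())   # wrong PIN: another random-looking value

The design burden, as the talk made clear, is keeping everything else (encodings, related values, the public exponent) out of the attacker's hands so that no such local test exists.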
Another questioner asked whether an attacker with a tracer could break camouflage offline by recording a successful run of an authentication, taking trial keys out, and trying to re-run the previous run of the protocol. No, the signature is randomized for that reason. The final paper of the symposium was "Analysis of the Internet key exchange protocol using the NRL protocol analyzer" by Catherine Meadows (Naval Research Laboratory). Applying formal methods to security protocols is a growing problem. We increasingly rely on networked computer systems to perform sensitive transactions. We have more complex protocols supporting specialized transactions, and more general protocols supporting a broad range of transactions. These protocols allow a choice of algorithms, mechanisms and sub-protocols. We must avoid insecure interactions between protocols. Cathy extended the reach of formal analysis of cryptographic protocols by looking at a good-sized cryptographic protocol with different related sub-protocols. She used the NRL protocol analyzer (NPA) to look at Internet Key Exchange (IKE). It does key generation and negotiation of security associations. This work led to improvements in the NPA. IKE had already been looked at by a number of people. She wondered if you can learn anything new by using formal methods. NPA is a formal methods tool for analysis of cryptographic protocols. It uses a model of intruders that's standard; they can read, intercept, and so on. It can determine if it is possible to get to a bad state, such as a secret leaking or a participant being spoofed. You specify the insecure state and NPA works backwards trying to find a path. The search space is initially infinite, so it needs to be cut down with filters using lemmas. It uses a generate and test strategy. This can be slow, but it is easy to put in new lemmas. IKE is a key agreement protocol developed by the IETF for IP. It is currently in RFC status. It agrees on security associations as well, which include the keys, algorithms, protocol formats, and so on for subsequent communications. There are two phases whose variations make up 13 different protocols in all. In phase 1, the security association is negotiated and key exchange occurs. In phase 2, you negotiate and exchange session keys for communication. In phase 1, there are 4 choices of authentication mechanisms (shared key, digital signature, and two types of public keys). There are two modes: the main one hides identities until relatively late in the protocol while the aggressive one reveals them earlier, allowing fewer messages. Thus, there are 8 possibilities in phase 1. In phase 2, you may or may not exchange identities, may or may not use Diffie-Hellman (DH), plus there's a new group mode. Reminding the audience of the comment earlier about engineers cheating to get results, Cathy stated that this qualifies as an engineering paper. However, she wants to be honest about the cheating. She assumed that the initiator sends a list of security associations and the responder picks one. It's actually more complex, so she only looked at 10 out of the 13 variants. The new group mode is technically difficult and generates an infinite regression. One of the public key protocols was introduced after she started. She didn't go back and respecify everything. She looked at the two phases separately, so she didn't find interactions. DH relies on commutativity, which NPA doesn't model. She hacked the specification to work around commutativity.
So, she looked at the two specifications, phase 1 with 6 protocols and phase 2 with 4 sub-protocols. She looked at the other public key protocol later. NPA gives more user control of the search strategy. You can discard some search states as trivial. It allows users to specify state members as trivial, as they may not be important to an attack. If it finds an attack, you need to verify that it's a true attack. She added a rule tester to the NPA that generates states that are likely to be encountered. It sped up the analysis by about 170%. Her results verified the standard properties of IKE. Keys are not compromised. If a key is accepted, it and the security association were offered by the claimed participant. There is prevention of replay attacks. In phase 2, there is perfect forward secrecy with DH. There were no major problems. There was a funny "attack" originally found by Gavin Lowe. The ambiguity could lead to implementation problems. There was an incomplete specification in phase 2, where you can omit identities when they're identical to IP addresses. You're going to see them anyway, but they're not authenticated. Shared keys are indexed by identities, so spoofs can't be decrypted. The "attack" relies on a particular implementation assumption. Participant A thinks it's sharing two keys with B, but it's really sharing the same key with itself, twice. It can converse with itself, thinking it's B. A is its own man in the middle. What was missing? A message ID is pseudo random, so it's unique. The protocol checks the message ID. If there is none, it's assumed to be the initiation message. If there is one, it assumes the next message is in the ongoing phase 2 exchange. So, the attack is impossible, but it took a long time to figure that out. The pseudo random requirement was not in IKE, it was in ISAKMP. Some implementers weren't doing it. Nowhere did it say the message ID must be used to determine what the message is for. It was just easier to check a message ID than decrypt and read the message. But it's security relevant, so it should be stated. Cathy concluded that formal methods can be useful even in protocols that are much discussed. For example, they can identify hidden assumptions. The IETF specification was written by volunteers and tended to be sketchy. She identified some potential problems outside of the main requirements. Lowe's attack is probably not much of a threat now, but now they know about it. The exercise also improved NPA. A questioner asked how many hours she spent on this including modifying NPA. She said months, though she didn't keep track. She did a complete redesign of NPA. Another asked if she did any regression testing to determine if NPA works on previous problems better. Yes. The changes only help with larger protocols. For example, on Needham-Schroeder, it only helped 10%. You get exponential help as the protocol gets bigger. Another questioner stated that Windows 2000 has IKE; does Microsoft know of her results? She told the results to the IETF and they said they would pay attention. Finally, she was asked if she'd trust her privacy and security to IKE. "As much as to any protocols out there." The final session was a panel called "Time Capsule - Twenty Years From Now" chaired by Jim Anderson. Jim framed his predictive talents by saying that in '80 he predicted Oakland would be over in 5 years. The first speaker was Mike Reiter (Lucent Bell Labs), on the future of computing, filling in for Mark Weiser, who was chief scientist at PARC.
Mark was diagnosed with cancer and died 3 weeks before the Symposium. After 15 seconds of silence, Mike spoke representing Mark's opinion based on his reading of Mark's text. He spoke on how computers will be used differently in the next 20 years. The relationship of people to computing is the most important factor. It has changed twice already; we are in the midst of a third change. The first was the era of mainframes. They were for specialized use, by specialists, and the machines were nearly worshipped; they got the air conditioning, the nice raised floors, and 24 hour attention. There was a von Neumann quote that it would be a waste to use computers on bookkeeping; it was too menial. The second was the era of the PC. They are our special helpers. We use them for chatting, games, and taxes. We can use them for hours, because they are dedicated to us. We can reboot it and no one else is affected. The third is the era of the Internet. It is involving hundreds of machines in everything you do (web access, searches). It is the inverse of the mainframe. Now computers are abused, and we get the air conditioning, and so on. It is the start of the transition to ubiquitous computing. The PC diminishes as the interface. The appliances become computers (tables, chairs). They are nearly invisible, but they make us smarter in almost everything we do. The hardware is low energy, wireless and uses the TCP/IP infrastructure. The form factor is getting smaller and becoming invisible. Gentle sounds will change to convey information about friends, family, stocks, work projects. Displays built into objects will tell you your to-do list, who is calling you, what's for dinner, where to park. They will do it without any login; the room knows it's you. Smart cameras will watch you pay bills, read the newspaper, and pre-fetch additional information you will need if you choose to have it watch. "The heroes of the 20th century will be the teams of designers, anthropologists, psychologists, artists, and (a few) engineers who integrate the power of technology with the complexity of the human interface to create a world that makes us smarter, with us, in the end, barely noticing." Roger Needham (Microsoft Research, Cambridge) spoke on the future of hardware technology. It's always difficult to predict the future, particularly before it happens. He shares much of Mark's vision. If there's a difficulty, the technology might make it go away. This is a problem for the researcher, who sees an opportunity in every problem. Things are getting bigger and faster, there's more available bandwidth. This takes away the need to be clever. It is impossible to foresee the consequences of being clever. Therefore you should avoid it whenever you can. He sees it in cryptographic protocols. We put as little in the messages as possible. This has a strong tendency not to work, and is a consequence of being clever. We can use cycles to buy simplicity, which is a wonderful thing. In the next 5 years, a paper on OS security will be redundant. Most computers have one user, or one application, or both. The conceptual operators from the '60s, that there are multiple hostile users, don't apply. The application knows what the security requirements are. If it's doing something for one person, there are no security issues. There will be division of function, using the network as an insulator, not a connector. This is a bit of a despondent conclusion. What if it makes things more difficult? Mark's vision is fine if you're doing things that don't matter.
If they're used for anything important, say a computer in every shirt button, then for rather a lot of things the way we do identification is liable not to work. Objects can come together in a short-term alliance, like renting a car. They will be of large enough scale to do the paperwork and associate you and the car. This is on a slow time scale - days. What if there is a bowl in a hospital with smart thermometers, each emitting a temperature reading by radio? You establish a relationship between the doctor, thermometer, and patient so that something undesirable doesn't happen. In a few years, papers will appear on how to establish and tear down these relationships reliably, for serious purposes, with short-term relationships between smart objects. Privacy will be somewhat at risk. It was bad enough having dumb cameras all over the place. Britain now has the most intensive surveillance of anywhere. Something could record everything said by and near you for your life. Either we get over our decreasing privacy, or there are serious opportunities for technical people like us, to try to push back, and to go and get grants. If you look 20 years forward, you see how important our subject will be. Roger won't be around in 20 years. It will be an active but smaller field, with many of the things we are struggling with replaced by conventional engineering. There will be room for some good people on the sharp edge. Howard Shrobe (MIT AI Lab) spoke on the future of software technology. We will be computing in a messy environment with an emerging computational infrastructure. There will be several orders of magnitude variation in bandwidth in devices. The networks will be unreliable and subject to chaotic excursions. There will be no centralized control and we will be vulnerable to intentional attack. There are two responses: "life sucks and then you core dump" or engineer to be bullet proof. Just because you're paranoid doesn't mean that they're not out to get you. Software technology is not ready. We think we can give up and start again (like rebooting a PC). That will be unacceptable. We can think in terms of a fortress or an organism; rigid vs. adaptive. "Almost everything I predict is wrong." It's more likely to be like Visual C++, "have a nice day". The right things happen, they just take longer than he likes. He sees systems as frameworks for dealing with a particular problem, like robustness. There are dynamic domain architectures. In domain engineering, you look at a domain and find a lot of variability within common themes. You capture the themes, reify them as services, and deal with the variability in lower level routines. Reuse is at the top layer. You reuse the asset base with variant implementations of common services. There is an analogy with Object Oriented programming. You get a lot of method calls for certain tasks. You use the type signatures as a description of when an approach is applicable. For a given component, you want to know how likely it is to succeed and what is the cost, so you can try maximizing the benefit/cost ratio. If it fails, you debug where you are, and try something else. Every super routine has a plan. Plans are more general than super routines. A rationale can guide the system. In the development environment, we focus on super routines. At runtime, instrumented code, synthesized by development tools, monitors, gives feedback, and so on. It will do precisely the things programmers do badly, like error management.
We will have to think about what's going on in the future in the runtime world. We will need to diagnose and recover from failure. There are two mindsets about trust; binary vs. continuous. There will be cumulative, distributed partial authorities that are goal based and cope with partition and probability. There will be an active trust management architecture that is self adaptive and makes rational decisions. It will be focused on noticing attacks. You want to know how the system has been compromised. There is a difference between privacy and performance compromises. We will make the system behave like a rational decision maker without the very expensive computations that seem necessary. The future of programming will be dominated by domain analysis. Through community process and virtual collaboration we will come to agreement about the structure of a domain, including plans and rationale. The programming environment will act as an apprentice. It synthesizes the code that's tedious and hard to think about, that collects data from running applications. It asks for justification for the non-obvious in the rationale. Debugging involves dialog with this apprentice. Hilarie Orman (Novell, Inc.) spoke on the future of networking, calling it "Networks a Score Hence." She was impressed by the backward-looking talks; they had so many facts and were so authoritative. Dave Farber says you can only look 10 years out realistically. Hilarie thought this was great; she won't have to be realistic. She talked to Bob Kahn, who helped Gore invent the Internet. She said to him, surely you knew how amazing it would be. No, he said he had very little expectation of it taking off. He understood all the realistic obstacles. The power and value of networking is undeniable. She collected some pithy quotes. "IP Over Everything" - Vint Cerf. It's a great architecture. "We don't believe in kings, presidents or voting. We believe in rough consensus and running code." - Dave Clark. The Net Will Rock You! Communication is a fundamental force in human civilization. Only two media will remain - speech and the network (today as the web). Things will replace the web as a dominant paradigm. Non-networked existence will be a lesser form of participation in civilization. A higher percentage of the world population will be involved. 2/3 of the world has never made a phone call. That will have pluses and minuses. We can empower more in more interesting ways. Things will go faster, faster, faster. Maybe instead of Moore's law we'll have more laws in 20 years as we hit the wall. Networks will be more ubiquitous. They will be completely global and beyond. They will be deeply serviced with pervasive objects, location, location, location, and instant notifications. Naming, caches, middleware and applications will all join the infrastructure. Various ways of slicing light will enable this. There will be optical switching for global light pipes. Globally engineered protocols will provide multicast and a programmable infrastructure. It will be tuned for a global structure. There will be large distributed systems with 100 million users and 100 billion objects. Synchronization and replication will be stressed. In terms of wireless communications, "I have one word to say to you, batteries." Security will change a lot. There will be tradeoffs about who's ahead (protectors vs. hackers). Cryptography and the speed of light and quantum states will influence this. We are at the frontiers of virtual legalism.
We need a complete reconsideration of intellectual property (Farber). There will be heterogeneous interoperating systems for identification, authentication, and authorization. Video, video, video and global commerce will be important. There will be an empowerment of resource limited countries. It will be a world that models itself. We will have a touchy society with instant fads, extreme behaviors, and a tower of Babel. The net responds quickly and instantly. Look at day trading today and the rapidity of news. There will be an end of privacy and property. Surveillance, surveillance, surveillance is already happening. The context you use property in will have the value, and maybe there will be value in where it lives. "This is the first century of the rest of your millennium." There will be nothing common in 20 years that we don't see today. There will be an accelerating rate of technical change, becoming chaotic. Brian Snow (National Security Agency) spoke on the future of security. He's been with the NSA since '71. The future is not assured - but it should be! He contrasted the previously ominous "I'm here to help you" with the fact that he is here to help us. It's not your father's NSA anymore. They're polite and collegial, and a little less arrogant. He feels like Cassandra, who was loved by the god Apollo. She told the future, but those who heard her would not understand, or would not act. "It's ASSURANCE, Stupid." Buffer overflows will still exist in "secure" products. They should not be called secure, but will be sold as such. Secure OSes will still crash when applications misbehave. We will have sufficient functionality, plenty of performance, but not enough assurance. He's very confident of these predictions. Resource sharing was the bottom line. A cloud of processes will follow you about in your suite of computers. The nodes you walk by will be supporting hordes of people in the mall, matching their taste that day. Bandwidth will be cheaper but will never be free. There will be no real sense of locality. Computing will be routed through timezones for less busy processes. There will be non-locality and sharing. For security, you need separation to keep the good safe from the bad. This requires a strong sense of locality (the safe, the office). How will it fit on top of an industry relying on sharing and non-locality? Your job won't go away. There will be very hard work to do in 20 years. There is a great fear of not being as effective as we need to be. Security functionality has been around for centuries as confidentiality. Other aspects such as integrity, availability, and non-repudiation did not all occur at the same time. Availability as security is recent. Non-repudiation was not on the list for the electronic domain 20 years ago. Something else will happen like this. Something from the social world will find enabling technology. They might not really be necessary. We have a fair amount of mechanism today; we might not need much more. But the new thing will look necessary. Predicting 20 years out is hard, so he did a little cheating. He limited himself to 20 net years, which is 5 years real time, about which he has great confidence. There will be little improvement in assurance and little true security offered by industry. We will counter many hack attacks but will not be safe from nation states. We will think we are secure and act accordingly. We will be vulnerable. He's delighted we're getting so much attention.
Once we get the mechanisms to stop the hacker, we'll think we're done. It's a two-sided sword. The last nine yards are the hardest. The hacker just wants to score; intelligence looks for a source. You'll never know the other guy is there. The mechanisms won't hold up in a malicious environment, and there's plenty of that. We need advances in scalability, interoperability, and assurance. The market will provide the first two. We don't need new functions or primitives. We fail to fully use what we have in hand. The area needing help is humans seeking convenience over security. We need mechanisms designed in development, but present throughout the life cycle. We need to do more in the way of design processes, documentation, and testing, which will cause a delay to market. The OS should make a digital signature check prior to module execution to keep it benign. This can add enough performance degradation to bring a machine to its knees. This can also kill viruses. There are assurance issues in the signing. OS security is the basis for secure systems, and the current black hole of security. Software modules need to be well documented from certified development environments, tested for boundary conditions, invalid inputs, and improper sequences of commands. Formal methods need to show how it meets the specification exactly, with no extraneous or unexpected behavior. In hardware, we need smart cards, smart badges, tokens or other devices for critical applications. We need an island of security in a sea of crud. We need third party testing from NIST, NSA labs, or industry consortia to provide a test of vendor claims. This will only be successful if users see the need for other than proof by emphatic assertion. The results must be understandable by users. Integration is the hardest part and we're dumping it on the user. We need independent verification of vendor claims, legal constraints, due diligence, fitness-for-use criteria, and liability for something other than the media the software is delivered on. Y2K may provide a boon. Most attacks result from the failures of assurance, not function. Each of us should leave with a commitment to transfer the technology to industry. We need more research in assurance techniques. The best way to predict the future is to invent it. A questioner asked how open source changes assurance and the development of security. Howie said it's not such a new thing. Symbolics had been doing it since the mid '80s. They did have people point out vulnerabilities. The ratio worked in their favor. Mike said it's not a replacement for assurance; it can find the low-hanging fruit. Another questioner asked if there will be a Luddite effect in the next 20 years. Roger said obviously yes. Brian said society is fragmenting, the bulk will go to connectivity, but who will take care of the have-nots. Hilarie expects Luddites to express themselves vigorously over the Internet. A questioner asked if digital signatures had stood the test of court. Brian said they are not yet part of the full body of law. Somebody has to stand up and start doing it. Then those who don't will be liable. Hilarie said we need a contest in court and to get a ruling in favor of digital signatures. Roger said we should use a paper to say that we are bound by our electronic signature. That ends the legal question. There was a quantum computing question. Brian said they've started working on a replacement for DES called AES. Insurance against quantum computing is covered.
Mike said, at the risk of disagreeing with someone at NSA, and he's not a cryptographer, the biggest risk is to public key algorithms, not secret key algorithms. Will public key survive? There are known algorithms for factoring that could be used on a quantum computer. What happens if it all falls down? An audience member suggested that only the NSA will be able to afford quantum computing. Mike said no. Predictions that it's expensive may be wrong. Scientists can impress us with what they achieve. Hilarie said that people who work with symbols are impressed with people who work with physics. Factoring larger numbers doesn't break all algorithms. It changes the lifetime factor. A questioner asked about export control restrictions on secure OSes above B2. Brian wanted to go back to the last question on quantum computing. He said that export controls are not just from the NSA. Congressional hearings are the place to work this out. You may not like the answer that results. He may be discontent. Work the process through the political system. In quantum computing, he agrees with Hilarie. It doesn't destroy cryptography, it just changes matters of scale. Key management or initial material is more expensive. A questioner asked, if the market won't provide assurance, should we require assurance by legal means? Jim said we see people redefining the problem so that low levels of assurance are adequate. We only need high assurance in a small number of cases. Brian said yes, it's a good way to go about it as well. Y2K may help. We like to sue. The other levels of assurance are pathetic. Open source will help, but not enough. A questioner asked about licensing engineers. Brian said in his talk that it would help. Hilarie replied "we don't need no stinking licenses." Howie said if architecture is done wrong it always has serious consequences. This is not true for the bulk of software. It fails often, and we don't care. There are different categories of software. Plane software is different. Maybe we should have licensing for producing software with serious consequences if it fails, but it's wrong for everyone else. Roger stated that it is known what other engineers ought to do. Brian pointed out that Romans built marvelous bridges, but overcompensated. It was practice, not science. Now we have less of a safety margin. If we lack science and engineering principle, we use art. We're still at the art practitioner stage. It's hard to license. We should reduce it to practice. A questioner asked how long until smart cards are as popular as credit cards in the US. Roger said that smart cards are quite commonly used in Europe. Local phone calls are billed for in Europe, so there is an economic advantage in doing any transaction without the phone. Going back to the licensing discussion, a questioner asked what happens when Powerpoint is a potential part of plane software. Hilarie reacted "God help us." People want analogies that may not be possible because of the rate of technical change. She learned to save her files every 5 minutes on early machines as they failed every 15 minutes on average. We may work out new paradigms for assurance. Roger said the comparison with software for anti-lock brakes might be a better analogy. Huge amounts of money are spent on minute programs, looked at by more than one independent testing outfit. Then they sell it in large multiples. We write huge software for which the concept of right doesn't exist. Hilarie said embedded software is getting larger.
A car bus may be running a group membership protocol to agree on which wheels are working. Everything will be drive by wire. You have to be able to afford quality software in that environment. A questioner pointed out that the Navy uses NT to drive its ships. That software wasn't developed for embedded systems. We should be licensing the integrators; it's a huge dollar industry, and they decide what components go in. Roger said that makes a lot of sense. Hilarie said that that's the person who delivers the claim. It will drive assurance back up the food chain or produce new players for new niches.

________________________________________________________________________
New Interesting Links on the Web
________________________________________________________________________

o http://www.iacr.org/newsletter/
  The Newsletter of the IACR (International Association for Cryptologic Research). Available on the Web and (to IACR members) by email.

o http://www.issl.org
  (ISSL) Iowa State University's Information Systems Security Laboratory

________________________________________________________________________
New Reports available via FTP and WWW
________________________________________________________________________

o http://www.research.att.com/projects/privacystudy/
  AT&T Labs Report on Net Users' Attitudes About Online Privacy

________________________________________________________________________
Who's Where: recent address changes
________________________________________________________________________

Steve Welke
Trusted Computer Solutions
13873 Park Center Road, Suite 225
Herndon, VA 20171
Email: welke@tcs-sec.com
Voice: 703-318-7134
Fax: 703-318-5041

________________________________________________________________________
Calls for Papers
________________________________________________________________________

CONFERENCES

Listed earliest deadline first. See also Cipher Calendar.

* WIH'99 Third International Workshop on Information Hiding, Dresden, Germany, Sept. 29 - Oct. 1, 1999. (submissions due: June 1, 1999) [posted here: 10/19/98]
Many researchers are interested in hiding information or in stopping other people doing this. Current research themes include copyright marking of digital objects, covert channels in computer systems, detection of hidden information, subliminal channels in cryptographic protocols, low-probability-of-intercept communications, and various kinds of anonymity services ranging from steganography through location security to digital elections. Interested parties are invited to submit papers on research and practice which are related to these areas of interest. Submissions can be made electronically (pdf or postscript) or in paper form; in the latter case, send eight copies. Papers should not exceed fifteen pages in length and should adhere to the guidelines of the LNCS series at www.springer.de/comp/lncs/instruct/typeinst.pdf. Addresses for submission: pfitza@inf.tu-dresden.de, Andreas Pfitzmann, Dresden University of Technology, Computer Science Department, D 01062 Dresden, Germany.

* ISW'99 1999 Information Security Workshop, Kuala Lumpur, Malaysia, November 6-7, 1999. (submissions due: June 4, 1999) [posted here: 2/3/99]
ISW'99, the second workshop in the international workshop series on information security, will be held at Monash University's Sunway Campus, which is about 20km to the southwest of downtown Kuala Lumpur. ISW'99 will seek a different goal from its predecessor ISW'97, held in Ishikawa, Japan.
More specifically, the focus of ISW'99 will be on the following emerging areas of importance in information security:
o multimedia watermarking
o electronic cash
o secure software components and mobile agents
o protection of software
ISW'99 will be co-located with the 6th ACM Conference on Computer and Communications Security, November 1-4, 1999, and Asiacrypt'99, November 15-18, 1999. Both the ACM conference and Asiacrypt will be held in Singapore, which is only a short distance from the venue of ISW'99. Complete instructions for authors can be found on the conference web page at www.musm.edu.my/BusIT/isw99. You may also send general inquiries to isw99-gen@musm.edu.my.

* NDSS'2000 Network and Distributed System Security Symposium, San Diego, California, USA, February 2-4, 2000. (submissions due: June 16, 1999) [posted here: 2/20/99]
Technical papers and panel proposals are invited for the Internet Society's Year 2000 Network and Distributed System Security Symposium (NDSS 2000), tentatively scheduled for 2-4 February 2000 in San Diego, California. The symposium will foster information exchange among researchers and practitioners of network and distributed system security services. The audience includes those who are interested in the practical aspects of network and distributed system security, focusing on actual system design and implementation rather than theory. A major goal of the symposium is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technology. Proceedings will be published by the Internet Society. A best paper award will be presented at the symposium to the authors of the best paper, as selected by the program committee. The deadline for electronic submission is 16 JUNE 1999. The complete call is available at www.isoc.org/ndss00/.

* NORDSEC'99 The fourth Nordic Workshop on Secure IT systems - Encouraging Co-operation, Stockholm, Sweden, November 1-2, 1999. (Submissions due: August 16, 1999) [posted here: 4/1/99]
The NORDSEC workshops were started in 1996 with the aim of bringing together researchers and practitioners within IT security in the Nordic countries. NORDSEC'99 is organised by the Department of Computer and Systems Sciences, DSV, Stockholm University & the Royal Institute of Technology, the Swedish Defence Materiel Administration and Telia AB. This is the theme our workshop addresses: knowledgeable and maturing use, applications, developments and research of IS/IT security in order to provide individuals, companies & organisations and societies with a reliable, safe and secure, and changing world. A complete list of topics can be found at the workshop's website at www.telia.se/nordsec99. The workshop will consist of paper sessions, panel discussions and invited talks. We also welcome the submission of application-oriented papers focusing on new or difficult problems, pilots, and experiences. Authors are requested to submit an extended abstract (4-5 pages) or a full paper (up to 15 pages) to Dr. Louise Yngström (louise@dsv.su.se) by August 16, 1999.

* PKC2000, International Workshop on Practice and Theory in Public Key Cryptography, Melbourne, Australia, January 18-20, 2000. (Submissions due: August 27, 1999) [posted here: 4/16/99]
Original research papers pertaining to all aspects of public key encryption and digital signatures, as well as associated issues in privacy and security, are solicited. Submissions may present theory, techniques, applications and practical experience.
A complete list of topics, along with detailed instructions for authors, is provided on the conference web page at www.pscit.monash.edu.au/pkc2k/.

* Financial Cryptography '00, Anguilla, BWI, February 21-24, 2000. (Submissions due: September 24, 1999) [posted here: 6/6/99]
Original papers are solicited on all aspects of financial data security and digital commerce in general for submission to the Fourth Annual Conference on Financial Cryptography (FC00). Directions for electronic submissions are at www.fc00.cs.uwm.edu/esub.html. Additional information about FC00 may be found at fc00.ai.

JOURNALS

Special Issues of Journals and Handbooks: listed earliest deadline first.

* ACM Transactions on Software Engineering and Methodology, Special issue on Software Engineering and Security. Guest Editors: Premkumar Devanbu (devanbu@cs.ucdavis.edu, UC Davis) and Stuart Stubblebine (stubblebine@cs.columbia.edu). (DEADLINE EXTENDED TO JUNE 1, 1999) [posted here: 12/14/98].
Software system security issues are no longer only of primary concern to military, government or infrastructure systems. Every palmtop, desktop and TV set-top box contains or will soon contain networked software. This software must preserve desired security properties (authenticity, privacy, integrity) of activities ranging from electronic commerce to electronic messaging and browsing. From being a peripheral concern of a limited and specialized group of engineers, security has become a central concern for a wide range of software professionals. In addition, software is no longer a monolithic shrink-wrapped product created by a single development organization with a well-defined software process. Instead, it is composed of components constructed by many different vendors following different practices. Indeed, software may even contain elements that arrive and are linked in just prior to execution. Customers need assurance that constituent components and mobile code have certain desirable properties; this need conflicts with the need for vendors to protect their proprietary information. The issue of providing assurance without full disclosure has been studied in security research, and needs to be applied to this problem. To provide a focus for these and other interactions between security and software engineering, ACM TOSEM will bring out a special issue dedicated to the intersection of concerns between the two fields. We solicit submissions that address the following issues and sub-areas:
o How can security be used to address problems in distributed software development? How does one build trust and control in the distributed enactment of software processes while protecting intellectual property?
o Trust in software process; Trust in software tools; Trusted (distributed) configuration management.
o Can conventional, standard software engineering techniques be used to achieve verifiably higher levels of security in heterogeneous, distributed systems? What new software engineering techniques are needed?
o Formally verified implementations of security protocols; Traceability of correctness into implementation; Testing of security protocols; Specification of Secure Systems; Domain specific languages for Secure systems; Static/Dynamic Analysis for System Security; Security Testing (property-based, coverage-based, etc.); Configuring trusted systems; Evolving Legacy Systems for greater security.
o Intellectual Property Protection: can security techniques be used to protect the valuable investments in software?
o Reverse engineering countermeasures; Software watermarking and copy protection; Combined software- and hardware-based techniques. Additional information about submitting papers can be found at www.cs.columbia.edu/~stu/tosem.html. * IEEE Network Magazine, Special Issue on Network Security (Nov/Dec 1999). Guest Editors: Bulent Yener, Bell Labs, Lucent Technologies (yener@research.bell-labs.com), and Patrick Dowd, Laboratory for Telecommunications Sciences, United States Department of Defense (p.dowd@ieee.org). (Submission deadline: June 1, 1999) [posted here: 3/15/99]. Network and Internet security has become a crucial requirement for both users and service providers. The Internet is a commercial infrastructure where sensitive and confidential personal and business data are carried over public networks. Although security is often treated as an afterthought, this attitude is changing. Security within an application needs to be considered a fundamental element of the application, treated analogously to Quality of Service (QoS) considerations. Security is often viewed as a one-size-fits-all paradigm, but this is difficult to sustain due to the eclectic collection of communications media that compose the Internet infrastructure. The danger of a cookie-cutter strategy is that security will contend with performance since it is not suited to the environment. As the QoS requirements of applications and the physical layer properties of the internetwork become more diverse, agile but robust and consistent security solutions are needed. This is difficult, since custom solutions typically have difficulty surviving in a mass market, yet flexibility is needed for security use to become ubiquitous. We are interested in tutorial-oriented research papers that describe real services, software systems and experiments. Work-in-progress papers describing the state of on-going research projects in Internet security are encouraged. Research papers should demonstrate the feasibility of the approach and describe the state of realization. Case studies and applied papers should discuss the key factors that made the system work and should also mention the pitfalls and problems encountered and how they may be overcome. Topics of interest include: * Intrusion detection * Authentication * Mobile code and agent security * Privacy and anonymity * Key management * Access control and firewalls * Wireless, mobile network security * Secure multicasting * Data integrity * Security verification * Security protocols * Policy modeling * Commercial security * Electronic commerce * Security management If you are unsure whether your work falls within the scope of this special issue, please send an abstract to one of the guest editors. We would be happy to review it and provide feedback. Complete details on how to submit a paper are provided at www.comsoc.org/socstr/techcom/ntwrk/special/yener_dowd.html. * International Journal of Computer Systems: Science & Engineering, Special Issue on Developing Fault-Tolerant Systems with Ada. (Abstracts due: June 1, 1999; full papers due: June 15, 1999) [posted here: 2/5/99]. An electronic version of the abstract is to be sent to A. Romanovsky at alexander.romanovsky@ncl.ac.uk (phone: +44 191 222 8135; fax: +44 191 222 8232) by June 1, 1999. Full submissions are to be forwarded by June 15, 1999 to one of the guest editors (electronic submissions are encouraged): A. Romanovsky or A.J. 
Wellings at andy@minster.cs.york.ac.uk. More information: www.cs.ncl.ac.uk/people/alexander.romanovsky/home.formal/ftada.html. ________________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 1: Conference Papers by Anish Mathuria ________________________________________________________________________ * STACS'99 - 16th International Symposium on Theoretical Aspects of Computer Science, March 4-6, 1999, Trier, Germany: [Security-related paper only] o How To Forget a Secret (Extended Abstract). G. Di Crescenzo, N. Ferguson, R. Impagliazzo and M. Jakobsson * IEEE INFOCOM'99, March 21-25, 1999, New York, USA: [Security-related papers only] o Key Management for Secure Internet Multicast using Boolean Function Minimization Techniques. I. Chang, R. Engel, D. Kandlur, D. Pendarakis and D. Saha o User-Friendly Access Control for Public Network Ports. G. Appenzeller, M. Roussopoulos and M. Baker o Multicast Security: A Taxonomy and Efficient Constructions. R. Canetti, J. Garay, G. Itkis, D. Micciancio, M. Naor and B. Pinkas o Transport Layer Security: How much does it really cost? G. Apostolopoulos, V. Peris and D. Saha * TACAS'99 - 5th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, March 22-26, 1999, Amsterdam, The Netherlands: [Security-related paper only] o Automatic Verification of Cryptographic Protocols through Compositional Analysis Techniques. D. Marchignoli and F. Martinelli * ASSET'99 - The Second IEEE Symposium on Application-Specific Systems and Software Engineering Technology, March 24-27, 1999, Richardson, USA: [Security-related papers only] o A CAPSL Interface for the NRL Protocol Analyzer. S. Brackin, C. Meadows and J. Millen o Accountability Issues in Multihop Message Communication. S. Bhattacharya and R. Paul o Formal Verification of Authentication Protocols. K. Indiradevi, V. Suku Nair and J. Abraham * HotOS-VII - The 7th IEEE Workshop on Hot Topics in Operating Systems, March 29-30, 1999, Arizona, USA: [Security-related papers only] o Drawing the Red Line in Java. G. Back and W. Hsieh o Xenoservers: Accountable Execution of Untrusted Programs. D. Reed, I. Pratt, S. Early, P. Menage and N. Stratford o How To Schedule Unlimited Memory Pinning of Untrusted Processes Or Provisional Ideas About Service-Neutrality. J. Liedtke, V. Uhlig, K. Elphinstone, T. Jaeger and Y. Park * 1st USENIX Workshop on Intrusion Detection and Network Monitoring, April 9-12, 1999, Santa Clara, USA: [Security-related papers only] o Analysis Techniques for Detecting Coordinated Attacks and Probes. J. Green, D. Marchette, S. Northcutt and B. Ralph o Intrusion Detection and Intrusion Prevention on a Large Network: A Case Study. T. Dunigan and G. Hinkel o An Eye on Network Intruder-Administrator Shootouts. L. Girardin o On Preventing Intrusions by Process Behavior Monitoring. R. Sekar, T. Bowen and M. Segal o Intrusion Detection Through Dynamic Software Measurement. S. Elbaum and J. Munson o Learning Program Behavior Profiles for Intrusion Detection. A. Ghosh, A. Schwartzbard and M. Schatz o Automated Intrusion Detection Using NFR: Methods and Experiences. W. Lee, C. Park and S. Stolfo o Experience with EMERALD to Date. P. Neumann and P. Porras o Defending Against the Wily Surfer: Web-based Attacks and Defenses. D. Klein o The Packet Vault: Secure Storage of Network Data. C. Antonelli, M. Undy and P. Honeyman o Real-time Intrusion Detection and Suppression in ATM Networks. R. 
Bettati, W. Zhao and D. Teodor o A Statistical Method for Profiling Network Traffic. D. Marchette o Transaction-based Anomaly Detection. R. Buschkes, M. Borning and D. Kesdogan * 1st USENIX Workshop on Smartcard Technology, May 10-11, 1999, Chicago, USA: [Security-related papers only] o Design Principles for Tamper-Resistant Smartcard Processors. O. Kommerling and M. Kuhn o Which Security Policy for Multiapplication Smart Cards? P. Girard o Efficient Block Ciphers for Smartcards. V. Rijmen and J. Daemen o PKCS#15 - A Cryptographic Token Information Format Standard. M. Nystrom o Remotely Keyed Encryption Using Non-Encrypting Smart Cards. S. Lucks and R. Weis o Smartcard Integration with Kerberos V5. N. Itoi and P. Honeyman o A Portable Solution for Mutual Authentication. B. Bakker o Software License Management with Smart Cards. T. Aura and D. Gollmann o Beyond Cryptographic Conditional Access. D. Goldschlag and D. Kravitz o Providing Authentication to Messages Signed with a Smart Card in Hostile Environment. T. Stabell-Kulo, R. Arild and P. Myrvang o Authenticating Secure Tokens Using Slow Memory Access. J. Kelsey and B. Schneier o SCFS: A UNIX Filesystem for Smartcards. N. Itoi and P. Honeyman o Java Card Secure Object Sharing. M. Montgomery and K. Krishna o Investigations of Power Analysis Attacks on Smartcards. T. Messerges, E. Dabbish and R. Sloan o Risks and Potentials of Using EMV for Internet Payments. E. Van Herreweghen and U. Wille o Breaking Up Is Hard To Do: Modeling Security Threats for Smart Cards. B. Schneier and A. Shostack * WWW8 - The Eighth International World Wide Web Conference, May 11-14, 1999, Toronto, Canada: [Security-related papers only] o On the Security of Pay-Per-Click and Other Web Advertising Schemes. V. Anupam, A. Mayer, K. Nissim, B. Pinkas and M. Reiter o Secure and Lightweight Advertising on the Web. M. Jakobsson, P. MacKenzie and J. Stern o Web-Enabled Smart Card for Ubiquitous Access of Patient's Medical Record. A. Chan o WDAI: A Simple W3 Distributed Authorization Infrastructure. J. Kahan o XFDL: Creating Electronic Commerce Transaction Records Using XML. B. Blair and J. Boyer * ICSE'99 Workshop on Software Engineering over the Internet, May 17, 1999, Los Angeles, USA: [Security-related papers only] o Security for Automated, Distributed Configuration Management. M. Gertz, S. Stubblebine and P. Devanbu o An Architecture for Supporting "Pay-per-use" Downloadable Systems based on Java 2 and JavaSpaces. G. Succi, R. Wong, E. Liu, C. Bonamico and T. Vernazza * ICSE'99 Workshop on Testing Distributed Component-Based Systems, May 17, 1999, Los Angeles, USA: [Security-related paper only] o Why COTS Software Increases Security Risks. G. McGraw and J. Viega * IM '99 - Sixth IFIP/IEEE International Symposium on Integrated Network Management, May 24-28, 1999, Boston, USA: [Security-related papers only] o Secure Management By Delegation within the Internet Management Framework. J. Schoenwaelder and J. Quittek o DecIdUouS: Decentralized Source Identification for Network-based Intrusions. H. Chang, R. Narayan, S. Wu, B. Vetter, X. Wang, M. Brown, J. Yuill, C. Sargor, F. Jou and F. Gong o Towards Securing Network Management Agent Distribution and Communication. L. 
Korba _______________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 2: Journal and Newsletter Articles, Book Chapters by Anish Mathuria _______________________________________________________________________ * ACM Transactions on Database Systems, Vol. 23, No. 3 (September 1998): o E. Bertino, C. Bettini, E. Ferrari and P. Samarati. An Access Control Model Supporting Periodicity Constraints and Temporal Reasoning. pp. 231-285. * ACM SIGCOMM Computer Communication Review, Vol. 29, No. 1 (January 1999): o L. Pierson, E. Witzke, M. Bean and G. Trombley. Context-Agile Encryption for High Speed Communication Networks. pp. 35-49. o M. de Vivo, G. de Vivo, R. Koeneke and G. Isern. Internet Vulnerabilities Related to TCP/IP and T/TCP. pp. 81-85. * Computer Communications, Vol. 22, No. 1 (January 1999): o S.-P. Shieh, C.-T. Lin and S. Wu. Optimal assignment of mobile agents for software authorization and protection. pp. 46-55. o W.-S. Juang and C.-L. Lei. Partially blind threshold signatures based on discrete logarithm. pp. 73-86. o N.-Y. Lee and T. Hwang. Comments on 'dynamic key management schemes for access control in a hierarchy'. pp. 87-89. * Computer Communications, Vol. 22, No. 2 (January 1999): o H.-M. Sun and S.-P. Shieh. A note on breaking and repairing a secure broadcasting in large networks. pp. 193-194. * Computer Communications, Vol. 22, No. 3 (February 1999): o W.-C. Ku and S.-D. Wang. A secure and practical electronic voting scheme. pp. 279-286. o N.-Y. Lee and T. Hwang. On the security of fair blind signature scheme using oblivious transfer. pp. 287-290. * Computer Communications, Vol. 22, No. 4 (March 1999): o K. Tan and H. Zhu. Remote password authentication scheme based on cross-product. pp. 390-393. * Information Processing Letters, Vol. 69, No. 6 (March 1999): o W.-G. Tzeng and C.-M. Hu. Inter-protocol interleaving attacks on some authentication and key distribution protocols. pp. 297-302. * IEEE Computer, Vol. 32, No. 4 (April 1999): o S. Jajodia, P. Ammann and C. McCollum. Surviving Information Warfare Attacks. pp. 57-63. * IEICE Transactions on Information and Systems, Vol. E82-D, No. 4 (April 1999): o H. Kikuchi, M. Harkavy and D. Tygar. Multi-Round Anonymous Auction Protocols. pp. 769-777. o M. Iguchi and S. Goto. Detecting Malicious Activities through Port Profiling. pp. 784-792. * Operating Systems Review, Vol. 33, No. 2 (April 1999): o D. Patiyoot and S. Shepherd. WASS: Wireless ATM Security System. pp. 29-35. o D. Patiyoot and S. Shepherd. Cryptographic Security Techniques for Wireless Networks. pp. 36-50. o U. Halfmann and W. Kuhnhauser. Embedding Security Policies into a Distributed Computing Environment. pp. 51-64. * ACM SIGMOBILE Mobile Computing and Communications Review, Vol. 3, No. 2 (April 1999): o K. Martin and C. Mitchell. Comments on an "Optimized Protocol for Mobile Network Authentication and Security". page 37. o Authors' Reply. page 38. * Computer Networks, Vol. 31, No. 8 (April 1999): [Special issue on Computer Network Security] o P. Janson and H. Rudin. (Guest Editorial) Computer Network Security. pp. 785-786. o R. Molva. Internet security architecture. pp. 787-804. o H. Debar, M. Dacier and A. Wespi. Towards a taxonomy of intrusion-detection systems. pp. 805-822. o C. Ellison. The nature of a useable PKI. pp. 823-830. o S. Smith and S. Weingart. Building a high-performance, programmable secure coprocessor. pp. 831-860. o N. Asokan, H. Debar, M. Steiner and M. 
Waidner. Authenticating public terminals. pp. 861-870. o G. Ateniese, A. Herzberg, H. Krawczyk and G. Tsudik. Untraceable mobility or how to travel incognito. pp. 871-884. o R. Hauser, T. Przygienda and G. Tsudik. Lowering security overhead in link state routing. pp. 885-894. ________________________________________________________________________ Calendar ________________________________________________________________________ ==================================================================== See Calls for Papers section for details on many of these listings. ==================================================================== "Conf Web page" indicates there is a hyperlink to a conference web page on the Cipher Web pages. (In many cases there is such a link even though mention is not made of it here, to save space.) Dates Event, Location Point of Contact/ more information ----- --------------- ---------------------------------- * 6/ 1/99: WIH '99, Dresden, Germany; Submissions to pfitza@inf.tu; [*] * 6/ 1/99: Special Issue IJCSSE; Journal Web page; Submissions due * 6/ 1/99: IEEE-NetMag-NetSec, submissions due; magazine web page * 6/ 5/99: WSS '99, Austin, Texas Conf Web page * 6/16/99: NDSS '00. San Diego, California; Conf Web page; papers due [*] * 6/21/99- 6/23/99: ICATM '99. Colmar, France Conf Web page * 6/21/99- 6/27/99: ADAlgs; Siena, Italy * 7/ 2/99: Deadline extension, ISW '99. Kuala Lumpur, Malaysia, Conf Web page. Submissions due isw99-sbm@ecip.tohoku.ac.jp; [*] * 7/ 4/99- 7/ 8/99: IMACS-IEEE99. Athens, Greece Conf Web page * 7/ 5/99: WFMSP '99; Trento, Italy Conf Web page * 7/ 6/99- 7/ 8/99: ISCC '99. Sharm El Sheikh, Red Sea, Egypt Conf Web page * 7/12/99- 7/16/99: IETF, Oslo, Norway * 7/26/99- 7/28/99: IFIP WG11.3, Newark, NJ Conf Web page * 7/26/99- 7/27/99: WIAPP '99. San Jose, California Conf Web page * 8/ 5/99- 8/ 6/99: IDW '99. Alexandria, VA Conf Web page * 8/ 9/99- 8/10/99: SAC '99, Ontario, Canada * 8/12/99- 8/13/99: CHES99. Worcester, Massachusetts Conf Web page * 8/15/99- 8/19/99: MobiCom 99. Seattle, Washington Conf Web page * 8/15/99- 8/19/99: Crypto '99, Santa Barbara, California, Conf web page * 8/23/99- 8/26/99: 8th USENIX Security Symposium, Washington D.C.; conf web page; info at: conference@usenix.org * 9/ 2/99: DEXA-ECS, Florence, Italy Workshop Web page * 9/20/99- 9/24/99: FM '99, Toulouse, France Conf Web page * 9/20/99- 9/21/99: CMS '99. Katholieke Universiteit Leuven, Belgium Conf Web page * 9/22/99- 9/24/99: NSPW '99. Caledon Hills, Ontario, Canada; Conf Web page * 9/24/99: FC '00; Submissions due; [*] * 9/27/99- 9/29/99: HUC '99; Karlsruhe, Germany Conf Web page * 9/27/99- 9/29/99: DISC99. Bratislava, Slovak Republic Conf Web page * 9/27/99: PKC2000. Melbourne, Australia; Conf Web page Submissions to pkc2k@pscit.monash.edu.au; [*] * 9/29/99-10/ 1/99: WIH '99, Dresden, Germany * 10/11/99-10/13/99: DSOM '99. Zurich, Switzerland Conf Web page * 10/15/99: JSAC-WDM paper due: krishna@eecs.wsu.edu or bli@cs.ust.hk * 10/18/99-10/22/99: NISSC '99, Crystal City VA * 10/31/99-11/ 3/99: ICNP '99, Toronto, Canada; Conf Web page * 11/ 1/99-11/ 4/99: CCS6, Singapore; Conf Web page * 11/ 6/99-11/ 7/99: ISW '99. Kuala Lumpur, Malaysia Conf Web page * 11/ 9/99-11/12/99: IETF, Washington DC * 11/17/99-11/19/99: HASE '99. Washington, DC Conf Web page * 11/18/99-11/19/99: IICIS '99. Amsterdam, The Netherlands Conf Web page * 11/30/99-12/ 2/99: CQRE '99, 
Duesseldorf, Germany; Conf Web page * 12/ 6/99-12/10/99: 15th ACSAC; Phoenix, Arizona. Conf Web page * 1/18/00- 1/20/00: PKC2000. Melbourne, Australia Conf Web page * 2/ 2/00- 2/ 4/00: NDSS '00. San Diego, California; Conf Web page * 2/21/00- 2/24/00: FC '00. Anguilla, BWI; Conf Web page * 3/27/00- 3/31/00: IETF, Adelaide, Australia * 4/30/00- 5/ 3/00: IEEE S&P 00; Oakland; no e-mail address available * 5/16/00- 5/19/00: 12th CITSS, Ottawa; no e-mail address available Key: * ACISP = Australasian Conference on Information Security and Privacy * ACM-MOBILE = ACM Mobile Computing and Communications Review ACM-MOBILE * ACM-MONET = Special Issue of the Journal on Special Topics in Mobile Networking and Applications ACM-MONET * ACM-TSEM-SEC = ACM Transactions on Software Engineering and Methodology, Special issue on Software Engineering and Security ACM-TSEM-SEC * ACSAC = Annual Computer Security Applications Conference 15th ACSAC * ADAlgs = Summer Course in Advanced Distributed Algorithms ADAlgs * AES = Advanced Encryption Standard Candidate Conference Second AES * AGENTS-EMCSR = From Agent Theory to Agent Implementation * AIPA = Advanced Information Processing and Analysis AIPA99 * ASIACRYPT = ASIACRYPT * ASIAN = Asian Computing Science Conference * AT = Workshop on Agent Technologies * ATMA = Advanced Transaction Models and Architectures ATMA * BDBIS = Baltic Workshop on DB and IS, BDBIS * CAiSE*98 = Conference on Advanced Information Systems Engineering * CCS = ACM Conference on Computer and Communications Security CCS-6 * CCSS = Annual Canadian Computer Security Symposium (see CITSS) * CFP = Conference on Computers, Freedom, and Privacy * CHES = Cryptographic Hardware and Embedded Systems CHES '99 * CIKM = Int. Conf. on Information and Knowledge Management * CISMOD = International Conf. on Information Systems and Management of Data * CITSS = Canadian Information Technology Security Symposium * CMS = Communications and Multimedia Security CMS '99 * COMAD = Seventh Int'l Conference on Management of Data (India) * COMPASS = Conference on Computer Assurance * COMPSAC = Int'l. 
Computer Software and Applications Conference * CoopIS = First IFCIS International Conference on Cooperative Information Systems * CORBA SW = Workshop on Building and Using CORBASEC ORBS CORBA SW * CPAC = Cryptography - Policy and Algorithms Conference * CQRE = [Secure] Exhibition and Congress CQRE * CRYPTO = IACR Annual CRYPTO Conference * CSFW = Computer Security Foundations Workshop * CSI = Computer Security Institute Conference * CVDSWS = Invitational Workshop on Computer Vulnerability Data Sharing CVDSWS * CWCP = Cambridge Workshop on Cryptographic Protocols * DAPD-SEC = Distributed and Parallel Databases: Special Journal Issue on Security DPD-SEC * DART = Databases: Active & Real-Time * DASFAA = Database Systems For Advanced Applications * DATANET = Datanet Security, Annual International Conference and Exhibition on Wide Area Network Security * DCCA = Dependable Computing for Critical Applications DCCA-7 * DEXA = International Conference and Workshop on Database and Expert Systems Applications * DEXA-ECS = Electronic Commerce and Security, A Workshop held in conjunction with DEXA DEXA-99-ECS * DEXA-SIDIA = DEXA Workshop on Security and Integrity of Data Intensive Applications * DIMACS Security Ver = DIMACS Workshop on Formal Verification of Security Protocols * DISC = International Symposium on DIStributed Computing DISC '99 * DMKD = Workshop on Research Issues on Data Mining and Knowledge Discovery * DOCSec = Second Workshop on Distributed Object Computing Security * DOOD = Conference on Deductive and Object-Oriented Databases * DSOM = Distributed Systems: Operations & Management DSOM '99 * ECC = Workshop on Elliptic Curve Cryptography * ECDLP = Workshop on the Elliptic Curve Discrete Logarithm Problem ECDLP * ECOMM = Business Process Reengineering and Supporting Technologies for Electronic Commerce ECOMM * EDOC = Enterprise Distributed Object Computing * Electronic Commerce for Content II = Forum on Technology-Based Intellectual Property Management URL * ENCXCS = Engineering Complex Computer Systems Minitrack of HICSS ENCXCS * ENM = Enterprise Networking * ENTRSEC = International Workshop on Enterprise Security * ESORICS = European Symposium on Research in Computer Security * ETAPS = European Joint Conferences on Theory and Practice of Software * Euro-PDS = European Conference on Parallel and Distributed Systems * EUROCRYPT = EUROCRYPT * EUROMED-NET = The Role of Internet and the World Wide Web in Developing the Euro-Mediterranean Information Society * FC = IFCA Annual Financial Cryptography Conference * FIRST = Computer Security Incident Handling and Response * FISP = Federal Internet Security Plan Workshop * FISSEA = Federal Information Systems Security Educators' Association * FME = Formal Methods Europe * FMLDO = Foundations of Models and Languages for Data and Objects FMLDO7 * FMP = Formal Methods Pacific * FMSP = Formal Methods in Software Practice * FSE = Fast Software Encryption Workshop FSE 6 * GBN = Gigabit Networking Workshop * HASE = IEEE Symposium on High Assurance Systems Engineering HASE '99 * HICSS = Hawaii International Conference on Systems Sciences; Electronic Commerce Technologies Minitrack HICSS-32 * HPTS = Workshop on High Performance Transaction Systems * HUC = International Symposium on Handheld and Ubiquitous Computing HUC '99 * IC3N = International Conference on Computer Communications and Networks * ICAST = Conference on Advanced Science and Technology * ICATM = International IEEE Conference on ATM ICATM99 * ICCC = International Conference for Computer 
Communications * ICDCS = International Conference on Distributed Computing Systems * ICDE = Int. Conf. on Data Engineering * ICDT = International Conference on Database Theory * ICECCS = International Conference on Engineering of Complex Computer Systems * ICEIS = International Conference on Enterprise Information Systems ICEIS '99 * ICI = International Cryptography Institute * ICICS = International Conference on Information and Communications Security ICICS '99 * ICNP = International Conference on Network Protocols ICNP '99 * ICOIN = International Conference on Information Networking ICOIN--12 * ICSSDBM = Int. Conf. on Scientific and Statistical Database Management * IDEAS = International Database Engineering and Applications Symposium * IDW = Information Domain Workshop IDW '99 * IEEE NM = IEEE Network Magazine Special Issue on PCS Network Management IEEE NM 1998 * IEEE-NetMag-NetSec = IEEE Network Magazine Special Issue on Network Security IEEE-NetMag-NetSec * IEEE S&P = IEEE Symposium on Security and Privacy S&P '99 * IEEE-ANETS = IEEE Network Magazine Special Issue on Active and Programmable Networks IEEE-ANETS * IEEE-COMP-NETSEC = IEEE Computer - Special Issue on Networking Security IEEE-COMP-NETSEC '98 * IEEE-INETCOMP = Special Issue IEEE: Internet Security in the Age of Mobile Code IEEE-INETCOMP * IEEECOMHYB = IEEE Communications Magazine Special Issue on Hybrid Networks IEEECOMHYB * IESS = International Symposium on Software Engineering Standards * IETF = Internet Engineering Task Force IETF * IFIP Mobile Commns = IFIP 1996 World Conference, Mobile Communications * IFIP WG11.3 = 13th IFIP WG11.3 Working Conference on Database Security IFIP WG11.3 * IFIP/SEC = International Conference on Information Security (IFIP TC11) * IH Workshop = Workshop on Information Hiding WOIH * IICIS = IFIP WG 11.5 working conference on Integrity and Internal Control in Information Systems IICIS99 * IIIS = Integrity and Internal Control in Information Systems: Bridging Business Requirements and Research Results IIIS * IJCSSE = Journal of Computer Systems: Science & Engineering. 
Special Issue on Developing Fault-Tolerant Systems with Ada IJCSSE * IMACCC = IMA Conference on Cryptography and Coding, 5th IMACC * IMACS-IEEE99 = Special Session on Applied Coding, Cryptology and Security IMACS-IEEE99 * IMC = IMC Information Visualization and Mobile Computing * INET = Internet Society Annual Conference * INTRA-FORA = International Conference on INTRANET: Foundation, Research, and Applications INTRA-FORA * IPIC = Integration of Enterprise Information and Processes * IPSWG = Internet Privacy and Security Workshop * IRISH = Irish Workshop on Formal Methods * IRW-FMP = International Refinement Workshop and Formal Methods Pacific * IS = Information Systems (journal) * ISADS = Symposium on Autonomous Decentralized Systems * ISCC = IEEE Symposium on Computers and Communications ISCC '99 * ISCOM = International Symposium on Communications * ISTCS = Fourth Israeli Symposium on Theory of Computing and Systems * ISW = Information Security Workshop ISW '99 * IT-Sicherheit = Communications and Multimedia Security: Joint Working conference of IFIP TC-6 and TC-11 and Austrian Computer Society * ITLIT = CSTB Workshop on Information Technology Literacy ITLIT * IWES = International Workshop on Enterprise Security IWES * JBCS = Journal of the Brazilian Computer Society * JCMS = Journal of Computer Mediated Communication * JCS = Journal of Computer Security * JDSE = Journal of Distributed Systems Engineering; Future Directions for Internet Technology JDSE * JOCSIDS = JCS Special Issue on Research in Intrusion Detection JOCSIDS * JSAC-WDM = IEEE JSAC Special Issue on Protocols and Architectures for Next Generation Optical WDM Networks JSAC-WDM * JSS = Journal of Systems and Software (North-Holland) Special Issue on Formal Methods Technology Transfer * JTS = Journal of Telecommunications Systems, special multimedia issue JTS * JWWW = World Wide Web Journal Web page * KDD = The Second International Conference on Knowledge Discovery and Data Mining * MCDA = Australian Workshop on Mobile Computing & Databases & Applications * MCN = ACM Int. Conf. on Mobile Computing and Networking. See MOBICOM * MDDS = Mobility in Databases and Distributed Systems * MDS = Second Conference on the Mathematics of Dependable Systems * METAD = First IEEE Metadata Conference METAD * MMD = Multimedia Data Security * MMDMS = Wkshop on Multi-Media Database Management Systems * MOBICOM = Mobile Computing and Networking MobiCom 99 * NBIS = Network-Based Information Systems * NCSC = National Computer Security Conference * NDSS = ISOC Network and Distributed System Security Symposium NDSS '00 * NGITS = World Conference of the WWW, Internet, and Intranet * NISS = National Information Systems Security Conference NISSC '99 * NSPW = New Security Paradigms Workshop NSPW '99 * OBJ-CSA = OMG-DARPA Workshop on Compositional Software Architectures OBJ-CSA * OOER = Fourteenth Int. Conf. 
on Object-Oriented and Entity Relationship Modelling * OSDI = Operating Systems Design and Implementation * PAKDD = First Asia-Pacific Conference on Knowledge Discovery and Data Mining * PISEE = Personal Information - Security, Engineering, and Ethics PISEE * PKC = Practice and Theory in Public Key Cryptography PKC2000 * PKS = Public Key Solutions * PTP = Workshop on Proof Transformation and Presentation * RAID = Workshop on the Recent Advances in Intrusion Detection * RBAC = ACM Workshop on Role-based Access Control * RIDE = High Performance Database Management for Large Scale Applications * RTDB = First International Workshop on Real-Time Databases: Issues and Applications * SAC = Workshop on Selected Areas in Cryptography SAC '99 * SAFECOMP = Computer Safety, Reliability and Security * SCRAPC = Smart Card Research and Advanced Application Conference * SDSP = UK/Australian International Symposium On DSP For Communication Systems * SECURICOM = World Congress on the Security of Information Systems and Telecommunication * SETA = Sequences and their Applications * SFC = Society and the Future of Computing * SFTC-VI = Symposium on Fault Tolerant Computing - VI (Brazil) * SICON = IEEE Singapore International Conference on Networks * SIGMOD/PODS = ACM SIGMOD International Conference on Management of Data / ACM SIGACT SIGMOD-SIGART Symposium on Principles of Database Systems * SOC = Biennial Symposium on Communications, SOC18 * SOSP = ACM Symposium on Operating Systems Principles * SPECNS = Software Practices and Engineering, Special Issue on Experiences with Computer and Network Security SPECNS * TAPOS = Theory and Applications of Object Systems, special issue Objects, Databases, and the WWW * TAPSOFT = Theory and Practice of Software Development * TPHOLs = Theorem Proving in Higher Order Logics * TSEEH = IEEE Transactions on Software Engineering, Special Issue on Current Trends * TSMA = 5th International Conference on Telecommunication Systems - Modeling and Analysis * USENIX Sec Symp = USENIX UNIX Security Symposium, 8th Annual * USENIXIDS = USENIX Workshop on Intrusion Detection and Network Monitoring 1st USENIX IDS * VLDB = International Conference on Very Large Data Bases * WDAG = Int. 
Workshop on Distributed Algorithms WDAG * WebDB = International Workshop on the Web and Databases * WebNet = World Conference of the Web Society * WECS = Workshop on Education in Computer Security WECS '99 * WECWIS = Workshop on Advanced Issues of E-Commerce and Web-based Information Systems WECWIS '99 * WETICE = IEEE Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises * WFMSP = Workshop on Formal Methods and Security Protocols WFMSP '99 * WIAPP = IEEE Workshop on Internet Applications WIAPP99 * WIH = Workshop on Information Hiding WIH '99 * WITAT = Workshop on Information Technology - Assurance and Trustworthiness * WOBIS = Workshop on Satellite-based Information Services * WorkshopMV = Workshop on Modelling and Verification WorkshopMV * WSLSDS = Workshop on Security in Large-Scale Distributed Systems * WSS = Workshop on Self-Stabilizing Systems WSS '99 * WWWC = International World Wide Web Conference ________________________________________________________________________ Listing of Academic (Teaching and Research) Positions in Computer Security maintained by Cynthia Irvine (irvine@cs.nps.navy.mil) ________________________________________________________________________ Swiss Federal Institute of Technology, Lausanne (EPFL), Switzerland/Eurecom/Telecom Paris General Director Areas of particular interest: Education and research in telecommunications. Applications begin immediately. http://admwww.epfl.ch/pres/dir_eurecom.html Department of Computer Science, Naval Postgraduate School, Monterey, CA Junior and Senior Tenure-Track Professorship Positions Areas of particular interest: Computer Security, but applicants from all areas of Computer Science will be considered. Applications begin immediately and are open until filled. http://www.cs.nps.navy.mil/people/faculty/chairman.html Department of Computer Science, University of Karlstad, Sweden Full Professor in Computer Science Areas of particular interest: Data communications, distributed systems, intrusion security, multimedia applications. Closing Date for applications: February 15, 1999. http://www.cs.kau.se/cs/jobs/ Department of Computer Science, Purdue University, West Lafayette, IN Assistant, Associate or Full Professor in Computer Science Areas of particular interest: Computer graphics and scientific visualization, database systems, information security, operating systems and networking, and software engineering. Positions beginning August 1999, interviews beginning October 1998; open until filled. http://www.cs.purdue.edu/facAnnounce/ Swiss Federal Institute of Technology, Lausanne (EPFL) Communications System Section, Switzerland Assistant, Associate or Full Professor in Security of Communication and Information Systems Areas of particular interest: Cryptography, security protocols and systems (e.g., authentication, confidentiality, protection of software resources, security aspects of electronic commerce). Closing Date for applications: January 9, 1999. http://sscwww.epfl.ch Department of Computer Science, Florida State University, Tallahassee, FL Tenure-track positions. (6/99) Areas of particular interest: Trusted Systems, software engineering, provability and verification, real-time and safety-critical systems, system software, databases, fault tolerance, and computational/simulation-based design. Emphasis on issues of certainty, reliability, and security. 
http://www.cs.fsu.edu/~lacher/jobs.html Department of Electrical and Computer Engineering, Iowa State University, Ames, Iowa Assistant, Associate, or Full Professor in Computer Engineering Areas of particular interest: Distributed and parallel computing, computer networking, security, software engineering, computer architecture, VLSI CAD, computer graphics, and human/computer interface design. Date closed: December 19, 1998, or until filled. http://vulcan.ee.iastate.edu/~davis/job-ad.html Naval Postgraduate School Center for INFOSEC Studies and Research, Monterey, CA, Visiting Professor (Assistant, Associate, or Full Professor levels) (9/98) Areas of particular interest: Computer and information systems security. http://cisr.nps.navy.mil/jobs/npscisr_prof_ad.html This job listing is maintained as a service to the academic community. If you have an academic position in computer security and would like to have it included on the Cipher web page and e-mail issues, send the following information: Institution, City, State, Position title, date position announcement closes, and URL of position description to: irvine@cs.nps.navy.mil ________________________________________________________________________ How to become a member of the IEEE Computer Society's TC on Security and Privacy ________________________________________________________________________ You do NOT have to join either IEEE or the IEEE Computer Society to join the TC, and there is no cost to join the TC. All you need to do is fill out an application form and mail or fax it to the IEEE Computer Society. A copy of the form is included below (to simplify things, only the TC on Security and Privacy is included, and is marked for you). Members of the IEEE Computer Society may join the TC via an https link. The full and complete form is available on the IEEE Computer Society's Web Server by following the application form hyperlink at the URL: http://computer.org/tcsignup/ IF YOU USE THE FORM BELOW, PLEASE NOTE THAT IT IS TO BE RETURNED (BY MAIL OR FAX) TO THE IEEE COMPUTER SOCIETY, >>NOT<< TO CIPHER. --------- IEEE Computer Society Technical Committee Membership Application ----------------------------------------------------------- Please print clearly or type. ----------------------------------------------------------- Last Name First Name Middle Initial ___________________________________________________________ Company/Organization ___________________________________________________________ Office Street Address (Please use street addresses over P.O.) ___________________________________________________________ City State ___________________________________________________________ Country Postal Code ___________________________________________________________ Office Phone Fax ___________________________________________________________ Email Address (Internet accessible) ___________________________________________________________ Home Address (optional) ___________________________________________________________ Home Phone ___________________________________________________________ [ ] I am a member of the Computer Society IMPORTANT: IEEE Member/Affiliate/Computer Society Number: ____________________ [ ] I am not a member of the Computer Society* Please Note: In some TCs only current Computer Society members are eligible to receive Technical Committee newsletters. Please select up to four Technical Committees/Technical Councils of interest. 
TECHNICAL COMMITTEES [ X ] T27 Security and Privacy Please Return Form To: IEEE Computer Society 1730 Massachusetts Ave, NW Washington, DC 20036-1992 Phone: (202) 371-0101 FAX: (202) 728-9614 ________________________________________________________________________ TC Publications for Sale ________________________________________________________________________ o Proceedings of the IEEE CS Symposium on Security and Privacy The Technical Committee on Security and Privacy has copies of its publications available for sale directly to you. Proceedings of the IEEE Symposium on Security and Privacy -------------------------------------- 1999 $25.00 1980-1999 on CD ROM $25.00 1998 $20.00 Special Offer: 1999 Proceedings AND CD ROM of 1980-1999 Proceedings for $45.00 For domestic shipping and handling, add $3.00. For overseas delivery: -- by surface mail, please add $5 per order (3 volumes or fewer) -- by air mail, please add $10 per volume If you would like to place an order, please specify * how many issues you would like, and * where to send them, and * the shipping method (air or surface) for overseas orders. For mail orders, please send a check in US dollars, payable to the IEEE Symposium on Security and Privacy to: Brian J. Loe Treasurer, IEEE TC on Security and Privacy Secure Computing Corp. 2675 Long Lake Rd. Roseville, MN 55113 U S A For electronic orders, in addition to the information above, please send the following credit card information to loe@securecomputing.com: - the name of the cardholder, - type of card (VISA, Mastercard, American Express, and Diner's Club are accepted) - credit card number, and - the expiration date. You may use the following PGP public key to encrypt any information that you're not comfortable sending as cleartext. -----BEGIN PGP PUBLIC KEY BLOCK----- Version: 4.0 Business Edition mQCNAy+T6TkAAAEEAN/fnVu7VCPtcmBQhXFhJbejSoZJkEmWNUYvx13yRwl/gyir 61ae+GUjgWjWs9O06C6dugRGrjFZpBhMosu7sgGJMz54hvKbBNrYBSHpH0yex6e/ +c2mzbCbh40naARgPAaAki2rCkV2ryETj2Z6w98/k5fMgOZDnEy6WVOs56vlAAUR tBtCcmlhbiBKLiBMb2UgPGxvZUBzY3RjLmNvbT6JAHUDBRAvlQ8qNU4dUKmt/G0B Aba2AwCu48Oq1DPElV16DNQb7SvQAwQPGYYM3zg9RT0AyFeXajBHb9O2GkOmai8y ryJt4t3Q8aQ2BckWUsck29TT2M/U7hOrC+hJPMbziqbw5juR906pjs9OzPSR5Pta AW66CUqJAJUDBRAvlQ56enbk/HH5npkBAfkwA/9zVKeAJh/X4qzUzYJt/w9Hi3mF AAzm0YUcDwnNLkv/c1k3Kg0APh+BGbrbGvy2sVa1PgFKZluheCqSVO/BtApaf3QS ygoS118k20mzBU2QsX9KMvJ6z8nocSCWU9RopUirk8zwAisqwAq8dmgNwNsMfxDK mdCx3FiE46FrSnEKlokAlQMFEC+UKJdMullTrOer5QEB2aID/16rqeJkcfKRH/bs /1yGSqFgu6r8TUKKsD5pg/vc51t9d5X6/APGv1nO/aJOtr8NQ3InNTsl6VZEWWi/ 6TvKI7o+vuNtZ6qazRZixBXfSMh6UGzrDfgDgILVue4fG3qArF3rcRkKqFWxlX4Y 3ekZ8vYJAFyatphhFvhDX6BKhywAtCVCcmlhbiBKLiBMb2UgPGJyaWFuLmxvZUBj b21wdXRlci5vcmc+tCZCcmlhbiBKLiBMb2UgPGxvZUBzZWN1cmVjb21wdXRpbmcu Y29tPg== =jEJA -----END PGP PUBLIC KEY BLOCK----- You may also order some back issues from IEEE CS Press at http://www.computer.org/cspress/catalog/proc9.htm. o Proceedings of the IEEE CS Computer Security Foundations Workshop (CSFW 1, 5 through 12) The most recent Computer Security Foundations Workshop (CSFW12) will have taken place the 28th through 30th of June in Mordano, Italy. Topics include formal specification of security protocols, protocol engineering, distributed systems, information flow, and security policies. Copies of the proceedings are available from the publications chair for $25 each after 1 July. Copies of earlier proceedings starting with year 5 are available at $10. Photocopy versions of year 1 are also $10. 
Checks payable to "Joshua Guttman for CSFW" may be sent to: Joshua Guttman, MS A150 The MITRE Corporation 202 Burlington Rd. Bedford, MA 01730-1420 USA guttman@mitre.org o Proceedings of the 1998 IEEE CS Symposium on Security and Privacy Copies are available directly from the TC on Security and Privacy for $25 per copy. This price includes domestic shipping and handling. For overseas delivery: -- by surface mail, please add $5 per order (3 volumes or fewer) -- by air mail, please add $10 per volume If you would like to place an order, please specify * how many issues you would like, and * where to send them, and * the shipping method (air or surface) for overseas orders. For mail orders, please send a check in US dollars, payable to the IEEE Symposium on Security and Privacy to: Brian J. Loe Treasurer, IEEE TC on Security and Privacy Secure Computing Corp. 2675 Long Lake Rd. Roseville, MN 55113 U S A For electronic orders, in addition to the information above, please send the following credit card information to loe@securecomputing.com: - the name of the cardholder, - type of card (VISA, Mastercard, American Express, and Diner's Club are accepted) - credit card number, and - the expiration date. For security, please use the PGP public key given above to encrypt any information that you're not comfortable sending as cleartext. You may also order some back issues from IEEE CS Press at http://www.computer.org/cspress/catalog/proc9.htm. o Proceedings of the Computer Security Foundations Workshops (2 through 11, excluding 4) The most recent Computer Security Foundations Workshop (CSFW11) took place the 9th through 11th of June in Rockport, Massachusetts USA. Topics included formal specification of security protocols, protocol engineering, distributed systems, information flow, and security policies. Copies of the proceedings are available from the publications chair for $25 each. Copies of all earlier proceedings (except the first and fourth) are also available at $10. Checks payable to "Joshua Guttman for CSFW" may be sent to: Joshua Guttman, MS A150 The MITRE Corporation 202 Burlington Rd. Bedford, MA 01730-1420 USA guttman@mitre.org ________________________________________________________________________ TC Officer Roster ________________________________________________________________________ Chair: Charles P. Pfleeger, Arca Systems, Inc., 8229 Boone Blvd, Suite 750, Vienna VA 22182-2623; (703) 734-5611 (voice); (703) 790-0385 (fax); c.pfleeger@computer.org. Past Chair: Deborah Cooper, P.O. Box 17753, Arlington, VA 22216; (703) 908-9312 (voice and fax); d.cooper@computer.org. Vice Chair: Thomas A. Berson, Anagram Laboratories, P.O. Box 791, Palo Alto, CA 94301; (650) 324-0100 (voice); berson@anagram.com. Chair, Subcommittee on Academic Affairs: Prof. Cynthia Irvine, U.S. Naval Postgraduate School, Computer Science Department, Code CS/IC, Monterey CA 93943-5118; (408) 656-2461 (voice); irvine@cs.nps.navy.mil. Newsletter Editor: Paul Syverson, Code 5543, Naval Research Laboratory, Washington, DC 20375-5337; (202) 404-7931 (voice); (202) 404-7942 (fax); syverson@itd.nrl.navy.mil. Chair, Subcommittee on Standards: David Aucsmith, Intel Corporation, JF2-74, 2111 N.E. 25th Ave, Hillsboro OR 97124; (503) 264-5562 (voice); (503) 264-6225 (fax); awk@ibeam.intel.com. Chair, Subcomm. on Security Conferences: Michael Reiter, Bell Laboratories, 600 Mountain Ave., Room 2A-342, Murray Hill, NJ 07974 USA; (908) 582-4328 (voice); (908) 582-1239 (fax); reiter@research.bell-labs.com. ________________________________________________________________________ Information for Subscribers and Contributors ________________________________________________________________________ SUBSCRIPTIONS: Two options: 1. To receive the full ascii CIPHER issues as e-mail, send e-mail to cipher-request@itd.nrl.navy.mil (which is NOT automated) with subject line "subscribe". 2. To receive a short e-mail note announcing when a new issue of CIPHER is available for Web browsing, send e-mail to cipher-request@itd.nrl.navy.mil (which is NOT automated) with subject line "subscribe postcard". To remove yourself from the subscription list, send e-mail to cipher-request@itd.nrl.navy.mil with subject line "unsubscribe". Those with access to hypertext browsers may prefer to read Cipher that way. It can be found at URL http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher CONTRIBUTIONS: to cipher@itd.nrl.navy.mil are invited. Cipher is a NEWSletter, not a bulletin board or forum. It has a fixed set of departments, defined by the Table of Contents. Please indicate in the subject line for which department your contribution is intended. For Calendar entries, please include a URL and/or e-mail address for the point-of-contact. For Calls for Papers, please submit a one paragraph summary. See this and past issues for examples. ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY. All reuses of Cipher material should respect stated copyright notices, and should cite the sources explicitly; as a courtesy, publications using Cipher material should obtain permission from the contributors. BACK ISSUES: There is an archive that includes each copy distributed so far, in ascii, in files you can download at URL http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/cipher-archive.html =========end of Electronic Cipher Issue #32, 07 June 1999===============