Subject: Electronic CIPHER, Issue 6, May 30, 1995

 _/_/_/_/ _/_/_/ _/_/_/_/ _/ _/ _/_/_/_/ _/_/_/_/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/_/_/_/ _/_/_/_/ _/_/ _/_/_/_/ _/ _/ _/ _/ _/ _/ _/ _/ _/_/_/_/ _/_/_/ _/ _/ _/ _/_/_/_/ _/ _/

====================================================================
Newsletter of the IEEE Computer Society's TC on Security and Privacy
Electronic Issue 6                                      May 30, 1995
Carl Landwehr, Editor
====================================================================

Contents: [2277 lines total]
 Letter from the TC Chair [starts on line 42]
 Letter from the Editor [line 95]
 Security and Privacy News Briefs: [line 130]
  o NIST releases new secure hash standard, FIPS 180-1
  o SSL/SHTTP approaches reported reconciled.
  o New Executive Order on Security Announced.
  o ARPA, NSA to combine INFOSEC research efforts
  o MLS Zombies?
 Articles and Conference Reports:
  o Sidewinder Challenge Report, by Dan Thomsen [line 203]
  o 1995 IEEE Symposium on Security and Privacy Summary,
    by Mark Heckman, Leslie Young, and Kirk Bittler [line 401]
  o Common Criteria Commenters' Workshop, by Dixie Baker [line 995]
  o ISOC '95 Summary, by Richard Graveman [line 1314]
 New calls for papers: 3 conferences [line 1734]
 Reader's guide to recent security and privacy literature
  Paper lists from 3 conferences: USENIX, EUROCRYPT, CRYPTO '95 [line 1773]
  Relevant papers from recent journals and periodicals [line 1985]
  Recent books: A handbook on cryptography [line 2010]
 Calendar: [line 2019]
 Who's Where: recent address changes [line 2136]
 Interesting Links - 6 new places to surf [line 2156]
 Publications for sale -- 1995 S&P Proceedings available! [line 2188]
 TC officers [line 2224]
 Information for Subscribers and Contributors [line 2244]

______________________________________________________________________
Letter from the (New!)
TC Chair
______________________________________________________________________

Congratulations to the organizers of the 1995 IEEE Symposium on Security and Privacy for a job well done! The results of their efforts were evident in the success of the technical program and the smooth operation and profitability of this year's symposium. Our thanks to Carl Landwehr, Dale Johnson, Catherine Meadows, John McHugh, Charles Payne, and the members of the Program Committee. Last but not least, we extend our sincere thanks and appreciation to Terry Vickers Benzel, our outgoing Technical Committee Chair, for her dedication and skills in serving the committee. We also thank the participants whose attendance helps support the activities of the Technical Committee on Security and Privacy.

The Technical Committee on Security and Privacy convened during the Symposium and voted in its new officers. My successor as the new Vice Chair for the Technical Committee is Charles (Chuck) Pfleeger. Chuck will serve as Vice Chair for two years and Chair for the succeeding two years. Steve Kent is the 1996 Symposium Vice Chair. Steve will support Dale Johnson, the 1996 Symposium Chair, and will be Dale's successor as Chair for the 1997 Symposium. George Dinolt joins John McHugh as Program Co-Chair for 1996.

This year also saw the initiation of a new IEEE Security and Privacy Subcommittee on Academic Affairs. Karl Levitt of the University of California at Davis was elected to chair this new subcommittee. Future Cipher editions will contain more information about this important new activity.

The search for a representative from our Technical Committee for IEEE Standards activities continues. Nominations are welcome (dmcooper@ix.netcom.com). The identity of our new Standards representative will be announced in Cipher. In addition, the IEEE Technical Activities Board (TAB) Executive Committee and strategic planning meeting will be held at the end of June.
The purpose is to redefine the mission, purpose, and goals of TAB, and to develop a strategic plan covering the long term (3-5 years) and the short term (the next 18 months). As your representative, I welcome your thoughts, issues, and concerns on these topics.

As was evident in this year's program, security awareness is at an all-time high, particularly in the financial transaction and telecommunications industries. There will be many new opportunities for our Technical Committee to make a difference. Chuck and I encourage your involvement. We look forward to working with you to face the challenges of the future.

Deborah M. Cooper

______________________________________________________________________
Letter from the Editor
______________________________________________________________________

Dear Readers,

It's been almost two months since the last issue, and we have a backlog of excellent contributions for you. I am happy to report that the 1995 Symposium on Security and Privacy went (from my possibly biased perspective) very well indeed, with a mix of good papers covering the spectrum from practical to theoretical, and a significant upturn in attendance. I am most grateful to Dale Johnson, Cathy Meadows, John McHugh, Charles Payne, Cristi Garvey, and the many others who helped make the meeting a success. For those of you who were unable to attend, Mark Heckman, Leslie Young, and Kirk Bittler provide a summary in this issue. We also have summaries of the February 1995 Internet Society Symposium on Security and Privacy, by Richard Graveman of Bellcore, and of the Common Criteria Commenters' Workshop, by Dixie Baker. Thanks to all of our writers for contributing their time and effort to keep Cipher interesting.

In closing, I would also like to thank Terry Vickers Benzel, who has served the TC so well as chair for the past two years (and in many other capacities prior to that), and to welcome our incoming officers, Deb Cooper as Chair and Chuck Pfleeger as Vice Chair.
At least for the next few months, we have an international executive committee!

Carl Landwehr
Editor, Cipher

______________________________________________________________________
Security and Privacy News Briefs
______________________________________________________________________

o NIST releases new secure hash standard

The U.S. National Institute of Standards and Technology (NIST) released a new Secure Hash Standard, referred to as SHA-1, on April 17, 1995. SHA-1 is documented in FIPS PUB 180-1, which supersedes the previous FIPS 180, in which a flaw had been found a year ago. The algorithm generates a condensed representation of a message, called a message digest, and is to be used in conjunction with the Digital Signature Algorithm under the Digital Signature Standard for the sender and receiver of a message to compute and verify a digital signature. An unofficial version of the standard is available at URL http://csrc.ncsl.nist.gov/fips/fip180-1.txt. The standard becomes effective October 2, 1995. One wonders if NIST has considered using the algorithm to sign the file containing its (unofficial) description -- could it then be official? If not ...

o SSL/SHTTP approaches reported reconciled.

The different approaches to securing internet transactions led by Netscape Communications Corp (Secure Sockets Layer, or SSL) and Terisa Systems (Secure Hypertext Transfer Protocol, or SHTTP) may now be reconciled, writes Michal Parsons in Infoworld, April 17, p. 41. SSL provides secure channels at the connection layer, while SHTTP would provide transaction security at the application layer. The reconciliation of the two approaches is expected to come about as a consequence of investments by IBM, Compuserve, Inc., Netscape Communications Corp., and Prodigy Services Co. in both Terisa Systems, Inc., and RSA Data Security, Inc., to provide technology for secure internet transactions.
The investments were announced at Internet World Expo '95 in San Jose last week, according to the report.

o New Executive Order on Security Announced.

The Washington Post reported on April 18 that the Clinton administration has issued a new executive order governing the classification of documents. The order will automatically declassify without review most documents more than 25 years old, and will require that most new documents remain classified for at most 10 years. According to the report, "classifiers will have to justify what they classify, employees will be expected to challenge improper classification and be protected from retribution." Draft versions of the order are said to have been circulated, debated, and revised for the past two years. Reported aspects of the order concerning automatic declassification and sanctions for overclassification recall similar provisions of the classification order issued under the Carter administration, some of which were then reversed by the Reagan administration. Highlights of the order and the President's message announcing its release can be obtained by sending e-mail to almanac@esusda.gov with the text "send white-house 3817" or "send white-house 3816", respectively, as the message body.

o ARPA, NSA to combine INFOSEC research efforts

ARPA and NSA will create an Information Systems Security Research Joint Technology Office to link their research for DoD, including the area of electronic commerce, according to a report by Elizabeth Sikorovsky in Federal Computer Week, 10 April 1995, p. 8. Goals of a Memorandum of Agreement between the two agencies are reported to include strengthening responsiveness of programs to the Defense Information Systems Agency, avoiding duplication in research programs, providing long-range strategic planning for INFOSEC research, and incorporating new INFOSEC technology into prototype systems and test-beds.

o MLS Zombies?

Air Force Lt. Gen.
Carl O'Berry described multilevel security as "a brain-dead idea based on the assumption that you can take responsibility for information security off the shoulders of man and put it in a machine," according to an article by Paul Constance in Government Computer News, April 14, 1995, p. 3. O'Berry, the Air Force deputy chief of staff for command, control, communications, and computers, believes DoD has spent "billions upon billions" of dollars on MLS trying to get systems accredited, but he doesn't believe the process has served DoD well. His view is reported to be that INFOSEC should focus on data transmission, user authentication, and risk assessment of transmitted data.

______________________________________________________________________
The Sidewinder Challenge - Results So Far
by Dan Thomsen, SCC
______________________________________________________________________

Many readers of Cipher associate Secure Computing Corporation with the high assurance LOCK and SNS systems. Recently, we have developed a firewall product called Sidewinder(tm). Last fall we put a Sidewinder on the Internet and challenged hackers to break through the firewall to the system behind it. This article provides a brief discussion of Sidewinder and a quick update on the hacker attempts so far.

Inside Sidewinder

Traditional Unix security has been described as a hard crunchy exterior surrounding a soft gooey center. This alludes to the fact that on Unix systems there is a super user account, called root, that has the privilege to do anything. If a hacker can get the computer to recognize them as root, the hacker can do anything, including remove all traces of the intrusion. Sidewinder can be described as a honeycomb. Important pieces of the system are placed in different cells. Users are placed in cells depending on their role in the system. Sidewinder provides this separation with the type enforcement security mechanism.
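The cell separation just described is driven by a statically defined table that maps each process's domain and each file's type to a set of permitted access modes. A minimal sketch of that idea follows; the domain and type names and the table contents are purely hypothetical illustrations, not Sidewinder's actual configuration:

```python
# Hypothetical type-enforcement table: each process runs in a domain,
# each file has a type, and a static table says which access modes a
# domain has to a type. All names below are illustrative only.
ACCESS_TABLE = {
    ("mail_d",  "spool_t"):  {"read", "write"},
    ("mail_d",  "bin_t"):    {"execute"},
    ("admin_d", "config_t"): {"read", "write"},
}

def check_access(domain: str, file_type: str, mode: str) -> bool:
    """Kernel-style check: access is allowed only if the static table grants it."""
    return mode in ACCESS_TABLE.get((domain, file_type), set())

# Even a root process stays inside its domain's cell:
print(check_access("mail_d", "spool_t", "write"))   # True
print(check_access("mail_d", "config_t", "read"))   # False: out of reach
```

Because the table is consulted in the kernel rather than in user space, a process that becomes root but remains in the mail domain still cannot reach configuration files in another cell, which is the behavior the challenge attackers ran into.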
Type enforcement assigns each process a domain, and each file a type. A statically defined table then defines how processes in each domain can access each type. Sidewinder is built on top of BSDi Unix. The type enforcement modifications have been made at a low level in the kernel. As a result, even when a process is running as root, it is constrained by type enforcement. If a hacker gets root access, his/her omnipotence is limited to the single domain he/she started in. Thus, to compromise a Sidewinder, a hacker must bypass both Unix and type enforcement.

However, even compromising Unix is more difficult on Sidewinder. The type enforcement honeycomb structure allows us to place the configuration files and Unix tools used in most Unix compromises out of the hacker's reach. Sidewinder also supports placing tripwires on files, called triggers. If a user accesses a file with a trigger, an access violation is generated in the kernel and the user is logged off. Triggers are set on files that the user does not need to access, such as system configuration files. Triggers are also domain specific. Thus, administrators who need access to these files can be placed in a different domain that does not have triggers.

Sidewinder also restricts the use of system calls on a domain basis. Sidewinder has a restricted execute property that can be associated with any domain. The kernel defines a list of disallowed system calls for restricted execute domains. Processes running in a restricted execute domain cannot make any system calls from the disallowed list. Even root access does not allow a process to make calls on the disallowed list.

Development Strategy

The Sidewinder development strategy focuses on getting secure functionality as quickly as possible. This differs from the trusted systems development strategy used on programs like LOCK and SNS. On the LOCK program, the system was built up from the hardware, one layer at a time.
Each layer was formally analyzed against the security policy. The problem with this approach is that it is time-consuming and thus delays the implementation of many functions. One of the lessons learned on the LOCK program was to use type enforcement to provide separation of subsystems. This separation reduces the amount of security analysis required by factoring the overall problem into smaller, more manageable pieces. For Sidewinder, we incorporated type enforcement into a fully functional Unix system. As a result, we have all the underlying support needed by complex existing applications. However, security analysis and testing now have to be done at the application level. While we do extensive security testing ourselves, nothing can replace field testing. This is why we created the Sidewinder Challenge.

The Sidewinder Challenge

When Secure Computing established the challenge site, the goal was to encourage sophisticated attacks on type enforcement. Therefore, the standard Sidewinder firewall product was modified to produce a Sidewinder challenge system that gives the hackers three key advantages:

1. A login account on the firewall. Normally, users do not have accounts on Sidewinder.

2. Four access violations before they are logged out. On Sidewinder, one violation causes a user to be logged out.

3. Loose Unix administration. Rather than remove every piece of software on the system and tighten security so hackers have nothing to work with, we left many Unix programs on the challenge system, including a compiler. On Sidewinder, programs that are not needed are removed.

We fully expected hackers to be able to break through the Sidewinder challenge system and "ring the bell" in the early stages of Sidewinder development. So far, we have been surprised.

The Attacks

The colloquial term "ankle biters" best describes most of the attacks against Sidewinder. Ankle biters are individuals who actively probe a system for security holes, but who are innovatively challenged.
They are lazy and look for easy systems to break into, generally trying several well-known security holes and then leaving. The problem with ankle biters is that their behavior does not immediately distinguish them from serious hackers. A serious hacker is, of course, going to try all the well-known security holes before becoming innovative.

The next level of attacker can be called the "Unix hacker." They are distinguished by the fact that they got Unix root access. So far we have had fewer than 10 people get root access. The exact number is hard to count, because some hackers left setuid binaries lying around that the ankle biters used. The Unix hackers got root access through the usual kind of Unix holes, for example by tricking system software into creating setuid root binaries. Our policy has been to let each hole be used once and then close it. This prevents ankle biters from taking advantage of a Unix hacker's insight. Remember, getting root access on Sidewinder does not grant any extra privilege to a hacker. Probably the most fun I have had as a security professional has been watching Unix hackers who have just gotten root access on Sidewinder. One can almost sense their excitement melting into frustration as they try one privileged command after another to no avail.

The next level of attacker earns the title hacker. These are people who have discovered an approach to potentially circumvent type enforcement. So far only one person has earned this title. The attack took place on December 26, 1994. First, the hacker got root access by tricking an improperly configured mailer program into creating a setuid root binary. The hacker then tried starting a command to search through the file system. The search command quickly accumulated access violations and the hacker was logged off. Note that even though the hacker was running as root, Sidewinder still kicked the hacker off the system for bad behavior.
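The trigger-driven logoff the hacker just ran into can be sketched as a per-session violation counter. The four-violation limit and the one-violation product default are taken from the article; the file names, class, and method names below are hypothetical:

```python
# Sketch of trigger handling: accessing a trigger file raises a kernel
# access violation; after a limit is reached the session is terminated,
# root or not. File names and the Session class are illustrative.
TRIGGER_FILES = {"/etc/master.passwd", "/etc/sidewinder.conf"}
VIOLATION_LIMIT = 4   # challenge configuration; the product uses 1

class Session:
    def __init__(self):
        self.violations = 0
        self.logged_in = True

    def open_file(self, path):
        if path in TRIGGER_FILES:
            self.violations += 1        # violation recorded in the kernel
            if self.violations >= VIOLATION_LIMIT:
                self.logged_in = False  # forced logoff, even for root
            return None                 # access denied either way
        return f"contents of {path}"

s = Session()
for _ in range(4):
    s.open_file("/etc/sidewinder.conf")
print(s.logged_in)   # False: kicked off after four violations
```

Note that the trigger check happens before any data is returned, so the fourth probe both fails and ends the session, mirroring the "bad behavior" logoff described above.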
The hacker logged back onto the challenge site and got root access again through the same hole. Next, the hacker made a new device that pointed at the existing disk device, using the mknod system command. Since the hacker created the new device, the hacker had read and write privileges for it. As a result, the hacker could read or write any file on the disk, regardless of the Unix or type enforcement policy. While the hacker had the ability to proceed further, he/she did not, most likely because he/she would have needed to create a tool to access the files through the non-standard interface that had been created. The hacker also needed to determine which files to modify in order to execute programs in another domain. The solution to this issue is not intuitively obvious, and we are not telling.

To prevent this attack from happening again requires taking the ability to execute the mknod command away from the hacker. To do this, we could move the mknod system command to a different domain. However, since hackers have access to a compiler on the challenge site, they could always compile their own mknod command. To truly close the hole, the mknod system call to the kernel must be moved out of a hacker's reach. This is where the restricted execute property for a domain is needed. By placing the mknod system call on the disallowed list, the attack is stopped.

How long did it take to close this vulnerability, which had the potential to compromise type enforcement on the Sidewinder challenge system? About two hours: a half hour for the engineer to make the modifications to the code and an hour and a half to recompile and install the kernel. So for now, no one has successfully met the Sidewinder Challenge. If an enterprising hacker successfully gets past type enforcement, the hole can be closed quickly using the features in Sidewinder.

More Information

The challenge site is butler.sidewinder.com (IP address 204.176.88.1).
The user account is "demo" with no password. Currently we are offering a black jacket with the Sidewinder logo on the back as a prize for breaking through the Sidewinder to the other side. For more information on the Sidewinder Challenge, such as how to claim the prize, there is a World Wide Web site at http://www.sctc.com. This page also contains a bibliography of papers on type enforcement. A list of frequently asked questions about Sidewinder can be downloaded via anonymous ftp from ftp://ftp.sctc.com/pub.

______________________________________________________________________
1995 IEEE Symposium on Security and Privacy Summary
by Mark Heckman, UC-Davis, and Leslie Young and Kirk Bittler,
Portland State University
______________________________________________________________________

The 1995 IEEE Symposium on Security and Privacy was held May 8-10 at the Claremont Hotel in Oakland, California. Over 216 computer professionals from at least a dozen different countries attended the sixteenth symposium in the series. In his opening remarks, conference General Chair Carl Landwehr said that the scope of the conference had been broadened this year to emphasize the practice and application of computer security as well as research (even the name of the conference has been changed from "Symposium on Research in Security and Privacy"). Reflecting this new focus, sessions were held on secure commerce, network security, secure operating systems, formal models, covert channels, analysis of security vulnerabilities, and protocol analysis. In addition, two panels discussed "managing risk with imperfect technology" and "security for electronic commerce."

Program Co-chairs Catherine Meadows and John McHugh set aside a special session for short talks, which made it possible for a large number of people, who either did not have full papers formally accepted or were too busy to prepare them, to present their advances in the field.
Abstracts of the five-minute talks were distributed at the conference. Reaction to this innovative session was overwhelmingly positive, from both presenters and listeners. Two evening poster and discussion sessions allowed additional presentations as well as more in-depth and personal discussion of the topics presented earlier. As in past conferences at the Claremont, food and service were excellent, and the Technical Committee arranged for future symposia to be held at the Claremont through at least the year 2000. Representatives from IEEE publications and a San Francisco Bay area technical book store sold a wide variety of books and journals, ranging from proceedings of other conferences on related topics to collections of "Dilbert" comics. The IEEE publications representative said that sales of books and journals were good because "people at this conference are very interested in what's new."

The award for best paper was presented to Olin Sibert, Phillip Porras and Robert Lindell for their work on "The Intel 80x86 Processor Architecture: Pitfalls for Secure Systems."

Following is a summary of each presentation that includes questions asked by audience members, overviews of the panel discussions, and summaries of the evening poster and discussion sessions, in the order that they occurred at the conference. (Space and time considerations prevent us from summarizing each of the twenty short talks, however.) Though most presenters closely followed their papers, this was not always the case, and we strongly urge readers who want a more complete summary of the papers to obtain a copy of the proceedings and refer to the paper abstracts or to the papers themselves.

SECURE COMMERCE SESSION

This session was chaired by Li Gong. "The Design and Implementation of a Secure Auction Service" was presented by co-author Michael Reiter of AT&T Bell Laboratories. The talk described the design of a distributed service for performing auctions.
In particular, sealed auction bids were the focus of the presentation. It included techniques for ensuring protection for the auction house and bidders in the face of malicious behavior by some bidders or by some of the auction servers. An audience member questioned the feasibility, in the real business world, of putting digital cash "up front", a feature included to protect against refusal to pay; the author deemed this a valid concern.

The second paper, presented by S. Johann Bezuidenhoudt, described the prepayment electrical metering system used in impoverished regions of South Africa and other countries. Currently in use and serving over a million households, the system makes use of cryptographically based tokens. A unique aspect of the system is that no information flows from the meters to the electric company. Security issues include encryption of token instructions to reduce forgery and many concerns over the physical security of the in-home meters and vendors. A question followed regarding the system engineering methods used. The author replied that formal methods were abandoned as unrealistic given their situation. Another question concerned the viability of the prepayment scheme. The response indicated that the social situation in the regions in question made this the only plan under which most payments were ever made.

NETWORK SECURITY SESSION

This session was chaired by Birgit Pfitzmann. The first paper, "Preserving Privacy in a Network of Mobile Computers", was presented by David Cooper of Cornell University. David described a replicated memory service which allows mobile computer users to read from memory without revealing which memory locations they are accessing. Through the use of private and public randomly chosen message labels, RSA and DES message encryption, and a MIX network, this service achieves content privacy, location privacy, and unlinkability of sender and receiver.
Prompted by a question from the audience, David pointed out that there may be a problem with message latency because the MIX server waits to batch messages. In order to minimize message latency with this service, there needs to be a high degree of message passing on the mobile network. No thought has yet been given to denial-of-service attacks.

"Holding Intruders Accountable on the Internet," the second talk of the session, was presented by Stuart Staniford-Chen of the University of California at Davis. Stuart talked about the work he and his colleague are doing at UC Davis to trace intruders who obscure their identity by logging into a target end system through a chain of multiple intermediary systems. Their method of detection is called a "thumbprint". Thumbprints are summaries (vectors of commonly used character frequencies) of the content of each connection with protocol details abstracted out. Principal component analysis is used to determine which character frequencies to use to derive the thumbprint. Ideally, thumbprints should be very small, sensitive, robust, and additive. Stuart detailed how successive thumbprints that do not provide a clear comparison can be combined (added) to produce a better signal. This additive property also allows thumbprints of different but congruent lengths to be compared. He pointed out, though, that thumbprinting is pretty much useless in the presence of encryption. Experimental testing has been confined to departmental LANs at UC Davis. Scalability currently poses a problem. A member of the audience asked how this method might be used. The reply was that it could be used as part of any or all of the following: DIDS (Distributed Intrusion Detection Systems); detection of pass-through intruders; correlation of activity between different sites; and law enforcement probes.
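The thumbprint idea lends itself to a small sketch. Below, a thumbprint is simply a vector of counts for a few chosen characters; the fixed character set is a stand-in for the set the paper derives via principal component analysis, and the similarity measure is an assumed choice, not necessarily the one used at UC Davis. The additive property falls out of using raw counts:

```python
from collections import Counter

ALPHABET = "etaoins "   # stand-in; the paper selects characters via PCA

def thumbprint(stream: str) -> list[int]:
    """Content summary of one connection: counts of selected characters."""
    counts = Counter(stream)
    return [counts[c] for c in ALPHABET]

def similarity(a: list[int], b: list[int]) -> float:
    """Cosine similarity: near 1.0 suggests the same content on both hops."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

first, second = "cat /etc/passwd", " | mail badguy"
# Additive property: the thumbprint of two successive intervals is the
# element-wise sum of the intervals' thumbprints.
summed = [x + y for x, y in zip(thumbprint(first), thumbprint(second))]
assert summed == thumbprint(first + second)

# The same session observed on two hops of a login chain matches itself:
t = thumbprint(first)
print(round(similarity(t, t), 2))   # 1.0
```

Comparing such vectors across connections at different sites is what lets a pass-through intruder be traced back along the chain; as noted above, encrypted links defeat the scheme because the observed character counts no longer reflect the content.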
The multi-authored last paper of the session was "Integrating Security in CORBA Based Object Architectures," presented by Dr. Robert Deng of the National University of Singapore. Dr. Deng and his colleagues got involved with computer security through their expertise in networking issues. The Common Object Request Broker Architecture (CORBA) is a candidate object management standard for client/server computing which has received considerable attention from the commercial sector and international community. He described a distributed security architecture that can be incorporated into CORBA-based object architectures at the object level. Their research revolves around the notion of a secure ORB node enhanced with interchangeable system security objects. The security objects provide services such as authentication, access control, and message integrity and confidentiality. He pointed out that there currently is no commercial CORBA application that fully integrates security.

PANEL: "What to do While Waiting for the Millenium: Managing Risk with Imperfect Technology"

Moderated by H.O. Lubbes, panelists included Martha Branstad of TIS, Carl Landwehr of the NRL, and Col. Joe Sheldon. The question posed to the panel was, given the requirements for multilevel secure systems, how can risk be defined in terms such that it can be evaluated against operational need?

Branstad spoke first, addressing the relationship between operational risk and Orange Book criteria. The overriding theme of the talk was the need for stronger metrics for risk, i.e., a numerical security confidence factor. The Orange Book is inadequate in this respect. Though there is an assumption that the risk decreases as the rating increases, it is not precise or intuitive enough. Other points of emphasis were the need to focus on the issue of composing systems from rated products, and the demand for a better understanding of the tradeoffs of operational risk vs. OB criteria. Next, Col.
Joe Sheldon spoke on managing risk in multilevel secure implementations. In an engaging, rapid-fire monologue he stated that, in some situations, currently available MLS technologies are adequate to meet operational DoD requirements. Using today's technologies, the US Army has found that security works well from the side of classifications, but that trusted users are a necessity since physical access to a workstation may still result in a compromise of security.

Carl Landwehr then summarized the problems of managing risk given the imperfect state of technology. He called the main problem "leaks". Ratings refer, intuitively, to the amount of leakage present in a system - whether traditional or covert channels. Rating systems should take into account, and give a better feel for, exactly what the leaks are.

Following the talks there was a question and answer session. Many of the questions related to the effectiveness of a security metric or rating. More precise metrics may bring about an even slower speed of development, which is already a problem. Perhaps more education, so that clients understand the specific security flaws within the target environment, would be of more use than a formal ratings system. A response from the panel suggested that maybe pieces of the system should be rated with a strong metric, but the system as a whole should only be evaluated within its environment. Branstad brought up the point that the "Orange Book" does, in some manner, deal with precisely this issue.

SECURE OPERATING SYSTEMS

This session was chaired by Cristi Garvey. The first paper, "Practical Domain and Type Enforcement for UNIX," was presented by Lee Badger of Trusted Information Systems, Inc. (TIS). Badger described an enhanced version of domain and type enforcement (DTE) that can be used to increase security in UNIX systems (although the ideas may have wider applicability) while preserving backward compatibility to a large degree with existing UNIX software.
The DTE system associates file security attributes "implicitly", following the natural UNIX file system hierarchy, which reduces the burden of physically storing security attributes for objects. A working prototype has been implemented. Many of the questions asked at the end of this presentation concerned the ability of the TIS approach to deal with non-hierarchical aspects of the UNIX file system. What about links, for example? For hard links, the DTE system automatically preserves the file's existing type. Symbolic links, Badger explained, are a different type of object and have their own type. Given that the system preserves a file's type, is the binding of a file name and type static? No, it can be changed by making a copy of the file and deleting the original.

Cynthia Irvine, of the Naval Postgraduate School, presented the session's second paper, "A Multilevel File System for High Assurance." Irvine explained that, in order to comply with TCB minimization requirements, a TCB is unlikely to implement more than basic object management operations. Thus, it is likely that developing applications for a bare high-assurance system would be much more difficult than for a low-assurance system. It is necessary to develop a usable applications support environment on a high-assurance base that relies on the TCB's policy enforcement mechanism, and does not itself require any ``trust''. Irvine presented an example of such an environment, the Gemini Application Resource and Network Support (GARNETS) system, which is a file manager built out of untrusted subjects that implements a multi-level file system. Features of GARNETS include support for a potentially large number of access classes (> 1 million) that may be allowed by the TCB. Irvine referred to this as the ``gizillion'' problem.
GARNETS also implements symbolic links to objects, which allows files and directories to be upgraded in security class without copying the objects and allows the creation of ``virtual multilevel directories''. GARNETS runs on the GEMSOS TCB. Questions about GARNETS concerned its speed and storage efficiency relative to a low-assurance system. Irvine said she has no experimental data on the speed of the GARNETS system, and that a distinct hierarchy of GEMSOS storage objects is required for each of the up to a ``gizillion'' access classes. An incredulous listener asked if no trusted subjects were required at all to implement the multi-level file system. The answer was that, except for the special case of some operations on upgraded directories, none are required. "Formal Methods in the Theta Kernel," the final paper presented in this session, described work done at Odyssey Research Associates, Inc. on formally specifying and implementing in Ada a new version of the Theta operating system. Matt Stillerman explained that the goal of this project is to improve the security assurance and portability of the system. A listener asked if the security policy implemented by the system (restrictiveness) was a functional requirement of the system. No, but it was ``the starting point for their thinking'' and they could choose to specify other policies. Of concern to some listeners was that only a relatively small part of the overall system was formally specified, and that the actual code was not verified. Another listener pointed out that verifying an entire commercial-grade operating system is still beyond the state-of-the-art, but that it is accepted practice to use formal methods on only security-critical modules in order to increase assurance. 
Stillerman pointed out that it might be sufficient merely to examine the unverified code and establish an (informal) connection to the lowest level of specification, but in any case a larger gap is the Ada compiler and run-time system, which are themselves unverified.

POSTER SESSIONS: Monday, May 8

Colin Bowers of NSA described research in the area of "INFOSEC metrics." The goal is to develop a way of objectively measuring security in systems, with regard to the security services required for a system, known attacks, and the security attributes of the services that counter those attacks. This research is at a very early stage and Bowers would like anyone with research or ideas in this area to contact him at clbower@alpha.ncsc.mil. Janet Cugini of NIST presented "Role-Based Access Control", which is the result of research that she is doing with David Ferraiolo. The RBAC model is designed to simplify management of enterprise-specific security policies but is not itself policy-specific. Somewhat surprisingly, it is based on the SeaView formal model for secure databases, which is a unique application of the SeaView research. "An Open Trusted Enterprise Network Architecture" was the title of a presentation by Gary Grossman of Cordant, Inc. Grossman described work done by Cordant and Novell, Inc. on an architecture for enterprise networking at the C2 level. The goal is to support heterogeneous and geographically distributed networks that are built out of components evaluated under the Trusted Network Interpretation of the Orange Book. Brenda Timmerman of the Information Sciences Institute of USC presented "Traffic Flow Confidentiality: Goals and Networking Issues of Traffic Masking Mechanisms." The goal of this work is to mask the frequency, length and origin-destination patterns of communication so that analysis of the network traffic would detect no usable information.
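One simple masking scheme of this general kind (a toy illustration, not necessarily Timmerman's mechanism; the cell size and queue contents are invented) pads all traffic into fixed-size cells and transmits exactly one cell per time slot, sending dummies when no real data is queued:

```python
CELL = 128  # fixed cell payload size in bytes (arbitrary choice)

outgoing = []  # real cells awaiting transmission

def send(payload: bytes) -> None:
    """Split a payload into padded, fixed-size cells and queue them."""
    for i in range(0, len(payload), CELL):
        chunk = payload[i:i + CELL]
        outgoing.append(b"\x01" + chunk.ljust(CELL, b"\x00"))

def tick() -> bytes:
    """One transmission slot: emit a real cell if queued, else a dummy.
    With the link encrypted, an observer sees only a constant-rate
    stream of equal-size cells, masking frequency and message length."""
    if outgoing:
        return outgoing.pop(0)
    return b"\x00" * (CELL + 1)

send(b"short secret message")
slots = [tick() for _ in range(4)]
# every slot has the same observable length, whether loaded or idle
assert all(len(s) == CELL + 1 for s in slots)
```

The cost of such masking is the bandwidth and latency spent on padding and dummy traffic, which is the central engineering trade-off in traffic flow confidentiality.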
The technique proposed is to introduce noise on connections between network nodes in the form of padding and delays.

FORMAL MODELS FOR MULTILEVEL SECURITY

This session was chaired by Peter Ryan. Sylvan Pinsky, of NSA, opened the session by presenting "Absorbing Covers and Intransitive Non-Interference." This paper describes an algorithm for determining if a system satisfies intransitive non-interference. The method is based on analyzing possible execution traces of a system from the point of view of a particular domain to determine which other domains are non-interfering. A listener asked if this method applied only to finite state sequences. The answer was that the algorithm depended on the system having a finite number of states (or else the algorithm wouldn't terminate), but that the execution traces could be infinite. Another listener asked about the relationship between this method of determining non-interference and other methods, such as shared resource matrices. Pinsky said that there is a close relationship between the methods. The next paper presented in the session was "CSP and Determinism in Security Modelling," by A.W. Roscoe of Oxford University. The paper deals with how to express security properties of a system abstractly represented using CSP. In his talk, however, Roscoe focused on one aspect of this problem, which was why it is possible to have a process that is considered secure, but whose refinements are considered insecure. Refinement, Roscoe said, reduces non-determinism in the less abstract model, but this is unacceptable when the non-determinism at the more abstract level is based on probability. If the non-determinism of the system is probabilistic, then the refinement must not change the probability distribution. A questioner asked whether it was sufficient merely to preserve the probability distribution of low-level behavior, because high-level behavior is supposed to be invisible to low-level subjects.
The answer was ``sometimes yes,'' but it may be that an assumed uniform distribution of high-level behavior is what makes the high-level behavior invisible at the lower level, and changing that distribution destroys security. "The Semantics and Expressive Power of the MLR Data Model," presented by Fang Chen of George Mason University, closed the session. Chen described a new secure database model that borrows from earlier models and eliminates data ambiguities and downward information flows that were present in those earlier models. Among the key features of the new model is ``data-borrow integrity,'' which allows for upward information flow yet prevents data ambiguity at the higher level. Chen described how the MLR model can be applied to commercial multilevel database products. A question from the audience asked why uniform key classification was necessary. The answer was that a key must be entirely visible or entirely invisible at a particular access class.

COVERT CHANNEL CONTROL SESSION

This session was chaired by John McLean. The first paper in this session, "A Network Version of The Pump," was presented by Myong Kang of NRL. Myong described extensions to the existing NRL data Pump for a certain MLS network architecture. The "network Pump" is a secure version of a standard store and forward buffer that balances the goals of congestion control, fairness, good performance, and reliability against those of minimizing threats from covert channels and denial of service attacks. The network Pump is a trusted process that forwards traffic from Lows to Highs and only operates between two levels. In the case of Lows sending messages to both Mediums and Highs, two Pumps would be used. The possibility of covert channels with this mechanism is minimal due to the noise from multiple users, which tends to frustrate attempts to derive meaningful information from any communication channels. A comment was made that the network Pump could be effectively used for congestion avoidance.
Myong's reply was yes, that they are looking into using it for flow control. Randy Browne presented the second paper, "Extended Abstract: An Architecture for Covert Channel Control in RealTime Networks and MultiProcessors". He described a Foreground (non-preemptive)/Background (preemptive) realtime system and how to control covert channels within the Foreground portion of it. Two properties are necessary for covert channel control: elastic separability and systolic (phase-delayed) timing of realtime system input-output with non-preemptive scheduling. Elastic separability refers to the stretching/shrinking effects on the run time of a task. Systolic timing means that while hosts are performing their computation functions for a particular frame, all data output to the network is from computation already completed in past frames, and all data input from the network is for computation yet to be done in future frames. He went on to present a systolic elastically separable (SES) realtime network for controlling covert channels in the Foreground component of multilevel realtime networks. All SES realtime networks satisfy a weak confinement of covert channels. To satisfy a strong confinement (and suppression of all covert channels) one would only have to adjust the frame-by-frame timing at external interfaces. During the presentation, life imitated science as Randy switched slides and demonstrated the "elastic separability" of characters, when a mysterious extra `M' appeared on one slide, and a later slide was missing the same letter. (You had to be there.) Andrew Warner of Penn State University presented the third paper, "Version Pool Management in a Multilevel Secure Multiversion Transaction Manager". He described a prototype based upon a multiversion scheduling protocol. The prototype consists of an untrusted server at each security level providing service to untrusted clients.
The multiversion scheduling protocol removes conflicts between transactions (a series of reads and writes followed by either a commit or abort) at different security levels by serializing a transaction before all strictly dominated, active transactions. The goal of the version pool manager is to provide efficient access to block versions. The version pool consists of all versions except those with the greatest write timestamps; its size is controlled by the write rate.

PANEL: "Selling Cubic Zirconia on the Internet: Security for Electronic Commerce"

Moderated by Steve Kent, the panel included Cliff Neuman of Information Sciences Institute, and Bill Powar of Visa International (Steve Crocker of CyberCash was absent from the panel, but later gave a very brief talk). Neuman was the first speaker, providing an overview of the three main types of electronic commerce today: secure presentation -- which simply uses the medium as an ordering vehicle, electronic currency, and credit-debit instruments. If these media are secure, reliable, flexible, scalable, efficient, and unobtrusive, any number of different services might be provided. In particular, NetCheque, a credit-debit instrument, and NetCash, a form of electronic currency, were described. Following this, Powar described the general structure of Visa International and how it can be extended for the electronic commerce environment. It makes use of CMU's NetBill project, which provides a structure for the actual transactions. He emphasized that technology is only one aspect of the infrastructure, and that responding to user needs is critical to market acceptance. There ensued a lively discussion with a variety of questions. How would publishing electronically work, with the inherent ease of duplication? The response was that this is a problem that cannot be solved technically, but only through the same legal methods that currently protect software rights.
How do these methods handle the current lack of distinction between merchant and consumer? Both of the implementations described have provisions for the consumer to also act as merchant, in some capacity. If bandwidth needs to be paid for, how could it be done? The basic price would be supplemented by "shipping costs". What about privacy? Would Visa have access to extensive records of individuals on the network? For the most part, this information is kept between the consumer and his/her bank. Finally, how does Visa evaluate risk in this new venture? Basically this is done by educated guesses followed by adjustments as the situation demands. Steve Crocker, a little later, very briefly discussed the secure internet payment services of CyberCash, including a currently operational credit system. The peer-to-peer cash system is not yet running.

POSTER SESSIONS: Tuesday, May 9

Lee Badger and Daniel Sterne of Trusted Information Systems gave a session entitled "A Demonstration of Practical Domain Type Enforcement for Unix". They demonstrated a prototype of the DTE-enhanced UNIX operating system that they had alluded to in their presentation in the Secure Operating Systems session on Monday. Dieter Gollmann of the Information Security Group called his session "Idealization = Abstraction + Extension". The BAN family of logics does not contain rules pertaining to state information, names, or confidentiality, and the logics are therefore limited with respect to some protocols. Some blame this on inadequacies of the idealization process. This process should be made more precise by including a "rather mechanical" abstraction, and the introduction of new axioms to extend BAN. "Open Network INFOSEC Research and Development" was the title of the presentation by John Norseen of Martin-Marietta Laboratories. He described a series of modular security products they have created for the creation, storage, retrieval, and transmission of encrypted data files. Dr.
Tom Haigh of Secure Computing Corporation presented "Flexible Security in a Prototype Mach System". SCC has developed a Mach-based UNIX system. The system allows great flexibility in security policy by placing policy decisions in an external security server, so that policy enforcement and policy decision are separated. The prototype is scheduled to be released this summer. The session "From Formal Security Specifications to Executable Assertions - A Distributed Systems Study", by Cristina Serban and Bruce McMillin, focused on a method for checking for security policy compliance at run-time using assertions that are embedded in the software. The methods employed are in the family of information flow and trace-based models.

DISCUSSION SESSION

On the second night of the conference a small group got together for a "Birds-of-a-Feather" discussion session led by William Herndon of MITRE. The topic was security in distributed object systems. He presented an overview of the area and discussed how security is becoming an increasingly pressing concern for vendors, users and application builders. At the moment the Object Management Group (OMG) has something like 400 different entities attempting to define standards for object oriented systems. The issues surrounding this area are complex, and the security solutions of the past do not always map well to the object paradigm. Users want to have seamless integration, plug and play capabilities, multiple granularities of control, and the option of application level control. Vendors, on the other hand, want to improve revenue and leverage off of existing security mechanisms where possible. How can these varying desires be reconciled? It soon became apparent that there are a lot more questions in this area than answers. The discussion tended to revolve around what a user's perception of security in a distributed environment is.
Most participants agreed that the concept of system assurance is the hardest to discuss with users, and that there is a need to sensitize them to the TCB.

ANALYSIS OF SECURITY VULNERABILITIES

This session was chaired by George Dinolt. The first paper, "Capacity Estimation and Auditability of Network Covert Channels," was presented by Nemo Newman-Wolfe of the University of Florida. Nemo characterized network covert channels as communication channels that do not manipulate the resources local to a single machine. Rather, they are exploited by the manipulation of communication resources or transmission characteristics. An audience member made the comment that network covert channels are really storage channels and that the transitory nature of a network does not really change anything. Nemo agreed that storage channels are the closest thing to the type of covert channel that he was describing. He also detailed an adaptive scheduling policy used to estimate network covert channel capacity and the results of using a proportional handling policy to minimize that capacity. "Supporting Security Requirements in Multilevel Real-Time Databases," presented by Ravi Mukkamala from Old Dominion University, was the second paper in the session. Ravi, whose expertise is in realtime systems, stated that this was the first time he and his colleagues had done research related to security. To date there has been very little development of multilevel secure DBMSs that also support realtime requirements. The problem becomes how much security one is willing to trade off to meet realtime requirements. He described how the amount of security being satisfied can be quantified by the capacity of exploited covert channels. He expressed a need for a metric for covert channel capacity. A member of the audience pointed him to literature that may help shed light on the problem. The last paper was "The Intel 80x86 Processor Architecture: Pitfalls for Secure Systems" presented by Olin Sibert of Oxford Systems Inc.
Olin talked about his nagging concerns about the state of hardware testing. There appears to be an imbalance in scrutiny for hardware protection mechanisms relative to software. He discussed why this imbalance is increasingly difficult to justify as hardware complexity increases. The project (funded by NSA's TPEP) to identify architectural pitfalls and categorize flaw reports in 80x86 processors is finished, but the work to ensure secure hardware is not done. He expressed the need to perform thorough penetration testing of 80x86 and other micro-processors. Olin and his colleagues received the 1995 IEEE Symposium on Security and Privacy best paper award.

PROTOCOL ANALYSIS

The final session was chaired by Raphael Yahalom. First up in this session was John Millen of the Mitre Corporation, presenting his paper "The Interrogator Model." The Interrogator is a tool for analyzing cryptographic authentication and key distribution protocols. Implemented in Prolog, the Interrogator consists of two parts: a model of communicating ``parties'', which together constitute a composite machine, and a system for solving equations that represent the protocol for how the parties communicate. A listener asked if the Interrogator can be used for protocols without building an understanding of algorithms and other mathematical theory into the system. Millen answered that we don't really reason about the mathematical theory when analyzing protocols but, instead, reason about their security-related properties. Next, Stuart Stubblebine of AT&T Bell Labs presented "Recent-Secure Authentication: Enforcing Revocation in Distributed Systems". Stubblebine described a method of formally specifying and reasoning about a distributed authentication architecture that permits revocation of authentication statements. Communications latency in large distributed systems provides a window of opportunity for authentication failures because revocation cannot occur instantaneously when errors are detected.
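One way to bound that window of opportunity is to trust an authentication statement only for a limited time after it was issued. A minimal sketch (the window value and statement shape here are invented for illustration):

```python
import time
from typing import Optional

FRESHNESS_LIMIT = 300.0  # seconds a statement stays trusted (arbitrary)

def is_fresh(issued_at: float, now: Optional[float] = None) -> bool:
    """Accept an authentication statement only while it is within its
    freshness window. A smaller window bounds the damage from a
    revoked-but-cached statement, at the cost of more frequent
    re-validation of statements that are still valid."""
    if now is None:
        now = time.time()
    return 0.0 <= now - issued_at <= FRESHNESS_LIMIT

print(is_fresh(issued_at=1000.0, now=1200.0))  # True: 200 s old
print(is_fresh(issued_at=1000.0, now=1400.0))  # False: stale at 400 s
```

Tightening or loosening the window is exactly the performance-versus-protection dial: revocation takes effect within one window, so a compromised credential can be abused for at most that long.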
Recent-secure authentication provides for ``freshness constraints'' on statements. Stale (past their freshness limit) statements are suspect. Essentially, a trade-off can be made between performance and protection in the system by choosing appropriate freshness constraints. The final paper presented in the session was "Reasoning about Accountability in Protocols for Electronic Commerce," by Raja Kailar of SecureWare, Inc. Kailar presented a new basis for evaluating communications protocols that is oriented around the ability of an entity to prove accountability. Whereas most protocol analysis methods use ``belief'' as a basis for establishing the correctness of a protocol (e.g., key distribution), in electronic commerce it is essential not only to believe who sent information, but to be able to prove who sent it. A key difference between belief and accountability is that belief may be time-dependent (see the discussion of recent-secure authentication, above), but accountability is essentially time invariant. The question was asked if the accountability framework can also deal with freshness of information. Kailar replied that they had not dealt with this issue, and that their logic needs to be applied with others for a more complete analysis of a protocol.

Mark Heckman
Univ. of California, Davis
heckman@cs.ucdavis.edu

Kirk Bittler and Leslie Young
Portland State University
{bittlerk | leslie}@cs.pdx.edu

________________________________________________________________________

Common Criteria Commenters' Workshop
by Dixie Baker, Aerospace

________________________________________________________________________

The international Common Criteria Editorial Board (CCEB) sponsored a two-day workshop for reviewers of version 0.9 of the Common Criteria for Information Technology Security Evaluation (CC) 11-12 May, in Ottawa, Canada.
Approximately 40 people from Europe, Canada, the U.S., and Japan participated in the workshop, the stated purposes of which were:

o To allow the CCEB to provide general information on the comments received and the planned changes to the document based on these comments; and
o To allow the CCEB to receive added clarifications on the reviewers' comments on the document so that they can best update the document to reflect the expert opinions.

The workshop began and ended with plenary sessions, with work group break-out sessions filling the remainder of the time. During the opening plenary, Dietra Kimpton, Manager of INFOSEC Standards and Initiatives for the Director General INFOSEC, Canadian Communications Security Establishment (CSE), summarized the review of Version 0.9. Over 6000 comments were received from 111 reviewers. The following global issues were identified from the comments:

o Document organisation: The organisation needs to be improved to increase understandability and usefulness.
o Complexity of the general model: Decomposition of requirements and construction of a prescriptive Protection Profile (PP) are complex.
o Extensibility of Requirements: The CC claims to be "extensible," but no process for extending it by adding functional requirements is provided.
o Extensibility of CC: The distinction between requirements for a Target of Evaluation (TOE) and requirements for a PP is unclear.
o Effectiveness: The principles of the European Information Technology Security Evaluation Criteria (ITSEC) are not clearly observable.
o Strength of mechanism: Support for this concept needs to be clarified.
o Protection Profile/Security Target: The relationship between a PP and a Security Target (ST) is unclear.
o Glossary and terminology: Definitions are incomplete or inconsistent.
o Security objectives/threats/countermeasures: The mapping among these is not clear.
o Protection Profile/Security Target: The process for selecting requirements for a PP or ST is not clear.
o Evaluation results: This concept is not well understood.
o Dependencies and binding: Dependencies are incompletely and incorrectly specified.
o Interpretations of requirements: Many requirements are subjective.

Following this opening plenary, eleven technical discussion sessions were conducted, after which the recorder for each session reported the results. The session on "Source Criteria Concepts and Paradigm" was moderated by Steve LaFountain (US National Security Agency (NSA)) and reported by Carol Moore (NSA). During this session, Chris Ketley (UK Communications Electronics Security Group, CESG), Aaron Cohen (CSE), John Arnold (Admiral Commercial Licensed Evaluation Facility (CLEF), UK), Grant Wagner (NSA), and Andrew Robison (BDS, Canada) spoke on the fundamental concepts of the CC and their origin in source documents. Wagner noted that, although the CC does not mandate a demonstrable Reference Monitor (RM) or Trusted Computing Base (TCB), it does include components from which requirements for a TCB or RM can be constructed, and he suggested that requirements packages for a TCB and RM be added. Robison discussed the roles played by subjects, objects, processes, and RMs in the U.S. Trusted Computer System Evaluation Criteria (TCSEC), the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), and the CC. The group concluded that: (1) more thought should be given to how to capture the ITSEC concepts of security enforcing, security relevant, and security irrelevant; and (2) a linkage between architecture and assurance should be added. The "Component Structure and Content" session was led by Yvon Klein (SCSSI, France) and reported by Maria King (Consultant, US). Components are the atomic building blocks of the CC, and components are composed of elements. This session concluded that if component dependencies are included, they should be accurate and complete (i.e., "normative"); otherwise they should be noted for information only (i.e., "advisory").
With respect to flexibility, because elements do not stand alone, but are grouped into components, PPs will contain some superfluous elements. The group discussed whether users of the CC need to use individual elements, and if so, whether the elements should be changed. They also discussed whether a controlled way to modify the CC is needed, and they concluded that the CC needs to be extensible in three ways: (1) to fix bugs carried over from source material (e.g., Trusted Path equivalent to Trusted Connection for distributed systems); (2) to fix components discovered to be incorrect, after experience with the CC; and (3) to add new components derived from lessons learned and new threats. The group also speculated on who would develop PPs and how, and concluded that either PPs would be written only by experienced experts or by anyone, with experts evaluating them. One industry concern expressed was the possibility of developing PPs that would not be approved by the experts. The session on "CC Organisation" was led by Ulrich van Essen (German Information Security Agency, GISA) and Murray Donaldson (CESG), and reported by Julian Straw (SISL CLEF, UK). One of the primary issues discussed was the reviewers' comment that the CC is too big and too complex. The group recommended that all of the material contained in Version 0.9 be retained, but that it be reorganized, perhaps to be compatible with the ISO structure. They noted that the rationale seems to have been lost and needs to be resurrected. Donaldson described one proposal ("CC Lite") received for restructuring the CC. The group concluded that the CC should contain a small core set of criteria to be used by PP builders, that it should concentrate on how to evaluate criteria, and that it should contain supplemental materials for other users. The session on "Applicability of the CC to the Current Market Place" was led by Steve LaFountain and reported by Dixie Baker (Aerospace, US).
Clark Weissman (Consultant, US) presented a briefing that discussed issues relating to networks and distributed systems, which concluded that the CC does not address these types of systems very well. In particular, Weissman discussed a number of issues relating to timing and asynchronous states in distributed systems because of delays, errors, and component failures in these message-passing systems: indeterminacy, access revocation, backup and recovery, cryptographic key management, and linking of audit events. Baker concluded that the CC does a fairly good job of specifying requirements for stand-alone systems, but does not address networks or cryptographic integration well. She also noted that it does not address the need to specify the trusted application interface. The group concluded that the following issues need to be addressed in the CC: (1) access revocation; (2) synchronization of events; (3) specification of the trusted application interface; (4) cryptographic support and integration; and (5) device identification and authentication. The session on "Assurance Levels and Flexibility" was led by Mario Tinto and reported by Carol Moore. The group concluded that the notion of relative "goodness" of the assurance levels is absent and needs to be added. The attributes of the seven assurance levels need to be clarified, as well as how the levels are used and interpreted. They also noted that not all levels are defined, that the requirements are not easily understood, and that the requirements are not always consistent. They suggested that a partial ordering should be implemented, but that substitutions should not be allowed since there is no technology for establishing equivalence. The group expressed concern regarding how to prevent the development of unreasonable assurance packages; for example, an A7 (i.e., highest-assurance) firewall that is inherently weak functionally, but highly assured. Assurance cannot be evidence-driven without regard to the functionality provided.
The group noted that when systems are composed of components at different assurance levels, no mechanism exists for expressing the composite level of assurance (e.g., it contains no composition rules such as those contained in the Trusted Network Interpretation). The session on "Protection Profile Experiences" was led by Gene Troy (US National Institute of Standards and Technology (NIST)) and Chris Ketley (CESG), and reported by Ron Ross (Institute for Defense Analyses (IDA), US). During this session, Santosh Chokhani (Consultant, US) reported his experience in developing a PP for a trusted CMW-like workstation; Bernard Roussely (North Atlantic Treaty Organization (NATO)) reported on his development of a Firewall PP; and Ken Elliott (Aerospace, US) reported on his development of a B2 PP. The group noted that of the PPs that have been developed to date, some have reflected a "conservative" philosophy, in which the PP developer "played by the rules" and used only components contained in the CC and as they were written, while others represented a "liberal" philosophy, in which the developers edited out unnecessary elements from the components. (The basic paradigm for PPs is that they may contain only components that are contained in the CC and that are not modified. Specifications that create new components or modify the CC components are STs, not PPs.) The group concluded that: (1) building PPs is not easy; (2) PP developers need flexibility; (3) the building blocks contained in the CC are too constrained, resulting in overspecification or underspecification; (4) no process is in place for evaluating PPs; and (5) a "user friendly" executive summary for PPs is needed. The session on "Testing and Other Assurance Practices" was led by Mario Tinto (NSA) and Murray Donaldson, and reported by Julian Straw.
The group addressed the questions of whether the assurance packages are complete, feasible, and adequate with respect to testing; and they answered "no" to all three questions. They concluded that the test criteria need to be revised with respect to structure, content, test ordering, and module testing. The CC does not adequately address penetration testing, or the use of existing standard tests and known vulnerabilities. They concluded that the developer must bear the burden for identifying flaws. The session on "PP/ST Concepts and Relationships" was led by Chris Ketley and Gene Troy, and reported by Carol Moore. This group addressed the questions of whether the PP and ST concepts are clear and useful, and whether the relationship between them is clear and useful. They concluded that the differentiation between the PP and ST is difficult. Karin Taylor (CSE) reported her experiences in constructing a Firewall ST, and she concluded that a vendor would have difficulty constructing an ST without expert guidance. She also noted that where the team deemed dependencies to be incorrect or incomplete, technical rationale was provided in the form of an appendix to the ST. Nick Bleech (EDS CLEF, UK) proposed a method for constructing PPs. The group concluded that guidance on the development of both PPs and STs is needed and that the dependencies included in Part 2 may be overspecified. The session on "The Role of Threats in the CC" was led by Dave Ferraiolo (NIST) and Yvon Klein (SCSSI, France), and reported by Chris Ketley. David Ferraiolo, Marvin Schaefer (ARCA, US), and Nick Bleech gave presentations. The CC starts with the threat as the component selection criterion, but threats are not well expressed in the CC components. The group concluded that threats may not be the best starting point and that policy objectives may be better.
The session on "Development and Refinement of ADV" (where ADV is Developmental Assurance in the CC) was led by Mario Tinto and Murray Donaldson, and reported by Andrew Robison. Tinto began by presenting models of the development and security life cycles. The group expressed no strong feeling as to whether any changes are needed to the ADV criteria other than to clarify or eliminate ambiguities. They noted that the development life cycle that Tinto presented is more applicable to high-assurance systems than to low-assurance systems. John Arnold judged ADV_INT to be subjective (what is a "module"?) and noted that ADV_TIS is not in the ITSEC. The group concluded that "module" needs to be more explicitly defined. Donaldson questioned whether all internal consistency checks and effectiveness checks have been done. Clark Weissman presented a chart showing that the correspondence relationships among the CC evidence generated during development are an imperfect mathematical model of implication. A perfect evidence chain would show that the TOE's implementation implies the policy model (TSP). This figure was offered as a replacement for all of the "justification" sections of the CC components. The session on "Backward Compatibility to Source Criteria" was led by Ulrich von Essen and Aaron Cohen, and reported by Bill Shockley (Consultant). The group concluded that the high-level goal of being backward compatible with the TCSEC, CTCPEC, and ITSEC had been achieved in general. However, they were not sure it was worth the effort. The disassociation of assurance levels and functional components is difficult to grasp conceptually. Yvon Klein led the closing plenary session, at which a number of questions were debated by the full group of workshop participants. These questions evoked several lively debates. Question: Is assurance level AL1 already too high for commercial systems?
Response: Murray Donaldson pointed out that the goal was to ensure harmonization with existing criteria and not to produce a vendor-declaration level of assurance, like the X/Open "brand," where a vendor signs a trademark license certifying that its product meets brand standards of functionality and assurance. Deficiencies result in loss of the trademark. However, the CC is designed to be extended, so a vendor-declaration level of assurance (AL 0.5, perhaps?) could be added (but not in the next version). Gene Troy noted that NIST has taken the position that evaluations need not necessarily be done by a "third party" so long as they are "independent." Question: Why isn't the Federal Criteria's "augmentation" concept in the CC? Response: The Federal Criteria's concept of augmentation allows one to augment a component at level x with an element taken from level x+1. Dave Ferraiolo explained that they had opted for components, not elements, as the atomic level of granularity, with functional dependencies instead. Moving elements among components would change dependencies. This prompted much discussion of the dependencies, and the group concluded that the dependencies are useful but "broken": they need to be fixed before the next release, and they need to be kept normative, not advisory. No decision was reached as to whether "augmentation" would be added. Question: What is the technical soundness of "extensibility"? Response: The concept of "extensibility" was a "hot topic" throughout the workshop. The extensibility issue deals with questions relating to the levels of security engineering and science that would allow adding and subtracting components while maintaining consistency and assurance. During this discussion, Chris Ketley noted that a distinction must be made between adding requirements to an ST and extending the CC itself. He noted that the ITSEC and its scheme have demonstrated that extensibility works.
However, many people attending the workshop disagreed with him on both counts. Mario Tinto, Bill Shockley, and others challenged anyone to point to any papers showing that assurances are applicable regardless of the set of functional requirements, and Tinto astutely noted that "You don't get to violate physics because you have a procedure." Nick Bleech and several other participants also disagreed with the notion that one can "extend" PPs, STs, and/or the CC in an ad hoc manner. Question: If a TOE is being evaluated against an ST only (rather than against an ST based upon a PP), the fitness-for-use analysis must establish a comparison between the threats to be covered and the threats actually covered. Where is this addressed in the CC? Response: Mario Tinto responded that the Top-Level Security Objectives (TLSO) for an ST specify the objectives. The questioner then pointed out that since the threats to be countered and the threats covered are both in the ST, a circular dependency is created and the PP evaluation process can be easily bypassed. The CC writers will address this concern. Question: What is the relationship among PP "vetting," an ST, and the evaluation of a TOE? Response: Gene Troy noted that the CCEB has proposed to ISO that they standardize the registry of PPs and the PP evaluation process. PP evaluation and registration has nothing to do with ST evaluation. An ST is evaluated against a PP, and a TOE is evaluated against the ST. If the TOE changes during an evaluation, the ST must be updated, so that the two always map to each other. Likewise, if an ST claims compliance with a PP, that compliance must be maintained. Troy noted that applying the term "evaluation" to PPs, STs, and TOEs creates confusion; they may want to resurrect the term "vetting." Evaluation is a technical analysis, whereas vetting is a social process of trade-offs among effectiveness, value, cost, and usefulness. Question: The size of the document is not an issue; the presentation is.
Wouldn't converting the document to hypertext solve this problem? Response: Converting to hypertext won't solve the basic problem, which is that the document contains much repetition and is hard to follow and difficult to navigate. The CCEB is aware of these problems and of the need for a "roadmap." Grant Wagner appealed to the CCEB not to take out material, but to retain the historical context and source documentation to facilitate understanding of the CC. Perhaps the material needs to be reorganized, but it should all be retained. Dietra Kimpton noted that the sponsor representatives want to remove some of the background material. Summary This was a very productive two-day workshop whose participants clearly came to participate and to make worthwhile contributions. They expressed some concerns, but it appears that the CC is on the right track. The next release of the CC, version 1.0, is targeted for release in January of 1996. Restructuring of the CC based on the comments is scheduled for the fall of 1995, and the results will be presented at the meeting associated with the 1995 National Computer Security Conference. ________________________________________________________________________ Report on the ISOC Symposium on Security and Privacy San Diego, February, 1995 by Richard F. Graveman, Bellcore ________________________________________________________________________ In February, 1993, the Privacy and Security Research Group of the Internet Research Task Force sponsored an Internet Security Symposium. It was successful enough, thanks largely to the organizing efforts of Dan Nessett, that in 1994 the Internet Society (ISOC) assumed the role of sponsor. After the Winter of 1993-1994 in the Northeast, attendance became mandatory. Two emerging, important topics discussed at the 1995 Symposium were (1) secure protocols for the network infrastructure and (2) electronic commerce on the Net.
Other sessions covered authentication, network monitoring, and secure object retrieval. Some of the 1995 highlights were: (1) security for routing protocols became a major topic for the first time; progress in securing both distance vector (RIP) and link state (OSPF) intradomain protocols was reported; (2) several ventures for electronic payments continued to be debated, and the debate itself seems now to have moved from research to the business world; (3) the advantages and practicality of adding public key methods to Kerberos-like systems became clearer for a range of reasons: inter-realm trust, client and server signatures, less trust required in on-line servers, and convenience of signed authorization certificates; (4) a panel discussion covered different aspects of MOSAIC and WWW security. General Chair Jim Ellis announced that attendance was about 350. A show of hands revealed that about 80% of the attendees heard about the conference from the Net; about 20% from paper mailings; hardly any from [expensive] journal advertisements. Program Chairs Dave Balenson and Rob Shirey reported that submissions were up to 37, from 19 and 23 at the two previous symposia. The remainder of this report covers the symposium session by session. It is written from notes the author took during the symposium and may not be consistent with all of the papers in the Proceedings. Session 1 was named Diverse Approaches to Security at the Network Layer. Tony Ballardie presented the first paper, titled Multicast-Specific Security Threats and Counter-Measures, in which he addressed eavesdropping and denial of service (e.g., flooding the video bandwidth). His proposed improvements are group access controls (to restrict group membership and defend against unauthorized listening) and transit traffic control (to detect unauthorized sending); these use a public key based infrastructure and a linked set of multicast authorization servers.
Multicast group certificates are specified by one group member and signed by an authorization server; they contain group names, an inclusion or exclusion list, and a sender list. Daniel Stevenson then described the complexities of providing cryptographic protection for high-speed ATM running over 622 megabit per second SONET. Their prototype, described in the February, 1995, Communications of the ACM, addresses privacy through cell encryption, limited access controls, transparency with respect to ATM standards, support for large numbers of active virtual circuits per crypto operation, sufficient call capacity, and handling of different call types. They use DES, 3-DES, and a public key infrastructure. Performance numbers included four-second call setup and 30 calls per second with 3-DES. Session 1 concluded with IpAccess: An Internet Service Access System for Firewall Installations by Steffen Stempel. This work focused on securing outbound TCP connections through a firewall, e.g., with SOCKS or with their system, IpAccess. They implemented both application layer forwarding (e.g., proxy servers) and network layer forwarding (modifying clients and forwarding at the TCP layer). The client says "ipacc service" and, given permission, receives a contained environment. Logging, ACLs, quotas, and timeouts are provided. Information is available from http://avalon.ira.uka.de/eiss. Session 2 consisted of a panel discussion on Security Architecture for the Internet Infrastructure chaired by Rob Shirey, who opened by stating that a lot of piecemeal security exists at the edges of the Internet, but routing, directory (DNS), and network management (SNMP) are for the most part not secured today. Next, Paul Lambert spoke about Security for the Internet Protocol (IPv4) and IP Next Generation (IPv6) as protection against active and passive attacks. IPv6 includes encapsulation and authentication protocols.
Parsing at intermediate routers is needed, so the authentication header is transparent, while the encapsulation protocol is opaque. Also, 64 bit alignment is an IPv6 requirement. The ipsp Working Group is also specifying IPv4 security. All applications require key management, which will likely include Diffie-Hellman based distribution of session keys, anti-clogging protection, multicast support, and mechanism negotiation. Network layer security must work with firewalls; multiple encapsulation will sometimes be desirable; naming and public key infrastructure are currently unresolved. Jim Galvin covered Security for the Internet Domain Name System. DNS is a tree structured, distributed data base designed to rely on peer trust. For example, responses can contain helpful pointers, which may be bogus, or they may have irrelevant "cache poisoning" information. Also, DNS can flood an innocent host. The high-level goals of DNS security are verifiable bindings without loss of robustness. The IETF dnssec Working Group's goals were: (1) interoperation with unsecured clients (no flag day cutovers); (2) support for at least internal users when partitioned; (3) operation without additional queries; (4) no protection of DNS data from disclosure (no DES, no export controls). The choice was RSA digital signatures to bind names to resources and also to provide a method to distribute the public keys (along with, say, A records). The Eastlake and Kaufman proposal included a key management discussion and only two new resource records with embedded algorithm identification. Authoritative "does not exist" records can also be returned. A new version of bind (4.8.3) contains some of these enhancements. Galvin concluded with a status report on SNMPv2 revisions. Gary Malkin gave the first of two presentations on Security of Routing Protocols in the Internet. Within an autonomous system, one or more intradomain routing protocols may run; between them, at least one exterior protocol runs. 
Border routers must run interior and exterior protocols. Interior protocols are RIP (the oldest, RFC 1058), RIP-2 (RFC 1723, now a Draft Standard), OSPF (RFC 1583), and IGRP (Cisco proprietary). Exterior protocols are BGP-4 (RFC 1654) and IDRP, its replacement. Simple security may be added as a cleartext password, while cryptographic checksums can additionally ensure authentication and integrity. Replay is avoided by using timestamps and sequence numbers. Encryption is not a generally proposed solution. RIP has no security; RIP-2 has cleartext passwords, but cryptographic MD5-based checksums are being defined. The same choices are being defined for OSPF. Sandra L. Murphy continued the discussion of securing routing protocols. Today, routing messages are accepted as received. Unauthorized access to links, erroneous router configurations, or a subverted router are problems that cannot be completely solved by IP security. Major outages have been due to router failures. Distance vector protocols do not know why a path exists, so distance vector security may need nested signatures, based on the number of faulty routers. These may ensure a correct decision, but not the best decision (completeness is not guaranteed). For link state protocols, signatures at original sources simplify operations, so they chose to work on OSPF. Digital signatures will protect link state advertisements. Routes must be advertised by both sides (a feature already in OSPF, which is good for stability as well as security). They use digital signatures or keyed MD5 to protect neighbor-to-neighbor protocol exchanges and advertisement flooding to distribute public keys. Trust in area border routers and aging are still problems. Session 3 consisted of two talks on the security of objects independent of where they are stored. Avi Rubin described Betsi, a system to protect software distributed by FTP, WWW, newsgroups, etc.
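The keyed-MD5 authentication being defined for RIP-2 and OSPF, mentioned above, transmits a hash of the routing message concatenated with a shared secret rather than the secret itself; a neighbor who knows the secret recomputes the hash to check authenticity and integrity. A minimal sketch of the idea in Python (the fixed 16-byte key field and zero-padding are illustrative assumptions, not the exact wire format):

```python
import hashlib

def keyed_md5(message: bytes, secret: bytes) -> bytes:
    # Append the shared secret (padded to a fixed 16-byte field, an
    # illustrative assumption) to the message and hash the result.
    key_field = secret.ljust(16, b"\x00")[:16]
    return hashlib.md5(message + key_field).digest()

def verify(message: bytes, secret: bytes, digest: bytes) -> bool:
    # The receiver recomputes the keyed hash and compares.
    return keyed_md5(message, secret) == digest

# A router tags an update; a neighbor sharing the secret accepts it,
# while a tampered copy fails the check.
update = b"route: 192.0.2.0/24 metric 2"
tag = keyed_md5(update, b"s3cret")
assert verify(update, b"s3cret", tag)
assert not verify(b"route: 192.0.2.0/24 metric 1", b"s3cret", tag)
```

Because the secret never crosses the wire, a passive listener cannot forge updates; replay still has to be stopped separately with the timestamps and sequence numbers mentioned above.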
Kerberos, PGP, tripwire, SOCKS, and so forth are distributed through third party redistribution sites with no guarantees. Release updates and patches are also distributed over the Net from vendors. Betsi has three components: authors, users, and a trusted third party. Users trust Betsi and authors, but they do not have to trust the distribution site. MD5 and PGP are the cryptographic primitives. Look at http://info.bellcore.com/BETSI/betsi.html for more information. John Lowry reported on a one-year-old ARPA project to investigate concepts, services, tools, and prototypes for object security. Confidentiality, authentication, non-repudiation, and access control (e.g., copyright) were considered, as were syntax and encoding. A UNIX-based prototype consisting of a toolkit called ESTS was built, and a third party timestamp and signature service was added. The various components hold, for example, signatures, certificates, and annotations. PEM provides the trust model. ESTS's services include registration, validation, non-repudiation, proof of submission, status inquiry, cancellation, and information inquiry. The ASN.1 compiler and all reports are available from ftp.bbn.com. The service resides at ests@ests.bbn.com. Session 4 on Internet Payments consisted of a paper titled Electronic Cash on the Internet by Stefan Brands of CWI followed by a panel discussion. Brands' model consists of banks, service providers, and customers, each with a computing device, e.g., a smart card; users also have a PC. A tamper resistant card keeps track of a user's balance during withdrawal and payment protocols. The system goals are off-line operation, privacy, prior restraint on double spending, security in the face of broken cards, currency conversion, low transmission cost, and efficient computation. The PC provides a user interface, computational engine, and network connection. Each payment uses a new key pair. 
At withdrawal time, a user downloads a key pair and monetary limit, but the user does not get the private key (only the tamper resistant smart card gets it). Signatures are blinded upon spending, but double spending reveals the identity of the party who cheated. The service provider needs no tamper resistant technology. The payment data structure is 125.5 bytes long; withdrawals require one small modular multiplication on-line for the bank and user; payments do not have to compute on-line, so they can easily be attached to e-mail. Ravi Ganesan cited the diversity of activity in this field and introduced the panel, which consisted of Einar Stefferud, Cliff Neuman, and Dave Crocker. Stefferud spoke about First Virtual, which is working with EDS, First USA Bank, Dallas, and many other business partners. Their goals are providing information commercially on the Internet (anyone can be a buyer or a seller), security without cryptography, no requirement for special software, something that works with WWW, FTP, and e-mail, some degree of privacy, low cost ($0.29 + 2% per transaction), and immediate availability. Everything is MIME-based. Sellers receive orders, may validate account numbers, and if transactions are approved, ask buyers for consent to be billed. A buyer's answer is yes, no, or fraud. Sellers bear the risk of non-payment. Buyers who see the information and say "no" too often may have their accounts revoked. First Virtual's stated reasons for not using cryptography are that it lacks an installed base, user education, sufficient scalability, and protections like revocation. Mail to info@fv.com for more information. Neuman described NetCheque and NetCash, which are layered on top of Kerberos, restricted proxies, and a network information service model. These need to be secure, flexible (not necessarily anonymous, since businesses may want a record), and scalable (without bottlenecks and single points of failure). They must be cheap, efficient, and unobtrusive. 
No prior agreement is needed between end parties, multiple models of payment are supported, replicated servers run a clearinghouse operation, multiple currencies may be used, and tight controls exist at interfaces to financial systems. NetCheque was released December 1, 1994. Integration with MOSAIC is in preparation. NetCash (with anonymity) will be available mid-1995. Mail to NetCheque@isi.edu for additional information. D. Crocker is working on EDI on the Internet. Businesses are now on-line, access costs are down, and the Internet has global reach. Performance, reliability, unfamiliarity, and security are concerns. Authentication, data integrity, access control, privacy, non-repudiation, and accountability are the necessary security properties. EDI is structured communication of various types: orders, invoices, shipments, purchases, and authorizations. It is governed by a contract called the trading partner agreement. EDI over e-mail (MIME) is their approach (viewing MIME as an object exchange standard). Security could either be MIME-based or separately defined. A trivial specification now exists; the security is obtained through PEM or PGP. Seventy companies are participating in CommerceNet and are potential users. Session 5 consisted of three papers on security monitoring tools. The first was given by David Simmons on NERD: Network Event Recording Device: An Automated System for Network Anomaly Detection and Notification. NERD is a network based logging system that uses the standard UNIX syslogd and also shadows the log files. Nerdlogd provides notification for network events, a real-time video display to identify events as they are happening, public address, pager, and e-mail interfaces, and authorization controls on logging by hosts according to notification services used. Host IP addresses are verified with reverse handshakes. Standard syslog priorities are incorporated. Command line, shell, and C library interfaces are provided. 
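NERD's "reverse handshakes" for verifying host IP addresses are not detailed in the summary above; one common realization of the idea is a double DNS lookup, in which the claimed address is mapped to a hostname and that hostname is then resolved forward to confirm it maps back to the same address. A sketch of that check (the DNS-based interpretation is an assumption; the resolver functions are injectable so the logic can be exercised without a live network):

```python
import socket

def verify_source(ip, reverse=None, forward=None):
    """Accept an address only if reverse and forward lookups agree."""
    # Default to real DNS; callers (or tests) may inject fake resolvers.
    reverse = reverse or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward = forward or (lambda host: socket.gethostbyname_ex(host)[2])
    try:
        name = reverse(ip)        # IP -> claimed hostname (PTR-style)
        addrs = forward(name)     # hostname -> list of addresses
    except OSError:
        return False              # unresolvable sources fail closed

    return ip in addrs

# A consistent mapping passes; a spoofed reverse record that points at a
# trusted name with no matching forward record is rejected.
assert verify_source("10.0.0.5",
                     reverse=lambda a: "host.example.gov",
                     forward=lambda h: ["10.0.0.5"])
assert not verify_source("10.9.9.9",
                         reverse=lambda a: "trusted.example.gov",
                         forward=lambda h: ["10.0.0.5"])
```

The check raises the bar against simple address spoofing in log entries, though an attacker who controls both the reverse and forward zones can still defeat it.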
More information can be obtained from nerd@nerd.lanl.gov. Next, Jim Alves-Foss gave An Overview of SNIF: A Tool for Surveying Network Information Flow. Maintaining individual privacy, organizational secrets, data integrity, and system availability are their security challenges. Users want to set up Web sites on desktops without firewalls. They chose to monitor TCP between local and remote hosts. They can flag certain remote sites for immediate notification. A GUI analysis tool provides real-time feedback on events, and the system is scalable, distributed, extensible, and fault tolerant. For now, SNIF keeps TCP start and stop times and byte counts in each direction. Alarms and signals have four priority levels. Their future directions are expert systems, database monitoring, an enhanced user interface, additional protocol support, large networks (thousands of nodes), and hooks to other tools. Baudouin Le Charlier then presented Distributed Audit Trail Analysis. Getting an accurate picture of events requires considering what is happening on multiple systems, but audit trails are generated on single hosts. They wanted a single point of administration, the ability to do local or global analysis, high availability, flexibility, and scalability in a heterogeneous environment. A host-level architecture controls formats and granularity. A global architecture filters and evaluates what is collected. Communication is by RPC. Analysis is specified in a rule-based, condition-action, bottom-up language without backtracking. They hope to integrate Kuang, standard protocols (TCP/IP, RPC), and packet filtering firewalls. Session 6 contained three talks on authentication and authorization. Piers McMahon updated us on SESAME, Version 2. SESAME is a collaboration among ICL, SNI, and Bull, with 50% EC funding. It extends Kerberos with public key techniques. Low maintenance, efficiency, scalability, support for authorization, and backward compatibility with Kerberos are the design goals.
ECMA TR/46, ECMA-138, ECMA-209, and ECMA-219 describe the system. Mail to helpdesk@ecma.ch for information about these documents. SESAME provides client-server and peer-to-peer authentication and authorization. User data is (optionally) protected during transport; authorization is privilege based. Privilege data (PACs) are transmitted securely using digital signatures. Version 2 contains public key based distribution of inter-realm session keys (intra-realm works as in Kerberos), separate authentication and ticket verification keys, separate integrity and confidentiality keys, and secure transmission of PACs. Applications may access SESAME through the GSS-API, and different cryptographic algorithms may be installed. Ticket granting tickets and PACs are matched up by an identifier included in each. Signatures protect PACs against modification. Delegates must know a secret value to use PACs, and other methods prevent replay or unauthorized use of credentials. Version 2 is completed and tested, in trial, and issued on a CD-ROM. Information is available from ftp.enst.fr:/pub/sesame or ftp.esat.kuleuven.be:/pub/COSIC/vdwauver/sesame. Ravi Ganesan then described Yaksha: Augmenting Kerberos with Public Key Cryptography. This system uses RSA, joint signatures, and Kerberos. Boyd invented a joint signature scheme whereby two parties share pieces of a secret encryption exponent. Yacobi and Ganesan reduced breaking this scheme to breaking RSA. A user has one piece of the private key, and an on-line server has the other piece; neither can spoof the other, and signatures, once created, are indistinguishable from any normal RSA signature. Revocation can be immediate and real-time audit is available, but the server must be on-line. The protocol is identical in structure and number of rounds to Kerberos. The client and authentication server jointly sign a temporary certificate in the first exchange. The Yaksha server, therefore, explicitly authenticates the client, unlike in Kerberos.
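The shared-exponent idea behind the Boyd/Yaksha joint signatures can be illustrated with toy RSA numbers. In this sketch (the parameters are far too small to be secure, and the multiplicative split of the private exponent is one simple way to realize the sharing, assumed here for illustration), neither party alone holds the full exponent d, yet their two partial exponentiations compose into an ordinary RSA signature:

```python
# Toy RSA parameters (illustrative only, not secure).
p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # full private exponent (never held by one party)

# Split d multiplicatively: d_user * d_server = d (mod phi).
d_user = 7                                   # user's share, coprime to phi
d_server = (d * pow(d_user, -1, phi)) % phi  # on-line server's share

def user_half_sign(m: int) -> int:
    # The user raises the message to its share of the exponent.
    return pow(m, d_user, n)

def server_complete(partial: int) -> int:
    # The server finishes the signature with its share.
    return pow(partial, d_server, n)

m = 65
sig = server_complete(user_half_sign(m))
assert sig == pow(m, d, n)    # joint result equals an ordinary RSA signature
assert pow(sig, e, n) == m    # and it verifies with the public key alone
```

A verifier sees only (sig, e, n), so joint signatures are indistinguishable from normal RSA signatures, matching the property reported above; the server's mandatory participation in every signature is what makes immediate revocation and real-time audit possible.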
The second exchange gets specific tickets by using Yaksha's component of the server's key. No single party can spoof the others. A separate signature protocol is provided. Barry Jaspan presented GSS-API Security for ONC RPC. Kerberos was the obvious way to add security to ONC RPC (a previous secure ONC RPC based on 192 bit discrete logs had been broken). XDR is the data encoding standard used, and authenticated headers find the right remote procedure. They added a data integrity wrapper for the arguments that calls GSS_SEAL. Sequence numbers in the headers prevent replay. Clients must establish a context, at which time servers recognize the security tokens. Headers themselves are not protected, sequence numbers are single-threaded, and servers have to maintain security context state. This will evolve into a commercial product. Session 7 returned to the topic of certification infrastructures with three talks. First, Alan Davidson described CMS, a certification infrastructure being built for the Swedish Post Office. CMS is an autonomous certificate management system for X.509 certificates as described in RFC 1422 (for use in PEM), secure EDI, WWW, and Kerberos interdomain authentication. It handles generation, storage, distribution, and verification. CMS is intended for large organizations like national health care or the post office. CMS is the naming authority, certificate repository, and root of the hierarchy. A PCA defines the policy, certifies CAs, and keeps track of all CRLs. UCAs are the CAs that certify users and act as certificate repositories. A version 2.0 beta exists; it is not generally available, but will likely become a commercial product. They are now running a low-level assurance policy with CMS user agents interfaced to PEM. Ali Bahreman presented PEMToolKit, a way to build a top-down certification hierarchy for PEM from the bottom up. 
He claimed that a bottom-up approach is easier, since infrastructure, user convenience, politics (e.g., patents and export controls), costs, and timeliness have been stumbling blocks for the top-down approach. Trust models vary from hierarchical, as in PEM or X.509, to a web of trust, as in PGP. PEM enforces many restrictions, but PGP certificates can follow these rules as well. A bottom-up approach can start with self-signed certificates. Later, organizational CAs may emerge, and larger islands of certification will exist. Cross certification can occur between arbitrary CAs. Certificate retrieval may be on-line or off-line. Names may have to be mapped between different systems. PEMToolKit is a prototype that allows bottom-up construction of a hierarchy, islands of certification, and a flexible trust model. E-mail and FTP based certificate retrieval are supported, and a menu based interface exists. It is based on TIS/PEM 6.1, MH 6.7, pemail 1.0, and Metamail 2.7. It runs on SunOS 4.1 and is found at ftp.bellcore.com in /pub/ali/PEMToolKit/PEMToolKit.tar or ftp://ftp.bellcore.com/pub/ali/PEMToolKit/home.html. Michael Roe presented a paper by Suzan Mendes and Christian Huitema titled A New Approach to the X.509 Framework: Allowing a Global Authentication Infrastructure without a Global Trust Model. This work came out of the PASSWORD Project, which implemented X.509 and X.400. X.509 had certain deficiencies, for instance, in specifying attributes as well as names. Policy and a defined authorization to sign certificates are needed, but overloading names is a poor solution. CA administrators are not directory administrators, and one CA cannot serve multiple organizations (it takes on the name of one of them). It is much better to encode information explicitly in the certificate, as in X.509 Version 3. Different infrastructures (PEM, ISO, PGP) must be supported. An entity's role as user, CA, etc., the relationship between issuer and subject, and a CA's jurisdiction need to be defined.
Policy should specify quality of assurance, frequency of update, and intended use of the certificate (because of liability issues). Explicitness enhances security, understandability, management, and verification. Session 8 was a panel discussion on security issues for MOSAIC and the World Wide Web chaired by Fred Avolio. The panelists were Peter Churchyard, Allan M. Schiffman, and Greg Bergren. Avolio remarked that at last year's conference the firewalls panel coincided with a CERT alert on firewalls; this year, a Web server alert came out the same week as the conference. He stated that the Web or MOSAIC is a perfect opportunity for almost anyone to infect someone else's machine. Bergren described HTTP: a stateless, distributed hypermedia information system with data representation negotiation. Transactions set up connections and tear them down for each request and response pair. Security considerations include authentication of requests, authentication of servers, privacy of requests and responses, abuse of servers, server bugs, unwitting actions on the Net, abuse of log information, and client software holes. Churchyard wrote the gopher and HTTP proxies for TIS's Firewall Tool Kit. Firewalls use either implied (host address based) or explicit authentication. Auditing and filtering may occur. They provide a well-defined interface and (smaller) proxies for services. HTTP does not use sustained connections; users are often anonymous; state is stored in the client. He added cryptographic tokens to URLs for user authentication, but this is still vulnerable to replay attacks. Both S-HTTP (which uses a MIME construct) and SSL cause problems for firewalls, since anything may go through such a tunnel. They considered a number of alternatives combining cryptography, authentication, and anonymity on both sides of the firewall. Schiffman is from CommerceNet and wrote S-HTTP.
Payment systems provide only one motivator for security: others are privacy, property rights, and choice issues. Approaches to WWW security have included IP address or DNS based systems, passwords in the clear, PEM or PGP encapsulation, a CERN proposal (Shen), and Kerberos with or without GSS-API integration. Host, perimeter, and network security are needed. Layer separation, symmetry, and flexible choices of mechanism like key management were the S-HTTP design goals. Also, applications may have different requirements (e.g., non-repudiation). Many options exist for each mechanism, so user interfaces must negotiate their security requirements accurately. Information is available from ftp.commerce.net or via mail to info@terisa.com. Richard F. Graveman, Bellcore Piscataway, NJ 08854 rfg@ctt.bellcore.com ________________________________________________________________________ Calls for Papers (new listings since last issue only) ________________________________________________________________________ (see also Calendar) o Conferences Listed earliest deadline first. See also Cipher Calendar and NRL CHACS CFP list.

o Seventh International Conference on Management of Data, 27-30 December, 1995, Pune, India. Papers sought on Databases in Engineering and Databases in Education; topics of interest include data integrity and security. Papers (5 copies, not to exceed 5000 words, or about 20 pp. double-spaced) due 16 June 1995 to Anand Deshpande (anand@pspl.ernet.in) or Ravi Krishnamurthy (krishnam@hplabs.hp.com). Abstracts (200 words, including keyword list) must be submitted electronically in ASCII to comad95@hpl.hp.com. Tutorial proposals also invited.

o Eighth International Conference on Scientific and Statistical Database Management, 18-20 June 1996, Stockholm, Sweden. Papers solicited on novel ideas and research results relevant to database and knowledge base design from theoretical and applied viewpoints.
Topics of interest include integrity and security. Submit papers from the Americas to James French (U Virginia), from elsewhere to Per Svensson (per@sto.foa.se), by 15 November 1995. o Twenty-second International Conference on Very Large Data Bases, 3-6 September 1996, Bombay, India. Papers solicited in the general field of databases, including paper and panel proposals on "futuristic topics." Topics of interest include data consistency, integrity, and security. Submit six copies of original papers not longer than 5000 words to the appropriate program chair (Asia/Australia: Nandlal L. Sarda, nls@cse.iitb.ernet.in; Americas: C. Mohan, mohan@almaden.ibm.com; Europe: Alejandro Buchmann, buchmann@dvs1.informatik.th-darmstadt.de), by 23 February 1996. The announcement emphasizes that there will be no grace period for late submissions. ________________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 1: Conference Papers ________________________________________________________________________ Papers to be presented at 5TH USENIX UNIX SECURITY SYMPOSIUM June 5-7, 1995, Salt Lake City, Utah ============================================================================= o Keynote Address: Why are our Systems Insecure? Must they always be? Stephen T. Walker, President and Founder, Trusted Information Systems, Inc. o Information Security Technology? Don't Rely on It. A Case Study in Social Engineering Ira S. Winkler & Brian Dealy, SAIC o A Simple Active Attack Against TCP Laurent Joncheray, Merit Network Inc. o WAN-hacking with AutoHack: Auditing Security Behind the Firewall Alec Muffet, Sun Microsystems, UK o Kerberos Security with Clocks Adrift Don Davis, Systems Experts, Inc.; Daniel E. Geer, OpenVision Technologies o Design and Implementation of Modular Key Management Protocol and IP Secure Tunnel on AIX; Pau-Chen Cheng, Juan A. Garay, Amir Herzberg and Hugo Krawczyk, IBM, Thomas J. 
Watson Research Center o Network Randomization Protocol: A Proactive Pseudo-Random Generator Chee-Seng Chow and Amir Herzberg, IBM, Thomas J. Watson Research Center o Implementing a Secure rlogin Environment: A Case Study of Using a Secure Network Layer Protocol Gene H. Kim, Hilarie Orman and Sean O'Malley, University of Arizona o STEL: Secure TELnet David Vincenzetti, Stefano Taino and Fabio Bolognesi, Computer Emergency Response Team Italiano (CERT-IT), University of Milan o Session-Layer Encryption Matt Blaze and Steven M. Bellovin, AT&T Bell Laboratories o File-Based Network Collaboration System Toshinari Takahashi, Atsushi Shimbo and Masao Murota, Communications and Information Systems Research Labs, Toshiba R&D Center o Safe Use of X WINDOW SYSTEM Protocol Across a Firewall Brian L. Kahn, The MITRE Corporation o An Architecture for Advanced Packet Filtering and Access Policy Andrew Molitor, Network Systems Corporation o A Domain and Type Enforcement UNIX Prototype Lee Badger, Daniel F. Sterne, David L. Sherman and Kenneth M. Walker, TIS o Providing Policy Control Over Object Operations in a Mach-Based System Spencer E. Minear, Secure Computing Corporation o Joining Security Realms: A Single Login for NetWare and Kerberos William A. Adamson, Jim Rees and Peter Honeyman, University of Michigan o Independent One-Time Passwords Aviel D. Rubin, Bellcore o One-Time Passwords in Everything (OPIE): Experiences with Building and Using Strong Authentication Daniel L. McDonald and Randall J. Atkinson, NRL; Craig Metz, KSC o Improving the Trustworthiness of Evidence Derived from Security Trace Files Ennio Pozzetti, Politecnico di Milano; Vidar Vetland, Carleton University o Using the Domain Name System for System Break-ins Steven M. Bellovin, AT&T Bell Laboratories o DNS and BIND Security Issues Paul A. Vixie, Internet Software Consortium o MIME Object Security Services: Issues in a Multi-User Environment James M. Galvin and Mark S. 
Feldman, Trusted Information Systems, Inc. ============================================================================== Papers to be presented at Crypto '95, August 28-30, Santa Barbara, California ============================================================================== o MDx-MAC and Building Fast MACs from Hash Functions, Bart Preneel, Paul C. van Oorschot o XOR MACs: New Methods for Message Authentication using Block Ciphers, Mihir Bellare, Roch Guerin, Phillip Rogaway o Bucket Hashing and its Application to Fast Message Authentication, Phillip Rogaway o Fast Key Exchange with Elliptic Curve Systems Richard Schroeppel, Hilarie Orman, Sean O'Malley, Oliver Spatscheck o Security and Performance of Server-Aided RSA Computation Protocols Chae Hoon Lim, Pil Joong Lee o Fast Server-Aided RSA Signatures Secure Against Active Attacks Philippe Beguin, Jean-Jacques Quisquater o Efficient Commitment Schemes with Bounded Sender and Unbounded Receiver, Shai Halevi o Precomputing Oblivious Transfer Donald Beaver o Committed Oblivious Transfer and Secure Multiparty Computation Claude Crepeau, Jeroen van de Graaf, Alain Tapp o On the Security of the Quantum Oblivious Transfer and Key Distribution Protocols; Dominic Mayers o How to Break Shamir's Asymmetric Basis Thorsten Theobald o On the Security of the Gollmann Cascades Sang-Joon Park, Sang-Jin Lee, Seung-Cheol Goh o Improving the Search Algorithm for the Best Linear Expression Kazuo Ohta, Shiho Moriai, Kazumaro Aoki o On Differential and Linear Cryptanalysis of the RC5 Encryption Algorithm, Burton S. Kaliski Jr., Yiqun Lisa Yin o A Resilient Clipper-like Key Escrow System Silvio Micali, Ray Sidney o A Key Escrow System with Warrant Bounds Arjen K. 
Lenstra, Peter Winkler, Yacov Yacobi o Fair Cryptosystems Revisited Joe Kilian, Tom Leighton o Escrow Encryption Systems Visited: Attacks, Analysis and Design Yair Frankel, Moti Yung o Robustness Principles for Public Key Protocols Ross Anderson, Roger Needham o Cryptanalysis of the Matsumoto and Imai Public Key Scheme of Eurocrypt '88 Jacques Patarin o Cryptanalysis Based on 2-Adic Rational Approximation Andrew Klapper, Mark Goresky o A Weakness in SAFER K-64 Lars R. Knudsen o Cryptanalysis of the Immunized LL Public Key Systems Yair Frankel, Moti Yung o Secure Signature Schemes based on Interactive Protocols Ronald Cramer, Ivan Damgaard o Efficient Arguments Above P Joe Kilian o Honest Verifier vs. Dishonest Verifier in Public Coin Zero-Knowledge Proofs Ivan Damgaard, Oded Goldreich, Tatsuaki Okamoto, Avi Wigderson o Proactive Secret Sharing or How to Cope With Perpetual Leakage, Amir Herzberg, Stanislaw Jarecki, Hugo Krawczyk, Moti Yung o Secret Sharing with Public Reconstruction Amos Beimel, Benny Chor o On General Perfect Secret Sharing Schemes G. R. Blakley, G. A. Kabatianski o NFS with Four Large Primes: An Explosive Experiment Bruce Dodson, Arjen K. Lenstra o Some Remarks on Lucas-based Cryptosystems Daniel Bleichenbacher, Wieb Bosma, Arjen K. Lenstra o Threshold DSS Signatures Without a Trusted Party Susan K. Langford o t-Cheater Identifiable (k,n) Threshold Secret Sharing Schemes Kaoru Kurosawa, Satoshi Obana, Wakaha Ogata o Quantum Cryptanalysis of Hidden Linear Functions Dan Boneh, Richard J. Lipton o An Efficient Divisible Electronic Cash Scheme Tatsuaki Okamoto o Collusion-Secure Fingerprinting for Digital Data Dan Boneh, James Shaw =============================================================== Papers presented at Eurocrypt '95, May 22-24, St. 
Malo, France =============================================================== [Available in Advances in Cryptology -- Eurocrypt '95, Louis Guillou and Jean-Jacques Quisquater, Eds., Lecture Notes in Computer Science, volume 921, Springer-Verlag, NY, 1995. ISBN 3-540-59409-4.] [Apologies for inadequately displayed diacritic marks. CEL] o Attacking the Chor-Rivest cryptosystem by improved lattice reduction Claus P. Schnorr and H. H. H"orner (U. Frankfurt) o Convergence in differential distributions Luke O'Connor (QUT, Brisbane) o A generalization of linear cryptanalysis and the applicability of Matsui's piling-up lemma Carlo Harpes, Gerhard G. Kramer and James L. Massey (ETH, Z"urich) o On the efficiency of group signatures providing information-theoretic anonymity; Lidong Chen (U. Texas A&M) and Torben P. Pedersen (U. Aarhus) o Verifiable signature sharing Matthew K. Franklin and Michael K. Reiter (AT&T Bell Labs) o Server (prover/signer)-aided verification of identity proofs and signatures; Chae Hoon Lim and Pil Joong Lee (U. Pohang) o Counting the number of points on elliptic curves over finite fields: strategies and performances Reynald Lercier (CELAR, Bruz) and Fran\cois Morain (X, Palaiseau) o An implementation of the general number field sieve to compute discrete logarithms mod p; Damian Weber (U. Saarlandes) o A block Lanczos algorithm for finding dependencies over GF(2) Peter L. Montgomery (San Rafael, Ca) o How to break another ``provably secure'' payment system; Birgit Pfitzmann, Matthias Schunter (U. Hildesheim) and Michael Waidner (U. Karlsruhe) o Quantum oblivious mutual identification Claude Crepeau and Louis Salvail (U. Montreal) o Securing traceability of ciphertexts -- Towards a secure software key escrow system; Yvo Desmedt (U. Milwaukee) o Secure multiround authentication protocols Christian Gehrmann (U. 
Lund) o Verifiable secret sharing as secure computation Rosario Gennaro and Silvio Micali (MIT) o Efficient secret sharing without a mutually trusted authority Wen-Ai Jackson, Keith M. Martin and Christine M. O'Keefe (U. Adelaide) o General short computational sharing schemes Philippe B'eguin (ENS) and Antonella Cresti (U. Rome) o Arithmetic coprocessors: The state of the art David Naccache (Gemplus) o Arithmetic coprocessors and security mechanisms Michel Ugon (Bull CP8) o Area of applications of the arithmetic coprocessors Peter Landrock (CRYPTOMAThIC) o Fair blind signatures; Markus Stadler (ETH, Z"urich), Jean-Marc Piveteau (UBS, Z"urich) and Jan Camenisch (ETH, Z"urich) o Ripping coins for a fair exchange Markus Jakobsson (UCSD) o Restrictive blinding of secret-key certificates Stefan Brands (CWI) o Towards fast correlation attacks on irregularly clocked shift registers Jovan Dj. Golic (QUT, Brisbane) o Large periods nearly de Bruijn FCSR sequences Andrew Klapper (U. Kentucky) and Mark Goresky (U. Northeastern) o On nonlinear resilient functions Xian-Mo Zhang (U. Wollongong) and Yuliang Zheng (U. Monash) o Combinatorial bounds for authentication codes with arbitration Kaoru Kurosawa and Satoshi Obana (TIT, Tokyo) o New hash functions for message authentication Hugo Krawczyk (IBM) o A^2-codes from universal hash classes J"urgen Bierbrauer (T. U. Michigan) o A new identification scheme based on the perceptron problem David Pointcheval (ENS) o Fast RSA-type schemes based on singular cubic curves y^2+axy = x^3 (mod n) Kenji Koyama (NTT) o Relationships among the computational powers of breaking discrete log cryptosystems; Kouichi Sakurai (U. Kyushu) and Hiroki Shizuya (U. Tohoku) o Universal hash functions & hard core bits Mats N"aslund (RIT, Stockholm) o Recycling random bits in composed perfect zero-knowledge Giovanni Di Crescenzo (UCSD) o On the Matsumoto and Imai's human identification scheme Chih-Hung Wang, Tzonelih Hwang and Jiun-Jang Tsai (U. 
Cheng-Kung) o Receipt-free mix-type voting scheme -- A practical solution to the implementation of a voting booth; Kazue Sako and Joe Kilian (NEC) o Are crypto-accelerators really inevitable? 20 bit zero-knowledge in less than one second on simple 8-bit microcontrollers; David Naccache, David Mraihi (Gemplus), William Wolfowicz and Adina di Porto (F. U. Bordoni, Rome) ________________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 2: Journal and Newsletter Articles, Book Chapters ________________________________________________________________________ o Note: Contents of the Journal of Cryptology are available via the IACR home page (URL http://www.iacr.org/~iacr) o Computer Networks and ISDN Systems, Vol. 27, No. 6, April 1995: (thanks to Anish Mathuria for this entry): - T. Norderhaug and J.M. Oberding. Designing a web of intellectual property. pp 1037-1046. - S. Anderson and R. Garvin. Sessioneer: flexible session level authentication with off the shelf servers and clients. pp. 1047-1053. - J. Kahan. A capability-based authorization model for the World-Wide Web. pp. 1055-1064. o IEEE Trans. on Knowledge and Data Engineering, Vol. 7, No. 2 (April 1995): B. Thuraisingham and W. Ford. Security constraint processing in a multilevel secure distributed database management system. pp.274-293. o Computers & Security Volume 14, Number 1 (1995). (Elsevier) Refereed Papers: - Marshall D. Abrams and Michael V. Joyce. Trusted system concepts. pp.45-56. - Marshall D. Abrams and Michael V. Joyce. Trusted computing update. pp.57-68. - Marshall D. Abrams and Michael V. Joyce. New thinking about information technology security. pp.69-82. ________________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 3: Books ________________________________________________________________________ o Stinson, D. 
Cryptography: Theory and Practice. CRC Press, Inc., 2000 Corporate Blvd., Boca Raton, FL, 33431-9868, ISBN 0-8493-8521-0, 1995, $64.95. Details available at URL http://bibd.unl.edu/~stinson/CTAP.html ________________________________________________________________________ Calendar ________________________________________________________________________ The Internet Conference Calendar, URL: http://www.automatrix.com/conferences/ is also worth a look. Dates Event, Location Point of Contact/ more information ----- --------------- ---------------------------------- ==================================================================== See Calls for Papers section for details on many of these listings. ==================================================================== 5/31/95: ICDE '96 submissions due; icde96@cis.ufl.edu 5/31/95: ACSAC '95 submissions due; smith@arca.va.com 6/ 1/95: JCMC iss.:elect.commerce; papers due; steinfield@tc.msu.edu 6/ 5/95- 6/ 7/95: 5th USENIX Sec Symp, Utah; conference@usenix.org 6/13/95- 6/15/95: CSFW-8, Ireland; s.foley@cs.ucc.ie 6/15/95: COMAD 96 papers due; anand@pspl.ernet.in or krishnam@hplabs.hp.com 6/26/95- 6/30/95: COMPASS '95; BONNIE.DANNER@trw.sprint.com 6/27/95- 6/30/95: INET '95; http://www.isoc.org/inet95.html 7/ 1/95: CCS-3 papers due; gong@csl.sri.com or Jacques.Stern@ens.fr 7/ 3/95- 7/ 5/95: CPAC '95, Australia; cpac@fit.qut.edu.au 7/15/95: IFIP/SEC '96 papers due; sec96@aegean.ariadne-t.gr 7/29/95- 8/ 4/95: SFTC-VI, Canela, Brazil; VISCTF@inf.ufrgs.br 7/31/95: 5th IMACCC papers due; colin.boyd@man.ac.uk 8/13/95- 8/16/95: IFIP WG11.3, New York (RPI); ting@eng2.uconn.edu 8/22/95- 8/25/95: NSPW '95, San Diego (UCSD); meadows@itd.nrl.navy.mil 8/27/95- 8/31/95: Crypto'95, Santa Barbara; tavares@ee.queensu.ca 8/28/95- 8/30/95: MMDMS, Blue Mt. Lake, NY; nwosuck@harpo.wh.att.com 9/ 5/95- 9/ 6/95: MDS-95, York, England; IMACRH@V-E.ANGLIA.AC.UK 9/13/95- 9/15/95: WDAG-9, Le Mont St. 
Michel, France; raynal@irisa.fr 9/17/95- 9/20/95: HPTS 95, Asilomar, CA; neowens@vnet.ibm.com 9/20/95- 9/21/95: IT-Sicherheit '95, Graz; rposch@iaik.tu-graz.ac.at 9/20/95- 9/23/95: IC3N '95, Las Vegas; kia@unlv.edu 9/21/95- 9/22/95: ICI '95, Washington DC; denning@cs.georgetown.edu 9/27/95- 9/29/95: DCCA-5, Champaign, IL; no e-mail address available 10/ 2/95: JBCS spec issue on DBMS papers due; laender@dcc.ufmg.br 10/10/95-10/13/95: NISS-18, Baltimore, MD; NISS_Conference@Dockmaster.ncsc.mil 11/ 1/95: IS iss. on disaster recov.; papers due; agrawal@cs.ucsb.edu 11/ 6/95-11/10/95: ICECCS '95, Fort Lauderdale; alex@vulcan.njit.edu 11/14/95-11/15/95: ACM MCN '95, Berkeley, CA; mcn95-submission@cs.columbia.edu 11/15/95-11/17/95: CISMOD '95, Bombay; bhalla@u-aizu.ac.jp 11/29/95-12/ 2/95: CIKM '95, Baltimore; nicholas@cs.umbc.edu 11/30/95: ACM Computer Security Day; computer_security_day@acm.org 12/ 4/95-12/ 7/95: DOOD '95, Singapore; mendel@db.toronto.edu 12/11/95-12/15/95: ACSAC '95, New Orleans; smith@arca.va.com 12/13/95-12/15/95: OOER '95, G.C., Australia; mikep@icis.qut.edu.au 12/18/95-12/20/95: 5th IMACCC, Cirencester, UK; colin.boyd@man.ac.uk 12/27/95-12/30/95: 7th COMAD, Pune, India; anand@pspl.ernet.in or krishnam@hplabs.hp.com 2/23/96: VLDB '96 submissions due; nls@cse.iitb.ernet.in 2/26/96- 3/ 1/96: ICDE '96, New Orleans; icde96@cis.ufl.edu 3/14/96- 3/16/96: CCS-3, New Delhi; gong@csl.sri.com or Jacques.Stern@ens.fr 4/30/96- 5/ 3/96: 8th CCSS, Ottawa; no e-mail address available 5/ 5/96- 5/ 8/96: IEEE S&P 96; no e-mail address available 5/21/96- 5/24/96: IFIP/SEC 96, Greece; no e-mail address available 9/ 3/96- 9/ 6/96: VLDB '96, Bombay, India; nls@cse.iitb.ernet.in 11/??/96: ESORICS '96, Rome, Italy; no e-mail address available 5/ 4/97- 5/ 7/97: IEEE S&P 97, Oakland; no e-mail address available 5/13/97- 5/16/97: 9th CCSS, Ottawa; no e-mail address available 5/ 3/98- 5/ 6/98: IEEE S&P 98, Oakland; no e-mail address available 5/12/98- 5/15/98: 10th CCSS, Ottawa; no 
e-mail address available 5/ 2/99- 5/ 5/99: IEEE S&P 99, Oakland; no e-mail address available 5/11/99- 5/14/99: 11th CCSS, Ottawa; no e-mail address available 5/ 1/00- 5/ 3/00: IEEE S&P 00, Oakland; no e-mail address available 5/16/00- 5/19/00: 12th CCSS, Ottawa; no e-mail address available Key: ==== ACSAC = Annual Computer Security Applications Conference CCS-3 = 3rd ACM Conference on Computer and Communications Security CCSS = Annual Canadian Computer Security Symposium CIKM = Int. Conf. on Information and Knowledge Management CISMOD = International Conf. on Information Systems and Management of Data COMAD = Seventh Int'l Conference on Management of Data (India) CPAC = Cryptography - Policy and Algorithms Conference CSFW = Computer Security Foundations Workshop DCCA = Dependable Computing for Critical Applications DOOD = Conference on Deductive and Object-Oriented Databases ESORICS = European Symposium on Research in Computer Security FISSEA = Federal Information Systems Security Educators' Association HPTS = Workshop on High Performance Transaction Systems IC3N = Int. Conference on Computer Communications and Networks ICDE = Int. Conf. on Data Engineering ICI = International Cryptography Institute ICECCS = Int. Conference on Engineering of Complex Computer Systems IEEE S&P = IEEE Symposium on Security and Privacy IFIP/SEC = International Conference on Information Security (IFIP TC11) IFIP WG11.3 = IFIP WG11.3 9th Working Conference on Database Security IMACCC = IMA Conference on Cryptography and Coding INET = Internet Society Annual Conference IS = Information Systems (journal) ISOC-Symp = Internet Society Symposium on Network and Distributed System Security IT-Sicherheit '95 = Communications and Multimedia Security: Joint Working conference of IFIP TC-6 and TC-11 and Austrian Computer Soc. JBCS = Journal of the Brazilian Computer Society JCMC = Journal of Computer Mediated Communication MCN '95 = ACM Int. Conf. 
on Mobile Computing and Networking MDS '95 = Second Conference on the Mathematics of Dependable Systems MMDMS = First Int. Wkshop on Multi-Media Database Management Systems NCSC = National Computer Security Conference NISS = National Information Systems Security Conference NSPW = New Security Paradigms Workshop OOER = Fourteenth Int. Conf. on Object-Oriented and Entity Relationship Modelling SAC '95 = 2nd Annual Workshop on Selected Areas of Cryptography SFTC-VI = Symposium on Fault Tolerant Computing - VI (Brazil) USENIX Sec Symp = USENIX UNIX Security Symposium VLDB = Int'l Conf. on Very Large Databases WDAG-9 = Ninth Int. Workshop on Distributed Algorithms ________________________________________________________________________ Who's Where: recent address changes ________________________________________________________________________

Mario Tinto
Aerospace Corporation
8840 Stanford Blvd., Suite 4400
Columbia, MD 21045
tel: (410) 312-1400
email: tinto at aero.org
[above address effective 19 June]

Jerzy W. Rub
Intel Corporation
Mailstop: JF3-103
2111 NE 25th Avenue
Hillsboro, OR 97124
voice: (503) 264-2288
fax: (503) 264-1055
email: jerzy_rub@ccm.jf.intel.com

Jonathan Trostle
18540 NE 58th Ct. Apt. L2090
Redmond, WA 98052
phone: (206)558-1083
email: jtt@CyberSAFE.COM
Jonathan is now employed by CyberSAFE Corporation of Redmond, Washington.

________________________________________________________________________ Interesting Links [new entries only] ________________________________________________________________________ Format: Description (first lines) followed by URL (last line) Government sources/information: ------------------------------- Nothing new to report this issue. Contributions solicited. 
Professional societies and organizations: ----------------------------------------- International Association for Cryptologic Research http://www.iacr.org/~iacr USENIX & SAGE -- UNIX - related information, conferences, publications http://www.usenix.org/ Other places for interesting research papers, announcements, assistance ----------------------------------------------------------------------- Archive of Security Reviews issues and other interesting things from Ross Anderson's ftp space ftp://ftp.cl.cam.ac.uk/users/rja14 InterNIC Directory and Services -- to help find people in cyberspace http://www.internic.net The RISKS Forum http://catless.ncl.ac.uk/Risks The PRIVACY Forum gopher://gopher.vortex.com/11/privacy ________________________________________________________________________ TC Publications for Sale ________________________________________________________________________ Yes! The fresh, green Proceedings of the 1995 IEEE Symposium on Security and Privacy are now available, along with those old favorites in blue, orange, and pink. Yes! They are available for purchase by TC members at favorable rates. Current issues in stock and NEW LOW PRICES are as follows:

         Price by mail      IEEE CS Press       IEEE CS Press
  Year   from TC members    IEEE member price   List Price
  ----   ---------------    -----------------   -------------
  1992   $10                Only available from TC!
  1993   $15                Only available from TC!
  1994   $20                $30+$4 S&H          $60+$5 S&H
  1995   $25                $25+$4 S&H          $50+$4 S&H

For overseas delivery: -- by surface mail, please add $5 per order (3 volumes or fewer) -- by air mail, please add $10 per volume to the prices listed above. If you would like to place an order, please send a letter specifying o which issues you would like, o where to send them, and o a check in US dollars, payable to the 1995 IEEE Symposium on Security and Privacy to: Charles N. 
Payne
Treasurer, IEEE TC on Security and Privacy
Code 5542, Naval Research Laboratory
Washington, DC 20375-5337  USA

Sorry, we are (still) not ready for electronic commerce! ________________________________________________________________________ TC Officer Roster ________________________________________________________________________

Chair:
Deborah Cooper
P.O. Box 17753
Arlington, VA 22216
(703)908-9312 voice and fax
dmcooper@ix.netcom.com

Vice Chair:
Charles P. Pfleeger
Trusted Information Systems (UK) Ltd.
41 Surbiton Road
Kingston upon Thames KT1 2HG
ENGLAND
pfleeger@tis.com

Newsletter Editor:
Carl Landwehr
Code 5542, Naval Research Laboratory
Washington, DC 20375-5337
(202)767-3381
Landwehr@itd.nrl.navy.mil

Standards Subcommittee Chair:
[watch this space!]

________________________________________________________________________ Information for Subscribers and Contributors ________________________________________________________________________ SUBSCRIPTIONS: To subscribe, send e-mail to cipher-request@itd.nrl.navy.mil (which is NOT automated) with subject line "subscribe". To remove yourself from the subscription list, send e-mail to cipher-request@itd.nrl.navy.mil with subject line "unsubscribe". Those with access to hypertext browsers may prefer to read Cipher that way. It can be found at URL http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher CONTRIBUTIONS are invited. Cipher is a NEWSletter, not a bulletin board or forum. It has a fixed set of departments, defined by the Table of Contents. Please indicate in the subject line for which department your contribution is intended. For Calendar entries, please include an e-mail address for the point-of-contact. ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY. All reuses of Cipher material should respect stated copyright notices, and should cite the sources explicitly; as a courtesy, publications using Cipher material should obtain permission from the contributors. 
ARCHIVES: Available at URL http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/cipher-archive.html ========end of Electronic Cipher Issue #6, 30 May 1995================