Subject: Electronic CIPHER, Issue 22, July 12, 1997 _/_/_/_/ _/_/_/ _/_/_/_/ _/ _/ _/_/_/_/ _/_/_/_/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/_/_/_/ _/_/_/_/ _/_/ _/_/_/_/ _/ _/ _/ _/ _/ _/ _/ _/ _/_/_/_/ _/_/_/ _/ _/ _/ _/_/_/_/ _/ _/ ==================================================================== Newsletter of the IEEE Computer Society's TC on Security and Privacy Electronic Issue 22 July 12, 1997 Carl Landwehr, Editor Bob Bruen, Book Review Editor Hilarie Orman, Assoc. Editor ==================================================================== Contents: [2765 lines total] o Letter from the Editor Security and Privacy News Briefs: [omitted] Commentary and Opinion: o Review of Birman's Building Secure and Reliable Network Applications, by Bob Bruen Conference Reports: o Report on Panels (and Debate) at 1997 IEEE Security and Privacy Symposium, by Mary Ellen Zurko o Report on Papers presented at 1997 IEEE Security and Privacy Symposium, by Josh Guttman Cipher Research Registry (one new entry) New reports available via FTP and WWW: [omitted] Interesting Links: [omitted] Who's Where: recent address changes: Pfleeger, Reiter, Zhou, Pase Calls for Papers: Reader's guide to recent security and privacy literature o Conference Papers: ICCC '97, SOSP, MobiCom, SAFECOMP, IDEAS 97, Crypto '97, IFIP WG 11.3 WC, SSDM, NGITS, INET, CSFW, SPICE, TAPSOFT o Journal and Newsletter articles: SIGOPS, COMPUTER, CACM, Dr. Dobb's, BYTE, IEEE Networks, ToC, Crosstalk, and more o Books: two Calendar Data Security Letter subscription offer Publications for sale -- new low prices! TC officers Information for Subscribers and Contributors ____________________________________________________________________ Letter from the Editor ____________________________________________________________________ Dear Readers (and especially contributors), Thanks for your patience in awaiting this issue, which is much later than I would have wished. 
I am pleased to be able to send you comprehensive coverage of this year's IEEE Security and Privacy Symposium, courtesy of Mary Ellen Zurko and Josh Guttman. And Bob Bruen returns with another fine book review. Hilarie Orman and Jim Davis have recently updated Cipher's calendar and calls-for-papers web pages, as reflected in those sections, and I located many conference and journal papers for the Reader's Guide.

Symposium attendance was up significantly this year, thanks to the efforts of Chair Steve Kent, Vice-Chair Mike Reiter, Program Co-chairs George Dinolt and Paul Karger, the program committee, and most of all, to the authors and participants. Whether you were able to attend or not, I think you will enjoy the account of the Symposium this issue provides.

In order not to delay the issue further (and to avoid making it longer than it is already), I have not attempted to select the most relevant news items from the many that have accumulated in my files since the last issue. For similar reasons, I've not updated the sections for new reports available on the Web and Interesting Links. I plan to bring out another, shorter issue within a month that will cover more current news. As always, your contributions are welcome.

Carl Landwehr
Editor, Cipher
Landwehr@itd.nrl.navy.mil

____________________________________________________________________
SECURITY AND PRIVACY NEWS BRIEFS
____________________________________________________________________

[omitted this issue]

____________________________________________________________________
COMMENTARY AND OPINION
____________________________________________________________________

Review of Kenneth Birman's Building Secure and Reliable Network Applications
by Bob Bruen, Cipher Book Review Editor
____________________________________________________________________

Birman, Kenneth. Building Secure and Reliable Network Applications.
Manning, 1996. ISBN 1-884777-29-5. Hardbound, 591 pages, $58.00.
Bibliography (381 items), index, appendix with 68 problems.
http://www.browsebooks.com; orders@manning.com
Available through Prentice Hall.
PowerPoint slides at ftp.cs.cornell.edu/pub/ken/slides/
--------------------------------------------------------------------

Well-known Professor of Computer Science Ken Birman of Cornell University, who is also Editor-in-Chief of ACM Transactions on Computer Systems, has written an excellent book on network applications security. The twenty-six chapters are divided into three main sections: the first, a very good introduction to distributed computing; the second, a section on the Web; and the third, and largest section of the book, on reliable distributed computing.

The problem addressed by the book is grounded in the growth of the internet, where distributed computing makes sense, but also where the sheer size of the distribution causes problems beyond those that might be inherent in a given application. For example, the routing strategy of the early days was intended to solve the problems of getting packets to and from distributed locations. A hierarchical approach was logical and satisfactory, but today the scale has overwhelmed this approach, requiring a new strategy that looks at things as somewhat flat, a more distributed view. This view brings its own set of problems with it that must be addressed. The difficulty of guaranteeing properties like response time and performance is significantly increased. According to Birman, the "engineering discipline of reliable distributed computing is still in its infancy."

The situation is not unlike the situation of the nations of the world today. For many decades, the USA had allies that were in opposition to the Soviet Union and its allies, two hierarchies with a meeting place just short of war. Now the Soviet Union is broken into many pieces, each with its own agenda.
US allies in Europe are developing their own agendas, which will not always agree with the US. Moreover, the nations of the Middle East and the Far East are developing their own economies and power bases. The world has lost its own hierarchical approach and is now a distributed system with many peering power bases in several arenas that all must function together.

The world has already developed a few distributed systems that seem to work, such as the air traffic control system. For the most part, I can fly to anywhere in the world. The telephone system is another one that appears to work reasonably well across international boundaries. Birman is trying to push along the serious work of making computing across a wide area into another successful example.

The Web has taken the net to its next level of usage because it gives the individual user an interface of great value. It also provides a mechanism for an organization to distribute information both internally and externally, although "internal" may be on another continent and "external" in the office next door. Birman, as have many others, mentions that it was not predictable in 1985 that the internet would be so widespread with so much traffic. I have never understood this belief. All one needs to do is count the number of people on the planet (about 6 billion) to see a minimum base for net activity. Sooner or later a large majority of people will have some sort of connectivity to everyone else, not unlike telephones, radios and televisions. This is a large number of addresses and a lot of traffic, especially if one considers applications such as full motion video.

Very large distributed systems will be very demanding. The basic services underlying these systems will need to provide reliability and security in a hostile environment. The international world of stock trading has already stepped into this environment. New approaches were needed, and still more new approaches are needed to continue the growth of such systems.
A good grasp of the fundamentals is required for those who work on developing new approaches, and it can be acquired through this book. There are not many books like this one, although there are many journal articles and proceedings on the topic, which makes this book a good resource.

The first eight chapters cover communications, CORBA, and client-server computing in a detailed and coherent manner. The Web gets three chapters, which briefly address the components, such as HTML, VRML, plug-ins, etc., and then the security and reliability aspects. A brief chapter follows on related technologies. I expect a future edition would expand these chapters. The next fifteen chapters are the real heart of the text: reliable distributed computing. Topics presented range from retrofitting systems, transactional systems, and distributed management to probabilistic protocols and reasoning about distributed systems. The last chapter covers a number of languages, toolkits, and systems that try to meet various challenges of distributed and transaction systems, such as the Isis toolkit, Locus, and Argus.

I view this as a useful book and an important book. The topic is timely, but more to the point, distributed computing will be a major piece of the future of computing and it is being built now. If you are interested in tomorrow's systems today, I recommend reading this book.

______________________________________________________________________
CONFERENCE REPORTS
______________________________________________________________________

Report on Panels at the 1997 IEEE Symposium on Security and Privacy,
Oakland, CA, May 5-7, 1997
by Mary Ellen Zurko, The Open Group Research Institute (m.zurko@opengroup.org)
______________________________________________________________________

This year's symposium contained three panels, each one opening one of the days of the conference. The conference proceedings contain writeups of the panels as well, for anyone interested in information from the horses' mouths.
I apologize to those questioners whose names I did not know and therefore cannot associate with their excellent questions.

-----------------------------------------------------------------------
The first panel was a debate on the viability of the TCB concept
-----------------------------------------------------------------------

This panel, which opened the symposium, was modeled after an Oxford-style debate. The resolution being debated was:

The concept of Trusted Computing Base as a basis for constructing systems to meet security requirements is fundamentally flawed and should no longer be used to justify system security architectures.

Arguing in favor were:
Lead: Bob Blakley (IBM)
Second: Darrell Kienzle (U. of Virginia)
Opposed were:
William R. Shockley
Second: LT (USN) James P. Downey (Naval Postgraduate School)
John D. McLean (Naval Research Laboratory) moderated.

While John tried to get the audience to vote their inclination with their feet throughout the debate, he was reduced to taking regular polls. The initial vote on the resolution was pretty evenly split three ways between in favor, opposed, and neutral. John pointed out that the definition of TCB being used in this debate is from the Orange Book.

Bob led off, attired in black hat and boots as befit the image of the resolution. Bob pointed out that consumers are buying Microsoft Windows 95 (the upgrade being the top seller in the retail business software category in a PC magazine survey), and not a TCB (Windows NT). While surveys indicate that consumers are concerned with security, they're buying anti-virus programs (which were second and third in the same survey). Even buying Windows NT does not produce an evaluated TCB, since most users won't install it in the C2 configuration. In that case, all assurance bets are off. Bob then discussed the four fatal flaws of TCBs:

1. We can't sell trust
2. We can't build trust
3. Customers can't use trust
4. TCBs can't support a consumer software market

1.
"The NSA trusts it so you should too" has not proved to be a compelling evaluation argument to consumers. How do they understand why NSA trusts the system when all they are given is a large amount of highly technical, censored documentation? In addition, consumers read in the media that even trusted systems are broken into. So, Bob's first challenge to the opposing side was: define what it means to trust a system so users can understand it and feel safe with trusted systems.

2. Bob claimed that assurance is impossibly hard and very expensive in a world that is very cost competitive. It slows development. A C2 OS/2 didn't look financially promising to IBM.

3. The certification process requires a certifier at the site where the TCB is deployed. The certifier needs to know what it will be used for, but customers aren't good at certification, because they don't know what it will be used for. Trust in the system depends on trust in the administrator. Many systems are administered by the end user. The designer, administrator, and certifier must all agree on security objectives, but this is not the case in corporate environments. Internal politics can fragment security goals. Bob's next challenge was to dispute the definition: "a TCB is a set of mechanisms that cause a security exposure when they violate expectations."

4. A TCB needs to define operations and objects for mediation. Each newly installed application adds to these, and so each new application extends the TCB (as the Purple Book extends it to cover databases). This compromises the minimality goal, and forces applications to be designed for a particular TCB and policy.

During the five minutes for follow-up questions, Bob quoted Darrell as saying "If it doesn't work in practice, it doesn't work in theory." Bill stated that if it's not a reference monitor, it can't be effective. Bob suggested the immune system as a counterexample.
As an independent consultant, Bill was able to start his portion of the debate off with the statement "the opinions I express are entirely those of my employer." Bill asked if we were talking business or science. His strategy was to refute the economic and psychological arguments and propose a counterplea to the application proxy argument, to prove that the TCB can be effectively applied to modern technology.

He stated that the economic argument was scientifically unsound, reducing it to "If you can't sell it, it must be broke." Many systems have been and continue to be produced to meet the TCB definitions. No proof was offered that the costs outweigh the benefits. It can't cost millions of dollars to produce a C2 TCB, yet people spend billions on virus checkers and firewalls. The user community is spending more on security than the vendors are spending on TCBs. If you live in a cardboard city, you will spend money on fire extinguishers. Bill suggested that the cost driver is the level of assurance, not the basic (C2) TCB. He was not debating whether the NSA is right or the evaluation process is the right one. An alternative paradigm to look to might be an underwriters laboratory underwritten by the insurance industry.

Bill characterized the affirmative economic argument as:
1. the industry doesn't produce TCBs
2. the industry is a free market
3. the action of a free market is optimal
therefore it is optimal not to produce TCBs.
However, he claims that the market is oligopolistic. Point 3 implies that you can get optimal allocation of utility. A free market does not optimize wages, provide social security, or spend money on R&D. We have social goals that need to be met nationwide, and you won't meet those with a free market. To counter the claim that a trusted component must induce trust, Bill pointed out that you can produce trust completely in the marketing department. Instead, we should define trust by evaluation by a laboratory.
Bill said there was a need for more inter-domain work, to support architectures such as lots of little single-domain PCs for a multi-domain system. He concluded by saying that from "everything is inside some TCB" it does not follow that "there is some TCB that everything is inside". During the 5 minutes of discussion, Bill reiterated that he also does not believe in the monolithic TCB approach; he has been spreading the gospel for application-level policy for many years. Darrell pointed out that even people aware of the risks do not always choose to use a TCB. Bill suggested it was due to the value of the information needing protection; Darrell suggested it was due to trade-offs with other goals.

Since a poll on who had changed their minds was taken at each transition, James led off with the somewhat joking statement, "We have not changed our mind." James discussed their responses to the challenges outlined in the proceedings. In response to the challenge to define trust for normal end users, James asks why should they feel safe? Trust should be technical, so that its merits can be judged. Institutions can be used for protecting the end user (such as airline safety). Independent organizations can audit and accredit secure systems. In response to the challenge that TCBs cause security exposures, James points out that the TCB must include the administrative input of policy information. There is more user/administrator interface design work needed, but a TCB must be configurable to be usable. Finally, like Bill, James suggests work on composite, multiple, or extensible TCBs or TCB subsets to support networks and applications.

During the 5 minutes of questioning, Bob suggested that there is no competent authority for making TCBs suitable for their environment, given the complex, quickly evolving environment of the Internet or large computer networks.
Darrell pointed out that he had never seen two PCs with the same configuration, and James suggested that while end users may configure their home systems for surfing, they might not be allowed to configure their workstations at work.

Darrell started by stating that the TCB contains everything necessary for security, which can ease the verification burden. While this is appealing in theory, is it practically viable? The strength of the TCB is that it defines a single policy per system, to which all users comply. However, this is overly restrictive for a general model. While no defense is perfect, applications have to trust the TCB. Applications defending themselves put their security outside the TCB. The old environment of custom-built systems, stand-alone systems, and a philosophically homogeneous user base masked the TCB's flaws. The TCB's habitat is shrinking. Systems are large and complex, and verification is a virtual impossibility. COTS gives attackers known ways to attack. A single policy may not work when a company may be forced to cooperate with a competitor. In response to the question of "but what have you got that's better?", Darrell urged research on new paradigms such as those explored in the New Security Paradigms Workshop.

During the 5 minutes of questioning, Bill pointed out that the definition of TCB says it must enforce "a security policy", which is a loophole. TCBs are per-policy. James suggested that the alternative being put forward for a trusted computing base is a comfy computing base. In response, Darrell warned that it is dangerous to say that users don't know what's best for them.

At this point, the floor was opened up to audience questions and reactions. Marv Schaefer said that the affirmative team had defeated itself by equating TCB and security with C2, which is easily defeatable by an attacker. Bob replied that they were trying to make the point that even the easy things we sort of know how to do are impractical.
Paul Karger pointed out that, to security practitioners around before the Orange Book, the OB was revisionist security. There were only three levels of security: hopeless (don't even bother), limited, and really secure. The original theory states that these (C2) systems are hopelessly insecure in the first place. He likened the security industry today to the medical profession in the late 19th century: snake oil and patent medicine. Bob agreed that there were plenty of applications being deployed today that a sensible person would veto. Terry Benzel rephrased the affirmative's argument as: if you can't protect against a free-for-all, TCBs are no good. When a questioner suggested that we need the equivalent of safety codes for building boilers, Bob pointed out that boilers are not a general-purpose device. You can't anticipate all the things a user might do with a computer.

A questioner asked if you can reproduce or protect against everything that might happen. Bill stated that the battle-of-wits approach can't work; that was established early on. The reference monitor defines the security perimeter with well-defined semantics and makes sure each operation is secure; an a priori approach. Bob suggested that this was equivalent to the old joke of a drunk searching for his keys under a lamppost because it was so much brighter there. We define the problem that we know how to fix.

Peter Neumann asked: with 60 million lines of code in Star Wars, with a security policy that states bad things don't happen, with injections of electromagnetic interference into smart cards, with runtime libraries, compilers, and networking - where is the TCB? Bill replied that if we build first and look for the TCB second, the TCB will be the whole thing. Having well-modularized software is difficult; we often have code that escapes rather than being released. Bob concluded that the TCB designer's stance was that the designer is omnipotent, omniscient, and omnipresent, and most of us aren't.
John called for a final vote determined by which door each of us exited by (thereby disallowing any neutrals). The vote was 66 for and 117 against. (With 256 registered conference attendees, the neutrals may have just stayed in the room.)

----------------------------------------------------------------------
The second panel was Ensuring Assurance in Mobile Code.
----------------------------------------------------------------------

Moderator: Marv Schaefer (Arca)
Panel Members: Sylvan Pinsky (NSA), Drew Dean (Princeton University), Jim Roskind (Netscape), Li Gong (JavaSoft), Barbara Fox (Microsoft)

Marv introduced the panel with an architectural picture of mobile code executing on a host OS. He asked "What evidence do we have from producers that using these won't leave one vulnerable?" Even the documented paradigm doesn't always work as advertised. What protection claims can be provided to the owner(s) of resources that reside on (or that can be manipulated from) the Java-enabled workstation? How can the user (or resource owner or knowledgeable referee) be convinced that it is safe to download and execute applets on his platform? Can claims be made (and supported with credible evidence) about the enforcement of claims made if there exist persistent objects shared among two or more distinct applications or applets? How do you know? How do I know? How long do we think we know? What if the interlopers don't know?

Drew Dean started with The Good, the Bad and the Ugly. He was only looking at the closed system of Java, claiming we have to understand that first. The Good is that there's been a lot of progress in a year. There is work on Java type-safety, the JVM bytecode verifier, and dynamic linking. The Bad (aka not so good) is that we write and reason about Java, but execute JVM bytecode. JVM bytecode is strictly richer than Java, and we don't really understand the connection. What are sufficient restrictions on the bytecode to ensure Java semantics?
The language and bytecode specifications are incomplete. Surprising questions come up in cleanroom implementations. The bytecode format was not designed for verification. It is needlessly hard to specify and needlessly hard to implement. The Ugly is that penetrate-and-patch is still the dominant mode of operation. The Java runtimes are huge. JDK 1.1 (for Solaris) is 130,000 lines of C (almost all of this security-relevant due to the potential for buffer overflow, etc.) and 380,000 lines of Java. JDK 1.1 has grown a lot compared to JDK 1.0.2: the C portion is approximately 2x as large; the Java portion is approximately 4.5x as large. When a patch comes out, there is all sorts of new code to explore, and of course there's going to be a new bug. Drew concluded by saying that he doesn't know what to do with .5 million lines of code.

Barbara Fox started off saying that the gap we were about to experience is between research and product. She showed a graph of what she called the code safety paradox: while the sandbox is high on the enforcement side, native code is high on functionality. How do we get a good balance? The game is Risk Management, not Risk Avoidance. Using the internet/surfing is a dangerous sport. She outlined a confidence continuum. It starts with Authenticode, which uses digital signatures to get one point of assurance (the signer, with no revocation). In the middle is Java, which uses digital signatures, an explicit "OK" from the user on what the code does, and a VM. This offers three points of assurance. W3C digital signatures, where detachable labels can be used by anyone to make signed statements about code, offer n points of assurance. Her goals include: 2. ahead-of-time checks with detachable labels and filtering, and 3. deterrents such as audit logging, high-assurance certificates (those certified must put some skin in the game), high-visibility revocation, cops and courts.
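The Authenticode end of Barbara's continuum - one point of assurance, namely who signed the code - can be sketched in a few lines. This is an illustration only: real Authenticode uses X.509 certificates and public-key signatures, not the shared-key HMAC stand-in below, and the key and applet names are invented for the example.

```python
import hashlib
import hmac

# Toy stand-in for signed mobile code: the publisher "signs" the code
# bytes, and the consumer checks the signature before running anything.
# Note what the check does NOT tell you: nothing about what the code
# does, only who vouched for it (Barbara's "one point of assurance").

def sign_code(code: bytes, key: bytes) -> bytes:
    return hmac.new(key, code, hashlib.sha256).digest()

def verify_signature(code: bytes, signature: bytes, key: bytes) -> bool:
    return hmac.compare_digest(sign_code(code, key), signature)

publisher_key = b"example-publisher-key"      # hypothetical publisher key
applet = b"print('dancing hippos')"           # hypothetical applet bytes
sig = sign_code(applet, publisher_key)

assert verify_signature(applet, sig, publisher_key)
assert not verify_signature(applet + b"#tampered", sig, publisher_key)
```

The two assertions show both sides of the guarantee: an untouched applet verifies, while any modification after signing is detected - which is exactly as far as the signer-only point of assurance goes.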
Li Gong stated that, while Barbara mentioned important sociological aspects, he would focus on the engine and construction. He asked us "how many people does it take to make a high assurance system?" The answer was 1, to make sure that no system is ever built. You have to have a very good, simple, understandable security model for assurance, and map that to an architecture. Then, the implementation must be of high quality. There is no practical way of formally verifying these relationships. You have to sympathize with marketing people; this process doesn't work as a slogan. Li said that while Drew mentioned that software is getting bigger, it's also getting smaller (personal Java, meta Java). It's hard to have a good handle on a quantifiable level of security of a system. The proof of correctness will lag behind by 5-10 years. Li's team is focusing today on making sure that the new architecture helps to reduce the chance of making a mistake. They're trying to help people to work on known problems or issues and to stick to basic principles. They're taking the past 30 years of published work, distilling the best practices, and applying them. They're hiring good people who know security, and giving out source code for other people to evaluate. They're doing everything they can do as a product team today, and hoping researchers like Marv and Sylvan can help out with long-term problems.

I have seen Jim Roskind give essentially the same talk three times now, and two out of three times he was "tricked into thinking [it] was a high tech conference" which would support showing slides from a laptop (the two were security conferences; the real high tech conference was the last International WWW conference). Luckily, he did not have to resort to pantomime this time; he had hand-written slides. His job is responding to bugs posted in the Wall Street Journal and the New York Times. He's trying to put himself out of a job by making things cleaner and, possibly, more secure.
He's looking at ergonomics and security. To be secure, the system must be usable. Their target audience is the Internet user with minimal savvy. As with real keys, users don't need to know about internal mechanisms. They don't provide too many dialogs and reuse previous decisions (per the user's request). Dialogs present summary information such as high, medium, or low risk, with the option to see more details. The user must be given fine grain control of grants so they can distinguish between giving out the time of day and power of attorney. They must allow the user to edit and review grants. They are also looking at programmer ergonomics. They will allow programmers to request the least privilege with a fine grain capability model. They are avoiding the binary trust (all or nothing) model. Their language support encourages holding privilege for the shortest duration by lexically restricting that duration. They can define the security relevant portions of code based on this temporally restricted duration, and automatically remove the privilege for the programmer. Sylvan Pinsky has been working with Marv on the security context for the Java environment, in a cooperative research program with Javasoft. The security context is a layered approach to software mechanisms: the language and compiler, the bytecode verifier, and the class loader. They are using a communications analogy: the old fashioned "party line" phone system. Back then we knew the members of the party line. Today, with global connectivity, any user is a potential member of the party line. As Bob Morris said back in '88, to a first approximation, every computer is connected with every other computer. We are becoming massively interconnected. We must coexist with people and systems of unknown and unidentifiable trustworthiness. As Peter Neumann said, our problems have become international as well as national. 
Any meaningful security analysis for mobile code must include the underlying platform OS and the Internet. They are examining security from a system perspective: applets, applications, native methods, VM, OS. They would like to provide evidence that a valid JVM cannot be replaced or corrupted through the JVM or underlying platform. There has been a divergence between the security and Internet communities. The security community focuses on security-relevant features encapsulated within a small, analyzable mechanism. The Internet provides multi-million lines of code, and the hardware and software boundaries are blurred. He is trying to merge these two directions together, using trust technology, security modeling, formal specifications, and formal verification. At the Java security panel at NISSC, addressing the need to apply security technologies to the 90s, Paul Karger said "this is rocket science!" Marv adds, "with a booster". Paul shouted out "and O rings!". Sylvan's group is looking at Secure Alpha (a distributed system) and capability-based systems. They want to produce:
1. a high-level security policy
2. identification of environmental assumptions
3. a formal specification
4. an implementation specification

Marv opened up the floor for questions, beginning by reiterating some of his own from the start. Li mentioned that a new security architecture is due out in early summer. They are trying to do everything in Java. Jim said his strategy is trying to keep his head down lower than the other people's heads. That way, all of our heads will be lower over time. Sylvan pointed out that multiple, diverse systems running across the Internet with diverse security policies pose a big problem in terms of composability of policies. Marv asked what if sometimes I run one application, and another time run something else? Some set of global invariants might be required.
Jim stated that when you allow arbitrary other systems (like sendmail) to adjust the file system, you're in a world of hurt. But he questioned, is the consumer interested in ultimate safety? Does it make sense to put a 50lb lock on the front door and still have windows that are very thin? Barbara continued with this theme when she quoted Butler Lampson as saying that he had been worrying about locks on doors, and should have been working with the cops. Jim suggested a more distributed analogy: aspirin with tamper-resistant covers. Barbara agreed that the idea is to make decisions about things before they happen. Paul Karger pointed out that the problem of downloadable, executable content is not restricted to Java. There is Postscript, Word documents with viruses, LotusScript, and JavaScript.

Marianne Mueller asked Drew to say specifically, what is hard about verifying the bytecode? Drew said that to a first approximation, a lot of what bytecode verification is doing is type checking. This information could be saved at compile time instead of thrown away. Bob Blakley asked, if dancing hippos are a 10 and realtime control of a nuclear plant is 100, what's the risk we can take with today's technology? Barbara joked that it was 11, then stated that as long as you can give people information for their decisions, they can make an intelligent choice. Marv pointed out that an intelligently coded attack can detect the environment and who it's running on behalf of, and wait until the right combination of circumstances to do damage.

One questioner noted that when he locks his front door, he can check the doorknob to verify that it's really locked. He can't verify that his access controls are really working. How do we give the user confidence? Jim pointed out that ordinary locks are easy to pick, and Barbara emphasized consumer confidence. John McHugh pointed out that there is a fine line between misfeasance and malfeasance. He's been burned more by the former than the latter.
Should we use cops and jails for both? Marv replied, "You'd sell more debuggers." The conversation drifted to the use of audit logs as evidence. Drew pointed out that you just need to attack the logs as well, and Marv added that resetting the real time clock is not a privileged operation. Jim agreed that a mistake is more often the problem. He asked, "When your child does bad, do you beat them or set them up with the potential to succeed?" The latter is the intense focus at Netscape. Li concluded by stating that these are browser companies, and Javasoft is a platform company (which evoked some laughter). Schlumberger has a smart card with 5 meg of memory; it can't pop up a window to solve a problem. The Java security architecture has to work with no file system too (in which case, where is the audit log?). ----------------------------------------------------------------------- The third panel was Security in Innovative New Operating Systems, ----------------------------------------------------------------------- Cynthia E. Irvine (Naval Postgraduate School) moderated this panel, and the panel members and their projects were:
Robert Grimm, SPIN Project (University of Washington)
George Necula, FOX Project (Carnegie Mellon University)
David Mazieres, Exokernel Project (MIT)
Oliver Spatscheck, Scout Project (University of Arizona)
Jeff Turner, Fluke Project (University of Utah)
Robert Grimm started by describing SPIN. It provides a minimal core of trusted services and an infrastructure for extensions. It is written in Modula-3. The extensions interact in two ways: they call upon existing services or are called by other services. A problem is that the extensions form one integral system (they all run in kernel mode). The extensions are generally untrusted and used by applications. They need to specify security policies and a way to express and enforce those policies. They settled on Domain and Type Enforcement (DTE) as a flexible MAC system.
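For readers unfamiliar with DTE, the arrangement can be sketched in a few lines of Python. All domains, types, and matrix entries below are invented for illustration; they are not taken from SPIN.

```python
# Minimal Domain and Type Enforcement (DTE) sketch -- illustrative only.
# Subjects run in domains, objects carry types, and an access matrix
# maps (domain, type) pairs to the modes explicitly allowed.
DDT = {
    ("user_d", "user_t"):    {"read", "write"},
    ("user_d", "passwd_t"):  {"read"},
    ("login_d", "passwd_t"): {"read", "write"},
}

# Entry-point table: executing an object of this type switches the
# caller into a new domain -- a controlled change of privilege.
ENTRY = {("user_d", "login_exec_t"): "login_d"}

def allowed(domain, obj_type, mode):
    """Deny unless the matrix explicitly grants the mode."""
    return mode in DDT.get((domain, obj_type), set())

def exec_transition(domain, obj_type):
    """Return the new domain if this execution is a permitted entry point."""
    return ENTRY.get((domain, obj_type), domain)

assert allowed("user_d", "passwd_t", "read")
assert not allowed("user_d", "passwd_t", "write")
assert exec_transition("user_d", "login_exec_t") == "login_d"
```

The two tables capture the appeal of DTE as a MAC mechanism: the policy is data, not code, so it can be inspected and changed without touching the enforcement logic.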
The access modes are explicit, and it allows for controlled changes of privilege. They are working on a formal model of DTE. The performance benchmark looks good: 1.18 sec without security, 1.2 sec with security. The null call is very fast: 0.1 microseconds in the hot-cache case. The access check increases the time; a domain transfer increases it even more. George Necula discussed his "Research on Proof-Carrying Code for Mobile Code Security," with Peter Lee. The problem they are addressing is safety in the presence of untrusted code. Examples of this problem are OS kernel extensions and safe mobile code. They have focused on the intrinsic behavior of untrusted code. They enforce safe access to resources (e.g. memory safety) and resource usage bounds (e.g. bounded termination). They have not focused on issues like denial of service. The key idea is to shift the burden of safety to the code producer. The code producer must convince the code consumer that the code is safe to run. The producer knows more about the program than the consumer, and it is easier to check the proof than to create it. The consumer defines a formal safety policy and publicizes it. The producer must produce a safety proof and package it with the untrusted application code. The consumer validates the proof with respect to the object code. The consumer scans the code and produces first-order predicate logic (like a disassembler). The proof is checked against the transformed code. The proof checking and proof rules work for many kinds of applications. The benefits of Proof-Carrying Code (PCC) are static checking, a one-time cost for safety, and greater expressiveness than runtime checking alone. There is early detection of failures (no abort complications), the process is tamper-proof, and the consumer code is simple, fast, and easy to trust. It can check a wide range of safety and liveness policies. It can be used even for hand-optimized assembly language. You don't have to sacrifice performance for safety.
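The division of labor in PCC can be caricatured in a few lines. This is a toy, not the authors' system: the real checker generates first-order verification conditions over DEC Alpha assembly, while here "code" is just a list of buffer writes, the policy is a single bound, and the "proof" steps merely mirror the verification conditions.

```python
# Toy illustration of the Proof-Carrying Code architecture.
BUF_LEN = 16  # the consumer's published safety policy: all writes < BUF_LEN

def vc_gen(code):
    # The consumer scans the code and emits one verification condition
    # per instruction, much as a disassembler would.
    return [("write_in_bounds", idx) for (op, idx) in code if op == "write"]

def check_proof(code, proof):
    # Proof checking is simple and fast: every VC must be discharged by a
    # matching, valid proof step; otherwise the code is rejected outright.
    vcs = vc_gen(code)
    if len(proof) != len(vcs):
        return False
    return all(kind == "write_in_bounds" and idx < BUF_LEN
               and step == ("bound", idx)
               for (kind, idx), step in zip(vcs, proof))

# The producer packages code together with its safety proof.
code = [("write", 3), ("write", 15)]
proof = [("bound", 3), ("bound", 15)]
assert check_proof(code, proof)                           # accepted
assert not check_proof([("write", 20)], [("bound", 20)])  # unsafe: rejected
```

The point the toy preserves is that the consumer's trusted base is only the small checker; the hard work of constructing the proof stays with the producer.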
Their current goals are to test the feasibility and measure the costs of this approach. They chose simple but practical applications written in hand-optimized assembly language. One example was PING. PING's safety policy included statements that the total memory allocated is less than the maximum size of the heap, that the buffer to send a packet is a readable buffer of length len, that len <= max packet length, etc. David Mazieres' talk on the Exokernel was called "Secure Applications Need Flexible OSes". They tried to take as much of the code as possible and push it up into untrusted libraries. They end up with raw hardware and a very small Exokernel which tries to interfere as little as possible. They need minimal support for good, discretionary access control. Names for everything are physical (disk blocks, etc.). They consider the current state of multi-user OS security hopeless, which is not surprising. Systems must trust way too much software. Trusted software is unreasonably hard to get right. We can make security easier by making it easier to get things right, with better OS interfaces. As an example of unreasonably complex software they give SSH 1.2.12. To fix problems with switching between root and user access, the program was separated into three new programs. Some of the reasons they give for so many security holes are insecure network protocols, the use of C and libc, violation of least privilege, namespaces decoupled from underlying files (you can be tricked with these), system calls that use all available privileges, no process-to-process authentication (no way to transfer privileges between processes), and combinations of all of these. To fix these problems, the OS should allow authenticated IPC and credential granting, strip many programs of root privilege (e.g. login), provide access control on low-level objects, operate on immutable (e.g. physical) names to avoid races, and provide a flexible notion of identity.
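The point about immutable names is the classic check-then-use race; a minimal sketch with standard POSIX-style calls (via Python's os module) shows the difference between operating on a mutable path name and on the handle to one underlying object.

```python
# Sketch of the "immutable names" point. Checking a mutable path name and
# then using it is racy -- the binding can change in between -- while an
# open file descriptor pins down one underlying object. Illustrative only.
import os
import tempfile

fd0, path = tempfile.mkstemp()
os.close(fd0)

# Racy pattern (avoided): os.stat(path) followed by os.open(path).
# Between the two calls the name could be rebound, e.g. replaced by a
# link to some other file, so the check applies to a different object.

# Race-free pattern: open first, then interrogate the immutable handle.
fd = os.open(path, os.O_RDONLY)
st = os.fstat(fd)        # describes the object actually opened, not a name
assert st.st_size == 0
os.close(fd)
os.unlink(path)
```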
As an example he discussed their login process. Each instance of login generates a new ID on the fly. Their authentication server runs with a zero-length capability, which is effectively root. It generates capabilities for the new ID, granting them back to the login process. Login was never maximally privileged, and a single authentication server replaces many privileged programs. Oliver Spatscheck presented "Escort: Securing Scout Paths." Scout was designed for running network appliances (firewalls, web servers, etc.). Security policy is determined at the time you build the system, before you compile. Security in networks is based on explicit information flows restricted by a filter (firewalls). Policy is enforced by end hosts and filters. Security in conventional OSes models each user separately; each user trusts all the servers. Scout extends the network connection metaphor into the OS; the extension is called a path. Paths are first-class objects. Escort separates paths (information flows) into protection domains. A security filter limits the flow of information on a path if it is traversing multiple protection domains. Escort's goals are to protect information flow (paths), to support the principle of least privilege, to support most security policies, to support central policy management, to keep globally trusted code small, and to be efficient. Escort is currently being implemented on the DEC Alpha in a single address space with multiple protection domains. The protection domains are enforced by hardware. It has fast user-to-user domain crossing. Its policy is represented as a decision tree, which can be used to represent arbitrary policies. Jeff Turner presented "Flask: A Secure Fluke Kernel." He is on a one-year visit from the NSA to the University of Utah. Fluke is a microkernel designed to take Mach even further.
It supports fine-grained decomposition of services, customizable OS functionality, a recursive VM model, hierarchical resource control, state encapsulation, and flexible composition. Every application has a TCB, down to realtime applications on the kernel. Building on the work of Synergy/DTOS, they focus on flexibility and customizability by separating policy from enforcement. They decompose the OS as much as possible. They began with Mach 3.0 as the foundation in 1992. Flask goals focus on flexible, customizable security. They continue to rely on hardware protection. They support fine-granularity access control and least privilege, and they provide a system-wide solution for downloaded code (such as Java, ActiveX, Inferno, Juice, Tcl, and plug-ins). This work required changes to Fluke. All servers need to bind security information to their basic objects, use the new interfaces to create objects given a SID, and interact with the security policy. Microkernel-specific changes include adding controls over object transfer, attaching the sender SID to all messages, and allowing the specification of a receiver SID. All servers are policy neutral or policy flexible. They are done with the microkernel and the security server. They have designed the secure file service and the secure network service. The process manager, VMM, and Flask specification are in progress. They're planning on a code escape in late July of this year. The research issues of interest include determining what yardstick to use for assurance, good performance, a trust model and policy extensions in sandboxes to support secure containers for downloaded code, determining the interaction of differing policies, and support for new "WebOS" ideas (e.g. rent-a-computer). At this point the floor was opened up to questions and comments. Paul Karger asked how the PCC system knows that the predicate they're trying to prove means it's secure.
George replied that the code consumer determines what's secure with the published safety policy. Paul also noted that the Exokernel has a flavor of the early days of Multics. David agreed that most of the ideas are common sense. Bill Shockley pointed out that the idea in FOX that you can proof-lock something, as a general notion of spraypainting or cryptographic locking, is an important theoretical idea that deserves more attention. One questioner stated that notions such as hardware-enforced protection domains are very conventional security models. He asked, is that right for network appliances? Can you expect the hardware to provide protection domains? Oliver pointed out that the notion of information flow is what Scout protects. If there is no stable storage on a web TV, that is the only thing worth protecting. They use hardware-based protection domains for a higher level of assurance for less effort. Maureen Stillman asked what theorem prover and what languages FOX supports. FOX uses its own theorem prover and supports only DEC Alpha assembly language. Cynthia Irvine asked, "Where is the TCB? Can you identify the security perimeter statically?" FOX's TCB is the proof checker, the proof rules, and the VC generator that transforms the code. In SPIN, the people who configure it define the boundary. The Exokernel itself is probably the TCB. In Scout, you have to trust the code that does the separation of policy enforcement of paths. Each information flow defines its own TCB. Fluke has a different TCB per application or per situation. John McHugh asked, "How does the end user know what the policy is? How does the user gain assurance of proper enforcement?" In Fluke, when you add a new piece, you add a new set of access control permissions. For any application, you specify the policy and plug it in. In Scout, how policy is specified is not part of the kernel; there are no fixed rules there. It's defined by the person who builds the system.
In the Exokernel, the application can do whatever it wants with its data or state. In SPIN, the standard TCB is provided and includes assurance that this is secure. You can do something different if you want to. In FOX, each extension comes with a safety policy. Jeff pointed out that another class of decisions are based on history or time, and so need either dynamic checking or the ability to revoke and reverify. One questioner suggested that these systems shift the burden of developing complex security mechanisms to the programmer, who is likely to mess it up. Scout plans to ship tools and guidelines that make sense for most of the population. David pointed out that you can provide the same interfaces with the Exokernel via a library. Sometimes what the kernel offers is inadequate for what the application needs. Paul Karger proposed the scenario where an end user goes to the store and buys an Exokernel and bunches of applications. The end user can barely spell policy; why would Acme's stuff and Ajax's stuff running together provide a consistent security policy? How can I compose them and expect them to do anything? Robert asked how you can expect anything consistent when you do the same thing on a Mac system, and Paul said you don't. Virgil Gligor asked if FOX supported mutual suspicion. Can the producer have requirements on the consumer? It does not. Carl Landwehr asked what authentication each of these systems uses, and whether they could authenticate each other. PCC avoids authentication. SPIN has put its effort into access control, not authentication. The Exokernel does password authentication because it's simple to do. Scout authenticates the default path. Flask allows login with a password or across the net. Carl went on to ask how much trust is vested in the login process. Jeff replied, "I'd say lots." Peter Neumann suggested that they were redefining trust technology as trussed technology: belt, suspenders, smoke and mirrors.
He then asked, with his Risks hat on, "How would you subvert your system?" George would try to find a bug in the implementation of his FOX proof checker. He pointed out that the architecture is proven, and that the checker is 5 pages of C code. Oliver would try to subvert Scout's boot loader or firmware. Jeff would confuse people about Flask's policy. David would attempt trojan horse attacks through NFS on the Exokernel's development. Robert would try to find a way to change SPIN's DTE access matrix. ______________________________________________________________________ Report on Papers Given at IEEE 1997 Symposium on Security and Privacy by Joshua D. Guttman, The MITRE Corporation guttman@mitre.org ______________________________________________________________________ This year's IEEE Symposium on Security and Privacy contained 20 papers distributed in six sessions. No session had a particularly narrow technical focus, and between them, the papers touched on a broad range of topics. Some years the symposium seems to have an informal center of interest, but this year it had a fairly balanced spread across the field. I will group the papers into rough subject-oriented clusters, even though they do not neatly match the division of the symposium into sessions. These clusters are:
1. Authentication, Cryptography, and Cryptographic Protocols (4 papers)
2. Network Security Issues (4 papers)
3. Database Security Issues (3 papers)
4. Intrusion Detection and Prevention (3 papers)
5. Security Theory (2 papers)
6. Security Policies and Authorization (4 papers)
I will discuss them in this (arbitrary) order. In some cases I have more to say than in others, and in some cases I have opinions that I will mention after giving a brief summary. This summary also has an Index of Titles and an Index of Authors.
All opinions expressed here are purely personal positions, and no organization should be construed as endorsing them, in particular, not my employer, nor any agency of the US Government, nor the IEEE Cipher. 1. Authentication, Cryptography, and Cryptographic Protocols Toward Acceptable Metrics of Authentication Michael K. Reiter and Stuart G. Stubblebine (AT&T Labs--Research) In using public-key cryptography, a primary problem is to determine reliably when a person owns a public key. A variety of schemes now exist for allowing authorities to certify bindings between people (or other entities) and their keys. However, one needs to decide which authorities to trust, particularly in cases where one authority may certify a binding for another authority, who in turn helps to certify a target binding. Clearly one wants to view this process as generating a directed graph of bindings leading from oneself to the target binding. An authentication metric, as defined by the authors, is a recipe for annotating such graphs with whatever supplementary information may be relevant, and deciding when the resulting annotated graph yields good enough evidence for a target binding. The authors criticize a number of earlier authentication metrics, and demonstrate that some have highly counterintuitive properties. The authors do not exempt a scheme they proposed themselves, which appeared barely a month before the symposium. Although major aspects of the paper are matters of judgment, which can appear subjective, this even-handed criticism of the authors' own recent work will prevent anyone from thinking that the paper merely records their own opinions or prejudices. The authors develop a sequence of plausible design principles for authentication metrics, and illustrate where the previous proposals fall short. 
The principles require that parameters to the models should have clear meaning and should be determinable independently of the output of the metric itself; that the metric should take into account as much relevant information as possible, and not hide relevant decisions; that the metric should be resilient to manipulation by misbehaving entities and provide meaningful results from partial information. They also require that the output of the metric should be intuitively understandable. The authors use these principles to motivate a new metric. This metric is based on the idea that when an authority certifies a binding, it should provide an amount of insurance that the binding is correct. This insurance may be regarded as an attribute of each directed edge in the graph. Then a natural property of the graph is the minimum capacity cut set in the graph. A cut set is a set of edges that if removed from the graph leaves no path from the source to the target binding. A cut set has minimum capacity if the sum of the values on all of its edges is less than or equal to the sum on any other cut set. This metric has the advantage that its meaning is extremely clear in commercial terms. It is a lower bound on the amount of money to which one will be entitled if the target binding is found to be erroneous and one has to collect the insurance. This paper combines a searching analysis of previous work (including the authors' own) with thoughtful general principles to guide future work and a particular, well-motivated proposal. It also seems likely that the same principles will apply to trust evaluation problems other than validating certificates. Automated Analysis of Cryptographic Protocols J. Mitchell, M. Mitchell, and U. Stern (Stanford University) Cryptographic protocols are (generally short) sequences of messages involving cryptographic operators such as secret or public key encryption or signature schemes. 
Cryptographic protocols are used to authenticate one party to another in a network; to agree on a key; or to communicate some other value that will serve as a shared secret. Despite their apparent simplicity, cryptographic protocols are remarkably slippery, and published protocols may be considered sound for a decade or more before disastrous flaws are discovered. Several analysis approaches are available for avoiding this problem, one of which is to model the protocol and its attacker as a finite state system, and to systematically explore the states in the system to discover whether flaws of various kinds arise in accessible states. The authors apply a tool called Murphi to a number of example protocols with encouraging results. This line of work is conceptually quite similar to other on-going efforts, although the Murphi tool is claimed to be more efficient than its competitors. Deniable Password Snatching: On the Possibility of Evasive Electronic Espionage A. Young and M. Yung (Columbia University) Suppose that you could install a program on a shared computer that would monitor keystrokes. Obviously, you could collect other people's passwords and later impersonate them on the system. Could you also do so without disclosing the fact that you were collecting this information? Well, yes, this paper argues that, using public key cryptography, you can, and it provides a particular scheme that accomplishes this goal. Public key cryptography allows one to store the passwords to disk without anything on disk or in the recording program disclosing that these data are in fact encrypted passwords. The paper, like the same authors' paper last year on cryptovirology -- combining cryptography with software viruses to provide a new mechanism for extortion attacks -- is written with verve and a sense of mischief. 
However, someone familiar with what can be done with modern cryptography would have been likely to predict that the authors' conclusion would be true, and that various schemes would suffice to achieve their goal. Number Theoretic Attacks On Secure Password Schemes Sarvar Patel (Bellcore) In this paper, the author explores a number of attacks on various forms of the Encrypted Key Exchange (EKE) protocol invented by Bellovin and Merritt. EKE is intended to allow parties to agree on a key or some similar secret, assuming that they already share a secret password P, which is used to encrypt an initial value e. This raises a special problem because passwords are much more predictable than other sorts of secrets. As a consequence, password-based security is often vulnerable to a dictionary attack, in which the attacker uses all the members in a large collection (such as a dictionary) to decrypt an observed message P(e), and looks for a recognizably correct result. EKE is designed to use the password only in contexts where the attacker will not be able to recognize whether the result of a trial decryption is correct, for instance because any number e is a possible plaintext value. However, Patel shows that this is not sufficient, and that many variants of EKE will succumb to a more active sort of dictionary attack. This sort of attack arises because the responder in the various forms of EKE returns a value of the form P(f(e)) when presented with P(e), where f preserves some number-theoretic properties. Thus, an attacker may send a value x and receive in response a value x', which the responder computes by decrypting x using P, applying f, and encrypting the result with P. Now the attacker can consider all the passwords in the dictionary, decrypting x and x' and checking whether they have the number-theoretic relation preserved by f. 
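The winnowing this enables can be simulated. The parameters below are invented, and the number-theoretic test is replaced by a coin flip with the same statistical effect: the true password always survives each exchange, while each wrong candidate survives only about half the time.

```python
# Simulation of the dictionary-winnowing effect of Patel's active attack.
# This is not the actual EKE number theory -- the oracle test is modeled
# as a fair coin flip for every wrong password. Parameters are invented.
import random

random.seed(7)                    # deterministic toy run
N = 1 << 16                       # dictionary size (illustrative)
true_pw = 12345
candidates = set(range(N))

rounds = 0
while len(candidates) > 1:
    rounds += 1
    # One exchange with the victim: the attacker sends a value x, gets
    # back x', and discards every candidate password under which the
    # pair fails the preserved number-theoretic relation.
    candidates = {pw for pw in candidates
                  if pw == true_pw or random.random() < 0.5}

assert candidates == {true_pw}    # a unique password remains
# rounds is on the order of log2(N), i.e. a couple of dozen exchanges
```

The simulation makes the practical point concrete: even a 65,536-word dictionary is winnowed to a single password in a number of exchanges small enough to hide in normal traffic.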
Because statistically a fixed proportion of the passwords in the dictionary will happen to satisfy the test for any particular value x, over successive tests the number of surviving passwords shrinks geometrically, so a logarithmic number of trials suffices to identify a unique password. Some forms of EKE can be repaired so that this form of attack no longer succeeds. It was pointed out by a member of the audience that a special aim of the EKE group of protocols was to prevent off-line attacks, and to force any attack to involve active responses by the victim. Patel's attacks are not off-line, so they undermine the claims of EKE less than might be thought. However, Patel emphasized that the number of exchanges in which the attacker needs the cooperation of the victim is quite small, in realistic cases on the order of a couple of dozen. Hence the attacks are practically quite difficult to defend against. The paper exemplifies the importance of combining protocol analysis with number-theoretic reasoning; keeping this connection in mind should provide many fascinating headaches for the designers of security protocols. 2. Network Security Issues Filtering Postures: Local Enforcement for Global Policies Joshua D. Guttman (MITRE) In this paper, the author (also the present reviewer) describes a simple language for expressing global access control policies. These policies are designed to be of a kind that filtering routers are capable of enforcing. An algorithm is then introduced to determine what role the individual routers in a (possibly complex) network topology should play in order jointly to enforce a global policy. A related algorithm determines whether a proposed division of access checking among the routers will correctly enforce a global policy, and if not, exactly what violations may arise. Analysis of a Denial of Service Attack on TCP Christoph L. Schuba, Ivan V. Krsul, Markus G. Kuhn, Eugene H.
Spafford, Aurobindo Sundaram, and Diego Zamboni (Purdue University) A recently observed attack on Internet Service Providers (ISPs) is the so-called SYN-flooding attack. This attack exploits a characteristic of the TCP protocol for setting up connections, namely the three-way handshake. When a source wants to set up a connection with a destination, it sends a datagram with just the SYN bit in the TCP header set; the destination responds with a datagram in which both the SYN and ACK bits are set. Normally, the source then responds with a datagram in which just the ACK bit is set. This datagram uses a sequence number following that given in the destination's SYN-ACK datagram. To check that an ACK datagram belongs to a handshake in progress, the destination machine must maintain state for the incomplete ("half-open") connection. The SYN-flooding attack exploits this by sending a large number of SYN datagrams, but no ACK datagrams, so that the destination machine must maintain state for a large number of half-open connections. Since the timeout period for ACK datagrams is long (e.g. a minute), it is relatively easy to exhaust the memory the destination host has available for new connections. Legitimate users are then unable to establish connections to obtain service from the host under attack. The authors describe several potential approaches to defending against SYN-flooding. One could tinker with configuration parameters, or buy more memory so that more half-open connections can be stored. One could try to protect oneself via a firewall. The firewall could be a relay, transferring datagrams from a TCP connection from the external source to another connection with the internal destination. Alternately, the firewall could be a semi-transparent gateway, generating the ACK datagram to complete the three-way handshake, and sending a RESET datagram if a real ACK datagram from the source is not observed.
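The per-connection state at the heart of both the attack and these defenses can be sketched as follows. The capacity and timeout values are illustrative, not taken from any TCP implementation.

```python
# Sketch of the state a TCP endpoint keeps per half-open connection,
# and why a SYN flood exhausts it. Values are illustrative only.
BACKLOG = 128          # slots available for half-open connections
TIMEOUT = 60           # seconds before a half-open entry is reclaimed

half_open = {}         # (src, sport) -> time the SYN-ACK was sent

def on_syn(src, sport, now):
    if len(half_open) >= BACKLOG:
        return "dropped"               # legitimate clients are locked out
    half_open[(src, sport)] = now      # send SYN-ACK, remember the state
    return "syn-ack"

def on_ack(src, sport):
    # Third step of the handshake: the connection completes, state freed.
    ok = half_open.pop((src, sport), None) is not None
    return "established" if ok else "reset"

def expire(now):
    for key in [k for k, t in half_open.items() if now - t > TIMEOUT]:
        del half_open[key]

# The attacker sends SYNs from forged addresses and never ACKs:
for i in range(BACKLOG):
    on_syn(f"10.0.0.{i % 250}", 40000 + i, now=0)
assert on_syn("legit.example", 1234, now=1) == "dropped"
expire(now=61)                          # only after the long timeout...
assert on_syn("legit.example", 1234, now=61) == "syn-ack"
```

The long timeout is the crux: state is freed far more slowly than forged SYNs arrive, which is exactly the asymmetry that active responses like a spoofed RESET or a completing ACK are meant to remove.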
But the authors' main proposal is a form of active monitoring called synkill. In this approach, a workstation running synkill monitors the network for SYN packets. If a SYN packet has a source address that is impossible or considered suspicious based on past behavior, then the synkill workstation immediately emits a RESET for this connection, thereby allowing the destination to free its resources immediately. Otherwise, it emits an ACK packet so that the connection moves from the half-open state to the fully open state. If a real ACK is not observed, synkill will later send a RESET. The synkill program also monitors which addresses complete their connection requests normally, so that it learns which source addresses to treat with well-earned suspicion. Anonymous Connections and Onion Routing Paul F. Syverson, David M. Goldschlag and Michael G. Reed (Naval Research Laboratory) In this paper, the authors give a more complete account than previous papers of their "onion routing" mechanism. Onion routing is intended to provide anonymous TCP connections. The intent is to ensure that neither eavesdropping nor traffic analysis will disclose who is connected to whom, nor what they are saying. Since the onion routing network offers a mechanism for making secure socket connections, this mechanism will support many different applications. The essence of the idea is to have a set of devices called onion routers available on the Internet. Onion routers maintain long-term connections between themselves. These connections are used to transfer data for users or applications. The goal is to ensure that user data cannot be traced through the network, whether by passive monitoring or even by compromising a small number of onion routers. A particular user connection will traverse a number of different onion routers (currently five), in a pattern determined at the time that the connection is established.
By contrast, the underlying IP datagrams may traverse many more IP routers in the course of getting onion-routed data through the network. The first onion router in an onion connection serves as a proxy for the initiator of the onion connection; the last serves as a proxy for the recipient. When user data is transferred through an onion connection, the data is first tagged with the address of the intended recipient and then encrypted using a key that will be shared only with the last onion router. This object is then tagged with the address of the last onion router and encrypted using a key that will be shared only with the second-to-last onion router. This layered process proceeds backwards through the intended onion route. Thus, only the initiator's proxy needs to know the whole route. The successive routers will know only to which router they forward a cell. Because keys need to be established for each new connection, a special object called an onion must be passed down the path that the connection will take. The routers along this path extract their keying information from this onion. RSA is used to allow the layers of the onion to be shared only with the intended router. An MBone Proxy for a Firewall Toolkit Kelly Djahandari and Dan Sterne (Trusted Information Systems) The MBone is a network of multicast routers in the Internet; it is used to support multimedia broadcasts and conferencing using the IP multicast mechanism. However, MBone traffic can cause security problems, and therefore most firewalls do not pass MBone datagrams. A multicast group is determined by an IP address in a particular reserved range of IP addresses. When a host joins a multicast group, the MBone arranges to forward all datagrams with that destination address (call it x) to the host. The key security problem with the MBone is that these datagrams may be directed to any port number. Suppose the host offers a service such as NFS.
If a firewall allows MBone datagrams to enter, an attacker could synthesize datagrams addressed to the NFS port on host x. When these datagrams were delivered, they could exploit well-known NFS vulnerabilities. The authors describe an MBone proxy that operates on a firewall machine. This proxy will receive requests from internal hosts to join MBone groups. When the proxy receives such a request, it arranges with the MBone to join the multicast group. Whenever a datagram arrives for that group, the proxy unicasts it to the requesting internal host. The internal host and the proxy must agree on a single port to be used for these unicast packets. If several internal hosts request participation in the same MBone group, then the proxy will unicast separate UDP datagrams to each. In this way, MBone services can be made available to internal hosts, without MBone datagrams needing to be delivered to them. An application wrapper is needed to support this service; the wrapper handles the initial set-up conversation with the proxy before spawning the standard (unaltered) MBone application software. When the MBone application exits, it presumably informs the proxy so that the proxy can tear down its membership in this group, in case this is the last internal user to exit the conference. 3. Database Security Issues The Design and Implementation of a Multilevel Secure Log Manager Vikram R. Pesati, Thomas F. Keefe and Shankar Pal (Penn State University) This remarkably detailed paper studies how to provide a log manager for multilevel secure database management systems. The essential problem is that a database log manager has very stringent performance requirements because the changes written by any transaction must be logged before it commits. The log data is needed in case of system failure. As a consequence, the authors want to adopt strategies for controlling the way that the log records are flushed onto disk, which gives the paper a distinctively low-level focus. 
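The write-ahead constraint driving those stringent performance requirements can be sketched in a few lines; all of the structures below are invented for illustration, not taken from the paper's log manager.

```python
# Sketch of the write-ahead logging discipline a database log manager
# must meet: a transaction's log records are forced to stable storage
# before the transaction is allowed to commit. Illustrative only.
stable_log = []        # stands in for the log on disk
log_buffer = []        # in-memory log records, not yet flushed

def write(txn, record):
    log_buffer.append((txn, record))   # every change is logged first...

def flush():
    stable_log.extend(log_buffer)      # ...then forced to disk...
    log_buffer.clear()

def commit(txn):
    # ...and commit may only be acknowledged once the flush is done.
    # This flush sits on the critical path of every transaction, which
    # is why the flushing strategy dominates log-manager performance.
    if any(t == txn for t, _ in log_buffer):
        flush()
    return "committed"

write("T1", "x := 5")
assert commit("T1") == "committed"
assert ("T1", "x := 5") in stable_log  # recoverable after a crash
```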
It is hard to evaluate the significance of the algorithms for flushing records to disk, and the timing data that are presented, without a clearer conception of how systems incorporating this technology will realistically be used. This paper has the most complicated diagram in the proceedings, a Petri net with 27 places, 14 transitions, and 58 arrows between places and transitions. I particularly enjoyed one delicate touch: 32 additional arrows with slightly differently shaped arrowheads have been added to the diagram to connect labels to the places they describe.

Catalytic Inference Analysis: Detecting Inference Threats due to Knowledge Discovery
John Hale and Sujeet Shenoi (University of Tulsa)

This paper concerns the database inference problem. The authors point out that many things may be inferred from the knowledge actually contained in databases, using common knowledge as supplementary premises. Previous researchers (Su and Ozsoyoglu) cited by the authors show that disclosures due to inference in multilevel databases are common, and that discovering how to eliminate the disclosures is an NP-complete problem. The present paper differs in two main ways, if I understand it correctly. First, it applies the terminology of fuzzy logic to the problem, presumably with the intention of modeling the fact that in many cases, the common knowledge that serves as a supplementary premise is not a universal implication, but a correlation with high frequency. Second, it offers a particular algorithm for detecting inference compromises. I assume that repairing these compromises is algorithmically at least as hard as it would have been if the supplementary premises had been non-fuzzy. My impression is that most readers will find the terminology of fuzzy sets (and the rhetoric of fuzzy sets) to be an obstacle to understanding.
Probably the same or similar points could have been made in a more widely understandable form had they been expressed using traditional probabilistic inference.

Surviving Information Warfare Attacks on Databases
Paul Ammann (George Mason University), Sushil Jajodia (George Mason University and MITRE), Catherine D. McCollum and Barbara T. Blaustein (MITRE)

This paper, three of the authors of which are colleagues of the present reviewer, consists of three somewhat contrasting portions. The first presents a polished introduction to the issues of information warfare attacks on databases, along with some approaches to preventing, detecting, and containing such attacks, or recovering from them. The second portion describes an approach to implementing "damage markings" for maintaining the usefulness of a database despite some damage from an information warfare attack. Some such mechanism is needed because it may be operationally impossible to do without the database, even if it is known that some portions of it have sustained an attack. One goal is to sequester data that is irreparably damaged. A second goal is to allow some uses of data that is damaged but that can still play a useful role. Data may also be partially repaired, meaning that the values it contains may be incorrect, but they are approximately right. These sorts of data are called red (irreparably damaged), off-red (damaged but useful), and off-green (approximately correct); undamaged data is called green. These colors are treated as damage markings, and are assumed to be maintained by the database management system as metadata. The paper then describes a weakened notion of database consistency that may be applied to databases in the face of some damaged data, applying effectively to the green and off-green portions of the database. The notion of weak consistency then serves as a springboard from which the authors define when a transaction preserves weak consistency.
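The damage markings and the scope of weak consistency can be made concrete with a small sketch. The data and function names below are invented for illustration; the paper's formal definitions are considerably richer:

```python
# Damage markings from the paper: green (undamaged), off-green (approximately
# correct), off-red (damaged but still useful), red (irreparably damaged).
MARKINGS = ("green", "off-green", "off-red", "red")

def weakly_consistent_view(db):
    """Return the portion of the database over which weak consistency is
    evaluated: the green and off-green items. (A sketch of the idea only,
    not the paper's formal definition.)"""
    return {k: v for k, (v, mark) in db.items()
            if mark in ("green", "off-green")}

db = {
    "balance":  (100, "green"),
    "estimate": (42,  "off-green"),
    "audit":    (7,   "off-red"),   # usable with care, but excluded here
    "secret":   (0,   "red"),       # sequestered
}
print(sorted(weakly_consistent_view(db)))  # ['balance', 'estimate']
```

A transaction would then preserve weak consistency, roughly, if it keeps this restricted view consistent even while red and off-red items remain damaged.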
The authors also describe a particular protocol for manipulating the damage markings, which they call the Normal Transaction Access Protocol, and demonstrate that this protocol preserves weak consistency. In addition to normal transactions, the authors also introduce a notion of countermeasure transactions; these are transactions that examine the damage markings in a database and try to detect damage that may have been done, or to restore the database to a more stable state. The third portion describes a separate algorithm for making snapshots of a database; this is potentially useful in deciding how to recover from an information warfare attack, but seems only loosely integrated with the remainder of the paper. The paper initiates a new area, concerning systematic ways to analyze the effects of information warfare attacks and to recover from the attacks. We are likely to see many papers in this area in the coming years. Nevertheless, I think that some of the results in this paper should be regarded as tentative or preliminary. For instance, the paper's Normal Transaction Access Protocol is not the only candidate that could be imagined for this role. Many readers will be surprised that it does not treat the damage markings in a way consistent with the Biba integrity model, ensuring that information does not flow from more damaged into less damaged objects. Thus, we will need a sharper understanding of exactly what security (or integrity) goals the paper's protocol is serving. The paper's protocol also appears to have some counterintuitive consequences as to which transactions would be consistency-preserving.

4. Intrusion Detection and Prevention

How to Systematically Classify Computer Security Intrusions
Ulf Lindqvist and Erland Jonsson (Chalmers University of Technology, Sweden)

The authors present a taxonomy of intrusions; it elaborates on a taxonomy proposed by Neumann and Parker in 1989.
Neumann and Parker distinguished external misuse, hardware misuse, masquerading, bypassing intended controls, active misuse, passive misuse, etc. Lindqvist and Jonsson further subdivide by intrusion technique (e.g. password capture or spoofing a privileged program) and intrusion results (e.g. disclosure, or unauthorized service, or erroneous output to authorized users). They illustrate their taxonomy by classifying the intrusions perpetrated by the students in an undergraduate course on computer security. Lindqvist and Jonsson begin their article by citing their countryman Linnaeus on the value of classification. Linnaeus's classification in botany has largely stood the test of time, presumably because it concentrates narrowly on a single botanically basic aspect of plants, namely the anatomy of the flowers and fruits, which determines how they reproduce. It seems to me that for a taxonomy in intrusion detection to play a corresponding scientific role, it must also seize on a narrow but central aspect of the subject. The Neumann/Parker classification and the authors' extensions to it seem not yet to have the uniformity that a robust, explanatory taxonomy should. A further criterion of success for a proposed taxonomy, I think, will be whether it can lead to uniform answers to questions such as how to detect attacks in a particular taxonomic category; how to prevent attacks in a particular taxonomic category; and how to respond to attacks in a particular taxonomic category.

Execution Monitoring of Security-Critical Programs in a Distributed System: A Specification-based Approach
Calvin Ko (Trusted Information Systems), Manfred Ruschitzka and Karl Levitt (University of California Davis)

This paper appears to summarize the Ph.D. thesis of the first author. It develops an approach called specification-based detection. This approach is based on the observation that many attacks proceed by abusing a known collection of privileged programs.
These privileged programs, which under Unix generally execute as the superuser root, are known to have various flaws. When fed mischievously chosen data or run in odd combinations or at the wrong moment, they allow the attacker to capture some unintended privilege. Specification-based detection is based on the idea that, although it may not be easy to identify or fix all such flaws, it is frequently easy to characterize the kinds of actions that the program is supposed to engage in, and to constrain the order in which they can occur. An attack can (apparently) occur through such a program only when it behaves in a way that contravenes the specification of its normal behavior. The paper describes a language in which these specifications may be written, and a system to monitor program execution to detect whether the process is staying within its specification. The paper causes this reader some terminological trouble. Also, it is difficult to understand the notion of parallel execution grammars from the presentation given in Section 4. These grammars are apparently intended to be analogous to context-free grammars, but they appear (as far as I can tell) to form a Turing-complete notation. Perhaps it would be more understandable -- and equally good as a basis for the execution monitor -- to use an extension of a process algebra such as CSP or CCS, which are well-understood notations suited to specifying sets of traces.

A Secure and Reliable Bootstrap Architecture
William A. Arbaugh, David J. Farber and Jonathan M. Smith (University of Pennsylvania)

Many systems have a security gap at the very start, when the system is bootstrapping. The hardware that initiates the bootstrap typically cannot ascertain whether the software to which it passes control is the right software or not.
In operating systems (for instance, the usual PC operating systems) that do not have strong access control safeguards, a virus or other attack can modify the system itself, so that after the next bootstrap, any other security mechanisms may be disabled. The authors point out a second problem with ensuring a secure bootstrap process: One would like a secure bootstrap process to pass control from one level of abstraction to the next, validating each level first. But in fact the standard PC boot process has a star-like structure, in which the system BIOS passes control to expansion ROMs, which pass control back to the system BIOS. The authors' approach to the secure bootstrap problem is to break the boot process into a succession of stages. Each stage verifies a cryptographic checksum on the software implementing the next stage before passing control to it. As a consequence, if the hardware that implements the first stage has not been compromised (e.g. because an attacker physically replaced the motherboard), then a cryptographic guarantee ensures that the final running operating system is the expected one. The authors describe a mechanism in which a PROM card supplements a portion of the PC's BIOS to start the bootstrap process securely. They have made small changes to the bootstrap logic to allow a more linear flow of control than the standard one. In case of integrity failures, the PROM card supports a secure bootstrap from a remote networked server. It seems to me that assuring the bootstrap process is an important practical issue, and the authors have done a service in developing an architecture to approach the problem.

5. Security Theory

Secure Software Architectures
Mark Moriconi, Xiaolei Qian, R. A. Riemenschneider (SRI) and Li Gong (JavaSoft)

In this paper, the authors apply their work on formally specifying software architectures to the case of software security architectures.
The architecture specification method that the authors have developed, and embodied in a specification language named SADL, is intended to formalize the common practice of representing an architecture through box and arrow diagrams. In these diagrams, major components (at some level of abstraction) are displayed as boxes, and the paths of communication among them are represented as arrows. Such diagrams are intended as a compact reminder of what can affect what, and they are often the basis for elaborate annotations of the kinds of data that may be communicated, the style of communication (RPC, messages, pipes, etc.), and the semantics of the components when necessary. In SADL, these same kinds of information are represented logically, using abstract data types to characterize the data involved and logical axioms to characterize the communication paths and the semantics of the components. In previous papers, the authors have presented examples (not related to security) and have developed a method based on faithful theory interpretations to show that one architecture correctly refines another. An important ingredient in the approach appears to be that the theories involved are semantically rather shallow. That is, it is distinguished from more traditional formal specification and verified refinement in that this work emphasizes the box-and-arrow level more, whereas much previous work assumes a closer focus on the behavioral properties of the components. I believe that the authors consider this semantic shallowness an advantage, because it will be easier to collect the information needed to construct the specification and easier to read and understand the specification afterwards. Thus, these specifications seem potential candidates for guiding the design and maintenance of systems; they may succeed in a way that might be impossible for more detailed specifications. On the other hand, the limitation to the box-and-arrow level of description also exacts a price. 
Although this paper was presented in a session called "Security Theory," it does not contain any new theoretical content about security; instead, it applies axiomatic theories to describing security. The examples in the paper use a very traditional access control model of multi-level security, and a reader will wonder whether this decision is forced by the fact that subtler or more informative models of security may require a more detailed specification of the components' behavior in order to draw any system-level conclusions about security. The presentation in the paper would be easier to follow if more detail were available, particularly in Section 6, "Relating the Secure Distributed Transaction Processing Architectures."

A General Theory of Security Properties and Secure Composition
A. Zakinthinos and E.S. Lee (Cambridge University, U.K.)

This ambitious paper sets out to characterize the set of all "possibilistic" security properties, and to identify the most permissive member of the set that "does not allow any information to flow from high level users to low level users." A possibilistic security property is one that does not refer to the probability that a system will undergo a particular event, but only the possibility (or impossibility) of its doing so. The key idea in the paper is to characterize the "type" of property that can be a security property. Suppose that the behavior of a system is represented as a trace, which is to say a sequence of events, and each event has a recognizable security level (either "high" or "low"). Then the system itself may be characterized by its set of traces, which is all the sequences of events which it would be possible for the system to engage in. Two traces are low-level equivalent if they reduce to the same sequence if all the high events are omitted. The low-level equivalent set of a trace (relative to a system) is the set of all traces (of that system) that are low-level equivalent to it.
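These definitions are easy to make concrete. Representing a trace as a sequence of (level, event) pairs and a system as a list of traces, a rough sketch (the representation is mine, not the paper's notation):

```python
def purge_high(trace):
    """Project a trace onto its low-level events."""
    return [e for lvl, e in trace if lvl == "low"]

def low_equivalent(t1, t2):
    """Two traces are low-level equivalent when their low projections agree."""
    return purge_high(t1) == purge_high(t2)

def low_equivalent_set(trace, system):
    """All traces of `system` that are low-level equivalent to `trace`."""
    return [t for t in system if low_equivalent(t, trace)]

# A tiny system whose first two traces differ only in high events:
system = [
    [("high", "h1"), ("low", "a"), ("low", "b")],
    [("low", "a"), ("high", "h2"), ("low", "b")],
    [("low", "a")],
]
print(len(low_equivalent_set(system[0], system)))  # 2
```

A security property in the authors' sense is then a predicate applied uniformly to each such low-level equivalent set, which is exactly the shape of Definition 9 discussed next.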
The authors state that a property P of systems is a security property just in case there is a property Q of sets of traces such that a system has property P just in case the low-level equivalent set of every trace (of that system) has Q (Definition 9). Clearly various familiar security properties such as non-interference can be brought to this form. The paper then identifies a particular "perfect security property," claiming that it is the weakest security property that prevents downward information flow. However, the paper relies on an "event system" model that considers only traces and makes a static distinction between inputs and outputs. In this it follows a tradition dating back to McCullough's papers of a decade ago, rather than using the more expressive frameworks offered by process algebras such as CCS and CSP. The definition of security property that the authors propose (Definition 9, stated as a biconditional) may be a necessary condition for a security property, but is clearly not sufficient. Consider for example the stipulation that the set of low-level equivalent traces should have cardinality greater than 17, which is of the required form but does not seem to be a security property. Moreover, there is no definition of "information flow from high to low," so that a reader is not quite sure exactly what Theorems 1 and 2 are intended to mean. The proofs use puzzling notions such as an event being "only dependent on high level events" or "dependent on low level events." This seems too unreliable a vocabulary for formal proofs. The bibliography misses a number of important papers. The CCS-based work of Gorrieri and Focardi is mentioned but hardly evaluated. The paper would have been stronger if previous efforts had been more carefully correlated with the authors' point of view. The paper does, however, make a contribution to sorting out the adequacy of McLean's Selective Interleaving Functions as a way to define the class of security properties.
It seems to me that some of the material in this paper was already known by 1990. For instance, the fact that security properties are determined by certain characteristics of the low-level equivalent set was appreciated, although there doesn't seem to have been a term invented for it. Sutherland's "hidden" and "view" operators were close to this. Some other material is new, but seems to need a more rigorous treatment. The paper would also be strengthened by a richer explanation of the authors' modeling decisions. Why are event systems the right model? Why are possibilistic security properties the right properties? What inner connection is there between the proposed "perfect security property" and the notion of "flow from high to low"?

6. Security Policies and Authorization

An Authorization Scheme for Distributed Object Systems
V. Nicomette and Y. Deswarte (LAAS-CNRS & INRIA, France)

There are three main ingredients to this paper. First, the authors summarize an architecture for secure distributed systems in which authorization is divided depending on locality. When an access involves only local subjects and objects, the security kernel of the local machine may decide whether the access is permitted. For access that involves multiple hosts, a centralized authorization server must be consulted. Previous work of the authors and their colleagues has investigated how to make the centralized authorization server intrusion resistant and fault tolerant. Second, an approach to "access right management" is proposed in which the right to execute a method is governed by symbolic rights. These symbolic rights may govern the individual object involved in the method invocation, or else some group or class of which the object is an instance. This allows a multi-way check on access depending on the method, the calling subject, and all of the objects given as parameters. A supplementary distinctive ingredient is the concept of a voucher.
A voucher is a right that allows one entity e to request a second entity e' to invoke a particular method on particular resources on its behalf. The authors comment that this improves a previous notion of proxy in that a voucher may be granted to e for delegation to e' even when neither e nor e' on its own would have the authority to take the action. Thus, the voucher notion may be used to give a tighter implementation of the least privilege principle. Third, this approach to access control is instantiated to define a multi-level security policy suited to object-oriented systems. The authors claim that it prevents information flow, while permitting a wider range of actions than a mechanical application of the Bell-LaPadula principles. The authors cite previous, more specialized papers of theirs describing each of the three main ingredients in more detail. This paper is distinctive in that it conveys the sort of system that is possible if the three ingredients are all present.

A Logical Language for Expressing Authorizations
Sushil Jajodia (George Mason University), Pierangela Samarati (Universita' di Milano) and V. S. Subrahmanian (University of Maryland)

In this paper the authors develop a notation, based on Horn clauses, for reasoning about authorization. They claim that it allows a variety of different access control policies to be represented, which seems to be true. They also claim that all existing access control systems support only a single access control policy, which seems to be false; the Nicomette/Deswarte paper just described is the most convenient counterexample, though not the only one. In this framework, a system security officer stipulates positive or negative authorization facts for users, groups, and roles using a predicate "cando" (pronounced "can do"). A second predicate "dercando" (pronounced "derived can do") is inductively defined using "cando," group membership, etc.
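One simple propagation rule of the kind "dercando" captures -- a user inherits authorizations granted to its groups -- can be sketched as a small fixpoint-style computation. The rule and the data below are illustrative only; the paper's logical language is far more general:

```python
# Base authorizations: (subject, object, action, sign), sign '+' or '-'.
cando = {("alice", "f", "read", "+"),
         ("staff", "f", "write", "+"),
         ("bob",   "f", "write", "-")}
member = {("alice", "staff"), ("bob", "staff")}   # (user, group) pairs

def dercando(cando, member):
    """Derived authorizations: a user inherits every authorization granted
    to a group it belongs to (one propagation rule of many the paper's
    language can express)."""
    derived = set(cando)
    for user, group in member:
        for s, o, a, sign in cando:
            if s == group:
                derived.add((user, o, a, sign))
    return derived

d = dercando(cando, member)
# bob now holds both '+' (inherited via staff) and '-' (direct) for write:
signs = {sign for s, o, a, sign in d if (s, o, a) == ("bob", "f", "write")}
print(sorted(signs))  # ['+', '-']
```

Resolution rules (for instance "denials take precedence") would then map such conflicting derived facts to a single decision expressed with the "do" predicate.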
Conflicts may arise, in the sense that the same user receives both a positive and a negative authorization for the same action on the same object. Therefore resolution rules are used; these derive conclusions about authorizations, expressed using a new predicate "do," on the basis of literals involving the previous predicates. The reader should be careful not to assume that there will be no conflicts involving "do." Both a positive and a negative authorization expressed in terms of "do" may be derivable for a given user, object, and action. I believe the authors' point is that while one must expect that conflicts in "derived can do" facts will occur, a good set of resolution rules should have the property that conflicts are not transmitted to the "do" predicate. Integrity rules that allow an "error" predicate to be inferred are also available, and are intended to allow a compact statement of global properties of the other rules, for instance that there are no conflicts in the "do" predicate. The authors illustrate their framework by showing rules that express various popular access control policies, such as separation of duty policies. A natural question concerns the computational complexity of evaluating the various kinds of atomic formulas. The authors refer to a different paper, published in a data management conference, stating "This access control checking can be performed in linear time w.r.t. the number of rules" in an authorization specification. The cited paper says only that this result "follows from well-known results on the data complexity of stratified datalog programs." Claims of this kind seem apt to be fragile, though, as adding a little extra notation to the specification language may have drastic consequences for a checking algorithm.
Providing Flexibility in Information Flow Control for Object-Oriented Systems
Elena Ferrari, Pierangela Samarati and Elisa Bertino (Universita' di Milano) and Sushil Jajodia (George Mason University)

Information flow security policies are more robust than policies based on discretionary access control. However, they are also more stringent: When information flow policies are straightforwardly implemented, they prohibit activities that do not compromise semantically unreleasable information. Naturally, an information flow policy cannot simply stipulate that an activity should be permitted if the semantics of information it discloses is releasable. This paper is one in a group by the authors and others that look for defensible ways to loosen information flow policies without disclosing sensitive information. It uses two main ingredients. One is to have a finer appraisal of when information can flow from one action to another. In object-oriented systems, one method invocation may precede another without having the ability to pass information to the latter, if the former is an asynchronous invocation, and the process does not block to receive the reply message until after the call to the latter. The second main ingredient is the idea of a waiver. A waiver may be attached to a method in a particular object to indicate that the information flow policy is to be waived for this invocation. The waiver may be a reply waiver, which stipulates that information may be returned to the caller even though the information flow policy would not otherwise have permitted it. Or a waiver may be an invoke waiver, indicating that potential information flow from the caller (through the parameters to the method, or the fact of the call) should be ignored, so that the object invoked may still write to low classification objects.
Waivers are apparently a fine-granularity way to specify that a particular method is trusted in a certain way, because the system designer understands that its semantics are not in fact disclosing sensitive information. The paper also presents a message filter algorithm that uses these ideas to provide a more flexible reference monitor than traditional information flow policies would support.

Analyzing Consistency of Security Policies
Laurence Cholvy and Frederic Cuppens (ONERA CERT, France)

In this paper the authors consider deontic logic as a method for specifying security policies, following previous work of their own and of Glasgow and MacEwen. A deontic logic is a logic in which an "obligatory" operator may be applied to a sentence. For instance, if "Guttman reads file f" is a sentence of the language, then "It is obligatory that Guttman read file f" is also a sentence of the language. Permission (etc.) may also be expressed; "It is permissible that Guttman read file f" is just "It is not obligatory that Guttman not read file f". A security policy on this view is a set of sentences about which actions users (or user roles) are permitted, forbidden, or obligated to take. The authors call such a policy a "regulation." The main problem that the authors address is the problem of normative conflicts. A normative conflict arises when someone is both permitted and forbidden to take the same action, or when someone faces a "moral dilemma" because they are obligated to do incompatible actions. The method that the authors recommend for checking for normative conflicts is to translate the regulation from the deontic logic into predicate logic with an "obligatory" predicate, rather than a sentential operator. Presumably the predicate is a property of events. The authors claim that a conflict exists in the deontic logic if and only if, roughly speaking, the translation of the regulation has new consequences in which the "obligation" predicate does not occur.
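The simplest kind of normative conflict -- the same action both permitted and forbidden for one subject -- can be checked directly. The propositional encoding below is mine, not the authors' translation method, which works in a full deontic logic:

```python
# A regulation as a set of (modality, subject, action) triples; the modality
# is "obligatory", "permitted", or "forbidden". (Illustrative encoding only:
# it ignores obligations that conflict with each other, and everything else
# a genuine deontic logic can express.)
regulation = {
    ("permitted", "guttman", "read f"),
    ("forbidden", "guttman", "read f"),
    ("obligatory", "root", "rotate logs"),
}

def normative_conflicts(reg):
    """Find actions that are both permitted and forbidden for one subject."""
    permitted = {(s, a) for m, s, a in reg if m == "permitted"}
    forbidden = {(s, a) for m, s, a in reg if m == "forbidden"}
    return permitted & forbidden

print(normative_conflicts(regulation))  # {('guttman', 'read f')}
```

The authors' translation-based test is meant to catch conflicts of this kind, and subtler ones, within full predicate logic rather than this finite enumeration.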
In essence, there are normative conflicts if the evaluative statements would have purely factual logical consequences. This is an interesting application of the so-called fact/value dichotomy, according to which moral evaluation should go beyond everything contained in the factual details of how the world stands. One would guess that this property (having new purely factual logical consequences) would be undecidable, since we are working here in the context of full predicate logic. The authors do not explain whether this is the case.

Acknowledgments

I am grateful to a number of friends and colleagues without whose advice this summary would have contained even more errors, irritating quirks, and unfounded opinions than it does now. My stubbornness is undoubtedly responsible for the ones that still remain. These friends and colleagues include Dale Johnson, Catherine McCollum, John McLean, Marion Michaud, Jonathan Millen, Jeffrey Picciotto, Peter Ryan, and John Vasak.

________________________________________________________________________
New Research Registry Entry
________________________________________________________________________

The following research registry entry was submitted April 3, 1997.

* Name: Dr. Ed Gerck
* E-mail: egerck@laser.cps.softex.br
* Title: Meta-Certificate
* Affiliation: Novaware
* Description: Define intrinsic and non-hierarchical secure certification methods for the global Internet. I am the Coordinator of the MCG, the Meta-Certificate Group. The MCG is a non-profit, open international group that represents a technical development forum for current security needs. Currently, the MCG has representatives from 19 countries. We have developed a new paradigm for certification, called intrinsic certification, which can considerably increase the security standards in the Internet. Intrinsic certification does not depend on trust, previous common knowledge, Public Key Infrastructure, CAs, TTPs and does not use Certificate Revocation Lists (CRLs).
The theoretical basis for the work can be seen at http://novaware.cps.softex.br/mcg/cie.htm
In particular, our work proves that current certification procedures are all extrinsic, as represented by X.509, PGP, etc., and suffer from basic flaws which cannot be solved -- such as CRLs and the dependence on trust.
* URL for further information: http://novaware.cps.softex.br
[couldn't reach this successfully 7/12 - DNS failure -- CEL]

________________________________________________________________________
New Reports available via FTP and WWW
________________________________________________________________________

[check next issue]

________________________________________________________________________
Interesting Links [new entries only]
________________________________________________________________________

[check next issue]

________________________________________________________________________
Who's Where: recent address changes
________________________________________________________________________

Entered 12 July 1997
Charles Pfleeger
ARCA Systems, Inc
6889 Boone Blvd, Suite 750
Vienna, VA 22182-2623
voice: +1 (703) 734-5611
fax: +1 (703) 790-0385
e-mail: pfleeger@arca.com

Entered 23 May 1997
Mike Reiter
AT&T Labs Research
Room A269
180 Park Avenue
Florham Park, NJ 07932-0971
Tel: (973) 360-8349
Fax: (973) 360-8809
e-mail: reiter@research.att.com

Entered 28 April 1997
Dr Jianying Zhou
Dept of Information Systems & Computer Science
National University of Singapore
Kent Ridge, S119260
Singapore
Tel: +65-7722911(O)
Fax: +65-7794580
e-mail: zhoujy@iscs.nus.edu.sg
WWW: http://www.iscs.nus.edu.sg/~zhoujy/

Entered 5 April 1997
William Pase
Object Technology International Inc.
2670 Queensview Drive
Ottawa, Ontario K2B 8K1
Canada
Tel: +1 (613) 820-1200
Fax: +1 (613) 820-1202
e-mail: bill_pase@oti.com
Web: http://www.oti.com

_______________________________________________________________________
Calls for Papers (new listings since last issue only -- full list on Web)
________________________________________________________________________

CONFERENCES

Listed earliest deadline first. See also Cipher Calendar.

* DIMACS'98 Workshop on Formal Verification of Security Protocols, DIMACS Center, CoRE Building, Rutgers University, September 3-5, 1997 (Abstracts are due June 16, 1997; full papers are due August 1, 1997). As we come to rely more and more upon computer networks to perform vital functions, the need for cryptographic protocols that can enforce a variety of security properties has become more and more important. Since it is notoriously difficult to design cryptographic protocols correctly, this increased reliance on them to provide security has become cause for some concern. This is especially the case since many of the new protocols are extremely complex. In answer to these needs, research has been intensifying in the application of formal methods to cryptographic protocol verification. The goal of this workshop is to facilitate this process by bringing together those who are involved in the design and standardization of cryptographic protocols, and those who are developing and using formal methods techniques for the verification of such protocols. To this end we plan to alternate papers with panels soliciting new paths for research. We are particularly interested in paper and panel proposals addressing new protocols with respect to their formal and informal analysis.
Other topics of interest include, but are not limited to:
- Progress in belief logics
- Use of theorem provers and model checkers in verifying crypto protocols
- Interaction between protocols and cryptographic modes of operation
- Methods for unifying documentation and formal, verifiable specification
- Methods for incorporating formal methods into crypto protocol design
- Verification of cryptographic API systems
- Formal definition of correctness of a cryptographic protocol
- Arithmetic capability required for proofs of security for number theoretic systems
- Formal definitions of cryptographic protocol requirements
- Design methodologies
- Emerging needs and new uses for cryptographic protocols
- Multiparty protocols, in particular design and verification methods
On-line conference information: registration form, accommodations, travel arrangements, and general conference information.

* SNDSS'98 The Internet Society Symposium on Network and Distributed System Security, Catamaran Resort, San Diego, California, March 11-13, 1998. (submissions due: August 1, 1997) [posted here 7/7/97] The symposium will foster information exchange between hardware and software developers of network and distributed system security services. The intended audience is those who are interested in the practical aspects of network and distributed system security, focusing on actual system design and implementation rather than theory. Encouraging and enabling the Internet community to apply, deploy, and advance the state of available security technology is the major focus of the symposium. A complete list of topics of interest is given in the call for papers. The program committee invites technical papers and panel proposals on topics of technical and general interest. Technical papers should be 10-20 pages in length. Panel proposals should be two pages and should describe the topic, identify the panel chair, explain the format of the panel, and list three to four potential panelists.
Each submission must contain a separate title page with the type of submission (paper or panel), the title or topic, the names of the author(s), organizational affiliation(s), telephone and FAX numbers, postal addresses, Internet electronic mail addresses, and, if there is more than one author, a single point of contact. The names of authors, affiliations, and other identifying information should appear only on the separate title page. Submissions must be received by August 1, 1997, and should be made via electronic mail in either PostScript or ASCII format. All submissions and program related correspondence (only) should be directed to the program chair: Matt Bishop, Department of Computer Science, University of California at Davis, Davis CA 95616-8562, Email: sndss98-submissions@cs.ucdavis.edu. Phone: +1 (916) 752-8060, FAX: +1 (916) 752-4767. Dates, final call for papers, advance program, and registration information will be available at the URL: http://www.isoc.org/conferences/ndss98

* ADC'98 9th Australasian Database Conference, University of Western Australia, Perth, Western Australia, February 2-3, 1998. (submissions due: August 15, 1997). [posted here May 13, 1997] The Australasian Database Conference series is an annual forum for exploring research, development and novel applications of database systems, particularly research into novel and emerging areas of database systems and technology. Authors should submit either four (4) copies of a paper to the address below or one (1) copy of a postscript version to adc98@cis.unisa.edu.au. Submitted papers should be original and no longer than 5,000 words, and both the submission and the camera-ready copy must be in A4 two-column format (no more than 10 pages in length, with 25mm margins, single spaced, in 10 point font). Papers must include the author's name, address, e-mail address, a 200 word abstract, and 3 to 6 keywords identifying the subject area.
Papers may be submitted to: John Roddick / ADC'98 / School of Computer and Information Science / University of South Australia / The Levels, South Australia 5095 / Australia / Telephone: +61 8 8302 3463 / Facsimile: +61 8 8302 3381 / Email: adc98@cis.unisa.edu.au.

* ETAPS'98 European Joint Conferences on Theory and Practice of Software, Lisbon, Portugal, March 30-April 3, 1998. (submissions due: October 6, 1997) [posted here 7/7/97] The European Joint Conferences on Theory and Practice of Software (ETAPS) is a new annual meeting covering a wide range of topics in Software Science which will take place in Europe each spring in the slot currently occupied by CAAP/ESOP/CC and TAPSOFT. ETAPS is a loose and open confederation of existing and new conferences and other events which aims to become the primary European forum for academic and industrial researchers working on topics relating to Software Science. The call for papers is now open for the five main conferences of ETAPS'98. See the ETAPS web page http://www.di.fc.ul.pt/~llf/etaps98/ for more details about the scope and submission instructions of each individual conference. Prospective authors who have no access to WWW should use the e-mail address given for each conference to obtain further information.
o Foundations of Software Science and Computation Structures (FoSSaCS): E-mail address: Maurice.Nivat@litp.ibp.fr
o Fundamental Approaches to Software Engineering (FASE): E-mail address: fase98@disi.unige.it
o European Symposium On Programming (ESOP): E-mail address: clh@doc.ic.ac.uk
o International Conference on Compiler Construction (CC): E-mail address: koskimie@cs.uta.fi
o Tools and Algorithms for the Construction and Analysis of Systems (TACAS): E-mail address: tacas98@fmi.uni-passau.de

* SEC'98 International Information Security Conference, Vienna and Budapest, Austria and Hungary, August 31-September 4, 1998.
(submissions due: January 16, 1998) [posted here 7/7/97] SEC '98 covers a wide range of security-related topics, addressing both the technical and the commercial aspects of information security. The conference is held within the framework of the IFIP World Computer Congress. Contact: Prof. Dr. Reinhard Posch; IAIK Klosterwiesgasse 32/I A-8010 Graz, AUSTRIA Tel: +43 316 8735510 Fax: +43 316 873520. e-mail: rposch@iaik.tu-graz.ac.at.

* IFIP WG11.3 Twelfth Annual IFIP WG 11.3 Working Conference on Database Security, Porto Carras Complex, Chalkidiki, Greece, July 15-17, 1998. (submissions due: March 10, 1998). [posted here: 7/7/97] The conference provides a forum for presenting original unpublished research results, practical experiences, and innovative ideas in database security. Papers and panel proposals are solicited. The conference is limited to about forty participants to allow ample time for discussion and interaction. Preliminary conference proceedings will be distributed to participants; revised papers and an account of the discussions at the meeting will be published by IFIP as the next volume in the ``Database Security: Status and Prospects'' series. Authors are invited to submit five copies of their papers to the program chair. Manuscripts must be in English, typed double spaced in 12 point font, and no more than 5000 words. Each copy should have a cover page with the name, title, and address (including e-mail address) of the authors, and an abstract of no more than 200 words. Electronic or fax submissions will not be accepted. Proposals for panels should include a one-page description of the subject matter and should be submitted electronically. Program chair: Professor Sushil Jajodia, Mail Stop 4A4, George Mason University, Fairfax, VA 22030-4444, USA.
Tel: 703-993-1653, Fax: 703-993-1638, email: jajodia@gmu.edu http://www.isse.gmu.edu/~csis/faculty/jajodia.html More information about the conference and about IFIP WG 11.3 can be found at URL: http://www.cs.rpi.edu/ifip/.

JOURNALS
Special Issues of Journals and Handbooks: listed earliest deadline first.

* IEEE Network Magazine Special Issue on PCS Network Management has issued a call for papers. (Submissions due October 25, 1997.) [posted here 7/7/97] Personal communications services (PCS) provide communication services anywhere, anytime, with anybody, and in any form. To implement these communications concepts, extremely sophisticated network management that integrates many diverse technologies is required. This special issue focuses on the research and development of advanced PCS network management techniques. A complete list of topics can be found in the call for papers. Authors are invited to submit postscript files of their papers to liny@csie.nctu.edu.tw or sohraby@lucent.com. Papers should not exceed twenty double spaced pages in length, excluding figures and diagrams.

* IEEE Network Magazine Special Issue on Active and Programmable Networks has issued a call for papers. (Submissions due November 10, 1997.) [posted here 7/8/97] New networking concepts, building on recent advances in mobile software, have been proposed with the purpose of accelerating the deployment of services and enhancing network management. An active network can give a high degree of control to users to customize their network services dynamically. Users can in effect "program" their services by injecting mobile programs in special packets that are executed at network elements. These mobile programs can carry out management and control functions as well, without the need for pre-programming network elements.
Such software-intensive networks rely on agreement on a basic instruction set or primitives rather than consensus on specific protocols and services. This special issue of IEEE Network will present an overview of research in this area, which is still in its early stages. A complete list of topics can be found in the call for papers. Authors are invited to submit hardcopies or electronic files of their papers to tchen@gte.com. Papers should not exceed twenty double spaced pages in length, excluding figures and diagrams. More information for potential authors is available at the IEEE Network Home Page http://www.comsoc.org/socstr/techcom/ntwrk/.

________________________________________________________________________
Reader's Guide to Current Technical Literature in Security and Privacy
Part 1: Conference Papers
________________________________________________________________________

ICCC '97, Int'l Conf. for Computer Communications, November 19-21, 1997, Cannes, France. Conference information. Security-related paper:
o Intranet packet encryption with minimum overhead. S. Seno (Japan).

SOSP - 16, 16th ACM Symposium on Operating Systems Principles, Saint-Malo, France, October 5-8, 1997. Conference information. Security-related papers:
o Extensible Security Architectures for Java. Dan S. Wallach, Dirk Balfanz, Drew Dean, and Edward W. Felten (Princeton University).
o A Decentralized Model for Information Flow Control. Andrew C. Myers and Barbara Liskov (MIT).

Third Annual ACM/IEEE Int'l Conf on Mobile Computing and Networking (MobiCom '97), Sept 26-30, 1997, Budapest, Hungary. Conference information.
Security-related papers:
o A Public-Key Based Secure Mobile IP, John Zao, Stephen Kent, Joshua Gahm, Gregory Troxel, Matt Condell, Pam Helinek, Nina Yuan, and Isidro Castineyra (BBN, USA)
o A Protection Scheme for Mobile Agents on Java, Daniel Hagimont and Leila Ismail (INRIA, France)
o Ticket Based Service Access for the Mobile User, Bhrat Patel and Jon Crowcroft (University College London, UK)
o Dealing with Server Corruption in Weakly Consistent, Replicated Data Systems, Mike Spreitzer, Marvin Theimer, and Karin Petersen (Xerox PARC, USA); Alan Demers (Oracle Corporation, USA); and Doug Terry (Xerox PARC, USA)

SAFECOMP '97, 16th Int'l Conf. on Computer Safety, Reliability, and Security, York, UK, Sept. 8-10, 1997. Conference information. Security-related papers:
o Safety and security for an advanced train control system, J. Braband.
o Cryptographic protocols over open distributed systems: a taxonomy of flaws and related protocol analysis tools, S. Gritzalis and D. Spinellis.

Int'l Database Engineering and Applications Symp., IDEAS97, August 25-27, 1997, Montreal, Canada. Conference information. Security-related paper:
o Detection of Access Control Flaws in a Distributed Database System with Local Site Autonomy, Yaowadee Temtanapat and David Spooner (USA).

CRYPTO '97, August 18-21, Santa Barbara, CA. Conference information.
o The Complexity of Computing Hard Core Predicates Mikael Goldmann (Royal Institute of Technology, Sweden) and Mats Näslund (Royal Institute of Technology, Sweden)
o Statistical Zero Knowledge Protocols to Prove Modular Polynomial Relations Eiichiro Fujisaki (NTT Laboratories, Japan) and Tatsuaki Okamoto (NTT Laboratories, Japan)
o Keeping the SZK-Verifier Honest Unconditionally Giovanni Di Crescenzo (University of California at San Diego, USA), Tatsuaki Okamoto (NTT Laboratories, Japan), and Moti Yung (CertCo, USA)
o On the Foundations of Modern Cryptography Oded Goldreich (Computer Science Department, Weizmann Institute, Israel)
o Plug and Play Encryption Donald Beaver (Transarc, USA)
o Deniable Encryption Ran Canetti (IBM T.J. Watson Research Center, USA), Cynthia Dwork (IBM Almaden Research Center, USA), Moni Naor (Weizmann Institute of Science, Israel), and Rafail Ostrovsky (Bellcore, USA)
o Eliminating Decryption Errors in the Ajtai-Dwork Cryptosystem Oded Goldreich (Computer Science Department, Weizmann Institute, Israel), Shafi Goldwasser (MIT Laboratory for Computer Science, USA), and Shai Halevi (MIT Laboratory for Computer Science, USA)
o Public-Key Cryptosystems from Lattice Reduction Problems Oded Goldreich (Computer Science Department, Weizmann Institute, Israel), Shafi Goldwasser (MIT Laboratory for Computer Science, USA), and Shai Halevi (MIT Laboratory for Computer Science, USA)
o RSA-Based Undeniable Signatures Rosario Gennaro (IBM T.J. Watson Research Center, USA), Hugo Krawczyk (IBM T.J. Watson Research Center, USA, and Technion, Israel), and Tal Rabin (IBM T.J.
Watson Research Center, USA)
o Security of Blind Digital Signatures Ari Juels (RSA Laboratories, USA), Michael Luby (Digital Equipment Corporation, USA), and Rafail Ostrovsky (Bellcore, USA)
o Digital Signcryption or How to Achieve Cost (Signature & Encryption) << Cost (Signature) + Cost (Encryption) Yuliang Zheng (Monash University, Australia)
o How to Sign Digital Streams Rosario Gennaro (IBM T.J. Watson Research Center, USA) and Pankaj Rohatgi (IBM T.J. Watson Research Center, USA)
o Merkle-Hellman Revisited: A Cryptanalysis of the Qu-Vanstone Cryptosystem Based on Group Factorizations Phong Nguyen (Ecole Normale Supérieure, France) and Jacques Stern (Ecole Normale Supérieure, France)
o Failure of the McEliece Public-Key Cryptosystem Under Message-Resend and Related-Message Attack Thomas A. Berson (Anagram Laboratories, USA)
o A Multiplicative Attack Using LLL Algorithm on RSA Signatures with Redundancy Jean-François Misarsky (France Telecom, France)
o On the Security of the KMOV Public Key Cryptosystem Daniel Bleichenbacher (Bell Laboratories, USA)
o A Key Recovery Attack on Discrete Log-Based Schemes Using a Prime Order Subgroup Chae Hoon Lim (Future Systems Inc., Korea) and Pil Joong Lee (Pohang Univ.
of Science & Technology, Korea)
o The Prevalence of Kleptographic Attacks on Discrete-Log Based Cryptosystems Adam Young (Columbia University, USA) and Moti Yung (CertCo, USA)
o "Pseudo-Random" Number Generation within Cryptographic Algorithms: The DSS Case Mihir Bellare (University of California at San Diego, USA), Shafi Goldwasser (MIT Laboratory for Computer Science, USA), and Daniele Micciancio (MIT Laboratory for Computer Science, USA)
o Unconditional Security Against Memory-Bounded Adversaries Christian Cachin (ETH Zürich, Switzerland) and Ueli Maurer (ETH Zürich, Switzerland)
o Privacy Amplification Secure Against Active Adversaries Ueli Maurer (ETH Zürich, Switzerland) and Stefan Wolf (ETH Zürich, Switzerland)
o Visual Authentication and Identification Moni Naor (Weizmann Institute of Science, Israel) and Benny Pinkas (Weizmann Inst. of Science, Israel)
o Quantum Information Processing: The Good, the Bad, and the Ugly Gilles Brassard (Université de Montréal, Canada)
o Efficient Algorithms for Elliptic Curve Cryptosystems Jorge Guajardo (WPI, USA) and Christof Paar (WPI, USA)
o An Improved Algorithm for Arithmetic on a Family of Elliptic Curves Jerry Solinas (National Security Agency, USA)
o Fast RSA-Type Cryptosystems Using n-adic Expansion Tsuyoshi Takagi (NTT Software Laboratories, Japan)
o A One Way Function Based on Ideal Arithmetic in Number Fields Johannes Buchmann (Technische Hochschule Darmstadt, Germany) and Sachar Paulus (Technische Hochschule Darmstadt, Germany)
o Efficient Anonymous Multicast and Reception Shlomi Dolev (Ben-Gurion University, Israel) and Rafail Ostrovsky (Bellcore, USA)
o Efficient Group Signature Schemes for Large Groups Jan Camenisch (ETH Zürich, Switzerland) and Markus Stadler (Ubilab/UBS, Switzerland)
o Efficient Generation of Shared RSA Keys Dan Boneh (Bellcore, USA) and Matthew Franklin (AT&T Labs, USA)
o Proactive RSA Yair Frankel (CertCo, USA and Sandia National Laboratories, USA), Peter Gemmell (Sandia National
Laboratories, USA), Philip D. MacKenzie (Boise State University, Idaho), and Moti Yung (CertCo, USA)
o Towards Realizing Random Oracles: Hash Functions that Hide All Partial Information Ran Canetti (IBM T.J. Watson Research Center, USA)
o Collision-Resistant Hashing: Towards Making UOWHFs Practical Mihir Bellare (University of California at San Diego, USA) and Phillip Rogaway (University of California at Davis, USA)
o Fast and Secure Hashing Based on Codes Lars Knudsen (Katholieke Universiteit Leuven, Belgium) and Bart Preneel (Katholieke Universiteit Leuven, Belgium)
o Cryptanalysis of Secret-Key Cryptosystems Chair: Douglas Stinson (University of Nebraska, USA)
o Edit Distance Correlation Attack on the Alternating Step Generator Jovan Dj. Golic (University of Belgrade, Yugoslavia) and Renato Menicocci (Fondazione Ugo Bordoni, Italy)
o Differential Fault Analysis of Secret Key Cryptosystems Eli Biham (Technion, Israel) and Adi Shamir (Weizmann Inst. of Science, Israel)
o Cryptanalysis of the Cellular Message Encryption Algorithm David Wagner (University of California at Berkeley, USA), Bruce Schneier (Counterpane Systems, USA), and John Kelsey (Counterpane Systems, USA)

IFIP WG 11.3 11th Working Conf. on Database Security, Lake Tahoe, CA, August 11-13, 1997. Conference information.
o Access Control by Object-Oriented Concepts Wolfgang Essmayr (Softwarepark Hagenberg), G. Pernul (University of Essen), A Min Tjoa (Technical University of Vienna)
o Administration Policies in a Multipolicy Authorization System Elisa Bertino, Elena Ferrari (Universita di Milano)
o Designing Security Agents for the DOK Federated System Zahir Tari (Royal Melbourne Institute of Technology)
o Web Implementation of a Security Mediator for Medical Databases G. Wiederhold, M. Bilello, C. Donahue (Stanford University)
o Supporting the Requirements for Multilevel Secure and Real-time Databases in Distributed Environments Craig Chaney, Sang H.
Son (University of Virginia)
o A Principled Approach to Object Deletion and Garbage Collection in a Multi-level Secure Object Base Elisa Bertino, Elena Ferrari (Universita di Milano)
o Compile-time Flow Analysis of Transactions and Methods in Object-Oriented Databases Masha Gendler-Fishman, Ehud Gudes (Ben-Gurion University)
o Capability-based Primitives for Access Control in Object-Oriented Systems John Hale, Jody Threet, Sujeet Shenoi (University of Tulsa)
o An Execution Model for Multilevel Secure Workflows Vijayalakshmi Atluri, Wei-Kuang Huang (Rutgers University), Elisa Bertino (Universita di Milano)
o Task-based Authorization Controls (TBAC): Models for Active and Enterprise-oriented Authorization Management Roshan K. Thomas (ORA), Ravi S. Sandhu (George Mason University)
o Alter-egos and Roles Supporting Workflow Security in Cyberspace Ehud Gudes (Ben-Gurion University), Reind P. van de Riet, Hans FM Burg (Free University), Martin Olivier (Rand Afrikaans University)
o A Two-tier Coarse Indexing Scheme for MLS Database Systems Sushil Jajodia, Ravi Mukkamala, Indrajit Ray (George Mason University)
o Replication Does Survive Information Warfare Attacks John McDermott (Naval Research Laboratory)
o Priority-driven Secure Multiversion Locking Protocol for Real-Time Secure Database Systems Chanjung Park, Seog Park (Sogang University), Sang H. Son (University of Virginia)
o IRI: A Quantitative Approach to Inference Analysis in Relational Databases Kan Zhang (Cambridge University)
o Hierarchical Namespaces in Secure Databases Adrian Spalka, Armin B. Cremers (University of Bonn)
o Software Architectures for Consistency and Assurance of User-Role Based Security Policies S. Demurjian, T. C. Ting, J.
Reisner (University of Connecticut)
o Role-Based Administration of User-Role Assignment: The URA97 Model and its Oracle Implementation Ravi Sandhu, Venkata Bhamidipati (George Mason University)

2nd IEEE High-Assurance Systems Engineering Workshop, Bethesda, MD, August 11-12, 1997 (in conjunction with COMPSAC'97). Conference information. Security-related paper:
o Design and assurance strategy for the NRL pump, by Myong H Kang, Andrew P. Moore, Ira S. Moskowitz.

Ninth Int'l Conf. on Scientific and Statistical Database Management, Olympia, WA, August 11-13, 1997. Security-related paper:
o Security problems for statistical databases with general cell suppressions, Tsan-sheng Hsu and Ming-Yang Kao.

NGITS 97, Third Int'l Wkshp on Next Generation Information Technologies and Systems, Neve Ilan, Israel, 30 June - 3 July 1997. Security-related papers:
o Automated Negotiation in Electronic Commerce, C. Beam and A. Segev, USA
o Persistence and Security Support for Distributed System with Mobile Software Objects (short paper), B. Lavva, O. Holder and I. Ben-Shaul, Israel

INET '97, Internet Society 7th Annual Conference, Kuala Lumpur, Malaysia, June 25-30, 1997. Security-related papers. Full conference proceedings available on the Web at: http://www.isoc.org/isoc/whatis/conferences/inet/97/proceedings/.
o A Review of E-mail Security Standards, Laurence Lundblade, Qualcomm Incorporated, USA
o A System for Public-Key Services in the Spanish Research and Academic Network Lucia Pino, Universidad de Malaga, SPAIN Juan J. Ortega, Universidad de Malaga, SPAIN Javier López, Universidad de Malaga, SPAIN Antonio Maña, Universidad de Malaga, SPAIN
o Capability-Based Usage Control Scheme for Network-Transferred Objects Shuichi Tashiro, Electrotechnical Laboratory, JAPAN
o Privacy on the Internet David M. Goldschlag, Naval Research Laboratory, USA Michael G. Reed, Naval Research Laboratory, USA Paul F.
Syverson, Naval Research Laboratory, USA
o Seamless VPN, Makoto Kayashima, Hitachi Ltd., JAPAN Minoru Koizumi, Hitachi Ltd., JAPAN Tatsuya Fujiyama, Hitachi Ltd., JAPAN Masato Terada, Hitachi Ltd., JAPAN Kazunari Hirayama, Hitachi Ltd., JAPAN
o Network Access Control for DHCP Environment, Kazumasa Kobayashi, Nara Institute of Science and Technology, JAPAN Suguru Yamaguchi, Nara Institute of Science and Technology, JAPAN

CSFW 10, Tenth IEEE Computer Security Foundations Workshop, Rockport, MA, June 10-12, 1997. Conference information.
o Verifying Authentication Protocols with CSP, Steve Schneider (University of London, UK).
o Casper: A Compiler for the Analysis of Security Protocols, Gavin Lowe (University of Leicester, UK).
o A Hierarchy of Authentication Specifications, Gavin Lowe (University of Leicester, UK).
o Provable Security for Cryptographic Protocols---Exact Analysis and Engineering Applications, James Gray III and Kin Fai Epsilon Ip (HK University of S&T, Hong Kong).
o Strategies Against Replay Attacks, Tuomas Aura (Helsinki University of Technology, Finland).
o Proving Properties of Security Protocols by Induction, Lawrence C. Paulson (University of Cambridge, UK).
o Mechanized Proofs for a Recursive Authentication Protocol, Lawrence C. Paulson (University of Cambridge, UK).
o On SDSI's Linked Local Name Spaces, Martin Abadi (DEC Systems Research Center, USA).
o A Different Look at Secure Distributed Computation, Paul F Syverson (Naval Research Laboratory, USA).
o Unreliable Intrusion Detection in Distributed Computations, Dahlia Malkhi and Michael Reiter (AT&T Labs, USA).
o An Efficient Non-repudiation Protocol, Jianying Zhou and Dieter Gollmann (University of London, UK).
o Towards the Formal Verification of Electronic Commerce Protocols, Dominique Bolignano (GIE Dyade, France).
o A Theory for System Security, Kan Zhang (University of Cambridge, UK).
o Minimizing Covert Flows with Minimum Typings, Geoffrey Smith (Florida International University, USA) and Dennis Volpano (Naval Postgraduate School, USA).
o A Logic Based Approach for the Transformation of Authorization Policies, Yun Bai and Vijay Varadharajan (University of W. Sydney, Australia).
o Separation of Duty in Role-based Environments, Rich Simon and Mary Ellen Zurko (Open Group Research Institute, USA).
o Security Engineering of Lattice-Based Policies, Ciaran Bryce (GMD, Germany).

Software Process Improvement and Capability Determination Symposium (SPICE '97), June 1-6, 1997, Walnut Creek, CA. Conference information. Security-related papers:
o A process standard for system security engineering: development experiences and pilot results. Richard Hefner.
o A survey to determine federal agency needs for a role-based access control security product. Charles Smith.

TAPSOFT '97, April 14-18, Lille, France. Security-related paper:
o A Type-Based Approach to Program Security. D. Volpano and G. Smith

____________________________________________________________________
Reader's Guide to Current Technical Literature in Security and Privacy
Part 2: Journal and Newsletter Articles, Book Chapters
_______________________________________________________________________

o ACM SIGOPS Operating System Review, Vol. 31, No. 3 (July, 1997).
 - Keok Auyong and Chye-Lin Chee. Authentication services for computer networks and electronic messaging systems. pp. 3-15.
 - Liqun Chen, Dieter Gollmann, and Chris J. Mitchell. Authentication using minimally trusted servers. pp. 16-28.
o IEEE COMPUTER, Vol. 30, No. 6 (June 1997). X. Nick Zhang. Secure code distribution. pp. 76-79.
o Communications of the ACM, Vol. 40, No. 5 (May 1997): Rolf Oppliger. Internet security: firewalls and beyond. pp. 92-102.
o Dr. Dobb's Journal, Vol. 22, No. 6 (June 1997):
 - Tom Markham. Internet security protocol. pp. 70-75.
 - Cliff Berg. Java Q&A: How do I create my own security manager? pp. 115-119.
o BYTE, Vol. 22, No. 6 (June 1997): Peter Wayner. Who goes there? (authentication). pp. 70-80.
o IEEE Network, Vol. 11, No. 3, May/June 1997. Issue on network and Internet security:
 - Roger M. Needham. The changing environment for security protocols. pp. 12-15.
 - David Chadwick, Andrew J. Young, and Nada Kapidzic Cicovic. Merging and extending the PGP and PEM trust models -- the ICE-TEL trust model. pp. 16-25.
 - Uri Blumenthal, Nguyen C. Hien, and Bert Wijnen. Key derivation for network management applications. pp. 26-29.
 - Michael Herfert. Security enhanced mailing lists. pp. 30-33.
 - Mohammad Peyravian and Thomas D. Tarman. Asynchronous Transfer Mode security. pp. 34-41.
 - Muriel Medard, Douglas Marquis, Richard A. Barry, and Steven G. Finn. Security issues in all-optical networks. pp. 42-48.
o IEEE Trans. on Computers Vol. 46, Number 5 (May 1997).
 - S. R. Blackburn, S. Murphy, and K.G. Paterson. Comments on "Theory and Applications of Cellular Automata in Cryptography." pp. 637-638.
 - S. Nandi and P. Pal Chaudhuri. Reply to comments on "Theory and Application of Cellular Automata in Cryptography." p. 639.
o Crosstalk, The Journal of Defense Software Engineering, Vol. 10, No. 5 (May, 1997). John Mochulski. Connecting classified environments to the Internet. pp. 9-13.
o BYTE, Vol. 22, No. 5 (May 1997): Gary McGraw and Edward Felten. Avoiding hostile applets. pp. 89-92.
o IEEE COMPUTER, Vol. 30, No. 4 (April 1997).
 - Lee Garber. Students stumble onto Internet Explorer flaw. pp. 18-20.
 - D. Richard Kuhn. Sources of failure in the public switched telephone network. pp. 31-36.
o ACM SIGSAC Security Audit & Control Review, Vol. 15, No. 2 (April 1997).
 - Cynthia Irvine. Report on the First ACM Workshop on Education in Computer Security. pp. 3-5.
 - Selwyn Russell. A k-th order Carmichael key scheme for shared encryption (abstract). pp. 6-7.
 - Carl Stephen Guynes, Richard G. Vedder, and Michael T. Vanecek. Security issues on the Internet. pp. 9-12.
 - Daniel Guinier.
From eavesdropping to security on the cellular telephone system GSM. pp. 13-18.
o IEEE Transactions on Software Engineering, Vol. 23, No. 4 (April 1997): E. Jonsson and T. Olovsson. A quantitative model of the security intrusion process based on attacker behavior. pp. 235-245.
o ACM SIGOPS Operating System Review, Vol. 31, No. 2 (April, 1997).
 - Arne Helme and Tage Stabell-Kulo. Security functions for a file repository. pp. 3-8.
 - Tage Stabell-Kulo. Security and log structured file systems. pp. 9-10.
o IEEE Transactions on Software Engineering, Vol. 23, No. 3 (March 1997): M. Abadi. Explicit communication revisited: two new attacks on authentication protocols. pp. 185-186.
o Computers & Security Volume 16, Number 1 (1997). (Elsevier) Features:
 - Gerald Kovacich. Information warfare and the information system security professional. pp. 14-24.
 - Brian Boyce. Cyber Extortion -- the corporate response. pp. 25-28.
 - Fred Cohen. Information system attacks: a preliminary classification scheme. pp. 29-46.
 Refereed Paper: B.C. Soh and T. S. Dillon. System intrusion processes: a simulation model. pp. 71-79.
o Journal of Computer Security, Vol. 4, Nos. 2,3 (Dec. 1996) [received about 3/97]:
 - S.-P. Shieh and V. D. Gligor. Detecting illicit leakage of information in operating systems. pp. 123-148.
 - P. Ammann, R. S. Sandhu, and R. Lipton. The expressive power of multi-parent creation in monotonic access control models. pp. 149-166.
 - D. Volpano, C. Irvine, and G. Smith. A sound type system for secure flow analysis. pp. 167-188.
 - J. McDermott and R. Mukkamala. Analytic performance comparison of transaction processing algorithms for the SINTRA replicated-architecture database system. pp. 189-228.
 - J. Millen. Editor's preface to the Bell-LaPadula model. pp. 229-232.
 - L. J. LaPadula. Foreword. pp. 233-238.
 - L. J. LaPadula and D. E. Bell. MITRE Technical Report 2547, Volume II. pp. 239-263.
o IEEE Trans. on Knowledge and Data Engineering Vol. 9 No. 2 (Mar-Apr 1997). X.
Qian and T. F. Lunt. A semantic framework of the multilevel secure relational model. pp. 292-301. An extended authorization model for relational databases. pp. 85-101. _______________________________________________________________________ Reader's Guide to Current Technical Literature in Security and Privacy Part 3: Books ________________________________________________________________________ o For the Record: Protecting Electronic Health Information by Committee on Maintaining Privacy and Security in Health Care Applications of the National Information Infrastructure, P. Clayton, Chair, National Academy Press, ISBN 0-309-05697-7. Available at http://www.nap.edu (final version of NRC report on health information privacy reported in Cipher EI #21). o Birman, Kenneth. Building Secure and Reliable Network Applications. Manning, 1996 ISBN: 1-884777-29-5 , Hardbound, 591 pages, $58.00 Bibliography (381 items), index, appendix with 68 problems. http://www.browsebooks.com; orders@manning.com Available through Prentice Hall. Power Point slides at ftp.cs.cornell.edu/pub/ken/slides/ (see review this issue) ________________________________________________________________________ Calendar ________________________________________________________________________ ==================================================================== See Calls for Papers section for details on many of these listings. ==================================================================== "Conf Web Page" indicates there is a hyperlink on the Cipher Web pages to conference information. (In many cases there is such a link even though mention is not made of it here, to save space.) Dates Event, Location Point of Contact/ more information ----- --------------- ---------------------------------- 8/ 1/97: SNDSS '98, San Diego, California. Conf Web page; 8/11/97- 8/13/97: IFIP WG 11.3, Lake Tahoe, California, Conf web page 8/11/97- 8/12/97: HASE97. Washington, DC Conf Web page 8/15/97: ADC '98. 
          The Levels, South Australia; submissions due to
          adc98@cis.unisa.edu.au
 8/17/97- 8/21/97: Crypto '97, Santa Barbara, California
 8/25/97- 8/27/97: IDEAS '97. Montreal, Canada Conf Web page
 9/ 3/97- 9/ 5/97: DIMACS Security Ver, Piscataway, NJ DIMACS Web page
 9/ 8/97- 9/10/97: SAFECOMP97. University of York, UK Conf Web page
 9/ 9/97: USENIX Sec Symp. San Antonio, Texas Conf Web page.
          Submissions to securitypapers@usenix.org
 9/22/97- 9/24/97: INTRA-FORA. Linz, Austria Conf Web page
 9/22/97- 9/25/97: IC3N '97, Las Vegas, NV Conf Web page
 9/23/97- 9/26/97: NSPW '97, Great Langdale, Cumbria, UK
 9/26/97- 9/30/97: MOBICOM '97, Budapest, Hungary Conf Web page
10/ 1/97: WOBIS '97, Budapest, Hungary; Conf Web page
10/ 5/97-10/ 8/97: SOSP '97, St. Malo, France; Conf Web page
10/ 6/97-10/10/97: NISS '97, Baltimore, MD, Conf Web page
10/ 6/97: ETAPS '98, Lisbon, Portugal, Conf Web page;
          Submissions to Nivat@litp.ibp.fr
10/24/97-10/26/97: EDOC '97; Gold Coast, Australia. Conf Web page
10/25/97: IEEE Net Mag Special Issue; submissions to
          liny@csie.nctu.edu.tw
10/28/97-10/31/97: ICNP '97, Atlanta, Georgia; Conf Web page
10/31/97-11/ 5/97: WebNet97. Toronto, Canada; Conf Web page
11/ 1/97: IEEE Personal Communications Special Issue on Mobile
          Computing Systems and the Web, submissions due
11/ 6/97-11/ 7/97: RBAC97. McLean, Virginia Conf Web page
11/10/97: IEEE Network Magazine Special Issue on Active and
          Programmable Networks; Conf Web page; submissions due to
          tchen@gte.com
11/11/97-11/13/97: ICICS '97, Beijing, P.R. China
11/12/97-11/14/97: Chilean CompSci Soc, Valparaiso, Chile
11/19/97-11/21/97: ICCC '97. Cannes, France Conf Web page
12/ 4/97-12/ 5/97: IFIP-IICIS. Zurich, Switzerland Conf Web page
12/ 8/97-12/12/97: ACSAC '97, San Diego, CA
12/17/97-12/19/97: ISCOM '97. Hsinchu, Taiwan Conf Web page
 1/ 6/98- 1/ 9/98: ENCXCS. Hawaii, HI Conf Web page
 1/16/98: IFIP/SEC '98, Vienna and Budapest, Austria and Hungary;
          Conf Web page; Submissions due to rposch@iaik.tu
 1/26/98- 1/29/98: USENIX Sec Symp. San Antonio, Texas Conf Web page
 2/ 2/98- 2/ 3/98: ADC '98. The Levels, South Australia
 2/23/98- 2/27/98: ICDE '98. Orlando, Florida Conf Web page
 3/10/98: IFIP WG11.3, Chalkidiki, Greece; Conf Web page;
          Submissions due to jajodia@gmu.edu
 3/11/98- 3/13/98: SNDSS '98, San Diego, California Conf Web page
 3/30/98- 4/ 3/98: ETAPS '98. Lisbon, Portugal, Conf Web page
 5/ 3/98- 5/ 6/98: IEEE S&P 98; Oakland; no e-mail address available
 5/12/98- 5/15/98: 10th CITSS, Ottawa; no e-mail address available
 7/15/98- 7/17/98: IFIP WG11.3, Chalkidiki, Greece Conf Web page
 8/31/98- 9/ 4/98: IFIP/SEC '98, Vienna and Budapest, Austria and
          Hungary; Conf Web page
 5/ 2/99- 5/ 5/99: IEEE S&P 99; Oakland; no e-mail address available
 5/11/99- 5/14/99: 11th CITSS, Ottawa; no e-mail address available
 4/30/00- 5/ 3/00: IEEE S&P 00; Oakland; no e-mail address available
 5/16/00- 5/19/00: 12th CITSS, Ottawa; no e-mail address available

Key:
* ACISP = Australasian Conference on Information Security and Privacy
* ACSAC = Annual Computer Security Applications Conference, 13th Annual
* ADC = Australasian Database Conference, ADC '98
* CCS = ACM Conference on Computer and Communications Security
* CITSS = Canadian Information Technology Security Symposium
* COMPASS = Conference on Computer Assurance, COMPASS '97
* CORBA SW = Workshop on Building and Using CORBASEC ORBS, CORBA SW
* CRYPTO = IACR Annual CRYPTO Conference, CRYPTO97
* CSFW = Computer Security Foundations Workshop, CSFW10, Wrkshp Page
* DASFAA = Database Systems For Advanced Applications, DASFAA '97
* DIMACS Security Ver = DIMACS Workshop on Formal Verification of
  Security Protocols, '97 workshop
* EDOC = Enterprise Distributed Object Computing Workshop, EDOC '97
* Electronic Commerce for Content II = Forum on Technology-Based
  Intellectual Property Management, URL
* ENCXCS =
  Engineering Complex Computer Systems Minitrack of HICSS, ENCXCS
* ENM = Enterprise Networking, ENM '97
* ENTRSEC = International Workshop on Enterprise Security, ENTRSEC '97
* ETAPS = European Joint Conferences on Theory and Practice of Software
* FMP = Formal Methods Pacific, FMP '97
* GBN = Gigabit Networking Workshop, GBN'97
* HASE = High-Assurance Systems Engineering Workshop, HASE '97
* HICSS = Hawaii International Conference on Systems Sciences
* HPTS = Workshop on High Performance Transaction Systems
* ICAST = Conference on Advanced Science and Technology, 13th ICAST
* ICCC = International Conference for Computer Communications, ICCC '97
* IC3N = Int'l Conf. on Computer Communications and Networks
* ICDE = Int. Conf. on Data Engineering, ICDE '98
* ICI = International Cryptography Institute
* ICICS = International Conference on Information and Communications
  Security, ICICS '97
* ICNP = IEEE International Conf. on Network Protocols
* IDEAS = Int'l Database Engineering and Applications Symposium,
  IDEAS '97
* IEEE S&P = IEEE Symposium on Security and Privacy, IEEE S&P '97
* IESS = Int'l Symposium on Software Engineering Standards, IESS '97
* IFIP/SEC = International Conference on Information Security (IFIP TC11)
* IFIP WG11.3 = IFIP WG11.3 11th Working Conference on Database Security
* IFIP-IICIS = First Working Conference on Integrity and Internal
  Control in Information Systems
* INET = Internet Society Annual Conference
* INETCOMP = IEEE Internet Computing (magazine)
* INTRA-FORA = International Conference on INTRANET: Foundation,
  Research, and Applications, INTRA-FORA
* IRISH = Irish Workshop on Formal Methods, IRISH97
* ISADS = Symposium on Autonomous Decentralized Systems, ISADS '97
* ISCOM = International Symp. on Communications
* JCS = Journal of Computer Security, WWW issue
* JTS = Journal of Telecommunications Systems, special multimedia issue
* MOBICOM = Mobile Computing and Networking, MOBICOM '97
* NGITS = Workshop on Next Generation Information Technologies and
  Systems, NGITS '97
* NISS = National Information Systems Security Conference, NISS
* NSPW = New Security Paradigms Workshop, NSPW '96
* OSDI = Operating Systems Design and Implementation, OSDI '96
* PKS = Public Key Solutions, PKS '97
* PTP = Workshop on Proof Transformation and Presentation, PTP '97
* RBAC = ACM Workshop on Role-Based Access Control, RBAC '97
* RIDE = High Performance Database Management for Large Scale
  Applications, RIDE97
* SAFECOMP = Computer Safety, Reliability and Security, SAFECOMP '97
* SICON = IEEE Singapore International Conference on Networks, SICON '97
* SNDSS = Symposium on Network and Distributed System Security (ISOC)
* SOSP = 16th ACM Symposium on Operating Systems Principles, SOSP '97
* TAPOS = Theory and Applications of Object Systems, special issue
  Objects, Databases, and the WWW, TAPOS
* USENIX Sec Symp = USENIX UNIX Security Symposium, 8th Annual
* WebNet = World Conference of the Web Society, WebNet 97
* WOBIS = Workshop on Satellite-based Information Services

________________________________________________________________________
Data Security Letter Subscription Offer
________________________________________________________________________

A special subscription rate of $25/year for the Data Security Letter is
now available to IEEE TC members. The DSL is an external, nonpartisan
newsletter published by Trusted Information Systems, Inc. Eleven issues
(usually 16 pages each) per year are published. The DSL welcomes reader
suggestions and contributions and accepts short research abstracts
(about 130 words) for publication on an ongoing basis.
On occasion, the DSL will republish Cipher articles (with the authors'
approval), but such articles will constitute a small portion of DSL
content, so there will be very little duplication of Cipher material.

IEEE TC members wishing to take advantage of the special subscription
rate should send the following to sharon@tis.com. The information can
also be faxed to 301-854-5363 (attention: DSL), phoned to 301-854-5338,
or mailed to Trusted Information Systems, Inc., 3060 Washington Rd.,
Glenwood, MD 21738 USA.

   NAME:
   POSTAL ADDRESS:
      (Please indicate company name, if a business address)
   PHONE:
      (Please indicate if home or business)
   FAX:
   E-MAIL:
   IEEE Membership No. (if applicable):

NOTE: If you are already a paying subscriber to the DSL, for the $25 you
will receive a 2-year renewal; refunds, rebates, etc., on your current
subscription are not available. If you have any questions about the
offer or anything else pertaining to the DSL, you may contact the
editor, Sharon Osuna, via e-mail to sharon@tis.com, or call her at
301-854-5338.

________________________________________________________________________
How to become a member of the IEEE Computer Society's TC on
Security and Privacy
________________________________________________________________________

You do NOT have to join either the IEEE or the IEEE Computer Society to
join the TC, and there is no cost to join the TC. All you need to do is
fill out an application form and mail or fax it to the IEEE Computer
Society. A copy of the form is included below (to simplify things, only
the TC on Security and Privacy is included, and it is marked for you).

The full and complete form is available on the IEEE Computer Society's
Web server at URL:
   http://www.computer.org:80/tab/tcapplic.htm (print & mail form)
or
   http://www.computer.org:80/tab/Tcappli1.htm (HTML form for
   form-enabled browsers)

IF YOU USE THE FORM BELOW, PLEASE NOTE THAT IT IS TO BE RETURNED (BY
MAIL OR FAX) TO THE IEEE COMPUTER SOCIETY, >>NOT<< TO CIPHER.
---------
IEEE Computer Society Technical Committee Membership Application
-----------------------------------------------------------
Please print clearly or type.
-----------------------------------------------------------
Last Name                 First Name               Middle Initial
___________________________________________________________
Company/Organization
___________________________________________________________
Office Street Address (Please use street addresses over P.O.)
___________________________________________________________
City                      State
___________________________________________________________
Country                   Postal Code
___________________________________________________________
Office Phone              Fax
___________________________________________________________
Email Address (Internet accessible)
___________________________________________________________
Home Address (optional)
___________________________________________________________
Home Phone
___________________________________________________________

[ ] I am a member of the Computer Society
    IMPORTANT: IEEE Member/Affiliate/Computer Society Number:
    ____________________
[ ] I am not a member of the Computer Society*

Please Note: In some TCs only current Computer Society members are
eligible to receive Technical Committee newsletters.

Please select up to four Technical Committees/Technical Councils of
interest.

TECHNICAL COMMITTEES
[ X ] T27  Security and Privacy

Please Return Form To:
   IEEE Computer Society
   1730 Massachusetts Ave, NW
   Washington, DC 20036-1992
   Phone: (202) 371-0101
   FAX: (202) 728-9614

________________________________________________________________________
TC Publications for Sale (YES!)
________________________________________________________________________

New Low Prices!

        Price by mail          IEEE CS Press        IEEE CS Press
Year    per volume from TC*    IEEE member price    List Price
----    -------------------    -----------------    -------------
1992    $ 5                    Only available from TC!
1993    $ 5                    Only available from TC!
1994    $ 5                    $30+$4 S&H           $60+$5 S&H
1995    $10                    $30+$4 S&H           $60+$4 S&H
1996    $15
1997    $25

*price includes shipping and handling

For overseas delivery:
 -- by surface mail, please add $5 per order (3 volumes or fewer)
 -- by air mail, please add $10 per volume to the prices listed above.

If you would like to place an order, please send a letter specifying
 * which issues you would like,
 * where to send them, and
 * a check in US dollars, payable to the 1997 IEEE Symposium on
   Security and Privacy
to:
   Charles N. Payne
   Treasurer, IEEE TC on Security and Privacy
   Secure Computing Corp.
   2675 Long Lake Rd.
   Roseville, MN 55113 USA
   e-mail: cpayne@securecomputing.com

Sorry, we are not yet ready for electronic commerce!

________________________________________________________________________
TC Officer Roster
________________________________________________________________________

Past Chair:                         Chair:
Deborah Cooper                      Charles P. Pfleeger
P.O. Box 17753                      ARCA Systems, Inc.
Arlington, VA 22216                 6889 Boone Blvd.
(703)908-9312 voice and fax         Vienna, VA 22182-2623
dmcooper@ix.netcom.com              (703)734-5611 (voice)
                                    (703)790-0385 (fax)
                                    pfleeger@arca.com

Newsletter Editor:                  Chair, Subcommittee on Academic
Carl Landwehr                         Affairs:
Code 5542                           Prof. Karl Levitt
Naval Research Laboratory           University of California, Davis
Washington, DC 20375-5337           Division of Computer Science
(202)767-3381                       Davis, CA 95611
landwehr@itd.nrl.navy.mil           (916)752-0832
                                    levitt@iris.ucdavis.edu

Standards Subcommittee Chair:       Chair, Subcommittee on Security
Greg Bergren                          Conferences:
10528 Hunters Way                   Dr. Stephen Kent
Laurel, MD 20723-5724               BBN Corporation
(410)684-7302                       70 Fawcett Street
(410)684-7502 (fax)                 Cambridge, MA 02138
glbergr@missi.ncsc.mil              (617) 873-3988
                                    kent@bbn.com

________________________________________________________________________
Information for Subscribers and Contributors
________________________________________________________________________

SUBSCRIPTIONS: Two options:
1.
To receive the full ascii CIPHER issues as e-mail, send e-mail to
   cipher-request@itd.nrl.navy.mil (which is NOT automated) with
   subject line "subscribe".
2. To receive a short e-mail note announcing when a new issue of CIPHER
   is available for Web browsing or downloading from our ftp server,
   send e-mail to cipher-request@itd.nrl.navy.mil (which is NOT
   automated) with subject line "subscribe postcard".

To remove yourself from the subscription list, send e-mail to
cipher-request@itd.nrl.navy.mil with subject line "unsubscribe".

Those with access to hypertext browsers may prefer to read Cipher that
way. It can be found at URL
   http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher

CONTRIBUTIONS are invited. Cipher is a NEWSletter, not a bulletin board
or forum. It has a fixed set of departments, defined by the Table of
Contents. Please indicate in the subject line for which department your
contribution is intended. For Calendar entries, please include an
e-mail address for the point-of-contact. ALL CONTRIBUTIONS ARE
CONSIDERED PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY. All reuses of
Cipher material should respect stated copyright notices and should cite
the sources explicitly; as a courtesy, publications using Cipher
material should obtain permission from the contributors.

BACK ISSUES: There is an archive that includes each copy distributed so
far, in ascii, in files you can download at URL
   http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/cipher-archive.html
There is also an anonymous FTP server that contains the same files. To
access the archive via anonymous FTP:
1. ftp www.itd.nrl.navy.mil
2. At the prompt for ID, enter "anonymous"
3. At the prompt for password, enter your actual, full e-mail address
4. Once you are logged in, change to the Cipher directory:
      cd pub/cipher
5. Now you can request any of the files containing Cipher issues in
   ascii. Issues are named in the form EI#N.YYMM, where N is the number
   of the issue desired and YYMM gives the year and month it appeared
   (this issue is EI#22.9707).
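As a small illustration (an editor's sketch, not part of the archive's
own tooling), the EI#N.YYMM naming convention above can be expressed in
a few lines of Python; the function name here is our own invention:

```python
# Sketch only: build a Cipher archive file name following the EI#N.YYMM
# convention described above. The function name is illustrative, not an
# official utility.
def cipher_issue_filename(issue_number, year, month):
    """Return the archive name for a given Cipher issue, e.g. EI#22.9707."""
    # YY is the last two digits of the year; MM is the zero-padded month.
    return "EI#%d.%02d%02d" % (issue_number, year % 100, month)

# This issue: number 22, July 1997
print(cipher_issue_filename(22, 1997, 7))  # EI#22.9707
```

So issue 22 of July 1997 maps to the file EI#22.9707 in pub/cipher.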
========end of Electronic Cipher Issue #22, 12 July 1997=============