                                  CIPHER
============================================================================
     Newsletter of the IEEE Computer Society's TC on Security and Privacy

     Electronic Issue 90                                    May 30, 2009

     Hilarie Orman, Editor                  Sven Dietrich, Assoc. Editor
     cipher-editor @ ieee-security.org      cipher-assoc-editor @ ieee-security.org

     Yong Guan, Book Review Editor          Calendar Editor
     cipher-bookrev @ ieee-security.org     cipher-cfp @ ieee-security.org
============================================================================
The newsletter is also at http://www.ieee-security.org/cipher.html
Cipher is published 6 times per year

Contents:

 * Letter from the Editor
 * Commentary and Opinion
    o Review of the Symposium on Security & Privacy (Claremont Hotel
      Resort and Spa, Berkeley, CA, May 17-20, 2009) by Martin Szydlowski
    o Richard Austin's review of "Chained Exploits: Advanced Hacking
      Attacks from Start to Finish" by A. Whitaker, K. Evans and J. Voth
    o Book reviews, conference reports, commentary, and news items from
      past Cipher issues are available at the Cipher website
 * Conference and Workshop Announcements
    o Upcoming calls-for-papers and events
 * List of Computer Security Academic Positions, by Cynthia Irvine
 * Staying in Touch
    o Information for subscribers and contributors
    o Recent address changes
 * Links for the IEEE Computer Society TC on Security and Privacy
    o Becoming a member of the TC
    o TC Officers
    o TC publications for sale

====================================================================
Letter from the Editor
====================================================================

Dear Readers:

The 30th Security and Privacy Symposium was held in May at its usual
location, the Claremont Hotel in Berkeley, California, USA. The event was
congenial and informative, as always. For those who could not attend, we
have notes from the conference by Martin Szydlowski.

Attendees (there were 270 of them!) received a DVD with all papers from
the history of the conference, as well as all the papers from the other
Technical Committee-sponsored event, the Computer Security Foundations
Workshop (coming up in July).

Next year is the 30th anniversary of the meeting, and festivities are in
the works. Several computer scientists at this year's event wondered
aloud about the concept "30th meeting != 30th anniversary". Perhaps this
explains the prevalence of buffer overflow bugs!

At the business meeting, Sven Dietrich, associate editor of this
newsletter, was nominated and confirmed as the next vice chair of the
Technical Committee, to take office in January of 2010.

As of this date, the US government's cybersecurity czar has not been
named. Whoever it may be will surely have a large task ahead, as
cybersecurity lags far behind cyberattacks in today's world.

Tend to your firewalls.
Hilarie Orman
cipher-editor @ ieee-security.org

====================================================================
Commentary and Opinion
====================================================================

Book reviews from past issues of Cipher are archived at
http://www.ieee-security.org/Cipher/BookReviews.html, and conference
reports are archived at
http://www.ieee-security.org/Cipher/ConfReports.html

____________________________________________________________________
Review of the Symposium on Security & Privacy
Claremont Hotel Resort and Spa, Berkeley, CA
May 17-20, 2009
by Martin Szydlowski
____________________________________________________________________

Introduction, by Sven Dietrich, IEEE Cipher Associate Editor

About 260 participants attended the 30th Oakland conference this year to
listen to the presentations, to mingle in the hallways, at the reception,
or at the poster session, and perhaps, for some, to skip a talk or two to
go to the spa (we know who you are ;)

We are looking forward to the 30th anniversary next year ("anniversary"
implies zero-based counting) at the 31st Oakland conference, with lots of
special events. In the meantime, please enjoy this year's write-up by
Martin Szydlowski (UCSB), complemented by additional Q&A captured by
Matthias Vallentin (ICIR) and some fill-ins here and there by yours truly.

-----------------------------
Conference notes by Martin Szydlowski

Day 1

=================================
First Session: Attacks and Defenses
Chair: Tadayoshi Kohno

Wirelessly Pickpocketing a Mifare Classic Card (Best Practical Paper
Award), by Flavio D. Garcia, Peter van Rossum, Roel Verdult, Ronny
Wichers Schreur (Radboud University Nijmegen)

Peter van Rossum presented. He showed how the Mifare Classic card, the
first card up from a non-crypto RFID card in the Mifare family, was
broken. He also showed the timeline of the attack and mentioned the legal
wrangling with the manufacturer, who tried to halt the publication of the
attack. He pointed out that these security problems do not exist in the
higher-end Mifare cards (which use AES rather than an LFSR), but those
are more expensive, of course. The Mifare Classic is in use as a
transportation fare card, e.g., the Charlie card (Boston), the Oyster
card (UK), and cards in the Netherlands and in some ski resorts in
France. At lunch Peter showed a whole collection of such cards, some
plastic, some paper, including peeled ones where you can see the RFID
chip exposed with its antenna.

---------------------
Plaintext Recovery Attacks Against SSH
by Martin R. Albrecht, Kenneth G. Paterson, Gaven J. Watson
(Royal Holloway, University of London)

This talk showed an attack on the Binary Packet Protocol in SSH, causing
leaks of plaintext bits. The SSH developers were given some time to fix
the flaw, and recent versions of OpenSSH are fixed. The result: the
authors can recover 14 bits of plaintext from an arbitrary block of
ciphertext with probability 2^(-14), and 32 bits of plaintext from an
arbitrary block of ciphertext with probability 2^(-18).

---------------------
Exploiting Unix File-System Races via Algorithmic Complexity Attacks
by Xiang Cai, Yuwei Gui, Rob Johnson (Stony Brook University)

This talk showed the breaking of two file-system race condition defense
mechanisms. The speaker argued that these attacks are applicable to
multiple Unix operating systems and affect most Unix file-system races.
One person asked whether a true "atomic" statement in the kernel would
resolve the problem, but the speaker was not sure that was doable.
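To make the setting concrete, here is a minimal Python sketch (not from
the paper; the helper function and path handling are hypothetical) of the
classic "check-then-use" race that defenses of this kind try to close: a
privileged program checks permissions on a path and then opens it, and an
attacker who wins the race swaps the path for a symbolic link in between.

    # Illustrative only -- not from the paper.  A privileged program checks
    # access rights on a path, then opens it; an attacker who wins the race
    # can replace the path with a symlink to a protected file in between.
    import os

    def save_report(path, data):          # hypothetical helper
        if not os.access(path, os.W_OK):  # time of check
            raise PermissionError(path)
        # ... window in which the attacker may swap `path` for a symlink ...
        with open(path, "w") as f:        # time of use
            f.write(data)

    # Safer patterns avoid the separate check entirely, e.g. by dropping
    # privileges before open() or by using O_NOFOLLOW / openat()-style calls.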
=================================
Second Session: Information Security
Chair: Patrick Traynor

---------------------
The first talk of this session was titled "Practical Mitigations for
Timing-Based Side-Channel Attacks on Modern x86 Processors". As
motivation, the speaker explained the idea of side-channel attacks in
general and then timing-based attacks in particular. As an example, he
showed a snippet of source code from a modular exponentiation algorithm
where, depending on the input, a computationally expensive branch might
be taken, introducing a measurable delay in the execution time of that
code segment. He went on to explain that side channels can be introduced
in every step of the creation of a cryptographic algorithm -- when the
algorithm is devised by a cryptographer, implemented by a programmer, and
compiled and executed on a particular processor. The work presented here
focuses on eliminating side channels during the compilation step by
introducing a "side-channel aware" compiler. After explaining the basic
workings of a compiler, he explained their approach to eliminating
control flow that leads to run-time differences. A simple and complete
solution to this problem is possible on processor architectures that
support conditional execution; on modern Intel processors, however, only
assignment operations (e.g., conditional moves) support conditional
execution, while arithmetic instructions do not. To work around this
issue, he introduced several code transformation techniques that
eliminate timing differences and presented experimental results showing
the effectiveness of the solution. He singled out two Intel
micro-architectural features that are problematic to handle: first, the
latency of the DIV instruction depends on the operands, and side effects
(division by zero) need to be taken into account. There are workable
solutions, but they incur considerable overhead. The second problematic
feature is pessimistic load bypassing, which he will try to address in
future work.

The first question was whether it would be feasible to just compute the
worst-case time and introduce a wait in the algorithm. The answer was
that the worst-case time is hard to compute and the mechanisms for the
wait would be very hard to implement. Asked if a randomized delay would
help, he said he did not think so. The last question was about dead code
elimination in the processor and how it affects the presented solution.
He answered that with current processors, this does not affect the
conditional MOV instruction.

-----------------------
The second talk of this session was titled "Non-Interference for a
Practical DIFC-Based Operating System", presented by Maxwell Krohn. Krohn
started by explaining the possible approaches to Decentralized
Information Flow Control (DIFC), one being language-based and
compile-time, the other OS-based and run-time. The focus of Krohn's talk
was a proof of security for a DIFC-based OS. He explained how DIFC is
enforced through security labels attached to processes and the two
approaches used to expose the functionality to applications: implicit
"floating" labels and labels that have to be set explicitly by the
requesting application. Implicit labels bear an intrinsic risk of
information leaks; therefore, Krohn's work focused on proving the
security of explicit labels by showing that the property of
non-interference (i.e., that one group of processes can in no way
interfere with another group of processes) holds at all times.
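As a much-simplified illustration of label-based flow control (my own
sketch, not the model from the paper, which also covers capabilities for
adding and dropping tags), data may flow from one process to another only
if the receiver carries every secrecy tag the sender carries:

    # Much-simplified sketch of a DIFC-style flow check (illustrative only;
    # real systems also track capabilities to add or drop tags).
    def can_send(sender_secrecy, receiver_secrecy):
        # Flow is allowed only if no secrecy tag is "lost" along the way.
        return sender_secrecy <= receiver_secrecy   # subset test on sets

    editor  = {"alice_private"}   # process holding Alice's secret data
    spool   = {"alice_private"}   # equally labeled process: may receive
    network = set()               # unlabeled process: must not receive

    assert can_send(editor, spool)
    assert not can_send(editor, network)   # would declassify, so denied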
They modeled the system using the Communicating Sequential Processes
(CSP) process algebra, and Krohn briefly outlined the proof, referring to
the paper for details. He concluded by saying that provable security in a
DIFC-based OS is possible.

Q: If processes can change their own levels, how does that affect the model?
A: If a high process adds more tags, it is still a high process.

=================================
Third Session: Malicious Code
Chair: Ulfar Erlingsson

---------------------
The first talk of this session was "Native Client: A Sandbox for
Portable, Untrusted x86 Native Code". This won the Best Paper award this
year. The speaker started by outlining the problems with current web
applications, namely the performance limits when using built-in browser
methods (JavaScript) and the security implications of browser plug-ins.
Then he presented Native Client (NaCl), a safe execution environment for
untrusted code that is portable to all browsers and operating systems,
offers almost native performance, and is small and simple. He explicitly
stated that NaCl will not execute existing web-app code; instead, a
recompilation for NaCl is necessary. To demonstrate the performance of
NaCl, he demoed Quake, a 1996 3D action game, running inside a browser
window. To achieve this performance, NaCl has support for many modern CPU
features like SSE. The system is open, and the authors provide a gcc
toolchain.

The idea behind NaCl is the following: one browser plug-in runs all the
apps in a process jail that allows only limited interaction with the OS.
The safety is comparable to JavaScript, while the performance benefits
from native machine code. The threats that need to be mitigated when
allowing native code are the use of privileged syscalls and unusual
control flow and read/write instructions. Some other threats, like
infinite loops, memory leaks, and covert channels, are not addressed by
NaCl in the current version. To ensure control-flow integrity, the
instruction set needs to be restricted and some instructions need to be
restructured. This allows for a reliable disassembly, which is required
to verify the safety of the code. Code that fails the disassembly step is
automatically deemed unsafe. To constrain data and instruction
references, x86 segmentation is used, which allows for integrity checks
without any overhead. The NaCl runtime environment features a subset of
POSIX APIs, an inter-module communication interface, multimedia support,
and RPCs. Google conducted a public safety test of NaCl, exposing several
weaknesses that were fixed in subsequent releases. The biggest attack
surface was definitely the browser integration.

There were several questions after the talk. The first was about the
validation step in the sandbox and whether it is possible to subvert it
with a hacked compiler. He answered that the parsing is pretty robust.
The second question was about the possibility of self-modifying code.
According to the speaker, this is possible within the constraints of the
sandbox. Another question was about the complexity of applications under
NaCl. He answered that for now, the apps are not allowed network or
filesystem access.

Q: What about the Adobe Flash player?
A: It has its own network stack and would require significant surgery to
   incorporate.

---------------------
The second talk of the session was "Automatic Reverse Engineering of
Malware Emulators" (Best Student Paper Award). In the beginning, the
speaker explained the reasons behind, and the evolution of, malware
obfuscation.
The overall trend goes toward finer-grained obfuscation, with
instruction-level obfuscation being the current state of the art. With
this method, the malware logic is transformed into an arbitrary byte-code
language and executed through an emulator packaged into the same
executable. Contrary to previous methods, where a decryption routine
would unpack the malicious code to memory and then execute it, malware
emulation decodes the instructions one by one as they are fetched and
executed. Hence, there is no point in time at which the whole malicious
binary is in memory in native machine-code form. Of course, the emulator
and byte-code can be modified on propagation, producing an infinite
number of variants of the same code. He went on to explain that static
analysis does not work at all on this type of obfuscation, while recent
dynamic analysis techniques like multi-path exploration fail to scale
well under these conditions.

Then he presented their approach to the problem: automatically identify
and reverse-engineer the emulator routines to retrieve a decrypted
(transformed back to x86 machine code) version of the malware payload. No
prior knowledge of the byte-code or the emulator workings is required;
instead, the authors rely on the fact that an emulator must contain a
CPU-like fetch-decode-execute main loop and an associated program counter
(PC). He briefly explained the techniques they use to find the location
of the PC and the fetch-decode-execute code and highlighted some
implementation details. Finally, he showed the results of several
synthetic tests, where the authors obfuscated a binary and then tried to
analyze it, and tests on real-world emulated malware samples. In both
cases, the system was able to identify the emulator routine and produce a
control-flow graph for it.

The first question was about any further steps that malware authors could
take to obfuscate their code. He answered that, as far as granularity
goes, instruction-level obfuscation is the end. Another question
concerned the applicability of this technique to other types of
obfuscation, and the answer was that other types of obfuscation require
different approaches. The next asker wanted to know if any "innocent"
programs had been identified as malware emulators, to which he answered
that code parsing network traffic exhibits similar behavior and, hence,
might cause false positives. The last question was whether the system is
capable of extracting other semantics besides control flow from the
emulator, and he said that will be included in future work.

Q: In this arms race, what's the next step of the attackers?
A: Emulation is the finest granularity of obfuscation, but there is
   potential for variation.
Q: Do you look at anything beyond Control Flow Graph (CFG) extraction?
A: No.

---------------------
The last talk of this session was "Prospex: Protocol Specification
Extraction", presented by Gilbert Wondracek. As motivation, Gilbert
explained the usefulness of stateful network protocol specifications for
applications like fuzz testing, deep packet inspection, and comparison of
different implementations of the same protocol. Manual reverse
engineering of protocol specifications is a tedious and time-consuming
process (e.g., the Microsoft SMB protocol was manually reverse-engineered
over many years for the Samba project), while current automatic solutions
are only able to deduce the message format and not the underlying state
machine.
In order to obtain the state machine, Prospex performs four steps. First,
network sessions are analyzed by recording execution traces of
applications using the protocol; dynamic data tainting is used to track
the flow of received data through the application. In the second step,
message format inference is performed using a multiple sequence alignment
algorithm. Then the message specifications are clustered according to
three features (input, execution, and output similarity). The result is a
generalized format for each message type. In the final step, a state
machine is inferred by first creating a labeled augmented prefix tree
acceptor (APTA) and then merging states to create a minimal, yet
expressive, DFA that will accept all the observed traces.

To evaluate the system, the authors tested it on four real-world network
protocols, three text-based (SMTP, SIP, Agobot) and one binary (SMB), and
were able to create state machines for all of them. To assess the quality
of the generated state machines, they measured the parsing success on
control datasets, compared the machines with a reference (manually
generated) state machine, and pitted them against other similar
algorithms. The results in all three tests were positive, and Prospex
outperformed the other existing algorithms in both recall and precision.
Gilbert then showed an application example: by using the extracted state
machines to create input for a stateful fuzzer, they were able to
identify two (previously known) vulnerabilities in Samba and Asterisk. At
the end, Gilbert mentioned some limitations of the current system: while
their implementation only inferred the "server-side" state machine, it is
conceivable to do this for bidirectional communication. Encrypted network
traffic, on the other hand, poses a significant difficulty, and the
quality of the state machine is directly influenced by the quality of the
training data set.

The first question was about how easy it would be to confuse the system
and cause a state explosion. The second asker wanted to know if invalid
sessions were included in the training set, to which the answer was no.
The last question was about the importance of the different similarity
scores for detection, to which Gilbert answered that the scores were
weighted according to a simple, fixed scheme and did not require tuning
for different protocols.

Q: If an attacker creates a program with zillions of states, does this
   approach collapse?
A: Correct.
Q: Would it be possible to identify the message types, e.g., within SIP?
A: No.

=================================
Fourth Session: Information Leaks
Chair: Radu Sion

---------------------
The first talk of this session was "Quantifying Information Leaks in
Outbound Web Traffic". The speaker started by explaining the threat of
data leaks, which can occur due to outsider (hacker) or insider activity.
In either case, the goal of the attacker is to steal information and
avoid detection while transferring it over the network, and to that end
he might use different techniques to obfuscate the data and hide it in
legitimate network traffic. To mitigate this threat, the network traffic
must be examined and the leaks identified and quantified. There are both
on-line (intrusion detection) and off-line (forensics) application cases.
In order to make identification of possible leaks easier, he suggested
eliminating all network traffic that cannot be used to leak information;
his approach focuses on HTTP requests, the most likely vector in secured
enterprise environments.
In order to achieve that, the authors propose to compare the actual
request with the "expected" request and strip away everything (using edit
distance comparison) that matches the expectation; what remains is then
an upper bound on the amount of possibly leaked information. The expected
data consists of fields that can be extracted from the protocol
specification (e.g., fixed headers), from previous requests (User-Agent,
Referer), and from the processed HTML pages that originated the request
(form fields, links). While the protocol specification is available and
previous requests can be extracted directly from network traffic, HTML
pages need to be parsed and the contained JavaScript code needs to be
executed to obtain the DOM tree of the document as seen by the browser.
His proposed system does all that: it uses SpiderMonkey, the Mozilla
JavaScript interpreter, to obtain the final DOM tree of the document and
extracts all links from there.

The authors evaluated their system on constructed and real-world browsing
traces. The synthetic test was performed on five different web sites, and
only for one of them, a social networking site, were some links not
retrieved. This was because that site created new links on user input
events, which were not simulated by the system. The test on real data
showed that the presented technique can narrow down the amount of data
possibly leaked far more precisely than other methods. However, the
performance of the system right now is only suitable for off-line
(forensic) analysis; it would need substantial improvements to work
within an IDS. He explained that the main bottlenecks are JavaScript
execution and edit distance computation.

The first question concerned the feasibility of using the system on-line
for IDS purposes, and the answer was that, as stated in the talk, the
performance is not yet sufficient for this application. The second asker
wanted to know if the authors had successfully discovered any leaks with
their system. He answered that this is not the goal of this project,
which is only about isolating the places where leaks might occur.

---------------------
The second talk of this session was "Automatic Discovery and
Quantification of Information Leaks". In this talk, the speaker presented
an automated approach to determine how much information about a program's
input can be determined by observing the program's output. For example, a
failed password check reveals that the input was not, indeed, the
password. The presented approach is a combination of program verification
and information theory, and the authors have a prototype implementation
that works on programs written in C. The system, called DisQuant,
consists of two parts: the first part, Disco, is responsible for
partitioning the "secrets" space -- the finer the partitioning, the more
secrets are revealed. The second part, Quant, quantifies information
leaks using combinatorial methods. To implement the system, the authors
used off-the-shelf model-checking software and other tools. He used an
electronic auction as an example of how the system identifies information
leaks. Finally, he explained that, given enough computation time, the
system can be arbitrarily precise; for reasonable performance, however,
some precision must be sacrificed, and by under-approximating and
over-approximating, DisQuant can give guarantees on the bounds of
possible leaks.
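To make the counting idea concrete, here is a toy Python sketch (mine,
not the DisQuant tool): partition the secret space by what an attacker
can observe and measure the worst-case leak as the logarithm of the
number of distinguishable classes; the example observation functions are
hypothetical.

    # Toy illustration of leak quantification (not the authors' tool).
    # Secrets that produce the same observable output are indistinguishable
    # to the attacker, so the worst-case leak is log2(#distinct outputs).
    import math

    def leak_in_bits(secrets, observe):
        classes = {observe(s) for s in secrets}
        return math.log2(len(classes))

    pins = range(10000)                                   # 4-digit PINs

    # A failed equality check splits the space into two classes: ~1 bit.
    print(leak_in_bits(pins, lambda p: p == 1234))        # 1.0
    # Leaking the last digit yields ten classes: ~3.32 bits.
    print(leak_in_bits(pins, lambda p: p % 10))           # 3.321...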
Asked how bad the performance really was, he replied that for now the
system only works on small, contrived examples and not on real-world
software. The second asker wanted to know if the system might be useful
for forensics or reverse engineering, and the answer was no, since more
information is required. Another person wanted to know if DisQuant could
be used to determine information leaks in crypto systems, to which he
answered that the system would definitely over-approximate the leaks.
Another question was whether the system would give recommendations for
improvements, to which the answer was a plain no.

Q: What about larger programs?
A: At the moment, precise results only for small programs; scaling up
   might help.
Q: Rob Johnson: what about encryption algorithms?
A: Over-approximation of what an attacker can achieve.
Q: Does this tool recommend how to refactor code?
A: No, it's a pure analysis tool and does not give any recommendations.

---------------------
The last talk of the day was "CLAMP: Practical Prevention of Large-Scale
Data Leaks", presented by Bryan Parno. The speaker started by reminding
everyone that the number of data leaks can only increase due to the
ever-growing amount of information available online. He identified an
architectural flaw in current web services that largely contributes to
possible leaks: web applications consist of a rather secure server
platform running usually poorly audited and insecure scripts and
application code that have access to all the information stored in the
database back-end. Parno argued that the complexity of server hardening
or of using secure frameworks (e.g., DIFC) hinders adoption. He proposed
an alternative solution to this problem: give each user her own web
server, with access limited to only that user's data. In this way, when
an attacker compromises a server, he will fail to obtain any data except
his own.

He proposed to implement such a system the following way: the user
authentication code is taken out of the web application and into an
Authenticator module, which is compact and easy to secure. After
authentication, the Dispatcher spawns a web server instance for that
user, and the Restrictor creates database views for that server instance
that contain only data relevant to that user. For this to work, a set of
configuration files needs to be created for each application by hand.
Although this seems hard and time-consuming, in practice it is easy to
identify the pieces that need to be isolated, since developers generally
create human-readable code and database schemas. He demonstrated the last
point by showing real-world database schemas where the relevant tables
and columns were easily identifiable. He suggested several methods for
implementing such a system (e.g., JVM, SELinux) and said that they used
Xen for the prototype system. Xen has support for "flash" cloning of
virtual machines, which is still unstable but offers great performance
and minimal overhead. For the Restrictor, a SQL proxy was used. He gave a
few examples of real-world web apps they have converted to CLAMP and how
long it took to do so, with most apps ready for use in several hours.
Finally, he showed some performance statistics indicating that, while
there is a noticeable drop in throughput, it can mostly be attributed to
the VM cloning process, and advancements in that area should mitigate the
effect.

The first question was what happens if the web server is compromised
before authentication.
He replied that this is not possible, since authentication takes place
before the web server is spawned. Asked how it is possible to determine
what data users want to see, he answered that all that users are supposed
to see can be deduced from the application code. The next question
concerned the mechanisms used to restrict database access, and the reply
was database views. Another question concerned transactions in which data
from multiple users is involved, for example a funds transfer. He replied
that such actions should be separated from the web server, with the user
only able to issue the request and a separate application handling the
actual transaction. Asked what role he sees for the web server, the
speaker answered that the web server should be responsible for presenting
the data, while authentication and database access should be handled
externally. The next question was about legitimate uses of aggregated
user data, to which the answer was again that views should be used for
that. Finally, one audience member commented that views are already the
best practice for database access, and the speaker agreed, stating that
their system creates per-user views on the fly and lifts the burden from
the developer.

Q: If an attacker compromises the web server before authentication, a
   user could masquerade as any user, right?
A: No, the User Authentication (UA) module is external and not part of
   the web server.
Q: How to adapt to data that is only visible to a limited set of people?
A: It should be possible to encode these restrictions in data access
   policies.
Q: Query Restrictor: what about SQL code provided by the user?
A: Use database views.
Q: How to handle a banking application where multiple users' data is
   touched at the same time?
A: Don't handle the sensitive tasks in the web server; instead use an
   external application.
Q: Rob Johnson: what's the web server's role given that the application
   is now spread over many different instances? Does it still have a role?
A: The web server renders data and content.
Q: How to handle aggregate data?
A: Database views again.

====================
Day 2

Fifth Session: Privacy
Chair: George Danezis

---------------------
The first talk was "De-anonymizing Social Networks", presented by Arvind
Narayanan. The talk started with Arvind giving examples of different
social networks and the information they may contain. The examples ranged
from AT&T's telephone call graphs and doctor-patient networks to high
school romantic and sexual relationship networks. On-line social networks
have grown in popularity in recent years, and the amount of personal
information contained in them is coveted by data-mining researchers and
advertisers, among others. To protect user privacy, such graphs are
usually anonymized before publication. Arvind then outlined the key
contributions of this work: first, he and his co-authors devised a
privacy framework to model the storage and distribution of private
information and background data; then they developed a de-anonymization
attack on social network graphs; and finally they experimentally
validated the attack on real-world data. The presented model states that
to obtain privacy, node attributes, edge attributes, and edge existence
in a social graph need to be protected, and further, that anonymity alone
is necessary but insufficient to guarantee privacy. The attack Arvind
presented is based on the following assumptions: the graph is anonymized
and published, and the adversary has access to large-scale background
knowledge.
Arvind explained that overlapping membership in different social networks
is key to the success of the attack, which proceeds by graph aggregation:
the anonymized target graph containing sensitive information is compared
to a non-anonymized but insensitive, publicly available graph (the
auxiliary graph). The process is not straightforward, since the auxiliary
graph is generally not isomorphic to the target graph and is also noisy,
imprecise, and large. The attack works in two stages: first, several seed
nodes need to be identified in the target graph. Some detailed background
knowledge is required to find these nodes, acquired, for example, through
phishing or similar attacks. Once a sufficient number of seeds has been
identified and mapped to both graphs, the propagation phase begins. Using
only the degree and the number of common neighbors of matched nodes, the
target and auxiliary graphs are mapped through an iterative process.
Arvind's team found that the edge overlap in real-world graphs is around
15%; therefore, they devised many heuristics to improve the matching.
Arvind did not go into detail on the heuristics and instead moved on to
the evaluation. The algorithm was tested on graphs collected from Flickr
and Twitter, which had an edge overlap of 15%. The team anonymized one of
those graphs, leaving 160 seeds, and they were able to recover
(de-anonymize) 31% of the target graph.

The first asker wanted to know if the algorithm would still be effective
if everybody had the same number of friends (i.e., all nodes had the same
degree), to which Arvind answered that the number of common neighbors is
the more important criterion. Another audience member wanted to know if
this means anonymity is unachievable on the Internet, and the answer was
that with the amount of (high-dimensional) data already present on the
web, it is almost impossible. The final question concerned the difference
between the presented work and previous contributions, and Arvind
explained that they are the first to use a completely passive approach
leveraging globally available background information.

-----------------------
The second talk was "Privacy Weaknesses in Biometric Sketches", presented
by Koen Simoens. To motivate his work, Koen explained the move to
biometric security, which in recent times has been considered a better
alternative to passwords or other authentication tokens. The problem with
biometric data, however, is that it usually cannot be reproduced with
100% accuracy, so methods like simple hashes, which allow comparisons but
are not reversible, do not work. Instead, a noise-tolerant algorithm to
compare similarity is required. The state-of-the-art solutions are "fuzzy
sketches", and Koen explained the process of creating them and
authenticating with them. The threat model he presented assumes an inside
attacker who, having access to the sketches, could potentially combine
related sketches and identify their owner. The main contribution of this
work, according to Koen, is the definition of security notions for
distinguishability and reversibility attacks on biometric sketches and
the evaluation of two sketch algorithms under these aspects. He then
explained the fuzzy commitment scheme and the inherent information
leakage therein, which can be leveraged for de-anonymization or data
correlation (attributing data in different databases to the same user)
attacks.
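For readers unfamiliar with the construction, here is a toy
fuzzy-commitment sketch in Python (illustrative only, using a trivial
repetition code; real schemes use proper error-correcting codes, and the
attacks described exploit exactly the structure that such sketches
store):

    # Toy fuzzy commitment: small measurement noise in the biometric
    # reading is absorbed by the error-correcting code.
    import hashlib, secrets

    def encode(bits, rep=3):                 # trivial repetition code
        return [b for b in bits for _ in range(rep)]

    def decode(bits, rep=3):                 # majority vote per group
        return [int(sum(bits[i:i + rep]) > rep // 2)
                for i in range(0, len(bits), rep)]

    def xor(a, b):
        return [x ^ y for x, y in zip(a, b)]

    def enroll(reading):
        key = [secrets.randbelow(2) for _ in range(len(reading) // 3)]
        codeword = encode(key)
        # stored sketch: hash of the codeword plus its offset to the reading
        return hashlib.sha256(bytes(codeword)).hexdigest(), xor(reading, codeword)

    def verify(reading, sketch):
        digest, offset = sketch
        candidate = encode(decode(xor(reading, offset)))   # error-correct
        return hashlib.sha256(bytes(candidate)).hexdigest() == digest

    w  = [1, 0, 1, 1, 0, 1, 0, 0, 1] * 2     # enrolled reading (18 bits)
    w2 = list(w); w2[4] ^= 1                 # later, slightly noisy reading
    print(verify(w2, enroll(w)))             # True: the flipped bit is corrected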
Koen then briefly explained the details of the attack on the
distinguishability of fuzzy sketch schemes and showed that related
sketches are always identified correctly, while unrelated sketches have a
noticeable false positive rate. Finally, he outlined an attack that
recovers the original input from related sketches, which requires solving
a linear system.

The first question was about the dataset used for evaluation, and Koen
replied that they used data from around 100 members of their department,
each giving multiple samples. Another question concerned the technical
details of the sketch format, in particular the offset that was used in
the attack. The answer was that the offset is random for every sketch but
stored within it; otherwise a successful comparison would not be
possible.

Q: Where did you get the data and how large is it?
A: From an industrial partner, 100-200 individuals with multiple samples.
Q: There exists related work in passport schemes; does it apply there as
   well?
A: Don't know too much about these schemes; it might apply as well.

---------------------
The final talk of this session was "The Mastermind Attack on Genomic
Data", presented by Michael Goodrich. He started by explaining the
motivation behind genome comparisons (e.g., paternity tests) and the two
types of comparisons performed: aligned matching and sequence alignment.
Mechanisms like secure multiparty computation exist to perform genome
matching in a completely private manner; however, repeated comparisons
will leak information. Then Michael explained the mechanics of the
Mastermind game and how repeated genome matching can be modeled in
similar terms. Since a genome comparison will show which portions of the
genome match and which don't, repeated comparisons with different
patterns can reveal the whole sequence that is being compared against.
The Mastermind problem is provably NP-complete; however, this gives a
false sense of security, since strategies exist that succeed with a
polynomial number of guesses. Michael then briefly explained the origins
and characteristics of human DNA and presented the algorithms he devised
that allow recovery of the reference string through multiple comparisons,
and that are not strictly limited to DNA comparisons. When aligned
matching is performed, he uses a divide-and-conquer strategy that
iteratively splits the string into smaller pieces until each part can be
matched. While delegating the proof to the paper, he showed that the
worst-case bound for the number of queries is indeed polynomial. Then he
demonstrated the algorithm for sequence alignment, which uses frequency
analysis, and also proved that the worst-case number of tries is within
polynomial bounds. He further suggested that these bounds could be
improved by leveraging knowledge of the human genome structure, which for
the most part is not a random string. In the end, Michael presented an
evaluation performed on real-world DNA (1000 random mitochondrial DNA
samples). Using aligned matching, it was possible to guess the whole
sequence with about 300 comparisons, while sequence alignment required
under 1000 comparisons. He explained that the only defense against this
attack would be to disallow indexing of the genome.

The next question was whether it is possible to retrieve a whole genome
database with this method, and Michael said that since it is an adaptive
attack, it can only reconstruct one sequence at a time.
Another audience member suggested that using more information from
biology to combine individual base pairs into sequences could improve the
efficiency, to which the answer was that indeed, it could.

Q: Peter Neumann: would digrams and trigrams give better results?
A: Yes.
Q: With a small number of queries, is it possible to extract a large
   amount of data about many people?
A: Not going to be effective.
Q: Each DNA symbol does not appear independently, so why not use
   sequences of symbols?
A: Yes, attacks would be even more effective that way.

======================
Sixth Session: Formal Foundations
Chair: Vitaly Shmatikov

---------------------
The first talk was "A Logic of Secure Systems and its Application to
Trusted Computing", presented by Deepak Garg. Deepak started by
explaining the main parts of modeling secure systems design: a secure
system has certain security properties, and the adversary has certain
capabilities. To assess the security of a system, often an informal
approach is used, where known attacks are attempted against the system,
but this does not prove security against unknown attacks. A logic-based
analysis defines a system with properties and an adversary with
capabilities, and reasoning is used to prove security under these
assumptions. When a proof can be obtained, the system is secure;
otherwise the process gives hints for improving the system. Deepak then
introduced LS2, a language they devised to reason about the security of
systems. Their adversary model includes full local and network access but
no ability to break cryptography -- this enables the adversary to carry
out many common attacks. He then demonstrated the use of LS2 to verify
security properties of Static Root of Trust Measurement (SRTM), a
protocol to remotely verify the integrity of a Trusted Computing
platform, and found two new weaknesses and one previously known one.
Deepak then dove into the details of how SRTM is modeled with LS2 and how
a proof is obtained or weaknesses are discovered.

The first question referred to the SSH attack described in an earlier
session and whether it could have been predicted with LS2. Deepak replied
that this is out of the scope of their current work. The next asker
wanted to know what their analysis of the TPM resulted in, to which the
reply was that they have uncovered some weaknesses in the SRTM protocol.

Q: Would the SSH attack we saw earlier be detectable?
A: Out of scope of our work.
Q: What's the result of your model?
A: A precise characterization of the properties of the PCRs.
Q: TPM 1.2 has a dynamic root of trust?
A: A correctness proof is available. We have accounted for that version
   and a version that breaks SRTM.

---------------------
The second talk was "Formally Certifying the Security of Digital
Signature Schemes", presented by Santiago Zanella-Beguelin. (No notes for
this talk.)

---------------------
The last talk of the session was "An Epistemic Approach to
Coercion-Resistance for Electronic Voting Protocols", presented by Tomasz
Truderung. He opened by explaining that e-voting protocols are among the
most complex protocols, since they exhibit contradictory requirements. He
explained the mechanics used in current e-voting systems and the need for
a voter-verifiable paper trail. However, this opens up a possibility for
voter coercion that does not exist with traditional voting methods, since
now the coercer has the means (the paper trail) to verify that the voter
adhered to his wishes.
This type of coercion is difficult to define formally, since
cryptographic definitions are secure but hard to use, while symbolic
definitions are easier to use but have weaker security. The goal of
Tomasz's work is to create a general, intuitive, and simple definition of
coercion resistance. He then explained the roles in a coercion setting --
the voter, the coercer, and the (honest) environment -- and defined
coercion resistance as the existence of a strategy that will achieve the
voter's goal while being indistinguishable from the strategy required to
achieve the coercer's goal. In order to make the proof work, some
outcomes of the election (e.g., the coercer's desired candidate gets zero
votes) must be excluded. Tomasz then dove briefly into a more formal
definition of the model and then stated that the approach also works for
multiple coerced voters, since that case can be reduced to a one-voter
problem. He continued with the evaluation of three e-voting systems and
singled out Civitas as an example. With the presented model, Tomasz was
able to prove that Civitas is coercion-resistant after the registration
phase; however, coercion is possible before registration is completed.

Asked if their model can handle invalid votes and intentionally spoiled
elections, Tomasz responded that this was not considered. Another
audience member pointed out that the slides lacked some logic details,
and Tomasz suggested talking off-line. The final question was what makes
the approach epistemic. The answer was that they did not take
probabilities into account.

=======================
Seventh Session: Network Security
Chair: Jonathon Griffin

---------------------
The first talk of this session was "Sphinx: A Compact and Provably Secure
Mix Format", presented by Ian Goldberg. Ian addressed the problem of
sending messages without revealing the sender's identity. Using one
trusted relay is not enough; you need a path of relays. Onion routing
(e.g., Tor) can do that, and it works in real time, which might be
required for certain applications, but it is vulnerable to traffic
analysis. Mix nodes, in contrast, use batch processing of messages, which
can defeat traffic analysis and is excellent for asynchronous
communication like e-mail, blogging, etc. The focus of Ian's talk was the
message format used by mix nodes, since it can leak information if not
implemented correctly. A desired property of a mix protocol is the
ability to reply to a message without knowing who sent it, and to ensure
anonymity a reply must be indistinguishable from a regular message. Ian
then gave a historical overview of mix protocols and their strengths and
weaknesses, and introduced Sphinx, a new compact mix protocol with
indistinguishable replies and provable security, a combination that none
of the previous protocols achieved. Ian presented the message format used
by Sphinx in an illustrative diagram and then outlined the proofs of
security, indistinguishability, and compactness for Sphinx.

The first question concerned the integrity checks of the payload, to
which Ian replied that they are built in and that changes to the
encrypted payload would randomize the contained data. The next asker
wanted to know if there is room for improvement, and the answer was that
the only possible improvement is a further reduction of overhead.

Q: Rob Johnson: Is the integrity only protected by a large block cipher?
A: (Yes)
Q: Security analysis in the Random Oracle Model?
A: (Yes)
Q: Does your work represent the last iteration, since we have all desired
   properties?
A: Reducing the size is still possible.
Q: How were the ECC coordinates represented?
A: Only the x-coordinate was used.

-----------------------
The next talk was "DSybil: Optimal Sybil-Resistance for Recommendation
Systems", presented by Haifeng Yu. The problem Haifeng outlined is that
on-line recommendation systems as used by Amazon, YouTube, etc. can
easily be subverted by influencing users or by using a Sybil attack --
creating a large number of fake identities that alter the
recommendations. To defend against Sybil attacks, one could tie on-line
identities more strongly to real identities, but that would raise many
privacy concerns. Resource challenges (e.g., CAPTCHAs) can be tackled by
botnets, and social-network-based defenses have proven insufficient.
Recommendation systems have a very low tolerance for Sybil attacks. To
mitigate this problem, Haifeng proposed to use an ancient idea -- trust
history -- and presented DSybil, a recommendation system based on
feedback and trust with provable guarantees on the worst-case number of
Sybils required to subvert the system. DSybil has been tested on a
one-year trace from Digg.com.

In order to make the system workable, several subtle aspects need to be
taken into account, for example, how to assign initial trust, how to grow
it, how to prevent participants from gaining trust for free, and how to
filter out non-helpful votes. To achieve that, the team leverages the
typical voting behavior of honest users. So far, the system only supports
good-bad choices, but it is conceivable to extend it to a 5-star rating
system or similar. The system is modeled as follows: only "good" votes
are considered, since negative recommendations are no help, and only one
vote per object per identity is permitted. Then Haifeng explained the
voting algorithm and how users can gain or lose trust. One important part
is that each user gets a "personalized" recommendation based on guides --
users with high trust that have similar taste, i.e., voting history. It
is necessary for the algorithm that the guides cover a large portion of
the good objects, while the number of guides does not need to be large.

Haifeng presented an evaluation performed on a one-year Digg trace
containing 0.5 million users and showed that three guides are sufficient
to cover 60% of the good objects, and after removing the top three, the
next five are good enough. In an attack with 10 billion Sybils against
1000 honest voters, new users received bad recommendations only 12% of
the time, and that number dropped to 5% after one week of using the
system.

The first question was which assumptions were made about the attacker,
and Haifeng responded that the attacker can know everything, including
the algorithm, and still be unsuccessful. The next asker wanted to
clarify whether each user has different guides, which is indeed the case.
The last question was whether the guides' reputation can be destroyed by
always voting against them, to which the answer was that this will affect
only the user's trust in her guides and have no global effects.

Q: What assumptions are you making about attacker nodes? Are they allowed
   to look at the first 100 users?
A: The attacker is allowed to have knowledge about everything.
Q: For different users, do guides have to be different?
A: Yes.
Q: Is trust only relative to Alice?
A: Correct.
Q: In practice, does Alice need to provide feedback to the system?
A: Correct.
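To give a flavor of the guide idea (this is my own toy sketch, not
DSybil's actual algorithm or its guarantees; the thresholds and growth
factor are made up), a user weights positive votes by her personal trust
in each voter, and only voters whose recommendations turn out to be good
ever gain trust:

    # Toy sketch of guide-based, trust-weighted recommendation
    # (illustrative only; DSybil's real algorithm and bounds differ).
    THRESHOLD = 1.0

    def recommend(voters, trust):
        # Sum Alice's personal trust over the identities that voted "good".
        return sum(trust.get(v, 0.0) for v in voters) >= THRESHOLD

    def feedback(voters, trust, object_was_good):
        # Trust grows only on good outcomes, so Sybils cannot farm it by
        # flooding the system with votes on bad or useless objects.
        if object_was_good:
            for v in voters:
                trust[v] = trust.get(v, 0.05) * 2

    alice_trust = {"guide1": 0.8, "guide2": 0.6}        # her current guides
    sybils = {"sybil%d" % i for i in range(10_000)}     # large but untrusted

    print(recommend({"guide1", "guide2"}, alice_trust))  # True (weight 1.4)
    print(recommend(sybils, alice_trust))                # False (weight 0.0)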
======================
Eighth Session: Physical Security
Chair: Farinaz Koushanfar

---------------------
The first talk of this session was "Fingerprinting Blank Paper Using
Commodity Scanners", presented by William Clarkson. William explained
that every sheet of paper is unique, due to the material and the
manufacturing process. Hence, it is possible to devise a sort of
biometrics for paper. This can be useful, for example, to verify the
authenticity of documents, tickets, or money. The desirable properties of
any proposed scheme are accuracy, resilience to material wear,
non-modification of the document, cost efficiency, and security against
forgery. William proposed the following scheme to obtain a document
fingerprint: the texture of the paper is measured using an off-the-shelf
scanner that scans the sheet at 1200 dpi with 16 bits per pixel. By
adjusting the contrast of the resulting image, the paper texture is
revealed. Four scans (each rotated by 90 degrees from the previous one)
are required to compute the surface normals. Then 100 randomly selected
patches of 8x8 pixels are used to compute a 3200-byte feature vector. The
random seed, together with some error-correction bits and a fuzzy hash of
the feature vector, is stored as the sheet's fingerprint. To verify a
document, the same steps are repeated and the fuzzy hash is compared to
the one from the fingerprint.

William presented an evaluation which showed that each sheet can be
recognized with a 3% error rate. After scribbling on the paper with a
pen, the error rate rises to 9%. When the paper has been printed on,
assuming an ink coverage of 15%, the error rate climbs to 18%. Soaking
the sheet in water and ironing it, or crumpling it, yields a comparable
error rate. As one implication that has both positive and negative side
effects, William singled out (traditional ballot) voting: on the positive
side, it is possible to verify that the ballots that are counted are the
same ones that were handed out; on the other hand, it destroys anonymity
by allowing a connection between ballot and voter. A positive example for
e-voting is that the paper trail can be tied to the electronic vote.

The first asker wanted to know how the technique works on glossy paper,
and William answered that due to specular reflections the measurements
are less precise. Asked whether a scanner different from the one the
fingerprint was created on can be used for verification, the answer was
that the error rates remain the same. Another audience member wanted to
know if this will work when pieces of the paper are missing. William
replied that that should also work, within certain bounds. The next
question was whether a mosaic attack -- piecing together one sheet of
paper from several different sources -- could be successful. The answer
was yes, it could possibly fool the system, but it would definitely fail
human inspection. The next two questions were about the recognition rate
on pages printed with 100% ink coverage; in this case the system fails,
and although William's team made some attempts to scan through the ink,
they were not very successful. The final question was whether they had
tried pre-printed images, and the answer was that they did not experiment
with that.

Q: What happens with glossy paper?
A: The idea should work in principle, but it is more difficult.
Q: What is the difference when creating the fingerprint on one scanner
   and verifying on another from a different manufacturer?
A: 10-12% difference across different scanner manufacturers.
Q: What about schemes where only a part of the paper is available?
   What about sampling patches from many documents and piecing them
   together?
A: We believe this is quite difficult and noticeable when a human is in
   the loop.
Q: What about papers where the majority is covered with ink?
A: Fully colored papers are problematic because the surface changes, and
   with it the error rate.

---------------------
The last presentation of the day was "Tempest in a Teapot: Compromising
Reflections Revisited", presented by Markus Duermuth. Markus started by
referencing previous work in which his team was able to read the contents
of a monitor without a direct line of sight to the screen, using
reflections in stationary glossy objects, for example a teapot. Then he
claimed that removing all such objects still does not guarantee security,
since it is possible to retrieve some information from reflections in the
eye and even from diffuse surfaces like clothes or a wall. First, he
described the challenges of reading the reflection from a human eye: the
target is small and moving, which makes it difficult to obtain a sharp
image. Hence, a technique borrowed from astronomy called image
deconvolution is used to compensate for out-of-focus blur due to the
small depth of field and for motion blur caused by the exposure time.
Through deconvolution, a sharp image can be restored from a blurry image
and a point-spread function (PSF). The challenging part is obtaining the
PSF, and there are two possibilities: an "in-field" measurement obtained
while taking the actual image, which would exploit some stationary point
light source, or an off-line measurement under controlled conditions.
While the first method is good for measuring motion blur, it exhibits
substantial noise. The second method is accurate but does not account for
motion blur; however, it generally produces better results than in-field
measurement. Then Markus demonstrated some images reconstructed from
reflections on the eye using the different methods.

He then went on to explain how images can be extracted from reflections
on diffuse surfaces. It is necessary to have two images, one of an empty
screen and one with the text; with this method it is barely possible to
reconstruct letters of 10 cm height, far bigger than a regular on-screen
font. The good news, however, is that it is not possible to do much
better, and hence removing all glossy objects from the room should make
it impossible to spy on you without a direct line of sight. Markus also
gave a theoretical proof of this assertion through an analysis of diffuse
deconvolution. In addition, he showed that using these new techniques on
stationary glossy objects makes the screen readable from a great
distance. Finally, Markus mentioned some possible countermeasures, with
only one, a notch filter, showing promising results but at a high price.

The first question concerned the application of this technique to small
devices like cell phones, to which the reply was that such devices are
suited neither for capturing the image nor for being spied upon. The next
asker wanted to know if any information can be gained from IR emissions
of the screen, and Markus answered that they did not investigate that.
Asked if glasses make it easier to reconstruct the image, he answered
that it should definitely be easier; however, they only experimented with
glasses on the table and not on a person's head. The final question was
whether the method works on text in a reasonably sized font or printed on
a piece of paper, to which the reply was that this works only on
stationary objects like teapots.
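For readers curious about the deconvolution step, here is a minimal
Wiener-style sketch in Python/numpy (my own illustration under the
simplifying assumptions of circular convolution and a known PSF; the
paper's pipeline, with measured PSFs and motion blur, is considerably
more involved):

    # Minimal Wiener-style deconvolution (illustrative only): recover an
    # image from its blurred version, given the point-spread function,
    # by dividing in the frequency domain with a small regularizer.
    import numpy as np

    def wiener_deconvolve(blurred, psf, k=1e-3):
        H = np.fft.fft2(psf, s=blurred.shape)       # PSF spectrum
        G = np.fft.fft2(blurred)
        F = G * np.conj(H) / (np.abs(H) ** 2 + k)   # regularized inverse
        return np.real(np.fft.ifft2(F))

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                    # stand-in "screen" image
    psf = np.ones((5, 5)) / 25.0                    # simple box blur
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                   np.fft.fft2(psf, s=image.shape)))
    restored = wiener_deconvolve(blurred, psf)

    # The restored image is much closer to the original than the blurred one.
    print(np.sqrt(np.mean((blurred - image) ** 2)),
          np.sqrt(np.mean((restored - image) ** 2)))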
==============================
Short Talks Session

The first two talks were brief announcements for the Cloud Computing
Security Workshop 2009 and the Trusted Infrastructure Workshop.

* "On Voting Machine Design for Verification and Testability" by Cynthia
  Sturton (UC Berkeley). She briefly explained that testing of voting
  machines only guarantees the absence of certain bugs, while formal
  methods are difficult since it is hard to formalize user expectations.
  Cynthia then proposed a hybrid approach where the design is formally
  verified and then tested by humans.

* "Using Java for a Forensically Secure Encryption" by Cedric Jeannot
  (Louisville). He mentioned a case study where it is possible to
  retrieve a temporary file from an encrypted file system and suggested
  applying security principles in the context of forensics. Java is well
  suited for that since it is sandboxed, designed with security in mind,
  and available on all platforms. Cedric's goal is to have Java apps
  leave no traces on the system. There are three categories of traces
  that need to be considered: output, disk, and memory; so far, he has
  solved the first two.

* "Bringing Geographic Diversity to IEEE S&P" was a not entirely serious
  talk about introducing a West Oakland conference.

* "Membership-concealing overlay network" by Eugene Vassermann
  (Minnesota). He proposed an overlay network whose participants do not
  disclose their membership, which is a problem orthogonal to anonymity
  and censorship resistance. A reason for concealing one's membership in
  an overlay network might be that the software is illegal in one's
  region. Eugene then listed his design goals to withstand attacks
  against current systems, like passive harvesting, bootstrapping, and
  celebrity attacks. At the end he mentioned that he has an initial
  DHT-based design.

* "Acoustic Side-Channel Attacks on Printers" by Michael Backes
  (Saarland). He showed that it is possible to recover the printed text
  from the sound made by a dot-matrix printer. Such printers are still
  widely used by doctors and banking institutions, and certain laws in
  Germany require narcotics prescriptions to be printed using this
  technology only. Michael's team created a training set of dictionary
  words and extracted features using HMMs, using off-the-shelf hardware
  and software. They were able to recover 63% of printed words, and
  almost 70% after some post-processing. They also conducted a successful
  in-field test in a noisy doctor's office; since the relevant
  frequencies are in the 20-40 kHz range, human voices do not interfere
  with the process.

* "Secure Web Templating" by Adrian Mettler (UC Berkeley). He motivated
  his talk by listing the numerous templating languages used for web
  development, like PHP, JSP, or ASP, and noting that it is easy to write
  buggy code using them, since different escaping functions have to be
  used for different elements. He suggested a structure-aware templating
  language that is sensitive to context and auto-escapes certain
  variables accordingly.

* "A Physics for Digital Information Systems" by Fred Cohen (California
  Sciences Institute). He suggested defining physical approximations for
  information processing to teach to CS students and outlined some
  differences between classical physics and the information world.

* "Encoding Information Flow Types Using Authorization Logic Constructs"
  by Limin Jia (UPenn).

* "The Security Through Environmentalism Theorem" by Steven Greenwald
  (SteveGreenwald.com).
His goal was to increase security awareness among executives by stating and proving that "bad computer security harms the environment". The proposed proof goes along the lines of: bad security causes longer machine runtimes due to maintenance, which causes more CO2 production, which is harmful to the environment.

* "A Functionality for Symmetric Encryption in Simulation-Based Security: Joint State Theorems and Applications" by Ralf Kuesters (U of Trier).

* Darleen Fisher from the National Science Foundation presented the "NSF Future Internet Design Program" and invited interested researchers to participate. The program has three phases: first the design of network architecture elements, then a full-scale "future Internet" design, and finally evaluation.

* "Optical Scanning of Paper Election Ballots" by Arel Cordero (UC Berkeley). He explained that current voting systems are composed of many parts that are mostly proprietary. He proposed an independent system that would only require scanning the ballots and then, using only computer vision (and with no prior knowledge of the ballot), extracting all the information necessary to correctly count the ballots. Asked why this method should be more trustworthy than existing systems, Arel responded that this system is completely independent and can be used for verification.

* "WOMBAT: Worldwide Observatory for Malicious Behavior and Attack Threats" by Sotiris Ioannidis (FORTH). He explained the goal of WOMBAT: building a network of network sensors to observe attacks as they are happening. Recorded traffic is then fed to different analysis tools (anti-virus, dynamic analysis) to enrich the data, identify the root cause of the attack, and build even better sensors. He showed a list of institutions already participating in WOMBAT and invited interested companies to join. All partners of WOMBAT get open access to all the collected data.

* "Application of 3-D Integration to Hardware Trust" by Ted Huffmire (Naval Postgraduate School). He presented a novel 2-layer design for TPM chips that separates the computational plane from the control plane, supposedly making the design more resistant to attacks and tampering attempts.

* The final talk of the day was about the "Security 2.0 Strategic Initiative" by JR Rao (IBM Research).

======================
Day 3

Ninth Session: Web Security
Chair: Sam King
-----------------------

The first talk of the day was "Blueprint: Robust Prevention of Cross-site Scripting Attacks for Existing Browsers", presented by Mike Ter Louw. Mike opened by reminding everyone that Cross-site Scripting (XSS) attacks are still widespread and showed some recent examples that affected the New York Times and Twitter. He briefly explained the mechanics of an XSS attack and pointed out that the main cause of XSS vulnerabilities is poor sanitization of user input. Mike presented the main goal of his work as creating a robust defense against XSS that would still allow users to format their input using HTML markup while preventing the execution of scripts injected by malicious users. In order to do that successfully, browsers must not execute code in parts of the page that should contain only data. Today's browsers have no provisions to enforce that policy and would require extensive modifications to support it. The most popular defense today is server-side sanitization of "dangerous" HTML constructs; however, since different browsers interpret the same HTML differently, there is a "parsing gap" that allows possibly harmful markup to remain unsanitized.
Mike proposed a solution to the parsing gap: enforce in the browser the way the server parses HTML. To realize this, a trusted script inserted into the page acts as a parser for the remaining untrusted content in the page. To prevent the browser from executing that content directly, it is translated into a safe alphabet that prompts the browser to treat it as text. Mike then presented a short example of how a regular web page containing scripts is instrumented with Blueprint. Then he explained other possible XSS attack vectors apart from scripts, namely Cascading Style Sheets (CSS) and Uniform Resource Identifiers (URIs). The Internet Explorer implementation of CSS supports an "expression" syntax that allows execution of JavaScript code in style element properties. Blueprint is able to mitigate this attack by embedding a trusted script whenever the browser requests a style property. URI-based XSS attacks are possible due to the "javascript:" scheme, which executes the script when the resource of that URI is requested. Unfortunately, there is no browser API to enforce a URI whitelisting scheme, but Mike suggested several workarounds employed by Blueprint.

Then he showed an evaluation of Blueprint in terms of effectiveness, compatibility, and performance. Blueprint has been tested on 8 of the most popular browsers, constituting a 96% market share. To test different attacks, the freely available XSS cheat sheet containing 94 XSS attacks was used. Without Blueprint, all attacks work on all browsers. Two popular web applications, WordPress and MediaWiki, were instrumented with Blueprint, and apart from the imagemap tag used by MediaWiki, which was not in Blueprint's whitelist, the success rate was 100%. The overhead induced by Blueprint (latency and memory use) is noticeable; however, it could be improved by removing the now obsolete built-in sanitization routines from the web apps. The impact on user experience is less pronounced.

The first question was whether it is even possible to insert executable code using MediaWiki's markup language. Mike replied that it does not matter, because untrusted markup will result in untrusted HTML code. The next audience member wanted to know how they determined the set of safe API calls that Blueprint uses, and the answer was that they composed a minimal set using experimental analysis, with the criterion that the API calls not induce the browser to parse additional code. Another person wanted to know whether they had used any undocumented features of the API; however, the answer was not clear to me. The last question was why Blueprint does not run on the server side only, to which the answer was that the current approach makes sure that no client-side differences can introduce vulnerabilities.

-----------------------

The second talk of the session was "Pretty-Bad-Proxy: An Overlooked Adversary in Browsers' HTTPS Deployments", presented by Shuo Chen. As motivation, Shuo questioned the security of HTTPS and whether implementations in different browsers are consistent. He then introduced a new class of browser vulnerabilities that are caused by the proxy capabilities of these browsers. He noted that the disclosed vulnerabilities are already being addressed. Shuo introduced the "Pretty-Bad-Proxy" adversary, which is capable of breaking the end-to-end HTTPS security guarantees without breaking any cryptographic scheme. The vulnerability lies not in the HTTPS protocol itself, but in its integration into the browser.
Specifically, the attack targets the rendering modules, which lie above the HTTP/HTTPS layer. Shuo then explained and demoed the different attack schemes possible with PBP. The first attack involves injecting HTTP 4XX or 5XX error messages that, although they are sent unencrypted, are rendered in the context of the HTTPS-encrypted web site and allow XSS-style script injection through iframes. The second attack involves HTTP redirect (3XX) messages. Since a script that is referenced by a URI in a web page will execute in the context of that page even when loaded from a different server, it is possible for PBP to redirect a script request to any malicious URL. Another attack involves pages that are designed for HTTP but can also be transferred through HTTPS; they usually contain resources referred to by HTTP URIs, and browsers have security mechanisms that warn the user if non-HTTPS content is loaded in an HTTPS page. However, these safeguards only work for the top-level frame of the document, so these things go unnoticed when contained in an iframe. This allows the attacker to inject into unencrypted traffic an iframe pointing to an HTTPS-protected portion of the website and steal information from there. The final attack Shuo demonstrated allows, using only static HTML, rendering arbitrary content in the browser while displaying a valid security certificate for any website. It works by letting the browser download the certificate and cache it, then triggering a refresh and injecting an error page (HTTP 5XX) containing malicious content, which will be displayed in the browser as a valid HTTPS page bearing the correct certificate.

According to Shuo, all these attacks are highly feasible, since proxies are used everywhere, and when their integrity is compromised, so is the security of HTTPS. Additionally, when the attacker has physical access to the victim's local network, he can simply sniff and inject the traffic without needing to compromise the proxy machine. The tests conducted showed that the vulnerability was present in all tested networks and with all proxy configurations. Since discovering the vulnerability, Shuo's team has been cooperating with browser vendors to mend these vulnerabilities, and he presented a table detailing which vulnerabilities have been fixed by vendors so far. For most vulnerabilities fixes already exist; the hardest one to solve, though, seems to be the HTTP-Intended-but-HTTPS-Loadable (HPIHSL) problem. To mitigate the vulnerabilities until all are fixed, Shuo suggested not relying on session-layer security but using encryption at a lower level, for example IPSec, WPA, or VPN. In the future, Shuo's team will try to find out whether other protocols exhibit similar flaws. To conclude, he stated that HTTPS is not flawed by itself; however, it is made vulnerable through bad deployment in browsers.

The first question was whether a manually configured proxy will mitigate the attack, to which Shuo answered that the sniff-and-inject attack will still work. The next question was how the browser vendors are fixing the issue, and the reply was that now all non-200 OK responses are discarded before the SSL handshake is completed. Another audience member asked whether there are any server-side fixes, to which the answer was that this issue is easier to handle on the client side.
The last asker wanted to know which browsers have proxy detection enabled in their default configuration, and the answer was all but Firefox; the latter, however, tries to determine at installation time whether it is behind a proxy and bases the default setting on that.

Q: Ian Goldberg: How does this work with manual proxy configuration?
A: You need to hijack the TCP connection.
Q: Can we address these issues on the server side?
A: The server is not in a position to determine whether a proxy exists.
Q: David Wagner: Which browsers have automatic proxy detection turned on by default?
A: IE has, Firefox has not.

-----------------------

The last talk of this session was "Secure Content Sniffing for Web Browsers, or How to Stop Papers from Reviewing Themselves", presented by Juan Caballero. He started by explaining the Content Sniffing Algorithm (CSA) that browsers use to determine the MIME type of received data. This algorithm is necessary since some servers tend to send false or missing MIME types, while the browser needs the correct type to choose the appropriate handler for the content. The danger lies in the possibility that the browser might interpret plain text or another file type as HTML, which in turn means executing all the scripts contained within. Juan then outlined the content sniffing XSS attack, in which an image with embedded HTML and JavaScript is uploaded to a Wikipedia page. The image is recognized by the wiki content filter as an image; however, the content sniffing algorithm in the browser will discover the embedded HTML, change the MIME type, and execute the contained script.

Then he presented their contribution, which is extracting models of the browsers' content filters in order to automatically craft attacks and to construct upload filters that counter them. Since some of the browsers are only available in binary form, they developed a novel string-enhanced white-box exploration technique. To find attacks, they model the website's content filter as a boolean predicate and the browser's CSA as a multi-classifier, and then use a query solver. With this technique they explored the content filters of MediaWiki and HotCRP against the CSAs of all major browsers. Juan took a quick detour to highlight the string-enhanced white-box model extraction, which increases the coverage per unit time of path exploration for string-heavy programs. He then demonstrated two attacks that the system was able to discover and explained that the main problem lies in the varying signatures used to determine the content type. Especially in Internet Explorer and Safari, the HTML signatures are far too permissive in that they allow leading junk before valid HTML.

To mitigate the problem, disabling the CSA is not a solution, since this would break approximately one percent of HTTP responses, which is a lot considering the amount of content served over HTTP. Instead, a trade-off that exchanges slightly reduced compatibility for increased security needs to be devised. For one, stricter restrictions on permissible content types can be enforced; on the client side there is the option of avoiding privilege escalation (never upgrade non-executable content to executable content) and using prefix-disjoint signatures. To evaluate the compatibility of this solution, the team implemented a content filter adhering to these principles and tested it on the Google database containing several billion pages. They deduced that most pages sending an invalid content type are sniffed correctly by their algorithm.
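As a minimal sketch (and not the algorithm from the paper), the Python fragment below illustrates the two mitigation principles just described: never upgrade a declared non-executable type to an executable, script-bearing one, and recognize HTML only via strict signatures that tolerate no leading junk. The names sniff_type, EXECUTABLE_TYPES, and the signature list are illustrative assumptions, not the standardized rules.

    # Illustrative only; real sniffers use a longer, carefully chosen table.
    EXECUTABLE_TYPES = {"text/html", "application/xhtml+xml"}

    SIGNATURES = [  # (byte prefix, sniffed MIME type), chosen prefix-disjoint
        (b"<!doctype html", "text/html"),
        (b"<html",          "text/html"),
        (b"%PDF-",          "application/pdf"),
        (b"\x89PNG\r\n\x1a\n", "image/png"),
        (b"GIF87a",         "image/gif"),
        (b"GIF89a",         "image/gif"),
    ]

    def sniff_type(declared: str, body: bytes) -> str:
        head = body[:512]
        sniffed = declared              # default: trust the declared type
        for prefix, mime in SIGNATURES:
            # Match only at offset zero: no leading junk is tolerated.
            if head[:len(prefix)].lower() == prefix.lower():
                sniffed = mime
                break
        # Privilege-escalation rule: sniffing may correct, say, an image
        # type, but must never turn non-executable content into HTML.
        if sniffed in EXECUTABLE_TYPES and declared not in EXECUTABLE_TYPES:
            return declared
        return sniffed

The point of the prefix-disjoint signature table is that an attacker cannot craft a single byte sequence that matches one signature for the upload filter and a different, more dangerous one for the browser.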
The CSA they proposed has been fully implemented in the Google Chrome browser, partially in Internet Explorer 8, and is standardized in HTML 5. The first question concerned the handling of the text/plain MIME type in Chrome, to which the answer was that Chrome does not treat text/plain differently from text/html. The second question was whether the CSA is also used on images embedded in a web page via the IMG tag, to which the answer was yes.

Q: What happens for an image that is embedded into an html page vs. directly typed in the url?
A: CSA is still performed, independent of the location.

=======================
Tenth Session: Humans and Secrets
Chair: Michael Backes
-----------------------

The first talk of this session was "It's No Secret. Measuring the Security and Reliability of Authentication via 'Secret' Questions", presented by Stuart Schechter. Stuart opened by reminding everyone of the recent widely publicized story about the compromised e-mail account of Vice-Presidential candidate Sarah Palin. Then he showed a study from 1990 about the guessability of secret questions by a person's close friends or relatives. The authors of that study used 20 questions based on facts or preferences, and it turns out that the correct answer can be guessed almost one third of the time, while one out of five participants forgot the answers to their questions within three months. Stuart then examined the security questions used by the top four webmail providers (GMail, Hotmail, Yahoo and AOL) and conducted a study of his own in which people, together with their significant others or acquaintances, were asked to guess the other person's answers. They used several incentives to make people answer the questions truthfully, as they would when signing up for a real service. After 3-6 months a follow-up study was conducted to measure the recall of the participants, again using incentives to motivate them. In brief, the results were that on average 20% had forgotten their answers, while 22% were guessable by people close to them. The exact numbers for the different types of questions can be found in the paper. The collected data also allowed Stuart's team to conduct a statistical guessing attack, which was successful 13% of the time. By showing some more statistics, Stuart reinforced his point that there are no good "security" questions.

Some services offer users the possibility of writing their own questions; however, most people choose really poor questions, with the number of possible answers often fewer than five, while some sites allow five attempts at answering the question correctly. Self-devised questions are also easily guessable by acquaintances, and on the other hand, the questions themselves might reveal too much personal information. Stuart proposed a scheme to prevent guessing attacks by dynamically adjusting the number of allowed tries, reducing it when the answers seem random rather than related; however, the implementation of that is not trivial. The alternative of e-mail-based authentication is not viable for e-mail providers, since not many people have reliable alternative addresses. Further suggestions of alternative backup authentication schemes went in the direction of printed shared secrets, SMS-based authentication, or social-network-based authentication. Asked what he considered secret enough, Stuart answered that a baseline is hard to find and depends on the application.
Another asker wanted to know whether people in the study tried to research the answers on the web, to which the answer was that the initial group of test subjects did not try to research the answers, so the following groups were informed about that option. The next question was whether the participants answered the questions truthfully, to which Stuart replied that the answers were manually inspected and for the most part they were honest. The next question concerned people re-using their password on multiple sites and whether that changes anything, and the answer was that even then people forget the answers and get compromised. The final question was whether the study showed that security questions are less secure than passwords, to which the answer was a definitive yes.

Q: What would be secret enough in terms of percentage?
A: Enough depends on your own use. For example, you might want different security for your throw-away email account vs. a secure service that stores data in the cloud.
Q: Did you look into how these questions can be researched?
A: We told people to look at Facebook and search engines.
Q: Did you find that the answers had something to do with the questions?
A: Just by eyeballing the data, most answers are indeed related to the questions.
Q: Did this study give insight into the relative strength of secret questions vs. passwords?
A: As one would expect, secret questions are significantly less secure.

-----------------------

The last talk of the conference was "Password Cracking Using Probabilistic Context-Free Grammars", presented by Matt Weir. The goal of Matt's work was to build a better password cracker to aid law enforcement in a forensic setting. The process of cracking a hashed password is to create candidate passwords, hash them, and compare them to the hashed version. The most resource-intensive part is the hashing; however, by using more educated guesses as candidate passwords, the number of required hash operations can be reduced. Matt proposes to create a grammar, based on previously discovered passwords, that reflects the way people usually choose and construct their passwords. The grammar allows guesses to be ranked and ordered, and several mangling rules to be applied at once. Matt then mentioned the current state-of-the-art password cracker, John the Ripper, to which his approach would be compared. His method is based on a dictionary and advanced mangling rules and works in two stages. First, the system is trained on publicly available password lists and password structure rules are inferred, splitting the passwords into letters, digits, and symbols and recording the length of each class. Then probabilities for each terminal value are learned. By following the grammar rules and combining probabilities, a score for each password can be computed. Of course, the grammar can be adapted to a specific language or social group. The function that chooses the next password to try has to fulfill several criteria: discard duplicates, return passwords in probability order, be memory- and time-efficient, and allow distributed cracking. Matt mentioned that John the Ripper's word-mangling rules can be modeled entirely with this grammar and are only a small subset of what is possible with this system. From the training data they had, Matt's team generated 1589 base rules, which, together with a dictionary, can be expanded to 34 trillion passwords. Then Matt explained the cracking process and how the list of passwords to try is generated.
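To make the training and scoring idea more concrete, here is a rough Python sketch under my own assumptions, not Matt Weir's code: it infers base structures (e.g. "password123!" becomes L8 D3 S1), estimates probabilities for structures and for digit/symbol terminals from a training list, and scores a candidate guess, with letter segments assumed to be filled uniformly from a dictionary. The names base_structure, train, and score are hypothetical.

    import re
    from collections import Counter

    SEGMENT = re.compile(r"[A-Za-z]+|[0-9]+|[^A-Za-z0-9]+")

    def base_structure(password):
        # Collapse a password into its class/length structure, e.g. "L8 D3 S1".
        def cls(seg):
            return "L" if seg[0].isalpha() else "D" if seg[0].isdigit() else "S"
        return " ".join(f"{cls(s)}{len(s)}" for s in SEGMENT.findall(password))

    def train(passwords):
        # Learn probabilities of structures and of digit/symbol terminal strings.
        structures = Counter(base_structure(p) for p in passwords)
        digits = Counter(s for p in passwords for s in re.findall(r"[0-9]+", p))
        symbols = Counter(s for p in passwords
                          for s in re.findall(r"[^A-Za-z0-9]+", p))
        def normalize(c):
            total = sum(c.values()) or 1
            return {k: v / total for k, v in c.items()}
        return normalize(structures), normalize(digits), normalize(symbols)

    def score(guess, dictionary, p_struct, p_digit, p_symbol):
        # Probability of a guess: P(structure) times P of each terminal.
        p = p_struct.get(base_structure(guess), 0.0)
        for seg in SEGMENT.findall(guess):
            if seg[0].isdigit():
                p *= p_digit.get(seg, 0.0)
            elif not seg[0].isalpha():
                p *= p_symbol.get(seg, 0.0)
            else:  # letter segment: assumed uniform over the dictionary
                p *= 1.0 / len(dictionary) if dictionary else 0.0
        return p

Generating guesses in descending score order (the "next function" with its deduplication and distributed-cracking requirements) is the harder part of the real system and is omitted here.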
In a comparison with John the Ripper over the MySpace password dataset, this system wins in all respects (time, number of cracked passwords, etc.). At the end, Matt briefly mentioned related work and emphasized that this system is the first that can devise word-mangling rules automatically. Asked whether people use more secure passwords for banking sites than for, say, social networking sites, Matt answered that they did not possess bank password lists to try their system on. The last question concerned the reaction of the Institutional Review Board to this research, and Matt said that it took a while to convince the IRB and that promises were made to minimize the privacy breaches; for example, user names obtained with the passwords were discarded.

Q: Could you observe that banking passwords are more secure than those of social networking sites like MySpace?
A: We didn't have a list of banking passwords, but (hashed) CC data.
Q: Did you run into any IRB issues? Laws? Ethical problems?
A: Hopefully not, but there are indeed a lot of ethical issues. For example, we stripped all user information in our lists to prevent identity disclosure.

----------------------------------------------------------------------
AND THAT IS IT, SEE YA ALL NEXT YEAR!

____________________________________________________________________
Book Review By Richard Austin
May 29, 2009
____________________________________________________________________
Chained Exploits: Advanced Hacking Attacks from Start to Finish
by A. Whitaker, K. Evans and J. Voth
Addison-Wesley 2009.
ISBN-13: 978-0-321-49881-6
Amazon.com USD 31.49

With books on how to "hack" practically everything from your favorite operating system to the kitchen toaster, one might legitimately wonder what could be done to set a new book apart from this crowded flock. Whitaker and his colleagues have managed to carve out their own niche by taking a comprehensive look at how penetrating an organization's defenses often requires that several exploits be applied in sequence (or a chain) to actually achieve the objective. In a refreshing counter to the "technology tunnel-vision" that afflicts many of us (i.e., if it's not a technical vulnerability then it's nothing to worry about), the authors' scenarios include defects in process and implementation as well as good, old-fashioned social engineering in the attacker's bag of tricks.

The book is organized around the activities of a fictional hacker named "Phoenix" whose exploits are motivated by the usual causes such as greed or revenge for some slight. Some of the tasks are assigned by a shadowy criminal organization that "recruited" Phoenix after becoming aware of his activities. This organization doesn't hesitate to remind him that while the pay is good, they also know where he and his significant other live, and their safety is contingent on his continued successful performance. This provides an important reminder that cybercrime is no longer a playground for the technological elite but rather a thriving business venture where lavish profits reward successful operations.

Each of the eight chapters of this short book (279 pages) opens with an initial scenario that describes the assignment. Phoenix then has to conduct reconnaissance and develop a plan of attack to accomplish his objective. Sometimes an attack is foiled by a countermeasure and he has to circle back around to find a new avenue for that phase of his assault.
The chapter concludes with a countermeasures section that discusses the defenses that would have foiled the successful chain of attacks. The scenarios are very realistic and range from the staples of stealing credit cards and industrial espionage to more exotic goals such as destroying a politician's career through compromising a social networking site.

This is a book that needed to be written, and the authors, themselves penetration testers by profession, have done an admirable job of graphically illustrating how successful penetrations are often not the result of a single flaw or attack but the result of a carefully crafted chain of actions that worms its way around and through the successive layers of our defenses. I suspect many readers, like myself, will experience several "Oh fudge!" moments and dash off copious notes that will guide new hardening efforts in their organization for some time to come.

--------
Before beginning life as an itinerant university instructor and cybersecurity consultant, Richard Austin was the storage network security architect for a Fortune 25 company. He welcomes your thoughts and comments at rausti19 at Kennesaw dot edu

====================================================================
Conference and Workshop Announcements
Upcoming Calls-For-Papers and Events
====================================================================

The complete Cipher Calls-for-Papers is located at
http://www.ieee-security.org/CFP/Cipher-Call-for-Papers.html

The Cipher event Calendar is at
http://www.ieee-security.org/Calendar/cipher-hypercalendar.html

____________________________________________________________________
Cipher Event Calendar
____________________________________________________________________
Calendar of Security and Privacy Related Events
maintained by Hilarie Orman
Date (Month/Day/Year), Event, Locations, web page for more info.
5/29/09: ASIACRYPT, 15th Annual International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan; http://asiacrypt2009.cipher.risk.tsukuba.ac.jp; Submissions are due 5/29/09: SSN, 5th International Workshop on Security in Systems and Networks, Held in conjunction with the International Parallel and Distributed Processing Symposium (IPDPS 2009), Rome, Italy; http://www4.comp.polyu.edu.hk/~csbxiao/ssn09/ 5/30/09: STC, 4th Annual Workshop on Scalable Trusted Computing, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA; http://projects.cerias.purdue.edu/stc2009/call.html; Submissions are due 5/31/09: InSPEC, 2nd International Workshop on Security and Privacy in Enterprise Computing, Held in conjunction with the 13th IEEE International Enterprise Distributed Object Computing Conference (EDOC 2009), Auckland, New Zealand; http://sesar.dti.unimi.it/InSPEC2009/ Submissions are due 5/31/09: TSP, IEEE International Symposium on Trust, Security and Privacy for Pervasive Applications, Held in conjunction with the IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS 2009), Macau SAR, China; http://trust.csu.edu.cn/conference/tsp2009/ Submissions are due 6/ 1/09: CANS, 8th International Conference on Cryptography and Network Security, Kanazawa, Ishikawa, Japan; http://www.rcis.aist.go.jp/cans2009/ Submissions are due 6/ 1/09: ACSAC, 25th Annual Computer Security Applications Conference, Honolulu, Hawaii, USA; http://www.acsac.org; Submissions are due 6/ 1/09: EuroPKI, 6th European Workshop on Public Key Services, Applications and Infrastructures, Pisa, Tuscany, Italy; http://www.iit.cnr.it/EUROPKI09; Submissions are due 6/ 1/09: SETOP, International Workshop on Autonomous and Spontaneous Security, Held in conjunction with ESORICS 2009, Saint Malo, Britany, France; http://conferences.telecom-bretagne.eu/setop-2009; Submissions are due 6/ 1/09: IWNS, International Workshop on Network Steganography, Held in conjunction with the International Conference on Multimedia Information Networking and Security (MINES 2009), Wuhan, Hubei, China; http://stegano.net/workshop; Submissions are due 6/ 1/09: CSET, Workshop on Cyber Security Experimentation and Test, Held in conjunction with the USENIX Security Symposium (USENIX-Security 2009), Montreal, Canada; http://www.usenix.org/event/cset09/ Submissions are due 6/ 2/09- 6/ 5/09: ACNS, 7th International Conference on Applied Cryptography and Network Security, Paris, France; http://acns09.di.ens.fr/ 6/ 3/09- 6/ 5/09: MobiSec, 1st International Conference on Security and Privacy in Mobile Information and Communication Systems, Turin, Italy; http://www.mobisec.org/ 6/ 3/09- 6/ 5/09: SACMAT, 14th ACM Symposium on Access Control Models and Technologies, Hotel La Palma, Stresa, Italy; http://www.sacmat.org 6/ 7/09- 6/10/09: IH, 11th Information Hiding Workshop, Darmstadt, Germany; http://www.ih09.tu-darmstadt.de/ 6/ 7/09: SecPri-WiMob, International Workshop on Security and Privacy in Wireless and Mobile Computing, Networking and Communications, Held in the 5th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2009), Marrakech, Morocco; http://www.icsd.aegean.gr/SecPri_WiMob_2009/ Submissions are due 6/10/09: DPM, 4th International Workshop on Data Privacy Management, Saint Malo, Britany, France; http://dpm09.dyndns.org/ Submissions are due 6/12/09: SWS, ACM Workshop on Secure Web Services, Held 
in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA; http://sesar.dti.unimi.it/SWS09/ Submissions are due 6/12/09: SPIMACS, ACM Workshop on Security and Privacy in Medical and Home-Care Systems, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA; http://www.infosecon.net/SPIMACS/cfp.php; Submissions are due 6/14/09- 6/18/09: CISS, Communication and Information Systems Security Symposium, Held in conjunction with the IEEE International Conference on Communications (ICC 2009), Dresden, Germany; http://www.ieee-icc.org/2009/ 6/14/09- 6/19/09: SECURWARE, 3rd International Conference on Emerging Security Information, Systems and Technologies, Athens, Greece; http://www.iaria.org/conferences2009/SECURWARE09.html 6/15/09: ARO-DF, ARO Workshop on Digital Forensics, Washington DC., USA; http://www.engineering.iastate.edu/~guan/ARO-DF/ARO-DF.html; Submissions are due 6/15/09: ICPADS, 15th IEEE International Conference on Parallel and Distributed Systems, Shenzhen, China; http://www.comp.polyu.edu.hk/conference/icpads09/ Submissions are due 6/15/09: SECMCS, Workshop on Secure Multimedia Communication and Services, Held in conjunction with the 2009 International Conference on Multimedia Information Networking and Security (MINES 2009), Wuhan, China; http://liss.whu.edu.cn/mines2009/SECMCS.htm; Submissions are due 6/15/09: IS, 4th International Symposium on Information Security, Vilamoura, Algarve-Portugal; http://www.onthemove-conferences.org/index.php?option=com_content&view=article&id=65&Itemid=140; Submissions are due 6/15/09: HICSS-DF, 43rd Hawaii International Conference on System Sciences, Digital Forensics Minitrack, Koloa, Kauai, Hawaii; http://www.hicss.hawaii.edu/hicss_43/apahome43.html; Submissions are due 6/15/09- 6/19/09: MIST, International Workshop on Managing Insider Security Threats, Held in conjunction with the 3rd IFIP International Conference on Trust Management (IFIPTM 2009), West Lafayette, IN, USA; http://isyou.hosting.paran.com/mist09/ 6/19/09: CCSW, ACM Cloud Computing Security Workshop, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA; http://crypto.cs.stonybrook.edu/ccsw09; Submissions are due 6/21/09: STM, 5th International Workshop on Security and Trust Management, Held in conjunction with ESORICS 2009, Saint Malo, France; http://stm09.dti.unimi.it; Submissions are due 6/25/09- 6/27/09: WNGS, 4th International Workshop on Security, Korea University, Seoul, Korea; http://www.sersc.org/WNGS2009/ 7/ 1/09- 7/ 3/09: ACSISP, 14th Australasian Conference on Information Security and Privacy, Brisbane, Australia; http://conf.isi.qut.edu.au/acisp2009/ 7/ 7/09- 7/10/09: SECRYPT, International Conference on Security and Cryptography, Milan, Italy; http://www.secrypt.org/ 7/ 7/09- 7/10/09: ATC, 6th International Conference on Autonomic and Trusted Computing, Brisbane, Australia; http://www.itee.uq.edu.au/~atc09 7/ 7/09- 7/10/09: CTC, Cybercrime and Trustworthy Computing Workshop, Held in conjunction with the 6th International Conference on Autonomic and Trusted Computing (ATC 2009), Brisbane, Australia; http://www.cybercrime.com.au/ctc09 7/ 8/09- 7/10/09: CSF, 22nd IEEE Computer Security Foundations Symposium, Port Jefferson, New York, USA; http://www.cs.sunysb.edu/csf09/ 7/12/09- 7/15/09: DBSEC, 23rd Annual IFIP WG 11.3 Working Conference on Data and Applications Security, Montreal, Canada; 
http://www.ciise.concordia.ca/dbsec09/ 7/15/09: ICISS, 5th International Conference on Information Systems Security, Kolkata, India; http://www.eecs.umich.edu/iciss09/ Submissions are due 7/20/09- 7/22/09: POLICY, IEEE International Symposium on Policies for Distributed Systems and Networks, Imperial College London, UK; http://ieee-policy.org 7/27/09: HOST, 2nd IEEE International Workshop on Hardware-Oriented Security and Trust, San Francisco, CA, USA; http://www.engr.uconn.edu/HOST/ 8/ 1/09: INTRUST, The International Conference on Trusted Systems, Beijing, P. R. China; http://www.tcgchina.org; Submissions are due 8/10/09: CSET, Workshop on Cyber Security Experimentation and Test, Held in conjunction with the USENIX Security Symposium (USENIX-Security 2009), Montreal, Canada; http://www.usenix.org/event/cset09/ 8/11/09: HotSec, 4th USENIX Workshop on Hot Topics in Security, Held in conjunction with the 18th USENIX Security Symposium (USENIX-Security 2009), Montreal, Canada; http://www.usenix.org/events/hotsec09/cfp/ 8/12/09- 8/14/09: USENIX-SECURITY, 18th USENIX Security Symposium, Montreal, Canada; http://www.usenix.org/events/sec09/cfp/ 8/14/09: Information Systems Frontiers, Special Issue on Security Management and Technologies for Protecting Against Internal Data Leakages; http://www.som.buffalo.edu/isinterface/ISFrontiers/forthcoming1 /InfoSec09-SI-CFP.pdf; Submissions are due 8/15/09: IFIP-DF, 6th Annual IFIP WG 11.9 International Conference on Digital Forensics, University of Hong Kong, Hong Kong; http://www.ifip119.org/Conferences/WG11-9-CFP-2010.pdf; Submissions are due 8/17/09- 8/19/09: DFRWS, 9th Digital Forensics Research Workshop, Montreal, Canada; http://www.dfrws.org/2009/cfp.shtml 8/31/09- 9/ 4/09: TrustBus, 6th International Conference on Trust, Privacy, and Security in Digital Business, Held in conjunction with the 20th International Conference on Database and Expert Systems Applications (DEXA 2009), Linz, Austria; http://www.icsd.aegean.gr/trustbus2009/ 8/31/09- 9/ 4/09: DaSECo, 1st International Workshop on Defence against Spam in Electronic Communication, Held in conjunction with the 20th International Conference on Database and Expert Systems Applications (DEXA 2009), Linz, Austria; http://www.dexa.org/files/CfP_DaSECo_15.Jan_.pdf 8/31/09- 9/ 4/09: InSPEC, 2nd International Workshop on Security and Privacy in Enterprise Computing, Held in conjunction with the 13th IEEE International Enterprise Distributed Object Computing Conference (EDOC 2009), Auckland, New Zealand; http://sesar.dti.unimi.it/InSPEC2009/ 9/ 1/09: International Journal of Communication Networks and Information Security, Special Issue on Composite and Integrated Security Solutions for Wireless Sensor Networks; http://ijcnis.kust.edu.pk/announcement; Submissions are due 9/ 2/09- 9/ 4/09: WISTP, Workshop on Information Security Theory and Practices (Smart Devices, Pervasive Systems, and Ubiquitous Networks), Bruxelles, Belgium; http://www.wistp.org/ 9/ 7/09- 9/ 9/09: ISC, 12th Information Security Conference, Pisa, Italy; http://isc09.dti.unimi.it/ 9/ 8/09: SAC-CF, 25th ACM Symposium on Applied Computing, Computer Forensics Track, Sierre, Switzerland; http://comp.uark.edu/~bpanda/sac2010cfp.pdf; Submissions are due 9/ 8/09: SAC-TRECK, 25th ACM Symposium on Applied Computing, Trust, Reputation, Evidence and other Collaboration Know-how Track, Sierre, Switzerland; http://www.trustcomp.org/treck/ Submissions are due 9/ 8/09- 9/11/09: NSPW, New Security Paradigms Workshop, The Queen's College, University of 
Oxford, UK; http://www.nspw.org/current/cfp.shtml 9/ 9/09- 9/11/09: EuroPKI, 6th European Workshop on Public Key Services, Applications and Infrastructures, Pisa, Tuscany, Italy; http://www.iit.cnr.it/EUROPKI09 9/10/09- 9/11/09: ARO-DF, ARO Workshop on Digital Forensics, Washington DC, USA; http://www.engineering.iastate.edu/~guan/ARO-DF/ARO-DF.html 9/11/09: NDSS, 17th Annual Network & Distributed System Security Symposium, San Diego, CA, USA; http://www.isoc.org/isoc/conferences/ndss/10/cfp.shtml; Submissions are due 9/14/09- 9/18/09: SECURECOMM, 5th International ICST Conference on Security and Privacy for Communication Networks, Athens, Greece; http://www.securecomm.org 9/21/09- 9/25/09: ESORICS, 14th European Symposium on Research in Computer Security, Saint Malo, France; http://www.esorics.org 9/24/09: DPM, 4th International Workshop on Data Privacy Management, Saint Malo, Britany, France; http://dpm09.dyndns.org/ 9/24/09- 9/25/09: SETOP 2009 International Workshop on Autonomous and Spontaneous Security, Held in conjunction with ESORICS 2009, Saint Malo, Britany, France; http://conferences.telecom-bretagne.eu/setop-2009 9/24/09- 9/25/09: STM, 5th International Workshop on Security and Trust Management, Held in conjunction with ESORICS 2009 Saint Malo, France; http://stm09.dti.unimi.it 9/27/09- 9/30/09: SRDS, 28th International Symposium on Reliable Distributed Systems, Niagara Falls, New York, USA; http://www.cse.buffalo.edu/srds2009/ 9/30/09-10/ 2/09: ICDF2C, International Conference on Digital Forensics & Cyber Crime, Albany, NY, USA; http://www.d-forensics.org/ 10/ 6/09-10/10/09: SIN, 2nd ACM International Conference on Security of Information and Networks, Eastern Mediterranean University, Gazimagusa, TRNC, North Cyprus; http://www.sinconf.org/cfp/cfp.htm 10/11/09: VizSec, Workshop on Visualization for Cyber Security, Atlantic City, NJ, USA; http://vizsec.org/vizsec2009/ 10/12/09: SecPri-WiMob, International Workshop on Security and Privacy in Wireless and Mobile Computing, Networking and Communications, Held in the 5th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2009), Marrakech, Morocco; http://www.icsd.aegean.gr/SecPri_WiMob_2009/ 10/12/09-10/14/09: TSP, IEEE International Symposium on Trust, Security and Privacy for Pervasive Applications, Held in conjunction with the IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS 2009), Macau SAR, China; http://trust.csu.edu.cn/conference/tsp2009/ 10/14/09: MetriSec, 5th International Workshop on Security Measurements and Metrics, Held in conjunction with the International Symposium on Empirical Software Engineering and Measurement (ESEM 2009), Lake Buena Vista, Florida, USA; http://www.cs.kuleuven.be/conference/MetriSec2009/ 10/19/09-10/21/09: NSS, 3rd International Conference on Network & System Security, Gold Coast, Australia; http://nss2007.cqu.edu.au/FCWViewer/view.do?page=8494 10/19/09-10/21/09: DMM, 1st International Workshop on Denial of Service Modelling and Mitigation, Held in conjunction with 3rd International Conference on Network & System Security (NSS 2009), Gold Coast, Australia; http://conf.isi.qut.edu.au/dmm2009 10/28/09-10/30/09: IWSEC, 4th International Workshop on Security, Toyama, Japan; http://www.iwsec.org 11/ 1/09-11/ 6/09: LISA, 23rd USENIX Large Installation System Administration Conference, Baltimore, MD, USA; http://usenix.org/events/lisa09/ 11/ 1/09-11/ 6/09: IS, 4th International Symposium on Information Security, Vilamoura, 
Algarve-Portugal; http://www.onthemove-conferences.org/index.php? option=com_content&view=article&id=65&Itemid=140 11/ 9/09-11/13/09: CCS, 16th ACM Conference on Computer and Communications Security, Chicago, IL, USA; http://sigsac.org/ccs/CCS2009/index.shtml 11/13/09: STC, 4th Annual Workshop on Scalable Trusted Computing, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA; http://projects.cerias.purdue.edu/stc2009/call.html 11/13/09: SWS, ACM Workshop on Secure Web Services, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA; http://sesar.dti.unimi.it/SWS09/ 11/13/09: SPIMACS, ACM Workshop on Security and Privacy in Medical and Home-Care Systems, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA; http://www.infosecon.net/SPIMACS/cfp.php 11/13/09: CCSW, ACM Cloud Computing Security Workshop, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA; http://crypto.cs.stonybrook.edu/ccsw09 11/18/09-11/20/09: IWNS, International Workshop on Network Steganography, Held in conjunction with the International Conference on Multimedia Information Networking and Security (MINES 2009), Wuhan, Hubei, China; http://stegano.net/workshop 11/18/09-11/20/09: SECMCS, Workshop on Secure Multimedia Communication and Services, Held in conjunction with the 2009 International Conference on Multimedia Information Networking and Security (MINES 2009), Wuhan, China; http://liss.whu.edu.cn/mines2009/SECMCS.htm 12/ 6/09-12/ 9/09: WIFS, 1st IEEE International Workshop on Information Forensics and Security, London, UK; http://www.wifs09.org 12/ 6/09-12/10/09: ASIACRYPT, 15th Annual International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan; http://asiacrypt2009.cipher.risk.tsukuba.ac.jp 12/ 7/09-12/11/09: ACSAC, 25th Annual Computer Security Applications Conference, Honolulu, Hawaii, USA; http://www.acsac.org 12/ 8/09-12/11/09: ICPADS, 15th IEEE International Conference on Parallel and Distributed Systems, Shenzhen, China; http://www.comp.polyu.edu.hk/conference/icpads09/ 12/12/09-12/14/09: CANS, 8th International Conference on Cryptography and Network Security, Kanazawa, Ishikawa, Japan; http://www.rcis.aist.go.jp/cans2009/ 12/14/09-12/18/09: ICISS, 5th International Conference on Information Systems Security, Kolkata, India; http://www.eecs.umich.edu/iciss09/ 12/17/09-12/19/09: INTRUST, The International Conference on Trusted Systems, Beijing, P. R. 
China; http://www.tcgchina.org 1/ 3/10- 1/ 6/10: IFIP-DF, 6th Annual IFIP WG 11.9 International Conference on Digital Forensics, University of Hong Kong, Hong Kong; http://www.ifip119.org/Conferences/WG11-9-CFP-2010.pdf 1/ 5/10- 1/ 8/10: HICSS-DF, 43rd Hawaii International Conference on System Sciences, Digital Forensics Minitrack, Koloa, Kauai, Hawaii; http://www.hicss.hawaii.edu/hicss_43/apahome43.html 2/28/10- 3/ 3/10: NDSS, 17th Annual Network & Distributed System Security Symposium, San Diego, CA, USA; http://www.isoc.org/isoc/conferences/ndss/10/cfp.shtml 3/22/10- 3/26/10: SAC-CF, 25th ACM Symposium on Applied Computing, Computer Forensics Track, Sierre, Switzerland; http://comp.uark.edu/~bpanda/sac2010cfp.pdf 3/22/10- 3/26/10: SAC-TRECK, 25th ACM Symposium on Applied Computing, Trust, Reputation, Evidence and other Collaboration Know-how Track, Sierre, Switzerland; http://www.trustcomp.org/treck/ ____________________________________________________________________ Journal, Conference and Workshop Calls-for-Papers (new since Cipher E89) ___________________________________________________________________ ASIACRYPT 2009 15th Annual International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan, December 6-10, 2009. (Submissions due 29 May 2009) http://asiacrypt2009.cipher.risk.tsukuba.ac.jp Original research papers on all technical aspects of cryptology are solicited for submission to ASIACRYPT 2009, the annual International Conference on Theory and Application of Cryptology and Information Security. The conference is sponsored by the International Association for Cryptologic Research (IACR) in cooperation with Technical Group on Information Security (ISEC) of the Institute of Electronics, Information and Communication Engineers (IEICE). ------------------------------------------------------------------------- STC 2009 4th Annual Workshop on Scalable Trusted Computing, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA, November 13, 2009. (Submissions due 30 May 2009) http://projects.cerias.purdue.edu/stc2009/call.html The workshop focuses on fundamental technologies of trusted computing (in a broad sense, with or without TPMs) and its applications in large-scale systems -- those involving large number of users and parties with varying degrees of trust. The workshop is intended to serve as a forum for researchers as well as practitioners to disseminate and discuss recent advances and emerging issues. Topics of interest include, but not limited to: - Enabling scalable trusted computing - Applications of trusted computing - Pushing the limits ------------------------------------------------------------------------- InSPEC 2009 2nd International Workshop on Security and Privacy in Enterprise Computing, Held in conjunction with the 13th IEEE International Enterprise Distributed Object Computing Conference (EDOC 2009), Auckland, New Zealand, August 31 - September 4, 2009. (Submissions due 31 May 2009) http://sesar.dti.unimi.it/InSPEC2009/ In recent years several technologies have emerged for enterprise computing. Workflows are now widely adopted by industry and distributed workflows have been a topic of research for many years. Today, services are becoming the new building blocks of enterprise systems and service-oriented architectures are combining them in a flexible and novel way. 
In addition, with wide adoption of e-commerce, business analytics that exploits multiple, heterogeneous data sources have become an important field. Ubiquitous computing technologies, such as RFID or sensor networks change the way business systems interact with their physical environment, such as goods in a supply chain or machines on the shop floor. All these technological trends are accompanied also by new business trends due to globalization that involve innovative forms of collaborations such as virtual organizations. Further, the increased speed of business requires IT systems to become more flexible and highly dynamic. All of these trends bring with them new challenges to the security and privacy of enterprise computing. New concepts for solving these challenges require the combination of many disciplines from computer science and information systems, such as cryptography, networking, distributed systems, process modeling and design, access control, privacy etc. The goal of this workshop is to provide a forum for exchange of novel research in these areas among the experts from academia and industry. Completed work as well as research in progress is welcome, as we want to foster the exchange of novel ideas and approaches. ------------------------------------------------------------------------- TSP 2009 IEEE International Symposium on Trust, Security and Privacy for Pervasive Applications, Held in conjunction with the IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS 2009), Macau SAR, China, October 12-14, 2009. (Submissions due 31 May 2009) http://trust.csu.edu.cn/conference/tsp2009/ TSP 2009 aims at bringing together researchers and practitioners in the world working on trust, security, privacy, and related issues such as technical, social and cultural implications for pervasive devices, services, networks, applications and systems, and providing a forum for them to present and discuss emerging ideas and trends in this highly challenging research area. Topics of interest include, but are not limited to: - Trust, Security and Privacy (TSP) metrics and architectures for pervasive computing - Trust management in pervasive environment - Risk management in pervasive environment - Security and privacy protection in pervasive environment - Security and privacy in mobile and wireless communications - Security and privacy for databases in pervasive environment - Safety and user experiences in pervasive environment - TSP-aware social and cultural implications in pervasive environment - Cryptographic devices for pervasive computing - Biometric authentication for pervasive devices - Security for embedded software and systems - TSP-aware middleware design for pervasive services - TSP-aware case studies on pervasive applications/systems - Key management in pervasive applications/systems - Authentication in pervasive applications/systems - Audit and accountability in pervasive applications/systems - Access control in pervasive applications/systems - Anonymity in pervasive applications/systems - Reliability and fault tolerance in pervasive applications/systems - Miscellaneous issues in pervasive devices, services, applications, and systems ------------------------------------------------------------------------- CANS 2009 8th International Conference on Cryptography and Network Security, Kanazawa, Ishikawa, Japan, December 12-14, 2009. 
(Submissions due 1 June 2009) http://www.rcis.aist.go.jp/cans2009/ The main goal of this conference is to promote research on all aspects of network security, as well as to build a bridge between research on cryptography and on network security. We therefore welcome scientific and academic papers with this focus. Areas of interest for CANS 2009 include, but are not limited to: - Ad Hoc and Sensor Network Security - Access Control for Networks - Anonymity and Pseudonymity - Authentication Services - Cryptographic Protocols and Schemes - Denial of Service Protection - Digital Rights Management - Fast Cryptographic Algorithms - Identity and Trust Management - Information Hiding and Watermarking - Internet and Router Security - Intrusion Detection and Prevention - Mobile and Wireless Network Security - Multicast Security - Phishing and Online Fraud Prevention - Peer-to-Peer Network Security - PKI - Security Modeling and Architectures - Secure Protocols (SSH, SSL, ...) and Applications - Spam Protection - Spyware Analysis and Detection - Virtual Private Networks ------------------------------------------------------------------------- ACSAC 2009 25th Annual Computer Security Applications Conference, Honolulu, Hawaii, USA, December 7-11, 2009. (Submissions due 1 June 2009) http://www.acsac.org We solicit papers offering novel contributions in computer and application security. Papers should present techniques or applications with practical experience. Papers are encouraged on technologies and methods that have been demonstrated to improve information systems security and that address lessons from actual application. We are especially interested in papers that address the application of security technology, the implementation of systems, and lessons learned. Suggested topics: - access control - applied cryptography - audit and audit reduction - biometrics - certification and accreditation - cybersecurity - database security - denial of service protection - distributed systems security - electronic commerce security - enterprise security management - forensics - identification & authentication - identify management - incident response planning - information survivability - insider threat protection - integrity - intellectual property rights - intrusion detection - mobile and wireless security - multimedia security - operating systems security - peer-to-peer security - privacy and data protection - product evaluation/compliance - risk/vulnerability assessment - securing cloud infrastructures - security engineering and management - security in IT outsourcing - service oriented architectures - software assurance - trust management - virtualization security - VOIP security - Web 2.0/3.0 security ------------------------------------------------------------------------- EuroPKI 2009 6th European Workshop on Public Key Services, Applications and Infrastructures, Pisa, Tuscany, Italy, September 9-11, 2009. (Submissions due 1 June 2009) http://www.iit.cnr.it/EUROPKI09 EuroPKI aims at covering all research aspects of Public Key Services, Applications and Infrastructures. In particular, we want to encourage also submissions dealing with any innovative applications of public key cryptography. 
Submitted papers may present theory, applications or practical experiences on topics including, but not limited to: - Anonymity and privacy - Architecture and Modeling - Authentication - Authorization and Delegation - Case Studies - Certificates Status - Certification Policy and Practices - Credentials - Cross Certification - Directories - eCommerce/eGovernment - Evaluation - Fault-Tolerance and reliability - Federations - Group signatures - ID-based schemes - Identity Management and eID - Implementations - Interoperability - Key Management - Legal issues - Long-time archiving - Mobile PKI - Multi-signatures - Policies & Regulations - Privacy - Privilege Management - Protocols - Repositories - Risk/attacks - Standards - Timestamping - Trust management - Trusted Computing - Ubiquitous scenarios - Usage Control - Web services security ------------------------------------------------------------------------- SETOP 2009 International Workshop on Autonomous and Spontaneous Security, Held in conjunction with ESORICS 2009, Saint Malo, Britany, France, September 24-25, 2009. (Submissions due 1 June 2009) http://conferences.telecom-bretagne.eu/setop-2009 With the need for evolution, if not revolution, of current network architectures and the Internet, autonomous and spontaneous management will be a key feature of future networks and information systems. In this context, security is an essential property. It must be thought at the early stage of conception of these systems and designed to be also autonomous and spontaneous. Future networks and systems must be able to automatically configure themselves with respect to their security policies. The security policy specification must be dynamic and adapt itself to the changing environment. Those networks and systems should interoperate securely when their respective security policies are heterogeneous and possibly conflicting. They must be able to autonomously evaluate the impact of an intrusion in order to spontaneously select the appropriate and relevant response when a given intrusion is detected. Autonomous and spontaneous security is a major requirement of future networks and systems. Of course, it is crucial to address this issue in different wireless and mobile technologies available today such as RFID, Wifi, Wimax, 3G, etc. Other technologies such as ad hoc and sensor networks, which introduce new type of services, also share similar requirements for an autonomous and spontaneous management of security. The SETOP Workshop seeks submissions that present research results on all aspects related to spontaneous and autonomous security. Submissions by PhD students are encouraged. 
Topics of interest include, but are not limited to the following: - Security policy deployment - Self evaluation of risk and impact - Distributed intrusion detection - Autonomous and spontaneous response - Trust establishment - Security in ad hoc networks - Security in sensor/RFID networks - Security of Next Generation Networks - Security of Service Oriented Architecture - Security of opportunistic networks - Privacy in self-organized networks - Secure localization - Context aware and ubiquitous computing - Secure inter-operability and negotiation - Self-organization in secure routing - Identity management ------------------------------------------------------------------------- IWNS 2009 International Workshop on Network Steganography, Held in conjunction with the International Conference on Multimedia Information Networking and Security (MINES 2009), Wuhan, Hubei, China, November 18-20, 2009. (Submissions due 1 June 2009) http://stegano.net/workshop Network steganography is part of information hiding focused on modern networks and is a method of hiding secret data in users' normal data transmissions, ideally, so it cannot be detected by third parties. Steganographic techniques arise and evolve with the development of network protocols and mechanisms, and are expected to used in secret communication or information sharing. Now, it becomes a hot topic due to the wide spread of information networks, e.g., multimedia service networks and social networks. The workshop is dedicated to capture such areas of research as steganography, steganalysis, and digital forensics in the meaning of network covert channels, investigate the potential applications, and discuss the future research topics. Research themes of workshop will include: - Steganography and steganalysis - Covert/subliminal channels - Novel applications of information hiding in networks - Political and business issues in network steganography - Information hiding in multimedia services - Digital forensics - Network communication modelling from the viewpoint of steganography and steganalysis - New methods for eliminating network steganography ------------------------------------------------------------------------- CSET 2009 Workshop on Cyber Security Experimentation and Test, Held in conjunction with the USENIX Security Symposium (USENIX-Security 2009), Montreal, Canada, August 10, 2009. (Submissions due 1 June 2009) http://www.usenix.org/event/cset09/ CSET '09 is bringing together researchers and testbed developers to share their experiences and define a forward-looking agenda for the development of scientific, realistic evaluation approaches for security threats and defenses; it provides an important community forum for the exploration of transformational advances in the field of cyber security experimentation and test. While we particularly invite papers that deal with security experimentation, we are also interested in papers that address general testbed/ experiment issues that have implications on security experimentation such as: traffic and topology generation, large-scale experiment support, experiment automation, etc. We are further interested in educational efforts that involve security experimentation. 
------------------------------------------------------------------------- SecPri-WiMob 2009 International Workshop on Security and Privacy in Wireless and Mobile Computing, Networking and Communications, Held in conjunction with the 5th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2009), Marrakech, Morocco, October 12, 2009. (Submissions due 7 June 2009) http://www.icsd.aegean.gr/SecPri_WiMob_2009/ The objective of the SecPri_WiMob 2009 Workshop is to bring together researchers from the Wireless and Mobile Computing, Networking and Communications communities and the Security and Privacy communities, with the goal of fostering interaction. Topics of interest include, but are not limited to, the following themes: - Cryptographic Protocols for Mobile and Wireless Networks - Key Management in Mobile and Wireless Computing - Reasoning about Security and Privacy - Privacy and Anonymity in Mobile and Wireless Computing - Public Key Infrastructure in Mobile and Wireless Environments - Economics of Security and Privacy in Wireless and Mobile environments - Security Architectures and Protocols in Wireless LANs - Security Architectures and Protocols in B3G/4G Mobile Networks - Security and Privacy features in Mobile and Wearable devices - Location Privacy - Ad hoc Networks Security - Sensor Networks Security - Wireless Ad Hoc Networks Security - Role of Sensors to Enable Security - Security and Privacy in Pervasive Computing - Trust Establishment, Negotiation, and Management - Secure PHY/MAC/routing protocols - Security under Resource Constraints (bandwidth, computation constraints, energy) ------------------------------------------------------------------------- DPM 2009 4th International Workshop on Data Privacy Management, Saint Malo, Brittany, France, September 24, 2009. (Submissions due 10 June 2009) http://dpm09.dyndns.org/ The DPM 2009 Workshop aims at discussing and exchanging ideas related to data privacy management. We invite researchers and practitioners working in privacy, security, trustworthy data systems and related areas to submit their original work to this workshop. The main topics include, but are not limited to: - Privacy Information Administration - Privacy Policy-based Infrastructures and Architectures - Privacy-oriented Access Control Language - Privacy in Trust Management - Privacy Data Integration - Privacy Risk Assessment and Assurance - Privacy Services - Privacy Policy Analysis - Query Execution over Privacy Sensitive Data - Privacy Preserving Data Mining - Hippocratic and Watermarking Databases - Privacy for Integrity-based Computing - Privacy Monitoring and Auditing - Privacy in Social Networks - Privacy in Ambient Intelligence (AmI) Applications - Conciliation of Individual Privacy and Corporate/National Security - Privacy in computer networks - Privacy and RFIDs - Privacy in Sensor Networks ------------------------------------------------------------------------- SWS 2009 ACM Workshop on Secure Web Services, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA, November 13, 2009. (Submissions due 12 June 2009) http://sesar.dti.unimi.it/SWS09/ Basic security protocols for Web Services, such as XML Security, the WS-* series of proposals, SAML, and XACML, are the set of building blocks enabling Web Services and the nodes of GRID architectures to interoperate securely.
While these building blocks are now firmly in place, a number of challenges are still to be met for Web services and GRID nodes to be fully secured and trusted, providing for secure communications between cross-platform and cross-language Web services. Also, the current trend toward representing Web services orchestration and choreography via advanced business process metadata is fostering a further evolution of current security models and languages, whose key issues include the setting and management of security policies, inter-organizational (trusted partner) security issues, and the implementation of high-level business policies in a Web services environment. The SWS workshop explores these challenges, ranging from the advancement and best practices of building-block technologies such as XML and Web services security protocols to higher-level issues such as advanced metadata, general security policies, trust establishment, risk management, and service assurance. The workshop provides a forum for presenting research results, practical experiences, and innovative ideas in web services security. Topics of interest include, but are not limited to, the following: - Web services and GRID computing security - Authentication and authorization - Frameworks for managing, establishing and assessing inter-organizational trust relationships - Web services exploitation of Trusted Computing - Semantics-aware Web service security and the Semantic Web - Secure orchestration of Web services - Privacy and digital identities support ------------------------------------------------------------------------- SPIMACS 2009 ACM Workshop on Security and Privacy in Medical and Home-Care Systems, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA, November 13, 2009. (Submissions due 12 June 2009) http://www.infosecon.net/SPIMACS/cfp.php The goal of the workshop is to bring together a range of computer and social scientists to develop a more complete understanding of how individuals interact with computer security technologies in critical care, continuing care, and monitoring of the frail. The goals include but go beyond traditional vulnerability and usability critiques to include evaluations of the use of security technologies in homes and in health care. The Health Information Technology for Economic and Clinical Health (HITECH) Act, signed on 2/17/09, brings this issue strongly to the fore. SPIMACS (pronounced spy-max) seeks to bring together the people and expertise that will be required to address the challenges of securing the intimate digital spaces of the most vulnerable. The scope of this workshop therefore includes, but is not limited to: - usable security - usable privacy technologies, particularly for the physically or cognitively impaired - home-based wireless network security - security in specialized applications for the home, e.g. medical or physical security monitoring - authentication in the home environment - security and anonymization of home-centric data on the network - usable security for unique populations, e.g.
elders, children, or the ill - privacy and security evaluation mechanisms for home environments - security in home-based sensor networks - medical and spatial privacy - privacy-aware medical devices - privacy-enhanced medical search - analyses of in-home and medical systems - attacks on medical devices - threat analyses or attacks on medical or home data - novel applications of cryptography to medical or intimate data ------------------------------------------------------------------------- ARO-DF 2009 ARO Workshop on Digital Forensics, Washington, DC, USA, September 10-11, 2009. (Submissions due 15 June 2009) http://www.engineering.iastate.edu/~guan/ARO-DF/ARO-DF.html Becoming a victim of cyber crime is a leading fear of the billions of people online. In the years of fighting cyber-crimes and cyber-enabled crimes, we have seen that digital evidence may be available for only a very short period of time and/or involve huge volumes of data, found locally on a single digital device or spread globally across dispersed public and proprietary platforms. The field of digital forensics faces many challenges and difficult problems. The goal of this workshop is to identify important and hard digital forensic challenges and problems, and to stimulate community efforts on the development of a scientific foundation for digital forensics and of new theories and practical techniques for addressing these problems. We invite one-page statements of ideas addressing the problems and topics of interest for the workshop. The workshop discussions will be initiated by presentations from invited speakers, each representing a different perspective on digital forensics, with views from law enforcement, the military, industry, and academia. These presentations will form the basis of the workshop discussions to follow. The remainder of the workshop will be devoted to group discussions, led by group coordinators, on a selected list of important topics in digital forensics. Topics of relevance include, but are not limited to: - Scientific Foundation and Models, and the Law - Digital Evidence Discovery, Collection, Recovery, and Storage - Digital Evidence Analysis - Network Forensics - Digital Forensics Tool Validation - Anti-forensics Techniques ------------------------------------------------------------------------- ICPADS 2009 15th IEEE International Conference on Parallel and Distributed Systems, Shenzhen, China, December 8-11, 2009. (Submissions due 15 June 2009) http://www.comp.polyu.edu.hk/conference/icpads09/ Following the previous successful events, ICPADS 2009 will be held in Shenzhen, China. The conference provides an international forum for scientists, engineers, and users to exchange and share their experiences, new ideas, and latest research results on all aspects of parallel and distributed systems.
Topics of particular interest include, but are not limited to: - High Performance Computational Biology and Bioinformatics - Parallel and Distributed Applications and Algorithms - Multi-core and Multithreaded Architectures - Power-aware Computing - Distributed and Parallel Operating Systems - Resource Management and Scheduling - Peer-to-Peer Computing - Cluster and Grid Computing - Web-based Computing and Service-Oriented Architecture - Communication and Networking Systems - Wireless and Mobile Computing - Ad Hoc and Sensor Networks - Security and Privacy - Dependable and Trustworthy Computing and Systems - Real-Time and Multimedia Systems - Performance Modeling and Evaluation ------------------------------------------------------------------------- SECMCS 2009 Workshop on Secure Multimedia Communication and Services, Held in conjunction with the 2009 International Conference on Multimedia Information Networking and Security (MINES 2009), Wuhan, China, November 18-20, 2009. (Submissions due 15 June 2009) http://liss.whu.edu.cn/mines2009/SECMCS.htm This workshop covers various aspects of secure multimedia communication in emerging services. The services may operate in environments such as the Internet, mobile TV, IPTV, IMS, VoIP, P2P, sensor networks, network convergence, etc. Papers may focus on architecture construction, algorithm design, or hardware implementation. Both review and technical papers are welcome. The topics include but are not limited to: - Lightweight multimedia encryption - Secure multimedia adaptation - Multimedia content authentication - Sensitive content detection/filtering based on multimedia analysis - Security threats or models for multimedia services - Conditional Access and Digital Rights Management - Key management/distribution in multimedia services - Secure payment for multimedia services - Secure user interfaces in multimedia services - Secure telecom/broadcast convergence - Secure mobile/Internet convergence - Security in 3G/4G multimedia communication networks - Security and privacy in multimedia sensor networks - Security protocols or standards for multimedia communication - Secure devices (set-top boxes, Smart Cards, SIM cards, MID, etc.) - Intrusion detection/prevention in multimedia systems - Denial-of-Service (DoS) attacks in multimedia applications ------------------------------------------------------------------------- IS 2009 4th International Symposium on Information Security, Vilamoura, Algarve, Portugal, November 1-6, 2009. (Submissions due 15 June 2009) http://www.onthemove-conferences.org/index.php?option=com_content&view=article&id=65&Itemid=140 The goal of this symposium is to bring together researchers from academia and practitioners from industry in order to address information security issues. The symposium will provide a forum where researchers can present recent research results and describe emerging technologies and new research problems and directions. The symposium seeks contributions presenting novel research in all aspects of information security.
Topics of interest include, but are not limited to, the following themes: - Access Control and Authentication - Accounting and Audit - Biometrics for Security - Buffer Overflows - Computer Forensics - Cryptographic Algorithms and Protocols - Databases and Data Warehouses Security - Honeynets - Identity and Trust Management - Intrusion Detection and Prevention - Information Filtering and Content Management - Information Hiding and Watermarking - Mobile Code Security - Multimedia Security - Network Security - Privacy and Confidentiality - Public-Key Infrastructure - Privilege Management Infrastructure - Risk Assessment - Security Issues in E-Activities - Security and Privacy Economics - Security in RFID Systems - Security and Trustworthiness in P2P Systems and Grid Computing - Security in Web Services - Smart Card Technology - Software Security - Usability of Security Systems and Services - Vulnerability Assessment ------------------------------------------------------------------------- HICSS-DF 2010 43rd Hawaii International Conference on System Sciences, Digital Forensics Minitrack, Koloa, Kauai, Hawaii, January 5-8, 2010. (Submissions due 15 June 2009) http://www.hicss.hawaii.edu/hicss_43/apahome43.html This is a call for original papers addressing the area of digital forensics, including research endeavors, industrial experiences, and pedagogy. This minitrack aims to bring together an international collection of papers from academia, industry, and law enforcement that address current directions in digital forensics. Digital forensics includes the use of software, computer science, software engineering, and criminal justice procedures to explore and/or investigate digital media with the objective of finding evidence to support a criminal or administrative case. It involves the preservation, identification, extraction, and documentation of computer or network evidence. This minitrack is interested in a wide variety of papers that address the following areas as well as others: - Pedagogical papers that describe digital forensics degree programs or the teaching of digital forensics within other programs internationally. - Papers that address a research agenda that considers practitioner requirements and multiple investigative environments and emphasizes real-world usability. - Papers that present an experience report involving the discovery, explanation and presentation of conclusive, persuasive evidence from a digital forensics investigation. - Papers that combine research and practice. - Processes for the incorporation of rigorous scientific methods as a fundamental tenet of the evolving science of digital forensics. - Tools and techniques being developed through research activity. ------------------------------------------------------------------------- CCSW 2009 ACM Cloud Computing Security Workshop, Held in conjunction with the 16th ACM Conference on Computer and Communications Security (CCS 2009), Chicago, IL, USA, November 13, 2009. (Submissions due 19 June 2009) http://crypto.cs.stonybrook.edu/ccsw09 Notwithstanding the latest buzzwords (grid, cloud, utility computing, SaaS, etc.), large-scale computing and cloud-like infrastructures are here to stay. Exactly how they will look tomorrow is still for the markets to decide, yet one thing is certain: clouds bring with them new, untested deployment models and associated adversarial models and vulnerabilities. It is essential that our community become involved at this early stage.
The CCSW workshop aims to bring together researchers and practitioners in all security aspects of cloud-centric and outsourced computing, including: - secure cloud resource virtualization mechanisms - secure data management outsourcing - practical privacy and integrity mechanisms for outsourcing - foundations of cloud-centric threat models - secure computation outsourcing - remote attestation mechanisms in clouds - sandboxing and VM-based enforcements - trust and policy management in clouds - secure identity management mechanisms - new cloud-aware web service security paradigms and mechanisms - cloud-centric regulatory compliance issues and mechanisms - business and security risk models and clouds - cost and usability models and their interaction with security in clouds - scalability of security in global-size clouds - trusted computing technology and clouds - binary analysis of software for remote attestation and cloud protection - network security (DoS, IDS, etc.) mechanisms for cloud contexts - security for emerging cloud programming models - energy/cost/efficiency of security in clouds ------------------------------------------------------------------------- STM 2009 5th International Workshop on Security and Trust Management, Held in conjunction with ESORICS 2009, Saint Malo, France, September 24-25, 2009. (Submissions due 21 June 2009) http://stm09.dti.unimi.it STM (Security and Trust Management) is an established working group of ERCIM (European Research Consortium in Informatics and Mathematics). Topics of interest include, but are not limited to: - access control - cryptography - data protection - digital rights management - economics of security and privacy - key management - ICT for securing digital as well as physical assets - identity management - networked systems security - privacy and anonymity - reputation systems and architectures - security and trust management architectures - semantics and computational models for security and trust - trust assessment and negotiation - trust in mobile code - trust in pervasive environments - trust models - trust management policies - trusted platforms and trustworthy systems - trustworthy user devices ------------------------------------------------------------------------- ICISS 2009 5th International Conference on Information Systems Security, Kolkata, India, December 14-18, 2009. (Submissions due 15 July 2009) http://www.eecs.umich.edu/iciss09/ The conference series ICISS (International Conference on Information Systems Security), held annually, provides a forum for disseminating the latest research results in information and systems security. ICISS 2009 encourages submissions addressing theoretical and practical problems in information and systems security and related areas. We especially encourage papers in domains that have not been well represented at the conference in the past, such as database security/privacy, usability aspects of security, operating systems security, and sensor network security. Papers that introduce and address unique security challenges or present thought-provoking ideas are also welcome. ------------------------------------------------------------------------- INTRUST 2009 The International Conference on Trusted Systems, Beijing, P. R. China, December 17-19, 2009. (Submissions due 1 August 2009) http://www.tcgchina.org INTRUST 2009 is the first International Conference on the theory, technologies and applications of trusted systems.
It is devoted to all aspects of trusted computing systems, including trusted modules, platforms, networks, services and applications, from their fundamental features and functionalities to design principles, architecture and implementation technologies. The goal of the conference is to bring academic and industrial researchers, designers, and implementers together with end-users of trusted systems, in order to foster the exchange of ideas in this challenging and fruitful area. INTRUST 2009 solicits original papers on any aspect of the theory, advanced development and applications of trusted computing, trustworthy systems and general trust issues in modern computing systems. The conference will have an academic track and an industrial track. This call for papers is for contributions to both of the tracks. Submissions to the academic track should emphasize theoretical and practical research contributions to general trusted system technologies, while submissions to the industrial track may focus on experiences on the implementation and deployment of real-world systems. Topics of relevance include but are not limited to: - Fundamental features and functionalities of trusted systems - Primitives and mechanisms for building a chain of trust - Design principles and architectures of trusted modules and platforms - Implementation technologies for trusted modules and platforms - Cryptographic aspects of trusted systems, including cryptographic algorithms and protocols, and their implementation and application in trusted systems - Scalable safe network operation in trusted systems - Mobile trusted systems, such as trusted mobile platforms, sensor networks, mobile (ad hoc) networks, peer-to-peer networks, Bluetooth, etc. - Storage aspects for trusted systems - Applications of trusted systems, e.g. trusted email, web services and various e-commerce services - Trusted intellectual property protection: metering, watermarking and digital rights management - Software protection for trusted systems - Authentication and access control for trusted systems - Key, identity and certificate management for trusted systems - Privacy aspects for trusted systems - Attestation aspects for trusted systems, such as measurement and verification of the behavior of trusted systems - Standards organizations and their contributions to trusted systems, such as TCG, ISO/IEC, IEEE 802.11, etc. - Emerging technologies for trusted systems, such as RFID, memory spots, etc. - Trust metrics and robust trust inference in distributed systems - Usability and reliability aspects for trusted systems - Trust modeling, economic analysis and protocol design for rational and malicious adversaries - Virtualisation for trusted systems - Limitations of trusted systems - Security analysis of trusted systems, including formal method proofs, provable security and automated analysis - Security policies for, and management of, trusted systems - Intrusion resilience and revocation aspects for trusted systems - Scalability aspects of trusted systems - Compatibility aspects of trusted systems - Experiences in building real-world trusted systems - Socio-economic aspects of trusted systems ------------------------------------------------------------------------- Information Systems Frontiers, Special Issue on Security Management and Technologies for Protecting Against Internal Data Leakages, Spring or Summer 2010. 
(Submission Due 14 August 2009) http://www.som.buffalo.edu/isinterface/ISFrontiers/forthcoming1/InfoSec09-SI-CFP.pdf Guest editors: David Chadwick (University of Kent, UK), Hang Bae Chang (Daejin University, South Korea), Ilsun You (Korean Bible University, South Korea), and Seong-Moo Yoo (University of Alabama in Huntsville, USA) During the past decades, information security developments have been mainly concerned with preventing illegal attacks by outsiders, such as hacking, virus propagation, and spyware. However, according to a recent Gartner Research Report, information leakage caused by insiders who are legally authorized to have access to some corporate information is increasing dramatically. These leakages can cause significant damage, such as weakening the competitiveness of companies (and even countries). Information leakage caused by insiders occurs less frequently than information leakage caused by outsiders, but the financial damage is much greater. Countermeasures addressing physical, managerial, and technical aspects are necessary to construct an integrated security management system that protects companies' major information assets from unauthorized internal attackers. The objective of this special issue is to showcase the most recent challenges and advances in security technologies and management systems for preventing leakage of organizations' information caused by insiders. It may also include state-of-the-art surveys and case analyses of practical significance. We expect that the special issue will be a trigger for further research and technology improvements related to this important subject. Topics (include but are not limited to): - Theoretical foundations and algorithms for addressing insider threats - Insider threat assessment and modeling - Security technologies to prevent, detect and avoid insider threats - Validating the trustworthiness of staff - Post-insider threat incident analysis - Data breach modeling and mitigation techniques - Registration, authentication and identification - Certification and authorization - Database security - Device control system - Digital forensic system - Digital rights management system - Fraud detection - Network access control system - Intrusion detection - Keyboard information security - Information security governance - Information security management systems - Risk assessment and management - Log collection and analysis - Trust management - IT compliance (audit) and continuous auditing ------------------------------------------------------------------------- IFIP-DF 2010 6th Annual IFIP WG 11.9 International Conference on Digital Forensics, University of Hong Kong, Hong Kong, January 3-6, 2010. (Submissions due 15 August 2009) http://www.ifip119.org/Conferences/WG11-9-CFP-2010.pdf The IFIP Working Group 11.9 on Digital Forensics (www.ifip119.org) is an active international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in the emerging field of digital forensics. The Sixth Annual IFIP WG 11.9 International Conference on Digital Forensics will provide a forum for presenting original, unpublished research results and innovative ideas related to the extraction, analysis and preservation of all forms of electronic evidence. Technical papers are solicited in all areas related to the theory and practice of digital forensics.
Areas of special interest include, but are not limited to: - Theories, techniques and tools for extracting, analyzing and preserving digital evidence - Network forensics - Portable electronic device forensics - Digital forensic processes and workflow models - Digital forensic case studies - Legal, ethical and policy issues related to digital forensics ------------------------------------------------------------------------- International Journal of Communication Networks and Information Security, Special Issue on Composite and Integrated Security Solutions for Wireless Sensor Networks, Spring 2010. (Submission Due 1 September 2009) http://ijcnis.kust.edu.pk/announcement Guest editors: Riaz Ahmed Shaikh (Kyung Hee University, Korea), Al-Sakib Khan Pathan (Kyung Hee University, Korea), and Jaime Lloret (Polytechnic University of Valencia, Spain) This special issue is devoted to composite and integrated security solutions for Wireless Sensor Networks (WSNs). In WSNs, researchers have so far focused on individual aspects of security (cryptography, privacy, or trust) that provide protection against specific types of attacks, but efforts toward achieving completeness via a composite and integrated solution are lacking. Such completeness is ultimately necessary because of the wide applicability of WSNs in sensitive applications such as health care, the military, habitat monitoring, etc. The objective of this special issue is to gather recent advances in the area of composite and integrated security solutions for wireless sensor networks. This special issue covers topics that include, but are not limited to: - Adaptive and Intelligent Defense Systems - Authentication and Access control - Data security and privacy - Denial of service attacks and countermeasures - Identity, Route and Location Anonymity schemes - Intrusion detection and prevention techniques - Cryptography, encryption algorithms and Key management schemes - Secure routing schemes - Secure neighbor discovery and localization - Trust establishment and maintenance - Confidentiality and data integrity - Security architectures, deployments and solutions ------------------------------------------------------------------------- SAC-CF 2010 25th ACM Symposium on Applied Computing, Computer Forensics Track, Sierre, Switzerland, March 22-26, 2010. (Submissions due 8 September 2009) http://comp.uark.edu/~bpanda/sac2010cfp.pdf With the exponential growth in the number of computer users, the number of criminal activities that involve computers has increased tremendously. The field of Computer Forensics has gained considerable attention in the past few years. It is clear that, in addition to law enforcement agencies and legal personnel, the involvement of computer-savvy professionals is vital for any digital incident investigation. Unfortunately, there are not many well-qualified computer crime investigators available to meet this demand. One approach to solving this problem is to develop state-of-the-art research and development tools for practitioners, in addition to creating awareness among computer users. The primary goal of this track will be to provide a forum for researchers, practitioners, and educators interested in Computer Forensics in order to advance research and educational methods in this increasingly challenging field. We expect that people from academia, industry, government, and law enforcement will share their previously unpublished ideas on research, education, and practice through this track.
We solicit original, previously unpublished papers in the following general (non-exhaustive) list of topics: - Incident Response and Live Data Analysis - Operating System and Application Analysis - File System Analysis - Network Evidence Collection - Network Forensics - Data Hiding and Recovery - Digital Image Forensics - Event Reconstruction and Tracking - Forensics in Untrusted Environments - Hardware Assisted Forensics - Legal, Ethical and Privacy Issues - Attributing Malicious Cyber Activity - Design for Forensic Evaluation - Visualization for Forensics ------------------------------------------------------------------------- SAC-TRECK 2010 25th ACM Symposium on Applied Computing, Trust, Reputation, Evidence and other Collaboration Know-how Track (TRECK), Sierre, Switzerland, March 22-26, 2010. (Submissions due 8 September 2009) http://www.trustcomp.org/treck/ Computational models of trust and online reputation mechanisms have been gaining momentum. The goal of the ACM SAC 2010 TRECK track remains to review the set of applications that benefit from the use of computational trust and online reputation. Computational trust has been used in reputation systems, risk management, collaborative filtering, social/business networking services, dynamic coalitions, virtual organisations and even combined with trusted computing hardware modules. The TRECK track covers all computational trust/reputation applications, especially those used in real-world applications. The topics of interest include, but are not limited to: - Recommender and reputation systems - Trust management, reputation management and identity management - Pervasive computational trust and use of context-awareness - Mobile trust, context-aware trust - Web 2.0 reputation and trust - Trust-based collaborative applications - Automated collaboration and trust negotiation - Trade-off between privacy and trust - Trust/risk-based security frameworks - Combined computational trust and trusted computing - Tangible guarantees given by formal models of trust and risk - Trust metrics assessment and threat analysis - Trust in peer-to-peer and open source systems - Technical trust evaluation and certification - Impacts of social networks on computational trust - Evidence gathering and management - Real-world applications, running prototypes and advanced simulations - Applicability in large-scale, open and decentralised environments - Legal and economic aspects related to the use of trust and reputation engines - User-studies and user interfaces of computational trust and online reputation applications ------------------------------------------------------------------------- NDSS 2010 17th Annual Network & Distributed System Security Symposium, San Diego, CA, USA, February 28 - March 3, 2010. (Submissions due 11 September 2009) http://www.isoc.org/isoc/conferences/ndss/10/cfp.shtml The Network and Distributed System Security Symposium fosters information exchange among research scientists and practitioners of network and distributed system security services. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation (rather than theory). A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technology. 
Submissions are solicited in, but not limited to, the following areas: - Security of Web-based applications and services - Anti-malware techniques: detection, analysis, and prevention - Intrusion prevention, detection, and response - Security for electronic voting - Combating cyber-crime: anti-phishing, anti-spam, anti-fraud techniques - Privacy and anonymity technologies - Network perimeter controls: firewalls, packet filters, and application gateways - Security for emerging technologies: sensor networks, wireless/mobile (and ad hoc) networks, and personal communication systems - Security for Vehicular Ad-hoc Networks (VANETs) - Security for peer-to-peer and overlay network systems - Security for electronic commerce: e.g., payment, barter, EDI, notarization, timestamping, endorsement, and licensing - Implementation, deployment and management of network security policies - Intellectual property protection: protocols, implementations, metering, watermarking, digital rights management - Integrating security services with system and application security facilities and protocols - Public key infrastructures, key management, certification, and revocation - Special problems and case studies: e.g., tradeoffs between security and efficiency, usability, reliability and cost - Security for collaborative applications: teleconferencing and video-conferencing - Software hardening: e.g., detecting and defending against software bugs (overflows, etc.) - Security for large-scale systems and critical infrastructures - Integrating security in Internet protocols: routing, naming, network management ==================================================================== News Briefs ==================================================================== News briefs from past issues of Cipher are archived at http://www.ieee-security.org/Cipher/NewsBriefs.html ==================================================================== Listing of academic positions available by Cynthia Irvine ==================================================================== http://cisr.nps.edu/jobscipher.html Posted April 2009 Technische Universitat Darmstadt Computer Science Department Darmstadt, Germany PostDocs and PhD students Open until filled http://www.mais.informatik.tu-darmstadt.de/Positions.html -------------- This job listing is maintained as a service to the academic community. If you have an academic position in computer security and would like to have it included on this page, send the following information: Institution, City, State, Position title, date position announcement closes, and URL of position description to: irvine@cs.nps.navy.mil ==================================================================== Information on the Technical Committee on Security and Privacy ==================================================================== ____________________________________________________________________ Information for Subscribers and Contributors ____________________________________________________________________ SUBSCRIPTIONS: Two options, each with two ways to sign up: 1. To receive the full ASCII CIPHER issues as e-mail, send e-mail to cipher-admin@ieee-security.org (which is NOT automated) with subject line "subscribe". OR send a note to cipher-request@mailman.xmission.com with the subject line "subscribe" (this IS automated - thereafter you can manage your subscription options, including unsubscribing, yourself) 2.
To receive a short e-mail note announcing when a new issue of CIPHER is available for Web browsing, send e-mail to cipher-admin@ieee-security.org (which is NOT automated) with subject line "subscribe postcard". OR send a note to cipher-postcard-request@mailman.xmission.com with the subject line "subscribe" (this IS automated - thereafter you can manage your subscription options, including unsubscribing, yourself) To remove yourself from the subscription list, send e-mail to cipher-admin@ieee-security.org with subject line "unsubscribe" or "unsubscribe postcard" or, if you have subscribed directly to the xmission.com mailing list, use your password (sent monthly) to unsubscribe per the instructions at http://mailman.xmission.com/cgi-bin/mailman/listinfo/cipher or http://mailman.xmission.com/cgi-bin/mailman/listinfo/cipher-postcard Those with access to hypertext browsers may prefer to read Cipher that way. It can be found at URL http://www.ieee-security.org/cipher.html CONTRIBUTIONS: to cipher @ ieee-security.org are invited. Cipher is a NEWSletter, not a bulletin board or forum. It has a fixed set of departments, defined by the Table of Contents. Please indicate in the subject line for which department your contribution is intended. Calendar and Calls-for-Papers entries should be sent to cipher-cfp @ ieee-security.org and they will be automatically included in both departments. To facilitate the semi-automated handling, please send either a text version of the CFP or a URL from which a text version can be easily obtained. For Calendar entries, please include a URL and/or e-mail address for the point-of-contact. For Calls for Papers, please submit a one-paragraph summary. See this and past issues for examples. ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY. All reuses of Cipher material should respect stated copyright notices, and should cite the sources explicitly; as a courtesy, publications using Cipher material should obtain permission from the contributors. ____________________________________________________________________ Recent Address Changes ____________________________________________________________________ Address changes from past issues of Cipher are archived at http://www.ieee-security.org/Cipher/AddressChanges.html _____________________________________________________________________ How to become a member of the IEEE Computer Society's TC on Security and Privacy _____________________________________________________________________ You may easily join the TC on Security & Privacy by completing the on-line form at IEEE at http://www.computer.org/TCsignup/index.htm ______________________________________________________________________ TC Publications for Sale ______________________________________________________________________ IEEE Security and Privacy Symposium The 2007 proceedings are available in hardcopy for $30.00, the 28-year CD is $20.00, plus shipping and handling. The 2006 Symposium proceedings and 11-year CD are sold out. The 2005, 2004, and 2003 Symposium proceedings are available for $10 plus shipping and handling. Shipping is $4.00/volume within the US, overseas surface mail is $7/volume, and overseas airmail is $11/volume, based on an order of 3 volumes or less. The shipping charge for a CD is $1 per CD (no charge if included with a hard copy order).
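For example, based on the rates above, an order of one 2007 hardcopy proceedings plus the 28-year CD, shipped within the US, would come to $30.00 + $20.00 + $4.00 shipping = $54.00, with no separate CD shipping charge since the CD accompanies a hard copy order.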
Send a check made out to the IEEE Symposium on Security and Privacy to the 2007 treasurer (below) with the order description, including shipping method, and send email to the 2007 Registration Chair (Yong Guan) (oakland07-registration @ ieee-security.org) with the shipping address, please. Terry Benzel, Treasurer, IEEE Security and Privacy, USC Information Sciences Institute, 4676 Admiralty Way, Marina Del Rey, CA 90292, (310) 822-1511
IEEE CS Press You may order some back issues from IEEE CS Press at http://www.computer.org/cspress/catalog/proc9.htm
Computer Security Foundations Symposium Copies of the proceedings of the Computer Security Foundations Workshop (now Symposium) are available for $10 each. Copies of proceedings are available starting with year 10 (1997). Photocopy versions of year 1 are also $10. Contact Jonathan Herzog (jherzog@alum.mit.edu) if interested in a purchase.
____________________________________________________________________________
TC Officer Roster
____________________________________________________________________________
Chair: Yong Guan, Iowa State University, Computer Engineering and Information Assurance Center, Ames, IA 50011, (515) 294-8378 (voice), guan@iastate.edu
Security and Privacy Chair Emeritus: Prof. Cynthia Irvine, U.S. Naval Postgraduate School, Computer Science Department, Code CS/IC, Monterey CA 93943-5118, (831) 656-2461 (voice), irvine@nps.edu
Vice Chair: Hilarie Orman, Purple Streak, Inc., 500 S. Maple Dr., Salem, UT 84653, hilarie @purplestreak.com
Chair, Subcommittee on Academic Affairs: Prof. Cynthia Irvine, U.S. Naval Postgraduate School, Computer Science Department, Code CS/IC, Monterey CA 93943-5118, (831) 656-2461 (voice), irvine@nps.edu
Treasurer: Terry Benzel, USC Information Sciences Institute, 4676 Admiralty Way, Suite 1001, Los Angeles, CA 90292, (310) 822-1511 (voice), tbenzel @isi.edu
Chair, Subcomm. on Security Conferences: Jonathan Millen, The MITRE Corporation, Mail Stop S119, 202 Burlington Road Rte. 62, Bedford, MA 01730-1420, 781-271-51 (voice), jmillen@mitre.org
Security and Privacy Symposium 2009 General Chair: David Du, Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455, oakland09-chair@ieee-security.org
Newsletter Editor: Hilarie Orman, Purple Streak, Inc., 500 S. Maple Dr., Salem, UT 84653, cipher-editor@ieee-security.org
________________________________________________________________________
BACK ISSUES: Cipher is archived at: http://www.ieee-security.org/cipher.html Cipher is published 6 times per year