 _/_/_/_/  _/_/_/  _/_/_/_/   _/    _/  _/_/_/_/  _/_/_/_/
 _/          _/    _/    _/   _/    _/  _/        _/    _/
 _/          _/    _/_/_/_/   _/_/_/_/  _/_/      _/_/_/_/
 _/          _/    _/         _/    _/  _/        _/  _/
 _/_/_/_/  _/_/_/  _/         _/    _/  _/_/_/_/  _/    _/

============================================================================
Newsletter of the IEEE Computer Society's TC on Security and Privacy

Electronic Issue 96                                May 31, 2010

Hilarie Orman, Editor                  Sven Dietrich, Assoc. Editor
cipher-editor @ ieee-security.org      cipher-assoc-editor @ ieee-security.org

Yong Guan
Book Review Editor                     Calendar Editor
cipher-bookrev @ ieee-security.org     cipher-cfp @ ieee-security.org
============================================================================
The newsletter is also at http://www.ieee-security.org/cipher.html

Cipher is published 6 times per year
This is part 1 of Cipher; part 2 is emailed separately

Contents:

* Letter from the Editor
* Commentary and Opinion
  o Richard Austin's review of "The Rootkit Arsenal: Escape and
    Evasion in the Dark Corners of the System" by Reverend Bill
    Blunden
  o Review of the Security and Privacy Awards Banquet (Berkeley,
    California, May 17, 2010) by Hilarie Orman
  o Review of the Security and Privacy Symposium, technical sessions
    (Berkeley/Oakland, California, USA, May 17-19, 2010) by
    Ben Ransford, University of Massachusetts Amherst,
    ransford@cs.umass.edu
    Sruthi Bandhakavi, University of Illinois at Urbana-Champaign
    Joana Trindade, University of Illinois at Urbana-Champaign
  o Book reviews, conference reports, commentary, and news items
    from past Cipher issues are available at the Cipher website
* List of Computer Security Academic Positions, by Cynthia Irvine
* Conference and Workshop Announcements
  o Calendar of events
  o Upcoming calls-for-papers and events
* Staying in Touch
  o Information for subscribers and contributors
  o Recent address changes
* Links for the IEEE Computer Society TC on Security and Privacy
  o Becoming a member of the TC
  o TC Officers
  o TC publications for sale

====================================================================
Letter from the Editor
====================================================================

Dear Readers:

Privacy on the Internet seems to be the main concern in the news in
recent weeks, as users of social networking sites have watched with
amazement as their personal information and Internet actions become
exposed in a frenzied mesh of complex sharing.  Those who have
followed the field of privacy research over the last few decades
should not be surprised, and the annual meeting of the Security and
Privacy Symposium heard presentations about several lines of research
specifically concerned with privacy.  This issue of Cipher has
descriptions of the presentations of all papers.

Some aspects of the history of the symposium's focus areas can be
inferred from its citation graph, the directed graph showing which
papers in the Symposium cite papers from earlier Symposia.  That graph
appears on the DVD given to all participants (it is also on the
ieee-security.org website, under the description of the "Oakland"
conferences for the year 2010).  A major cluster of citations concerns
malware --- its design, detection, and prevention.  The 2010 Symposium
had several papers in this vein, perhaps showing that the tide is
turning from secure design to just dealing with what happens.

The Symposium attendees took an evening to honor notable people in the
field, for their accomplishments and service, and for some laughter
and applause.
Pictures of the event will be posted on the aforementioned website as
soon as we can verify the privacy preferences of the pictured!  Watch
for it.

Yours securely in the knowledge that anything can happen,

Hilarie Orman
cipher-editor @ ieee-security.org

====================================================================
Commentary and Opinion
====================================================================

Book reviews from past issues of Cipher are archived at
http://www.ieee-security.org/Cipher/BookReviews.html, and conference
reports are archived at
http://www.ieee-security.org/Cipher/ConfReports.html

____________________________________________________________________
Book Review by Richard Austin
May 24, 2010
____________________________________________________________________

The Rootkit Arsenal: Escape and Evasion in the Dark Corners of the
System
by Reverend Bill Blunden
Wordware Publishing, 2009.
ISBN 978-1-59822-061-2
Amazon.com USD 32.97

Be warned, this is a dangerous book.  When an author can couple deep
knowledge and understanding with the ability to clearly present
difficult material through the written word, actions may ensue.  When
that potential is used to explore one of the most feared denizens of
the malware menagerie, the rootkit, the result is, at the very least,
dangerous.

Blunden does not shy away from the danger but meets the objections
head-on in terms that will be familiar to those of us who support (at
least in some form) the philosophy of full disclosure: the bad people
already know this stuff.  He sums this viewpoint up with an especially
apt quote on page 3: "The best safecrackers in the business never
steal a penny.  They work for UL" (Underwriters Laboratories).  If we
are to successfully defend our infrastructures, we must understand the
weapons that will be used against us.

Blunden divides his presentation into four parts: fundamentals, system
modification, anti-forensics, and a lengthy appendix of hands-on
projects that apply the material.

Part I (Fundamentals) opens with a chapter on "Setting the Stage",
which describes his own first encounter with a rootkit and then moves
on to define rootkits in terms of the services they provide:
concealment, command and control (C2), and surveillance.  This focus
on function rather than form is a deep insight that runs through the
rest of the book.  Chapter 2 (Into the Catacombs: IA-32) presents an
excellent introduction to the widely used IA-32 computer architecture,
skillfully presenting the required information but no more.  The
discussion is immediately applied through constructing a
terminate-and-stay-resident (TSR) keylogger.  Chapter 3 (Windows
System Architecture) delves into how Microsoft Windows (Vista) works
with the IA-32 architecture.  Chapter 4 (Rootkit Basics) begins to
apply the understanding of hardware and software architecture to the
problem of building a rootkit.  The chapter provides extensive
material on the use of debuggers, which play an important role both in
exploring the system and in testing a rootkit.  Particular attention
should be paid to section 4.3 (A Rootkit Skeleton), as it abstracts
out the common themes of a rootkit that are too often concealed under
implementation details.

Part II (System Modification) presents the common techniques used by
rootkits in "altering the contents of memory at runtime" (p. 240),
which are so important in providing stealth for the running rootkit
while it carries out its mission.
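To make the idea of runtime alteration concrete, here is a minimal
sketch of the call-table hooking technique that Chapter 5 takes up
below, rendered in Python purely for illustration (a real rootkit
would patch a kernel dispatch table in native code; all names here are
invented):

    # Toy stand-in for a kernel dispatch table; purely conceptual.
    def real_read(handle):
        return "sensitive data from " + handle

    # The "system" dispatches service requests through a table.
    call_table = {"read": real_read}

    def install_hook(table, name, spy):
        original = table[name]
        def hooked(*args):
            spy(*args)              # the adversary's code runs first...
            return original(*args)  # ...then the real routine, so the
                                    # caller notices nothing unusual
        table[name] = hooked

    stolen = []
    install_hook(call_table, "read", lambda h: stolen.append(h))

    print(call_table["read"]("file42"))  # behaves exactly as before
    print(stolen)                        # but the hook saw the call

The defensive counterpart is that the table entry no longer points
where it should, which a defender can notice; that observation is
exactly what drives the attack-and-defense "dance" described next.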
The discussion illustrates the "dance" of attack and defense by
showing how the countermeasures for one technique (such as call table
hooks) are foiled by another.  Chapter 5 (Hooking Call Tables)
describes how a rootkit can insert itself into a system by altering
the dispatch tables used to interface with system routines, enabling
execution of an adversary's code either before or instead of the
system code.  Chapter 6 (Patching System Routines) examines techniques
for modifying the code running on the system, either by patching it on
disk (binary patching) or when it is loaded into memory (run-time
patching).  Chapter 7 (Altering Kernel Objects) delves into how system
(kernel) objects can be modified to effect concealment (for example,
to hide a process or a driver) or to grant additional privileges.
Chapter 8 (Deploying Filter Drivers) covers the nefarious uses that
can be made of the "filter driver" capability built into Windows.
Filter drivers are intended to allow creation of "stacks" of drivers
with well-defined functionality that can be transparently inserted
without affecting the layer above or below.  For example, one way to
implement full-disk encryption is to insert an encryption driver into
the disk driver stack that transparently encrypts/decrypts information
as it flows through the stack.  Nefarious applications include
inserting a keylogger into the keyboard stack or a sniffer into the
network stack.

Part III (Anti-forensics) deals with how a rootkit can maintain its
concealment in the face of the techniques normally used to ferret it
out, for example, during the investigation of a data breach.  Chapter
9 (Defeating Live Response) deals with how the rootkit can defeat
"live response" forensic techniques (i.e., those conducted while the
affected system is still running).  Chapter 10 (Defeating File System
Analysis) covers how rootkits can evade analysis conducted on disk
images (traditional computer forensics) and includes material on
foiling static and dynamic analysis of code (e.g., when a suspicious
executable is exported from the disk image and executed under control
of a debugger).  Chapter 11 (Defeating Network Analysis) covers the
important subject of how rootkits hide while they carry out their
function.  Whether its purpose is to siphon off credit card numbers,
access credentials, or participate in a botnet, the rootkit will at
some point have to communicate with the outside world in order to
fulfill its purpose.  This communication poses a threat to its
concealment.  Chapter 12 (Countermeasure Summary) pulls the various
forensic techniques together in an overall summary with the aid of
several excellent diagrams.

Part IV (End Material) contains two short chapters ("The Tao of
Rootkits" and "Closing Thoughts") followed by extensive listings of
the hands-on projects developed throughout the book.

To repeat myself, this is a dangerous book.  Unfortunately the
knowledge it contains is already widely available to those who would
use it to compromise the systems we're charged to protect.  Blunden's
achievement is the consolidation of the material into an excellent
presentation on how rootkits work, how they hide, and how their
"cloaking devices" can be penetrated.  The book will be very hard
sledding for a managerially-focused security professional, but it is
definitely one they should give to their technical colleagues with a
strong "read-and-understand!" admonition.
------

Before beginning life as an itinerant university instructor and
security consultant, Richard Austin was the storage network security
architect for a Fortune 25 company.  He welcomes your thoughts and
comments at either rausti19 at Kennesaw dot edu or raustin2 at spsu
dot edu.

____________________________________________________________________
Review of the Security and Privacy Awards Banquet
Berkeley, California, May 17, 2010
by Hilarie Orman
____________________________________________________________________

The Security and Privacy Symposium is always held in May at the
Claremont Resort, always Sunday through Wednesday, and Monday evening
is always the time for a reception with generous quantities of
appetizers.  But 2010 was the 30th anniversary of the symposium, and
Monday evening was something completely different.  The Pauley
Ballroom on the University of California at Berkeley campus was the
site for an evening of socializing, food, and entertainment.  About
360 people dined, applauded, and laughed their way through the
program.  Everyone from pioneers in the field to first-time attendees
mingled and met one another.

The evening's Master of Ceremonies was Peter G. Neumann, and he set an
elegant tone in his tuxedo.  He first introduced Tim Grance of the
National Institute of Standards and Technology (NIST), who presented
the prestigious National Computer Systems Security Award to Jerome
Saltzer, a pioneer in secure systems who was a major developer of the
Multics operating system.  The award is granted annually to recognize
"outstanding contributions towards the advancement of computer
security technology."  Several previous recipients of the NIST award
were present, including Roger Schell, Dorothy Denning, Peter Neumann,
Virgil Gligor, and Michael Schroeder.

Neumann then extolled the contributions of Carl Landwehr of the
National Science Foundation and presented him with the IEEE Computer
Society's award for Distinguished Service "for excellent service in
government organizations responsible for funding important research
and incentives, and for his numerous roles in the IEEE Computer
Society's Technical Committee on Security and Privacy, the Security
and Privacy Symposium, the Cipher Newsletter, and the IEEE Security
and Privacy Magazine."

Douglas Maughan of Homeland Security was honored with the IEEE
Computer Society Technical Committee on Security and Privacy's award
for Distinguished Community Service "for his many years of excellent
service in government organizations responsible for funding important
research, incentives, development, system evaluations, and technology
transfer, for activities with the INFOSEC Security Council, and for
producing an outstanding roadmap for future security research."

John Markoff of the New York Times also received the Distinguished
Community Service Award "for his superbly computer-literate and
educationally valuable journalism, bringing the topics of computer
security and privacy clearly into the public eye with his incisive and
thoughtful writing throughout most of the Security and Privacy
Symposium's 31 years, with frequent newspaper articles, books, and
occasional media appearances."

Tom Berson, another early organizer, entrepreneur, and loyal supporter
of the Symposium, was also honored with a Distinguished Community
Service Award.  Terry Benzel was recognized with an IEEE Computer
Society award for Continuing Service.
She has served in almost every possible Symposium and Technical
Committee position, adding her enthusiasm and vision to make the
symposium a vital and growing event.

George Davida, one of the original Symposium organizers, was present,
and he received one of the special recognition awards: a small box of
exquisite chocolate candies.  Dick Kemmerer and Jon Millen, both early
organizers, received recognition and chocolates from Neumann.

Neumann continued with recognition for the many people who contributed
to the Symposium with published papers or who served as organizers and
also served in Federal government roles.  Noted and missed were
several people who are deceased, including Jim Anderson, Joseph
Goguen, Ted Linden, Roger Needham, Dan Schnackenberg, Rein Turn, and
several others.  Li Gong of Mozilla got several mentions for his
research, business successes, and geographic mobility.

Neumann mentioned several authors and their papers that have had
significant impact on other researchers: Kemmerer (storage/timing
channels); Abadi-Needham (practical aspects of crypto);
Blaze-Feigenbaum-Lacy (decentralized trust management);
Dean-Felten-Wallach (Java security); Wagner-Dean (static analysis);
Song-Wagner-Perrig (search on encrypted data); Clark-Wilson and
Goguen-Meseguer.  More chocolates went to some of the most prolific
authors over the years, notably Sushil Jajodia (not present), Adrian
Perrig, Dawn Song, Mike Reiter, Jon Millen, Virgil Gligor, Teresa Lunt
(not present), Dorothy Denning, and Cathy Meadows (not present).

The awards committee consisted of Peter Neumann, Matt Bishop, Marv
Schaefer, and Sean Peisert.  The committee authored a special paper
("Reflections") for the Symposium proceedings, as did Carl Landwehr
and Douglas Maughan.  Ulf Lindqvist and Jenny McNeill of SRI
International were instrumental in facilitating the coordination of
the awards dinner, which was a two-year project planned by the
Technical Committee on Security and Privacy.

Hilarie Orman ended the awards presentation by giving a surprise
present to Peter Neumann in thanks for his generous work on the awards
committee and for being the evening's host.  He received a woven
blanket with the Symposium's logo, a Trojan Horse.  The gift was
dubbed "the 30th anniversary commemorative Privacy and Security
Blanket."

____________________________________________________________________
Review of the Security and Privacy Symposium, technical sessions
Berkeley/Oakland, California, USA, May 17-19, 2010
by Ben Ransford, University of Massachusetts Amherst,
   ransford@cs.umass.edu
   Sruthi Bandhakavi, University of Illinois at Urbana-Champaign
   Joana Trindade, University of Illinois at Urbana-Champaign
____________________________________________________________________

Introduction to the conference review by Sven Dietrich, TC Vice Chair

This year's Oakland (the 31st) was the best attended ever.  The
organizers had to cap registrations very early in the process.  The
initial cap of 320 was reached a day after early registration closed.
Working with a waiting list, the cap was eventually raised to 350.
The conference room was reconfigured to hold that many people, but it
didn't get too crowded after all.  Attendees were the usual mix of
academic, government, industry (and other) origins, leading to a
fruitful get-together for many discussions in and outside the
conference room.
Complementing this year's symposium, in addition to the 30th birthday
celebration on Monday night, were a government funding workshop
(NITRD) on Wednesday afternoon, which was open to all symposium
attendees, and two workshops on Thursday: Web 2.0 Security and Privacy
(W2SP) and Forensics (SADFE).  And now, without further ado...

The following writeups were contributed by three students:
   Ben Ransford, University of Massachusetts Amherst,
      ransford@cs.umass.edu
   Sruthi Bandhakavi, University of Illinois at Urbana-Champaign,
      sbandha2@illinois.edu
   Joana Trindade, University of Illinois at Urbana-Champaign,
      trindade@illinois.edu
Each write-up is marked with the writer's initials: BR, SB, or JT.

-----------------------

Opening Remarks (BR):

Ulf Lindqvist started the program with details about the conference
venue and the events.  David Evans followed Ulf with an entertaining
comparison between IEEE S&P in 1980, when it started, and in 2010.  In
2010 there were 50 program committee members and several other
external reviewers.  26 research papers were accepted out of 237
submissions.  On average there were 5 reviewers per paper, who
evaluated the contributions and novelty of the research.  Five
systematization-of-knowledge papers were also accepted; these papers
were expected to evaluate existing work and put it in context, and the
criterion for accepting them was their value to the community.

SESSION 1: MALWARE ANALYSIS
Chair: Jon Giffin, Georgia Institute of Technology

PAPER #1: Inspector Gadget: Automated Extraction of Proprietary
Gadgets from Malware Binaries
AUTHORS: Clemens Kolbitsch (Vienna University of Technology), Thorsten
Holz (Vienna University of Technology), Christopher Kruegel
(University of California, Santa Barbara), Engin Kirda (Institut
Eurecom)
WRITEUP BY: BR

Clemens Kolbitsch led off the Symposium's first session with a talk on
his whimsically titled "Inspector Gadget" paper.  The paper concerns
Inspector, a system that isolates and extracts interesting program
behaviors one at a time from Windows malware executables.  The authors
built Inspector to address several problems with traditional malware
analysis techniques.  With tens of thousands of new malware samples
appearing each day, researchers cannot practically examine the inner
workings of each one.  Compounding that problem, malware often
exhibits divergent behavior at different times or under different
circumstances.  Existing tools for malware analysis produce cluttered
results that make isolating program behaviors difficult.  Inspector
extracts single program behaviors into small standalone "gadgets" that
can be run and analyzed separately, simplifying the task of analyzing
a malware executable.

Inspector performs four steps on an input binary.  First, it uses the
Anubis malware analyzer to run the binary while recording
instructions, OS interactions, and other information.  Second, the
user chooses a detected behavior that Inspector has classified as
salient; Inspector grabs the corresponding code from the executable.
Third, Inspector performs backward program slicing to discard code
sequences that are irrelevant to the selected behavior; it also
searches forward to find related behavior.  Finally, Inspector writes
the resulting minimized code to a standalone shared library and tidies
up outstanding references as necessary; it also replaces system calls
with special hooks.  The resulting shared library can be loaded like
any other Windows DLL.
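As a rough illustration of the slicing step (not the authors' code;
the three-address trace format and the seed behavior here are
invented), a backward slice keeps only the instructions that the
selected output transitively depends on:

    # Toy backward slice over a straight-line trace of
    # (destination, operation, source-operands) tuples.
    trace = [
        ("a",   "const", []),          # a = 5
        ("b",   "const", []),          # b = 7      (unrelated)
        ("c",   "add",   ["a", "a"]),  # c = a + a
        ("d",   "mul",   ["b", "b"]),  # d = b * b  (unrelated)
        ("out", "mov",   ["c"]),       # out = c  <- selected behavior
    ]

    def backward_slice(trace, seed):
        needed, kept = {seed}, []
        for dest, op, srcs in reversed(trace):
            if dest in needed:        # defines a value we still need
                kept.append((dest, op, srcs))
                needed.discard(dest)  # ...so now we need its inputs
                needed.update(srcs)
        return list(reversed(kept))

    # Only a, c, and out survive; b and d are sliced away.
    for instruction in backward_slice(trace, "out"):
        print(instruction)

Inspector's real slicer additionally walks forward to pull in related
behavior and must repair references in actual x86 code, but the
dependence-chasing idea is the same.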
The authors load such DLLs into a "gadget player" that confines the
program and mediates accesses to the operating system.  The gadget
player optionally ignores the system calls or manipulates their return
values to elicit interesting behavior from gadgets.  To illustrate the
system's operation, Kolbitsch demonstrated that Inspector could
extract the Conficker worm's complicated, environment-dependent domain
generation algorithm into a separately analyzable gadget.  Inspector
is implemented as an extension to Anubis.  The authors are willing to
share with interested researchers the gadgets they extracted from
malware during their evaluation of Inspector.

In the question and answer period, Juan Caballero asked how the
authors' analysis ensured adequate coverage of conditional code in
gadgets.  Kolbitsch said that Inspector replaces conditional jumps
with unconditional ones when doing so will not affect program output,
and otherwise replaces branches not seen during dynamic analysis with
triggers that pass exceptions to the gadget player.  David Brumley
speculated that malware authors might circumvent Inspector by using
their own implementations of API functions, but Kolbitsch assured him
that such attempts would be caught at the system call interface.  Rob
Cunningham asked whether Inspector's deconstruction would break
malware's self-checks for integrity; Kolbitsch remarked that the
authors had not yet noticed such self-checks with their system, but he
believed that such checks, orthogonal as they are to program behavior,
would be correctly excluded from extracted gadgets.

PAPER #2: Synthesizing Near-Optimal Malware Specifications from
Suspicious Behaviors
AUTHORS: Matt Fredrikson (University of Wisconsin), Mihai
Christodorescu (IBM Research), Somesh Jha (University of Wisconsin),
Reiner Sailer (IBM Research), Xifeng Yan (University of California,
Santa Barbara).  Mihai presented.
WRITEUP BY: SB

Summary: New malware is still spreading -- more than a billion virus
signatures per year are detected.  Antivirus programs use syntactic
signatures, which have a one-to-one correspondence to viruses and only
a 50% detection rate.  Ideally one would like to use behavior-based
detection techniques, which detect malware based on certain high-level
descriptions.  Typically a behavior would be a sequence of system call
or library call operations showing that certain system resources have
been accessed.  The behavior graph is represented as a set of nodes
linked with edges that represent data dependencies.  Since behavior
graphs are usually large and there is a lot of malware, the behavior
extraction approach has to be automated so that it is scalable.

In the talk, Mihai described the main features of their technique to
extract significant malware behavior automatically.  The authors used
machine learning techniques to compare malware and benign samples to
extract the behavior graphs of malware.  Additionally, they use
correlation between the graphs of different malware samples to mine
for significant behavior.  Significant behavior is obtained in the
form of graphs of operations, where the significance is measured based
on metrics like information gain, cross entropy, etc.  Finally, Mihai
described their experimental results.  He showed that their technique
had 0% false positives and an 86.5% detection rate, which is better
than a few of the commercial antivirus products.

Discussion: Angelos Stavrou commented that the technique in the paper
works best if the malware is a standalone binary.
He asked what would happen if the malware attached itself to an
application, e.g., IE or a PDF.  Mihai responded that he had not
considered that carefully; however, their approach should be able to
differentiate between standard IE and malware.  Matt Van Gundy asked
what the tool would do if the malware had patterns that mislead the
signature generation algorithms.  Mihai was not sure how easy it is to
generate such misleading patterns.  He agreed that their learning
could be misled, but they hedge against this by looking at different
malware samples within a malware family to check if the behaviors
exist in a majority of them.

PAPER #3: Identifying Dormant Functionality in Malware Programs
AUTHORS: Paolo Milani Comparetti (Vienna University of Technology),
Guido Salvaneschi (Politecnico di Milano), Clemens Kolbitsch (Vienna
University of Technology), Engin Kirda (Institut Eurecom), Christopher
Kruegel (University of California, Santa Barbara), Stefano Zanero
(Politecnico di Milano)
WRITEUP BY: JT

Summary: Paolo presented.  Analysis of malware is complex, due to the
large number of samples.  Existing solutions rely on automated dynamic
analysis.  The question then is "how can we learn more about these
malware samples?".  Paolo's approach is to leverage code reuse between
malware samples.  He has developed a tool called "Reanimator", which
runs malware in a monitored environment, detects malicious behavior,
and identifies the code responsible for it, which the authors refer to
as a "genotype".  The idea is to automatically derive genotype models
and match these against all samples.  Specifically, the authors run
the malware in an instrumented sandbox (Anubis) and observe its
behavior.  Based on this, they extract genotype models and use these
models on other samples.

Genotype model extraction is a multi-step process: (i) identify the
code responsible for a behavior, and (ii) model the genotype as
completely and precisely as possible (e.g., it has to contain all code
that is needed for that behavior, and at the same time only the code
that is specific to the genotype).  To model the genotype, the authors
use a number of techniques.  The first one is slicing, in which they
start from relevant system calls and add backward and forward slices,
disregarding control flow dependencies.  The second technique is
filtering, where they filter out code that is not specific to a
genotype (e.g., libc).  Additionally, the authors limit how far back
backward slicing can go.  Otherwise, it would include initialization
and unpacking code.  This effect is not desired, because a genotype
that includes unpacking code would potentially match against all
malware programs that use the same packer.  Instructions that are
specific to a genotype are those that manipulate tainted data every
time they are executed.  The authors also discard whitelisted code,
which is obtained from other executions of the same binary.

Next, germination is performed.  The slice is not complete, because it
only uses data flow dependencies (it disregards control flow
dependencies).  Hence, germination adds instructions that cannot be
executed without executing at least one instruction in the slice set.
This is based on graph reachability analysis on the control flow graph
(CFG).

For their evaluation, the authors investigated accuracy, robustness,
and detection without ground truth.  For accuracy, the ground truth is
a dataset of 208 bots with source code.  The authors use MOSS, a
source code plagiarism detection tool, to compare their solution
against the ground truth.
Their results show a small number of false negatives (1.5%), mostly in
small genotypes, and no false positives.  For robustness, the results
show that their approach is robust against changes of compiler options
and version.  It is not robust, however, to a change of compiler, in
which case it generates more than 80% false negatives.  Finally, in
the detection without ground truth, Reanimator detected more
security-relevant behaviors than dynamic analysis.

Discussion: Ian Goldberg, from the University of Waterloo, was
interested in the consequences of Reanimator not being robust to a
change of compiler.  He then asked what happens if the bad guys start
coming up with automated CFG modifications (e.g., another binary that
has the same behavior but a different CFG, or CFG obfuscators).  Paolo
answered that the authors model the code in whatever way they can
think of, including optimizations that could make it more robust
against different CFGs.  However, he pointed out, attackers could also
write their malware to simply download malicious code, which can
defeat Reanimator.  Jinpeng Wei, from Florida International
University, asked how much effort is involved in generating genotype
models, and if it is completely automated.  Paolo answered that once
they have the dynamic detection rules, everything is automated.
Finally, Jon Giffin pointed out that all the presentations so far were
about automatically detecting malware behavior.  He then asked if this
is a reasonable assumption, that is, that malware will always present
behavior that will be present in all or at least many malware
programs.  Paolo replied that in their case, they are not trying to do
detection on an end-user system.  Instead, they are doing it in a
controlled environment, which makes their life easier.  In general,
distinguishing malicious behavior from valid user behavior is all
heuristics as of now.

SESSION 2: INFORMATION FLOW
Chair: David Molnar, Microsoft Research

PAPER #4: Reconciling Belief and Vulnerability in Information Flow
AUTHORS: Sardaouna Hamadou (University of Southampton), Vladimiro
Sassone (University of Southampton), Catuscia Palamidessi (École
Polytechnique).  Sardaouna presented.
WRITEUP BY: SB

Summary: Preventing information flow leaks is an important topic,
especially relevant in the context of several applications.  Some
interesting examples are electronic voting, auctions, bill payments,
etc.  However, some applications leak information by design.  For
example, the fact that one person gets more votes than another tells
us something about the probability that a voter voted for a particular
candidate.  Another example is a password checker, where even a wrong
guess removes that password from the list of possible passwords,
thereby leaking information.  Information hiding approaches try to
hide the link between the hidden inputs and public outputs, generally
using randomization techniques and probabilistic and
information-theoretic measures like entropy.  Most of these approaches
have a simplistic view of the attacker, where they assume that he only
has access to the information provided by the application.  However, a
real-world attacker could have additional information which would
increase his chance of deducing the hidden information.  An example of
this can be found in related work on anonymity in social networks,
where information from one social network can be used in deanonymizing
another social network.
The *main* idea of the paper is to measure and characterize the
accuracy of the extra information the adversary has by measuring the
adversary's belief.  The next step is to incorporate the adversary's
belief into the analysis of the protocol and study its effect on the
protocol's security.  In the talk, Sardaouna then explained the
different definitions for quantifying information leakage and
explained their characterization of belief-vulnerability.  Their model
allows one to characterize the levels of accuracy of the adversary's
beliefs.  If uncertainty is inserted into the results of the
application, then the attack accuracy decreases.  On the other hand,
if the adversary can notice that the results differ a lot from his
beliefs, he can throw away his belief and use the actual values
instead.

Discussion: Fred Cohen commented that there have been some past
approaches where cognitive models were used to describe the belief
system of the adversary.  He asked if the authors used those models or
if their characterization of belief is purely mathematical.  Sardaouna
answered that the goal is to determine the impact of the adversary's
knowledge: if the accuracy is higher than a certain value, then the
program is vulnerable.  Miro Enev asked if differential privacy can be
used to decrease the effectiveness of the attack.  Sardaouna answered
that if one knows that the adversary can have access to extra
knowledge, it's a good idea to introduce noise into the output to
confuse the adversary.

PAPER #5: Towards Static Flow-Based Declassification for Legacy and
Untrusted Programs
AUTHORS: Bruno P.S. Rocha (Eindhoven University of Technology), Sruthi
Bandhakavi (University of Illinois at Urbana-Champaign), Jerry I. den
Hartog (Eindhoven University of Technology), William H. Winsborough
(University of Texas at San Antonio), Sandro Etalle (Eindhoven
University of Technology)
WRITEUP BY: BR

Bruno P.S. Rocha spoke about the problem of declassifying information.
He argued that correct declassification is too important to leave
entirely in the hands of programmers.  Rocha introduced
declassification in the context of non-interference, a familiar
concept in information flow: changes to a system's publicly readable
inputs cause changes in publicly readable outputs, but changes to
private inputs do not.  Strict non-interference is not an appropriate
goal when the system can make some information newly publicly
available, such as ciphertext resulting from an encryption.  Some
existing security-typed languages, such as Jif, allow programmers to
effect declassification, but such languages fail to separate code from
policy.  The authors' contribution is a new static flow-based analysis
tool that supports declassification, separate policy specification,
and unannotated input programs.

The authors' analysis tool identifies input/output operations from a
program, builds expressions encoding the inputs and outputs of those
operations, and compares those expressions against a policy database
that is separate from the program.  Rocha detailed the code analysis
techniques the authors used, then formally described the system's
mechanism for policy-controlled release (PCR), which replaces
non-interference and adds support for declassification.  He concluded
by pointing out that certain aspects of the system remained
unimplemented at the time of the talk, most crucially an
implementation of the policy matching algorithm.
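As a rough sketch of what matching output expressions against a
separate policy database can look like (the expression encoding,
policy format, and names below are invented for illustration and are
not the paper's actual algorithm), a policy might declare that a hash
of a secret input is releasable while the raw secret is not:

    # Toy policy-controlled release check.  Output expressions are
    # nested tuples built from input operations, e.g. the program
    # output hash(password) is ("hash", ("input", "password")).
    PUBLIC_INPUTS = {"username"}
    POLICY = {("hash", ("input", "password"))}  # declassifiable

    def releasable(expr):
        if expr in POLICY:            # explicitly declassified
            return True
        if expr[0] == "input":        # a raw input is public only
            return expr[1] in PUBLIC_INPUTS   # if declared public
        # otherwise, safe only if every sub-expression is releasable
        return all(releasable(sub) for sub in expr[1:])

    print(releasable(("hash", ("input", "password"))))     # True
    print(releasable(("input", "password")))               # False
    print(releasable(("concat", ("input", "username"),
                                ("input", "password"))))   # False

Because the policy set lives outside the program, the same unannotated
program can be checked against different release rules, which is the
separation of code from policy that the talk emphasized.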
In the question and answer period, Venkata Pingali expressed concern
that policies would eventually approach the complexity of the programs
to which they referred, and asked why one would not simply program in
the policy language.  Rocha answered that separating program from
policy was a key goal of the work that programming in the policy
language would contradict, and pointed out that a policy maker's job
is to encode only the exceptions (which are few) to a default rule,
which naturally keeps policy databases small.

PAPER #6: Non-Interference Through Secure Multi-Execution
AUTHORS: Dominique Devriese, Frank Piessens (K. U. Leuven)
WRITEUP BY: JT

Summary: Dominique presented.  Information flow has received much
attention.  There are two types of techniques in this area: static
analysis methods and dynamic analysis methods.  Static analysis
methods require substantial programmer effort (e.g., typed languages,
trusting the programmer), and in general non-interference is
undecidable.  Dynamic analysis methods, on the other hand, are more
practical; however, they are usually not sound, and the sound methods
usually use approximation (e.g., good for a large code base).  In both
types of methods, it is hard to handle exceptions.

To address non-interference, the authors propose secure
multi-execution, which is a dynamic enforcement technique for
non-interference.  In addition, to the best of the authors' knowledge,
it is the first sound and precise enforcement method for
non-interference.  As an example application, consider untrusted
JavaScript code in the browser, and assume it can create an image URL,
which is allowed by the same-origin policy.  In classic information
flow analysis, given a set of classified inputs, the goal is to
prevent private inputs from influencing public outputs.  The idea in
secure multi-execution is to take an existing program and execute it
multiple times, obtaining a low execution and a high execution.  In
the low execution, high-security inputs are replaced with an undefined
value.  In the high execution, the program is executed with the real
high-security inputs, but its low outputs are suppressed.  With
respect to input side effects, code is executed in the low-security
execution, and the values are reused in the high-security execution.
The authors prove that secure multi-execution is both sound, with
respect to a stronger notion of non-interference (timing-insensitive),
and precise.  Evaluation results show a 100% overhead in memory usage
and execution time over an I/O benchmark.  In conclusion, the authors
believe that their precision and soundness definitions are those
needed for securely executing code in browsers, and that the
performance overhead is acceptable in some use cases.

Discussion: Helen Wang, from MSR, asked how the authors tag I/O
inputs.  Dominique answered that they intercept calls in the browser
and analyze them.  Helen followed up by observing that the API surface
is very extensive.  Dominique replied that every single call has to be
intercepted and analyzed, but he thinks it can be done.  Venkata
pointed out that the idea of using multiple executions to prevent
information leaks had already been presented in one of his papers in
2008.  Dominique responded that the novelty in his work is that the
outputs from both executions are used, selecting which ones you want.
Fred Cohen commented that this is great if you execute one at a time
(e.g., the low execution after the high execution), and asked what
would happen if one executes both at the same time.
Dominique replied that one just has to make sure that whatever one
executes, it is executed completely.  Finally, David Molnar asked what
would change if the interception were done at a proxy instead of in
the browser.  Dominique answered that there would be a need for
intercepting in both directions (low and high).

PAPER #7: Object Capabilities and Isolation of Untrusted Web
Applications
AUTHORS: Sergio Maffeis (Imperial College London), John C. Mitchell
(Stanford University), Ankur Taly (Stanford University).  Ankur
presented.
WRITEUP BY: SB

Summary: Mashups are formed by mixing content from different
providers, so untrusted applications can share the same page.  This
leads to the isolation problem, where the page and its components have
to protect their resources from malicious components.  In this talk
Ankur described their static technique to prove inter-component
isolation, where heap resources should be disjoint.  They consider
isolation in a programming language context in which the mashup's
components are sequential programs written in a language with some
operational semantics.  The main idea is to consider the heap as a
resource.  Every program is then associated with capabilities, which
are a means of designating and accessing heap resources.  Ankur
described their approach: they formally define capability systems for
programming languages, formally define capability safety, and then
derive a sufficient check for inter-component isolation using
capability safety.

In the talk, Ankur described some features of their capability system
and also defined the notions of capability safety and authority
safety.  He then gave an example of how inter-component isolation can
be achieved in mashups by giving every component capabilities with
disjoint authorities.  To build safe mashups, one has to prove that
the underlying language is capability-safe or authority-safe and
derive an enforcement function that provides authority isolation.
Their main contributions are describing the notions of capability
safety and authority safety and showing how they lead to isolation.
They also show that the language Jsafe is authority safe and that
Cajita is capability safe.  In the future, they want to define the
isolation problem for mashups with interacting components.

Discussion: David Molnar asked what kind of sanitization could be
added to make sharing possible in Facebook applications.  Ankur
replied that both state and code have to be encapsulated.  Venkat
Pingali asked how their approach can work for content rewriting
systems like Greasemonkey.  Ankur replied that Greasemonkey allows
arbitrary JavaScript to run; one has to restrict the JavaScript
functions that could be used.  Helen Wang asked how one achieves
isolation in cross-domain iframes.  Ankur replied that cross-domain
iframes already provide a noscript policy with some authority
isolation, but that is too restrictive.  Authority should cover native
libraries and allow read-only access to all system libraries from all
objects.

SESSION 3: ROOT OF TRUST
Chair: Radu Sion, Stony Brook University

PAPER #8: TrustVisor: Efficient TCB Reduction and Attestation
AUTHORS: Jonathan McCune (Carnegie Mellon University), Yanlin Li
(Carnegie Mellon University), Ning Qu (Nvidia), Zongwei Zhou (Carnegie
Mellon University), Anupam Datta (Carnegie Mellon University), Virgil
Gligor (Carnegie Mellon University), Adrian Perrig (Carnegie Mellon
University)
WRITEUP BY: JT

Summary: Jonathan presented.
Their motivation is to protect the most critical data (e.g., SSL
private keys, password files, ACLs).  However, OSes today have too
much power, which makes it difficult to protect data; for example, the
OS has unrestricted access to application data.  In addition, the
trusted computing base (TCB) for S (i.e., a sensitive code region that
has to be protected) includes the majority of drivers and
applications.  S may be managing sensitive resources, similar to a
Flicker session.

TrustVisor is a tiny hypervisor for isolation of the code S.  The
hypervisor does not have to handle IPC or scheduling.  Here, the goal
is to have efficient transitions between the OS and S.  A secondary
goal is to ensure a trusted verification property (i.e., external
verification that the output equals S(input)).  S may have sensitive
data, thus it requires protected storage.  External verification
consists of remote attestation using a TPM.  The verifier asks "what
code is this machine running?", and the answer is a digital signature,
made with a private key, over a hash of the inputs and outputs.
External verification gives a client the ability to evaluate a server
before it sends data.  For protected storage, the application can
register S, which then becomes attestable.  Access is controlled by
the identity of S, through a hash computed over its binary code.

The goal in this design is to keep the TCB small.  Compared to other
approaches (e.g., VMs, monolithic kernels, vTPM, Overshadow,
microkernels, and Flicker), TrustVisor is small, can provide
fine-grained protections, and its performance is good.  In terms of
implementation, TrustVisor virtualizes the memory space and processor,
while device drivers are not virtualized (e.g., DMA).  Because of
that, TrustVisor implements rules for ensuring protection against DMA
devices.  In addition, it provides isolation using hardware
virtualization support.  Finally, to enable attestation and protected
storage, TrustVisor provides a micro-TPM.  S is registered to
TrustVisor by using a hypercall.  The timeline of an application's
sensitive code in TrustVisor consists of multiple invocations during a
single registration cycle.  Once the application finishes, it can be
unregistered.  Long-term secrets have to be protected with the
micro-TPM, which provides a small subset of the hardware TPM
operations for protected storage and external verification.  The
performance of a hardware TPM is not good, and it can become a
bottleneck if put in the critical path (latency); however, the
software implementation of the micro-TPM reduces latency by orders of
magnitude.  Evaluation results for a prototype SSL webserver with
protections for the private key show an overhead of 12%, due to
context switching between the untrusted system and the sensitive code.

Discussion: Ruby Lee from Princeton asked how the authors could scale
TrustVisor to support multiple sessions, and how they would protect
memory without hardware support.  Jonathan answered that the current
implementation does support multiple sessions on one processor (i.e.,
multiple S registered at the same time).  The authors protect S by
taking advantage of hardware virtualization, using nested paging with
two full sets of page tables.  Someone asked if the authors use memory
integrity trees.  Jonathan replied that they do not hash memory, and
that Overshadow uses a crypto mechanism to keep the pages intact;
their implementation manages the nested tables such that the OS never
has the opportunity to touch the pages.  Prashant asked how the
authors make sure someone has not modified the code.
Jonathan answered that the code can be modified: when code is
registered, memory protections are put in place, and then the TPM
seals it.  Cynthia Irvine from NPS asked how the authors control
entries to the secure code once it has been verified, and if
applications could potentially bypass it.  Jonathan said that this is
part of the registration process.  Specifically, all the pages that
are part of the sensitive code are marked in a nested table, and
TrustVisor has to decide if it will allow the application to continue.
Someone from TU Berlin asked if it is possible for a system to
intercept the registration and claim that code has been registered
when it has not.  Jonathan answered that it is important to architect
the application to determine whether the code pages have the correct
hash value.  In addition, he said that they are at the mercy of the
OS: if the OS does not want to let a session run, then it will not
run.

PAPER #9: Overcoming an Untrusted Computing Base: Detecting and
Removing Malicious Hardware Automatically
AUTHORS: Matthew Hicks (University of Illinois), Murph Finnicum
(University of Illinois), Samuel T. King (University of Illinois),
Milo M. K. Martin (University of Pennsylvania), Jonathan M. Smith
(University of Pennsylvania)
WRITEUP BY: BR

Matthew Hicks presented a photo-filled talk on BlueChip, a hardware
and software system that detects and protects against the design-time
insertion of malicious circuitry.  Hicks argued that hardware, which
is becoming increasingly complex, should no longer be assumed
trustworthy by software.  BlueChip, which is intended for use by
managers or researchers with access to design descriptions, examines
VHDL descriptions of hardware and identifies suspicious circuits via a
technique the authors call unused circuit identification: circuits
whose inputs always equal their outputs during testing are considered
suspicious.  BlueChip replaces VHDL descriptions of suspicious
circuits with exception generation hardware whose exceptions
BlueChip's software component catches at run time.

Hicks described the run-time operation of BlueChip.  When an
instruction triggers a security exception, BlueChip uses alternative
instruction sequences (e.g., XOR and NAND instructions replacing a
faulting OR instruction) to emulate the instruction's behavior,
enabling forward program progress without re-triggering the exception.
Hicks demonstrated BlueChip's effectiveness by describing a prototype
implementation and some of the attacks it prevented (e.g., a privilege
escalation attack triggered by a special sequence of instructions).
He asserted that BlueChip added less than 5% overhead to otherwise
unmodified programs in the authors' testing.

In the question and answer period, Peng Ning suggested that attacks
using arbitrarily obscure activation sequences might not be detected
by BlueChip; Hicks responded that sequences not observed during
testing would not be allowed by BlueChip at run time.  Aleksey Nogin
suggested that BlueChip generate new test cases when it detects
malicious circuitry, which Hicks agreed was a good idea for future
work.

PAPER #10: Tamper Evident Microprocessors
AUTHORS: Adam Waksman, Simha Sethumadhavan (Columbia University).
Adam presented.
WRITEUP BY: SB

Summary: The talk was about the design of microprocessors that are
able to detect hardware backdoors.  Hardware is trusted and is usually
the root of trust; however, it is complex and designed by a large
number of people.
In the talk, Adam gave an overview of the chip design process and said
that most of the previous approaches assumed that the designs are
benign and that malicious hardware can be implanted during
fabrication.  In this talk, Adam described their threat model, in
which the designers can easily implant backdoors in the hardware, and
their approach to detecting such attacks.  Adam then presented a
taxonomy of attacks and said that their approach can detect all of
them.  Their approach assumes that there is a large design team
involved in the chip design, with full knowledge of the design.  They
also have an equal-distrust assumption, where they have no clue which
designer is malicious, i.e., every unit is equally untrusted.

The key idea is that the microprocessor pipeline trusts everything
that is given to it; there is no way for the units at the end of the
pipeline to check up on the earlier units that send data to them.  If
there were a way for the units at the end to communicate with the
earlier units to verify certain high-level properties about the
information passed to them, then the attacks could be prevented.
Their solution does exactly this.  Adam then demonstrated their
approach using a simple example.  For evaluating their technique, they
developed several attacks, and they had no false positives or
negatives.  They also evaluated the cost of their approach with
respect to extra chip area, and its coverage, analyzing how much of
the chip's area their checks cover.  To cover areas for which they
could not find easy properties to verify, they used duplication.  They
also did not cover components like microcode, the ALU, etc.  Adam then
ended the talk by discussing some of the extensions that could be made
to their approach.

Discussion: Fred Cohen commented that their approach is still
vulnerable to electrical side-channel attacks and microcode attacks.
Adam agreed that they don't cover microcode; the explanation for this
is given in the paper.  He said that, although they are looking at
logic-level attacks, attacks below the logic level which break the
patterns could still be caught by their approach.  Simon Ou asked what
would happen if two designers colluded with each other.  Adam agreed
that they assume that there is only one bad guy.  He said that their
approach can be generalized to detect multiple attackers; however,
that would incur a penalty of increased area cost.  Peter Neumann
commented that he liked the research to begin with and the area looks
promising, but he had reservations about building trustworthy systems
over untrustworthy platforms.  Adam agreed, but said that they are
trying to build a golden model at the register transfer level.
Mohamed Aboutabl asked what metrics they are using to measure
performance, and whether they consider the fact that power and heat
could reduce performance.  Adam replied that they are not adding any
new gates or wires on critical paths, and therefore don't affect
performance very much.  He agreed that they could increase power,
heat, etc., but this doesn't affect the clock frequency.  Ross
Anderson noted that people have been interested in double-entry
bookkeeping since the 12th century; 8 years ago his team did a project
for G3 cards, which had hardware redundancy to prevent attacks, and it
required three times as many gates as the original design.  He had
reservations about the proposed approach because if it were
implemented end-to-end (including the ALUs), it might need at least
twice the number of gates on the CPU.
Adam said that ALUs are a special case, where there is no simple
property to verify correctness, and therefore the whole ALU has to be
duplicated.  However, in other cases simple high-level properties
exist for verification, and there is no need for duplication.

SESSION 4: INFORMATION ABUSE
Chair: Patrick Traynor, Georgia Institute of Technology

PAPER #11: Side-Channel Leaks in Web Applications: a Reality Today, a
Challenge Tomorrow
AUTHORS: Shuo Chen (Microsoft Research), Rui Wang (Indiana University
Bloomington), XiaoFeng Wang (Indiana University Bloomington), Kehuan
Zhang (Indiana University Bloomington)
WRITEUP BY: BR

Shuo Chen spoke on a type of side-channel information leak that
endangers privacy on a number of popular websites.  The authors
discovered that they could determine a user's activities on those
websites simply by sniffing the user's browser's encrypted network
traffic.  The key idea is that highly interactive websites that incur
round-trip traffic while users enter information can exhibit packet
length and timing patterns that leak information about what the user
is entering.  To illustrate the leak, Chen described a leading tax
preparation website that allows users to navigate through a large
state space by answering questions and filling in forms.  After
measuring network traffic while mapping the state space themselves,
the authors found that they could determine a user's position in the
state space -- which may be parameterized on private information -- by
measuring the user's packets.  Similarly, for an investing website,
the authors found that they could identify the mutual funds in a
user's portfolio by examining packets from a price chart image server.

Chen suggested that fixing the problem he described would be
difficult.  For the time being, he recommended content or packet
padding to defend against eavesdropping, but he also pointed out some
subtle difficulties packet padding brings.  He suggested that padding
would be applicable only on a case-by-case basis once vulnerabilities
had been identified.

In the question and answer period, Patrick Traynor asked how the
vendors responded to the authors' findings; Chen remarked that the
investment website in particular was interested in reinforcing their
users' privacy by mitigating the flaw.  Emiliano De Cristofaro asked
whether the traffic patterns were different for authenticated and
unauthenticated users on search engines like Google that suggest
completions for partially entered queries.  Chen responded in the
affirmative and remarked that logging into such sites might be a good
mitigating factor for the leak, because different users might induce
different, harder-to-guess patterns depending on their query
histories.

PAPER #12: Investigation of Triangular Spamming: a Stealthy and
Efficient Spamming Technique
AUTHORS: Zhiyun Qian (University of Michigan), Z. Morley Mao
(University of Michigan), Yinglian Xie (Microsoft Research Silicon
Valley), Fang Yu (Microsoft Research Silicon Valley)
WRITEUP BY: JT

Summary: Zhiyun presented.  Spam is an arms race, where new defenses
are proposed and counter-techniques are invented to bypass them.  For
the network-level spamming arms race, existing work focuses on
IP-based blacklists and port 25 blocking.  In IP-based blacklisting,
IP addresses are important resources to spammers: if a spammer sends
spam too fast, its addresses will be blacklisted too quickly.  Port 25
blocking, on the other hand, is a way in which ISPs attempt to reduce
spam traffic originating from their own networks (e.g., AT&T).
In this case, the defense effectively eliminates IP addresses for
spammers to use.  Triangular spamming is a pretty much unknown attack,
but it is a real one, and the authors' goal in this work is to study
how prevalent triangular spamming is.  In this kind of attack, there
are three entities involved: a high-bandwidth bot, a relay bot, and a
legitimate mail server.  In addition, it requires IP spoofing and
network-level packet relaying.  To establish a TCP connection, the
high-bandwidth bot puts the relay bot's address as the source, so the
legitimate mail server will respond to the relay bot instead of the
high-bandwidth bot.  As a result, the attack can be very stealthy and
efficient.  It can evade port 25 policies, because the relay bot's ISP
would not block its port 25 traffic (thus spam can be relayed through
there).

The goal of this study is to evaluate how stealthy triangular spamming
can be by looking at mainly two factors.  The first is how easy it is
to evade IP-based blacklisting.  The authors evaluate this by
comparing two strategies: (i) all bots directly send spam at their
full speed, and (ii) triangular spamming is used, where only
high-bandwidth bots send spam.  The second factor is how easy it is to
evade port 25 blocking, and their experiments show that 97% of
blocking networks are vulnerable.  The authors used two main
techniques: selectively relaying packets, and aggressive pipelining.
In addition, for port 25 blocking, two settings were tested: blocking
outgoing traffic with destination port 25, and not blocking incoming
traffic with source port 25.  Specifically, they instrumented multiple
websites to check whether an ISP is blocking port 25 by inserting
Flash scripts that attempt a connection on that port.  If the
connection is not successful, it could be either because the ISP is
blocking or because a host firewall is blocking; to determine which,
the authors use active probing.  Only 9.8% block port 25, which shows
that ISPs are not doing a good job of spam blocking.  In the case of
triangular spamming, results show that only 3.2% of prefixes performed
incoming blocking, and the remaining hosts are vulnerable.  For IP
blacklisting, only 44% of prefixes are listed in the blacklist, and
most of them are covered by neither port 25 blocking nor blacklists.

To prevent and detect triangular spamming, the authors suggest that
prevention could be deployed at ISPs to prevent IP spoofing.  In
addition, ISPs could block incoming traffic with source port 25, as
well as use a stateful firewall to disable relay bots.  Detection is
achieved by looking at IP addresses that are blocked for port 25, and
at different network characteristics (e.g., topology, delay).  The
authors collected network traces over a 7-day period at a mail server.
Their results show that 1% of all IP addresses have port 25 blocking
behavior, and the spam ratio for these IP addresses is 99.9%.  In
conclusion, the authors have shown that today's ISP port 25 blocking
policy is vulnerable, and it allows triangular spamming.

Discussion: Patrick Traynor asked if the authors had considered
ingress filtering.  Zhiyun answered that spammers do not have to get a
lot of high-bandwidth bots, and they can use bots with a fair amount
of speed whose networks do not do blocking.  Juan Caballero asked what
the reason is for high-bandwidth bots not sending as much spam as
possible.  Zhiyun replied that high-bandwidth bots would not be
blocked for port 25 traffic.  In addition, he said that they assume
relay bot ISPs would block port 25 traffic, but only in the outgoing
direction.
Discussion: Patrick Traynor asked whether the authors had considered ingress filtering. Zhiyun answered that spammers do not have to acquire many high-bandwidth bots; they can use bots with a fair amount of speed that sit on networks that do not block. Juan Caballero asked why high-bandwidth bots do not send as much spam as possible. Zhiyun replied that high-bandwidth bots would not be blocked for port 25 traffic. In addition, he said that they assume relay bot ISPs block port 25 traffic, but only in the outgoing direction; hence, the relay bot can still relate incoming traffic to the appropriate packets. Jinpeng Wei asked how the authors differentiate inbound and outbound filtering. Specifically, he asked whether the IPID is a global or a local property: if it is global, then in the meantime some other PC on the Internet could contact the mail server and generate a different IPID. Zhiyun answered that they have to get the IPID ahead of time, without the probing packet; then they can check whether the IPID has increased. In addition, he said that they do not expect the bandwidth to increase significantly.

PAPER #13: A Practical Attack to De-Anonymize Social Network Users
AUTHORS: Gilbert Wondracek (Vienna University of Technology), Thorsten Holz (Vienna University of Technology), Engin Kirda (Institute Eurecom), Christopher Kruegel (University of California, Santa Barbara)
WRITEUP BY: BR

Gilbert Wondracek described a way in which website operators can de-anonymize visitors to their websites who are also members of social networking sites. The attack builds on a previous history-stealing attack, in which a website stealthily executes yes-or-no queries against users' browsing histories by generating links to URLs and testing the resulting links' colors to determine whether those URLs have been visited. The authors' new attack uses history stealing to find visited social network URLs; a website operator can refine her history queries until she discovers a pattern that is unique, in some cases, to a single person. The attack benefits from several properties of the web: first, most web browsers are configured with long history expiration times. Second, many social networks are constructed such that user and group identifiers appear in the URLs users visit. Third, many affinity groups on social networks have public user listings or are easy to join, meaning that some users can be identified by their group memberships. The authors tested their technique against several major websites including Xing, Facebook, and LinkedIn. In a public experiment with around 10,000 volunteers, the authors de-anonymized 12.1% of the volunteers. Wondracek suggested several mitigation techniques. If social networks included less identifying information in URLs (by, e.g., using POST instead of GET to transmit data), identifying users by their URL histories would be more difficult; however, such a change threatens to make public pages more difficult to find. Wondracek also suggested that browsers should adopt a "same-origin" policy for history queries similar to the policy browsers use to protect web documents from interacting unsafely. In the question and answer session, Steven Murdoch pointed out that the default configuration of Tor disables browsing history and therefore stops the attack. Leo Meyerovich asked about the effect of history queries on a user's bandwidth; Wondracek remarked that there are many ways to generate links and that some combination of techniques might make the bandwidth hit small. Kosta Beznosov asked whether users care about being de-anonymized; Wondracek responded that the volunteers in the authors' user study were shocked. Collin Jackson asked whether a similar attack could exploit cache timing differences even in browsers that adopt a same-origin policy for history queries, and Wondracek said yes. Daphne Yao asked whether the authors considered using geographical cues to de-anonymize people further; Wondracek responded that such information is indeed helpful to the attack.
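The narrowing step at the heart of the attack is plain set intersection; a toy sketch with invented group listings:

    # History stealing tells the attacker which group URLs the visitor
    # has been to; intersecting the (public) member lists of those groups
    # narrows the candidate set, sometimes to a single user.
    group_members = {                      # hypothetical public listings
        "/groups/chess":  {"alice", "bob", "carol"},
        "/groups/hiking": {"alice", "carol"},
        "/groups/crypto": {"carol", "dave"},
    }

    def candidates(visited_group_urls):
        """Intersect member lists of groups the browsing history reveals."""
        pools = [group_members[u] for u in visited_group_urls if u in group_members]
        return set.intersection(*pools) if pools else set()

    print(candidates(["/groups/chess", "/groups/hiking", "/groups/crypto"]))
    # {'carol'} -- the visitor is de-anonymized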
PAPER #14: SCiFI - A System for Secure Face Identification
AUTHORS: Margarita Osadchy, Benny Pinkas, Ayman Jarrous, Boaz Moskovich (University of Haifa). Benny presented.
WRITEUP BY: SB

Summary: There are two types of applications for face recognition. One is identification of faces during login, which happens in a controlled environment; the other is surveillance, identifying suspects as they pass cameras placed in public locations, which is harder. Traditionally, surveillance is done by an operator who scans faces to find suspects. This has privacy implications, since the operator can track the activities of non-suspects. One idea for preventing this is to store the suspect list on the cameras and perform the face recognition at the client side. But what if the suspect list is confidential? The solution then is to keep the suspect list at a server and allow the camera to do the surveillance and matching; the problem now is how to define the matching. In this talk Benny presented their solution to this problem. He described their new face identification algorithm for secure matching, which employs state-of-the-art recognition techniques and uses only a single image for matching. He also described the design and implementation of the SCiFI protocol for the matching process. During the face-recognition and matching process it is important to minimize online latency, so they proposed methods to pre-process the face information and use efficient functions for the different computations. In their experiments, their algorithm was robust to illumination changes, small changes in pose, and partial occlusions. Benny also showed an experiment with a list of 15 faces in which the suspect's eyes were occluded by glare from his glasses; the protocol still identified the suspect.

Discussion: Deb Downs asked how they handle profile images. Benny replied that they cannot handle profiles yet. Somebody asked whether the protocol is secure in the honest-but-curious case and what the implications were for the malicious case. Benny replied that they have a version for the covert case, which assumes that if one of the involved parties tries to break the protocol, it can be caught with reasonable certainty. Lior Malka asked whether something could be inferred if one of the parties is cheating. Benny replied that they have a proof that if someone does not follow the protocol, they can be caught with noticeable probability. Xuxian Jiang commented that there could be a simple attack where the attacker fools the camera using someone else's picture; would that attack work here? Benny said that that particular attack works for the login application and does not apply to surveillance tasks.

SPECIAL GALA DINNER CELEBRATING THE 30TH ANNIVERSARY OF SECURITY AND PRIVACY
Master of Ceremonies: Peter G. Neumann
WRITEUP BY: SB

The celebration of the 30th anniversary of the IEEE Symposium on Security and Privacy took place at the Pauley Ballroom, UC Berkeley on Monday, May 17, 2010. The event was sponsored by the IEEE Computer Society Technical Committee on Security and Privacy. It started with a reception, followed by dinner; awards and recognitions followed the dinner. Peter G. Neumann was the master of ceremonies. Jerome H. Saltzer, introduced by Tim Grance, was presented the National Computer Security Award. Carl Landwehr was awarded the IEEE Computer Security Distinguished Service Award and the Computer Society TC-SP Outstanding Community Service Award.
W. Douglas Maughan and John Markoff were also awarded the Computer Society TC-SP Outstanding Community Service Award, and Terry Benzel was awarded the Computer Society Award for Continuing Service. Everybody in the audience, young and old members and attendees of the symposium alike, was commended and applauded. The awards ceremony ended with Hilarie Orman and Sven Dietrich presenting a specially made security blanket to Peter Neumann. The celebrations ended with a comedy sketch by the Galileo Players, a Chicago-based sketch comedy troupe.

SESSION 5: NETWORK SECURITY
Chair: Nikita Borisov, University of Illinois at Urbana-Champaign

PAPER #15: Round-Efficient Broadcast Authentication Protocols for Fixed Topology Classes
AUTHORS: Haowen Chan, Adrian Perrig (Carnegie Mellon University)
WRITEUP BY: BR

Haowen Chan presented an efficient authentication protocol for sensor networks with fixed topologies. The authors considered the problem of authenticating messages in sensor networks that have multiple receivers and a designated sender all arranged in a specific predetermined (fixed) way. Traditional approaches for sender authentication involve either computationally intensive public key cryptography or a shared-key arrangement in which the sender transmits per-node message authentication codes (MACs) in a series whose length grows with the number of receivers. In search of a communication-efficient protocol that makes minimal assumptions about nodes' computational power, the authors fix their assumptions as follows: the sender knows the network's topology and shares a unique symmetric key with each receiver. If the nodes are arranged in a tree topology, each attempt at transmission induces a hash tree that allows every node to authenticate the message after three passes up and down the tree of nodes. The authors designed a scheme that requires only one pass if the tree is flattened into a simple path in which each node has at most two neighbors. Chan argued that a path topology is common as a subtopology of other topologies. He described several simple optimizations that reduced the constant factor associated with path traversal from 3n (with n nodes) to 2n to 1n, at a cost of increased communication overhead. In the question and answer session, Mohamed Aboutabl remarked that the authors' assumptions about the network seemed significant, to which Chan responded that path-structured networks with a fixed sender are amenable to being made robust against certain failures. Peng Ning remarked that such networks might be vulnerable to attacks by misbehaving nodes; Chan replied that a denial-of-service attack would be possible with misbehaving nodes, but that the authors had optimized for the common case in which nodes do not misbehave.

PAPER #16: Revocation Systems with Very Small Private Keys
AUTHORS: Allison Lewko (University of Texas at Austin), Amit Sahai (University of California, Los Angeles), Brent Waters (University of Texas at Austin)
WRITEUP BY: JT

Summary: Allison presented. She described a scenario in which a sender would like to send an encrypted message to multiple receivers: the sender sends a ciphertext, and each user can decrypt it with its own key. The process of blocking a user from decrypting a message is called revocation. A simple solution for revocation is to encrypt the message separately to each user; however, this does not scale. The authors' goal is to devise a revocation system and an encryption algorithm that allow everyone except the revoked users to decrypt a message. Such a system should also be efficient for a large number of users and a small number of revoked users. The threat model for basic revocation systems includes collusion attacks (i.e., two revoked users whose private keys alone cannot decrypt a message may be able to decrypt it by combining their keys). In addition, the authors use an adaptive security definition. The authors' technique considers two equations: the revocation system provides users with two equations in two unknowns. If a user is revoked, the equations are dependent, and she cannot isolate the two unknowns; otherwise, the user receives two independent equations, which can be solved. Nevertheless, this solution alone is still vulnerable to collusion, because two users together could potentially solve the system. To address that, the authors personalize the unknowns to each user, so that they cannot be combined to mount a collusion attack. Furthermore, shares are assigned to each user, and shares cannot be combined because each share is personalized. With this, the authors obtain adaptive security.
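The two-equations intuition can be illustrated with a classic toy construction over a prime field (illustration only; the authors' scheme itself is pairing-based and, unlike this toy, resists collusion by personalizing shares):

    # Toy broadcast revocation via a degree-1 polynomial f(x) = s + a*x:
    # the message key is s = f(0); each user privately holds (id, f(id)).
    # To revoke user r, the broadcast reveals r's own point. Everyone else
    # then has two independent equations and interpolates s; the revoked
    # user has the same point twice and learns nothing new. (Two *revoked*
    # users could pool their points here -- the paper's personalization of
    # shares is exactly what rules that out.)
    P = 2**61 - 1                  # prime modulus
    S, A = 123456789, 424242       # secret s and random slope a (server-side)

    def issue_share(user_id):      # given privately to each user at join time
        return (user_id, (S + A * user_id) % P)

    def recover(own, broadcast):
        (x1, y1), (x2, y2) = own, broadcast
        if x1 == x2:
            raise ValueError("revoked: duplicate point, line underdetermined")
        inv = pow(x2 - x1, -1, P)              # modular inverse (Python 3.8+)
        return ((y1 * x2 - y2 * x1) * inv) % P # Lagrange interpolation at x=0

    alice, bob = issue_share(3), issue_share(7)
    assert recover(alice, bob) == S            # bob revoked: alice still decrypts
    try:
        recover(bob, bob)                      # bob himself cannot
    except ValueError as e:
        print("revoked user:", e)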
In addition, the authors chose small public keys: the public key size does not grow with the number of users, and adding new users does not require changing the public key. With respect to previous work, the authors state that it is only selectively secure, while theirs is adaptively secure as well. Finally, the authors claim that their work applies to attribute-based encryption (ABE) in the following manner: a ciphertext is associated with attributes, and secret keys are associated with access formulas. To achieve monotonic ABE with revocation, for each negated attribute the system parameter alpha is replaced by a secret share of alpha for that negated attribute.

Discussion: David Jacobson from Qualcomm asked whether sending out the revocation list revealed the identities of the revoked parties; Lewko said it did. Nikita Borisov asked whether it was possible to define the security of a standard revocation scheme in the UC framework; Lewko said she did not know.

PAPER #17: Authenticating Primary Users' Signals in Cognitive Radio Networks via Integrated Cryptographic and Wireless Link Signatures
AUTHORS: Yao Liu, Peng Ning, Huaiyu Dai (North Carolina State University)
WRITEUP BY: JT

Summary: Yao presented. There is an increasing demand for wireless bandwidth, and as wireless devices become more popular, unlicensed channels become overcrowded. A naive solution is to allocate more channels; however, the spectrum has limited capacity and can only hold so many channels. To control usage of channel capacity, the authors focus on primary user detection: a secondary user monitors for the presence of a primary user's signal on target channels. Two methods exist here, energy detection and feature detection. Both are vulnerable to primary user emulation (PUE) attacks, however, mainly because of the lack of authentication. The threat model consists of primary users at fixed locations, under physical protection. The attacker's objective is to get an unfair share of the bandwidth by blocking other users' access to target channels, and the attacker's capabilities include forging packets, inserting packets, and high transmission power. The authors' approach is to assume that the primary user and the attacker are at different locations and hence generate different location-dependent link signatures.
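The matching idea can be sketched as a simple nearest-profile check (invented numbers; real link signatures are multipath channel estimates, and the paper's classifier is more careful):

    # Toy link-signature check: compare a newly measured channel profile
    # against a stored authentic one; a transmitter at another location
    # produces a different multipath profile and exceeds the threshold.
    import math

    def distance(sig_a, sig_b):
        """Euclidean distance between two sampled channel profiles."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

    def authentic(measured, reference, threshold=0.5):
        return distance(measured, reference) <= threshold

    reference = [1.00, 0.62, 0.30, 0.11]   # learned via the helper node
    same_spot = [0.97, 0.60, 0.33, 0.10]   # legitimate transmitter
    elsewhere = [0.40, 0.90, 0.05, 0.70]   # attacker at another location

    print(authentic(same_spot, reference))  # True
    print(authentic(elsewhere, reference))  # False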
Based on this, acquisition of authentic link signatures is performed by authenticating users, and authentication is performed by acquiring authentic link signatures (a chicken-and-egg problem). Their system therefore requires two levels of authentication: authentication at a secondary user through link signatures, and authentication at a helper node. The authentication at a helper node can have both false negatives and false alarms; both decrease exponentially as a function of the increase in minimum distance. The evaluation used a CDF of the width (amplitude) threshold versus the probability of a false alarm. The authors also conducted a feasibility study by implementing a prototype using the GNU Radio toolkit. The results based on their prototype show that the distance between the helper node and the primary user is on average 0.5 m, and the distance between the helper node and the attacker is 15 m. Finally, the authors also measured a number of performance parameters for the cryptographic operations. As future work, the authors envision a study of users in a moving vehicle.

Discussion: Someone from the University of Waterloo asked whether the link signature varies over time. Yao answered that the link signature does change over time, and that their experiments showed that the distance between attacker and receiver varies; as a result, the authors can detect the distance of the legitimate user with a probability of 1, although they would need a training process to do it correctly. The questioner followed up, saying that the authors assume the helper node's signature will be similar to the secondary user's, but there is a difference in transmit power compared to the primary user, so the helper node would not be heard. He then asked whether Yao was sure the link signature would be the same even if transmit power differed among nodes. Yao replied that the helper node and the primary user operate on the same channel and signatures are normalized, so the helper node would be heard; it does not need a high transmit power precisely because of the normalization. Someone asked whether the attacker could add a helper node of its own; Yao replied that they assume public keys are known beforehand. Another person asked whether the authors assume there is only one attacker. Yao said that they use one attacker and one helper node in their system and evaluation, but their scheme could be applied to multiple users. This person then asked whether changing the assumption to multiple users would break the scheme. Yao answered that as long as the distance between the attacker and the helper node is large enough, the scheme still works; the number of users is not a problem, and the distance between nodes is what matters. Finally, someone asked whether the attacker could emulate the signature by sitting between the helper node and the receiver node. Yao replied that even if an attacker sits in the middle, as long as the distance between the attacker and the helper node is large enough, the system still works; if the attacker gets close to the primary user, the system might be compromised, but even in that case it is reasonable to assume the distance between these nodes is large enough.

SESSION 6: SYSTEMATIZATION OF KNOWLEDGE I
Chair: Z. Morley Mao
PAPER #18: Outside the Closed World: On Using Machine Learning For Network Intrusion Detection
AUTHORS: Robin Sommer (International Computer Science Institute / Lawrence Berkeley National Laboratory), Vern Paxson (International Computer Science Institute / University of California, Berkeley). Robin presented.
WRITEUP BY: SB

Summary: In this talk Robin described reasons why machine learning (ML) techniques have not seen much success in network intrusion detection systems (NIDS), even though they are very popular in many other domains. The classic machine learning setting is a classification problem that trains the classifier with many instances of all possible cases. NIDS, however, need to do outlier detection, for which the system can be trained only on normal activity. Such an approach has a much smaller margin for error. This leads to a semantic gap between what the system reports and what the operator expects, resulting in too many false positives and thereby reducing the operator's confidence in the IDS. Additionally, verifying false positives is expensive compared to other domains where ML is successfully employed, such as OCR, translation, and image analysis. Robin went on to present some of their suggestions for building a good ML-based NIDS. He claimed that ML could be useful if enough care is taken to select the right features and the right approaches.

Discussion: Susan Landau commended the authors, saying that it's terrific to see a paper that says something doesn't work. She asked whether the authors had tried talking to the machine learning community. Robin replied that they did; in fact, several of the ideas presented in the paper resulted from discussions with the ML community, and he learned that machine learning is better at finding commonalities. The other thing, he said, is that you need to understand what you are doing. Vijay commented that in some cases, for example with neural networks, it is impossible to determine how the algorithm works, and asked how one could bridge the gap when one doesn't understand why a particular algorithm works. Robin agreed that some approaches don't let one understand how they come to their conclusions; if that is the case, his suggestion is to use the classifier as an exploratory tool but not as the final detector. Vijay also pointed out that a lot of ML techniques retrain themselves once they make a mistake, and asked whether Robin knew of any work using such approaches. Robin said that although there is certainly such work, it will not solve the problem, because false alarms are so bad for the operator that he begins not to trust the system. Rob Cunningham said that he is from the ML community and knows of ML infrastructures that could make the results better. Robin agreed and said that he is not dismissing ML, just saying that it should be pursued carefully in this domain.

PAPER #19: All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution (but might have been afraid to ask)
AUTHORS: Thanassis Avgerinos, Edward Schwartz, David Brumley (Carnegie Mellon University)
WRITEUP BY: JT

Summary: Thanassis presented. The motivation is how to make computers automatically analyze programs at runtime. The most commonly used analysis techniques are dynamic taint analysis and forward symbolic execution. Given a running program, dynamic taint analysis analyzes the program along a flow of its execution (i.e., it discovers which values can be derived from user input).
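A tiny model of the taint-propagation and policy ideas Thanassis elaborated next (Python is used purely for illustration; the paper defines these ideas as formal operational semantics):

    # Values carry a taint bit; computation on a tainted operand yields a
    # tainted result; a control transfer to a tainted target violates the
    # example "goto" policy from the talk.
    class Tainted(Exception):
        pass

    class Val:
        def __init__(self, v, tainted=False):
            self.v, self.tainted = v, tainted
        def __add__(self, other):          # taint propagates through ops
            return Val(self.v + other.v, self.tainted or other.tainted)

    def read_input(raw):
        return Val(raw, tainted=True)      # untrusted source => tainted

    def goto(target):
        if target.tainted:                 # policy: no tainted jump targets
            raise Tainted("jump target derived from user input")
        print(f"jumping to {target.v:#x}")

    base = Val(0x400000)                   # trusted constant
    goto(base + Val(0x10))                 # fine
    try:
        goto(base + read_input(0x66))      # blocked by the policy
    except Tainted as e:
        print("policy violation:", e)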
Forward symbolic execution, on the other hand, attempts to find what input would have to be provided to a program to make its execution reach a specific line. The authors' contribution is to take all these English descriptions, turn them into algorithms, and define their operational semantics. One of the challenges is that there is an extensive amount of work in these two areas; hence there are common patterns that repeat themselves time and again. The authors systematically extracted a number of these common problems and solutions and summarized them. First, Thanassis described how dynamic taint analysis works, its desired properties, and the main issues in the area. He displayed example source code with a mapping of variables to values at runtime. Variable taint analysis introduces a new mapping from variables to a taint status. Whenever there is an input expression, the result of that expression is tainted if and only if the source of the input is untrusted. Following this definition, whenever a computation is based on tainted values, the result is also tainted. As an example of a policy, he showed a goto that is only allowed to execute if the target address is not tainted. For memory tainting, a new mapping is introduced from addresses to taint status; as another example of a policy, he showed one where if either the address or the memory cell is tainted, then the loaded value is also tainted. The main problems in taint analysis are undertainting (values that should be tainted are not) and overtainting (unaffected values are tainted). Following these explanations, Thanassis described how forward symbolic execution works, its desired properties, and its main problems. The question here is what inputs will make the execution reach a particular line of code. A symbol x is defined (x is symbolic), and it can take any value. As an input traverses the control flow graph (CFG), constraints are added to the path (e.g., execution goes down one path if x is positive and down another if x is negative). Hence, all that needs to be done is to find an answer that satisfies all the constraints of a particular path. A main problem in this context is state space explosion when exploring all paths (i.e., the number of paths is exponential). One solution is to apply path selection heuristics (e.g., bounded DFS, random search on the symbolic execution tree); another is to use concolic testing. Nevertheless, all proposed approaches are heuristics. Two further issues are the exponential number of formulas, which are themselves exponentially sized, and the fact that solving such a formula is NP-complete. In conclusion, even though these techniques are used in many settings (e.g., security, testing, fault tolerance), there are many open problems in taint analysis and forward symbolic execution that have not yet been efficiently addressed.

Discussion: Joana Trindade from the University of Illinois pointed out that the problem of overtainting has been shown to scale out of proportion in recent work called "Pointless Tainting". She asked whether Thanassis believes this problem will ever be effectively solved, and whether he sees any future for taint analysis. Thanassis answered that defining a magic policy that addresses both undertainting and overtainting is an open problem. Devdatta Akhawe from UC Berkeley asked whether Thanassis knew of any tools that could be used for analyzing browsers.
Thanassis replied that current techniques do not scale to the millions of lines of code found in most existing browsers; right now these techniques only scale to a few thousand lines of code. KLEE, from 2008, managed to analyze programs of up to 6-7K lines. To use such tools on browsers, Thanassis said, there is still a long way to go.

PAPER #20: State of the Art: Automated Black-Box Web Application Vulnerability Testing
AUTHORS: Jason Bau, Elie Bursztein, Divij Gupta, John Mitchell (Stanford University)
WRITEUP BY: BR

To close the first Systematization of Knowledge session, Jason Bau presented a comparative evaluation of vulnerability testing services. Some certifying organizations, such as the Payment Card Industry (PCI) body, require prospective members to subject their own web sites to a sanctioned third-party vulnerability test as part of the certification process. The authors tested eight such third-party testing services to determine what they test, how effectively they find known vulnerabilities, and how closely the set of tested vulnerabilities resembles those found in the wild. Bau summarized the landscape of existing vulnerability testing techniques including black-box testing, static analysis, penetration tests, and manual analysis. The technique favored by the services the authors examined is automated black-box testing. The authors designed (and plan to release) a testbed bearing various known vulnerabilities, then pointed each scanning service at their testbed and tallied results in three categories: speed, coverage, and vulnerability detection rate. The slowest scanning service took roughly eight hours to scan their testbed, while the fastest took just over an hour. All of the services exchanged less than 1 GB of data with the testbed. The maximum detection rate for all scanners was 62%. No scanning service was a top-3 performer in all three categories. Morley Mao asked whether the black-box testing approach imposes fundamental limitations on the kinds of vulnerabilities the services can detect. Bau confirmed that certain vulnerabilities such as blind SQL injection were too obscure for the scanners to detect. Rob Cunningham asked whether the authors shared their results with the services they tested, and Bau said they had. Cunningham remarked that many scanners stipulate that they are not to be used for evaluations like the authors', to which the authors responded that they had eventually talked to the right people to get their testing approved. Carl Landwehr asked whether the authors' results suggest that we should not put much faith in compliance badges that depend on vulnerability testing services. Bau responded by suggesting (to uproarious audience laughter) that anyone with a little Photoshop knowledge could make an official-looking badge.

BUSINESS MEETING
WRITEUP BY: BR

David Evans gave a lively presentation about the review process that led to the 2010 Symposium's 31 paper acceptances (26 of 237 research papers and 5 of 30 for the Systematization of Knowledge track). Of the 136 reviews of research papers that were eventually accepted, only 3 strongly recommended acceptance, while 32 recommended acceptance, 58 weakly recommended acceptance, 34 weakly recommended rejection, and 9 strongly recommended rejection. Researchers from the United States had the most submissions and acceptances. Luxembourg had the highest density of Oakland submissions per capita; India the lowest.
Language-based security and web security had the highest per-topic acceptance rates, and distributed systems security the lowest. Evans presented the Best Practical Paper award to "Chip and PIN is Broken" by Steven Murdoch, Saar Drimer, Ross Anderson, and Mike Bond from the University of Cambridge. The Best Student Paper award went to "TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection" by Tielei Wang and Tao Wei of Peking University, Guofei Gu of Texas A&M University, and Wei Zou of Peking University. The Best Paper award went to "SCiFI -- A System for Secure Face Identification" by Margarita Osadchy, Benny Pinkas, Ayman Jarrous, and Boaz Moskovich of the University of Haifa.

SESSION 7: SECURE SYSTEMS
Chair: Jonathan McCune

PAPER #21: A Proof-Carrying File System
AUTHORS: Deepak Garg, Frank Pfenning (Carnegie Mellon University)
WRITEUP BY: BR

Deepak Garg presented work that applied the technique of proof-carrying authorization (PCA) to a filesystem; they call the result PCFS. In PCA, access requests carry logic-based credentials and a proof that combines the credentials with descriptive axioms or inference rules to authorize the access. Garg gave the example of the Grey system at CMU, where an office door asks a prospective entrant's mobile phone to prove that the door's owner permits them to enter. PCA offers high assurance and high accountability, but it had not been applied to a system interface before the present work because of concerns about its efficiency. Garg described the efficiency problem the authors faced when implementing PCFS. They considered a reasonable workload of 2000 to 3000 filesystem accesses per second. If credentials are encoded in certificates, and checking a certificate requires 10 microseconds, the time needed to check all certificates that authorize an access quickly exceeds the system's ability to meet its workload goal. PCFS offloads the job of proof checking to a trusted verifier that runs locally but outside the filesystem. The verifier caches and signs proof verification results (which the authors call "procaps," for proven capabilities), allowing PCFS to avoid checking proofs during file access. With the caching modification, PCFS meets its goal of supporting several thousand accesses per second. The paper also contributes a new logic called BL that supports logical statements about time-based validity (for expressing, e.g., that a principal's security clearance expires after N years) and logical statements that depend on system state, and presents a formal solution to the problem of procap staleness in the face of policies whose consequences depend on time and state. In the question and answer session, Mohamed Aboutabl asked how PCFS differs from Kerberos. Garg clarified that credentials in PCFS encode the full logical justification for authorization decisions, whereas Kerberos credentials contain only authentication information. Simon Ou asked how PCFS handles revocation. Garg remarked that revocation is partially supported by BL's notion of time ranges and stateful policies and will be completely supported by certificate revocation lists in a future extension to PCFS. Aleksey Nogin asked whether it is possible to express a negative policy in BL (e.g., a certain principal should not be given access to a resource); Garg responded that negative policies are fundamentally incompatible with PCA but can be encoded through support for system state in BL policies. Cynthia Irvine noted that it is impossible to revoke previously obtained information even though it is possible to revoke future access, and Garg agreed, saying that PCFS is concerned with access control but not information flow.
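The procap cache is the load-bearing optimization in PCFS; here is a minimal sketch of the idea (the names, the TTL, and the use of an HMAC are illustrative assumptions, not PCFS's actual design):

    # A trusted local verifier checks a proof once, then caches and MACs
    # its verdict so the filesystem can authorize later accesses cheaply.
    import hmac, hashlib, time

    VERIFIER_KEY = b"shared-with-filesystem"   # hypothetical

    def make_procap(principal, path, perm, ttl=300.0):
        """Issued by the verifier after a proof checks out."""
        expires = time.time() + ttl            # models time-bounded validity
        msg = f"{principal}|{path}|{perm}|{expires}".encode()
        tag = hmac.new(VERIFIER_KEY, msg, hashlib.sha256).digest()
        return (principal, path, perm, expires, tag)

    def check_procap(procap, principal, path, perm):
        """Run by the filesystem on every access: no proof checking."""
        p, f, r, expires, tag = procap
        msg = f"{p}|{f}|{r}|{expires}".encode()
        ok = hmac.compare_digest(tag, hmac.new(VERIFIER_KEY, msg,
                                               hashlib.sha256).digest())
        return ok and (p, f, r) == (principal, path, perm) and time.time() < expires

    cap = make_procap("alice", "/secret/plan.txt", "read")
    print(check_procap(cap, "alice", "/secret/plan.txt", "read"))   # True
    print(check_procap(cap, "bob",   "/secret/plan.txt", "read"))   # False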
PAPER #22: Scalable Parametric Verification of Secure Systems: How to Verify Reference Monitors without Worrying about Data Structure Size
AUTHORS: Jason Franklin (Carnegie Mellon University), Sagar Chaki (Carnegie Mellon University), Anupam Datta (Carnegie Mellon University), Arvind Seshadri (IBM Research). Jason presented.
WRITEUP BY: SB

Summary: This talk was about verifying, in a scalable way, reference monitors that operate over large data structures. Reference monitors observe the execution of processes and prevent bad actions, i.e., policy violations. The goal of the work is to verify reference monitors in the presence of adversaries. To do that, the paper presents a new verification technique based on small model theorems, which equate verification of one entry in the reference monitor's data structure with verification of n entries. This approach can model any reference monitor that is row independent and row uniform, and safety properties over rows can be modeled as the policy to be verified. The talk described this analysis technique through a case study of SecVisor. Jason also mentioned that a limitation of the approach is that the analysis was done at the design level and needs to be extended to the code level; also, the approach can only be used to verify row-independent policies.

Discussion: Zhi Wang commented that it was very interesting that the authors used page tables as an example; however, page tables are not row independent. For example, one page can be mapped at multiple addresses (one executable, one writable). Jason replied that this is exactly the issue that came up with SecVisor, where it was a bug, not a feature: it was precisely the sort of attack they targeted with their technique, the vulnerability being that a page table allows such a mapping. Sruthi Bandhakavi asked for examples of row-dependent properties; one example Jason gave was verifying access control matrices. Helen Wang asked what their findings were with respect to security properties and bugs. Jason said that they found two bugs. They took the SecVisor paper and modeled the system: while the authors of SecVisor were auditing the source code, Jason's group was modeling it, and while the SecVisor authors found two bugs, Jason's model checker independently found the same two counterexamples. Petros Maniatis asked whether their system can be extended to properties with chaining, for example cryptographic integrity checking within a page table. Jason said that this kind of proof approach may not work on properties with chaining; the main bottleneck is that the chains could be of arbitrary length, and the small model theorems will not work in that setting.

PAPER #23: HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity
AUTHORS: Zhi Wang, Xuxian Jiang (North Carolina State University)
WRITEUP BY: JT

Summary: Zhi presented. Virtualization has been widely adopted in industry and academia; as evidence of that, Zhi stated that 16% of existing workloads are running in VMs. In addition, virtualization technology is applied to many problems, such as guest integrity monitoring.
All previous work on virtualized systems shares one common assumption: it requires a trustworthy hypervisor (e.g., one that provides isolation between VMs). This assumption is commonly justified by hypervisor code size, the argument being that because the hypervisor is smaller than an OS, it provides higher security. According to Zhi, however, this is a flawed assumption: many hypervisor vulnerabilities are listed in CVE, and existing hypervisors have code sizes reaching roughly 200K lines. Existing approaches to securing hypervisors include reducing the TCB (which cannot prove the absence of bugs and vulnerabilities) and formal verification (e.g., seL4). The authors' goal is to enable self-protection of commodity type-I (bare-metal) hypervisors. Their assumptions are (i) trusted x86 hardware (i.e., hardware that provides an IOMMU and SMM), and (ii) that software bugs can be exploited to overwrite data structures in the hypervisor. Their approach consists of two techniques: non-bypassable memory lockdown, which provides code and data integrity, and restricted pointer indexing, which provides control-flow integrity. Page tables determine memory properties, and a page can be either writable or executable, but NOT both. All memory accesses by software are translated and controlled by page tables (including reads and writes of the page tables themselves). The authors guarantee that no code can modify the write-protected hypervisor code and data by using read-only page tables. One side effect, however, is that the hypervisor itself cannot update its own page tables directly and has to map them into guest memory to do so. Regarding implementation details, the write-protect (WP) bit of the CR0 register protects page table updates: the WP bit is set to 1 by default to lock down memory, and a page table update must be performed atomically (i.e., disable interrupts, set WP=0, verify the proposed change, update the read-only page table, set WP=1, and re-enable interrupts).
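That update sequence can be modeled in a few lines (a high-level illustration only; the real mechanism is x86-specific and lives inside the hypervisor):

    # Model of the atomic, verified page-table update: memory is locked
    # down by default; write protection is lifted only briefly, with
    # interrupts disabled, and only after the change passes verification.
    class Machine:
        def __init__(self):
            self.wp = 1                 # CR0.WP: 1 = page tables locked
            self.interrupts = True
            self.page_table = {0x1000: "r-x", 0x2000: "r--"}

        def verify(self, page, attrs):  # W^X: never writable AND executable
            return not ("w" in attrs and "x" in attrs)

        def update_page_table(self, page, attrs):
            self.interrupts = False     # nothing can preempt the update
            self.wp = 0                 # briefly allow page-table writes
            try:
                if not self.verify(page, attrs):
                    raise ValueError("rejected: page would be W+X")
                self.page_table[page] = attrs
            finally:
                self.wp = 1             # relock before anything else runs
                self.interrupts = True

    m = Machine()
    m.update_page_table(0x2000, "rw-")      # accepted
    try:
        m.update_page_table(0x3000, "rwx")  # rejected by verification
    except ValueError as e:
        print("lockdown:", e)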
Another technique they use in their secure hypervisor is restricted pointer indexing. The idea here is to provide control-flow integrity, or CFI (i.e., execution paths must follow the control-flow graph, which defeats return-oriented programming attacks). The technique ensures CFI by enforcing that only legitimate control data can be used (e.g., control data extracted from the source code). The authors demonstrate the feasibility of their approach by implementing a prototype as an extension of LLVM. Their prototype fully supports BitVisor and partially supports Xen, and their performance results show at most 6% overhead. Finally, related work includes program analysis and formal verification, guest integrity monitoring and protection, and trusted computing.

Discussion: Jinpeng asked whether there are cases in which the authors would have to perform a manual analysis, and if so, what percentage of pointers required it. Zhi answered that this number is hypervisor specific, and that it is 33% in their work. Jinpeng then asked whether the authors do static or dynamic points-to analysis. Zhi replied that the points-to analysis is performed in the LLVM compiler, and the authors also use manual analysis because they are not experts. Joana Trindade from the University of Illinois asked whether moving trust from the read-only page bit to the write-protect bit in the CR0 register would still leave the hypervisor vulnerable to hardware attacks. Zhi answered that they assume the hardware is trusted; they do not prevent physical attacks, and protecting against those would require more research. Finally, Helen Wang asked whether their technique can be directly applied to an OS. Zhi said that an OS's page tables are more dynamic, because there is more state based in memory, and they would need to protect that state too.

SESSION 8: SYSTEMATIZATION OF KNOWLEDGE II
Chair: Ed Suh

PAPER #24: How Good are Humans at Solving CAPTCHAs? A Large Scale Evaluation
AUTHORS: Elie Bursztein, Steven Bethard, John C. Mitchell, Dan Jurafsky (Stanford University), Céline Fabry
WRITEUP BY: BR

Elie Bursztein presented an evaluation of CAPTCHAs -- small puzzles designed to be easy for humans to solve but hard for computers to solve -- from the perspective of human solvers. Bursztein set up the audience's expectations by displaying several CAPTCHAs that varied widely in difficulty. The authors compared 21 CAPTCHA schemes, 13 of which were image based and the rest audio based, by asking two sets of humans to solve over 300,000 CAPTCHAs. They measured the accuracy and speed of the humans' solutions. Bursztein summarized their results by noting that solution accuracy was far lower than the operators of the CAPTCHA-protected websites might hope: on average, image-based CAPTCHAs took humans 9.8 seconds to solve, but only 71% of the tested CAPTCHAs had three or more human testers agree on a correct solution; audio CAPTCHAs took an average of 28.4 seconds to solve, and only 31.2% had three or more humans agree on a solution. Bursztein offered numerous other statistics on native speakers versus non-native speakers, the breakdown of Amazon Mechanical Turk workers, and the human testers' performance on many different CAPTCHA schemes. Thorsten Holz asked whether the authors planned to publish their annotated CAPTCHAs; Bursztein said that they did, though not publicly, so as not to create a training set for automated solvers. Louis Kruger asked whether the authors had developed any intuitions about the relative difficulty of programmatically solving different CAPTCHA schemes, and Bursztein pointed to future work. Peter Neumann suggested that CAPTCHAs might be the wrong solution to the wrong problem, and challenged the audience to find a better way to approach the problem of distinguishing humans from computers.

PAPER #25: Bootstrapping Trust in Commodity Computers
AUTHORS: Bryan Parno, Jonathan M. McCune, Adrian Perrig (Carnegie Mellon University). Bryan presented.
WRITEUP BY: SB

Summary: In today's networked world, it is important to be able to trust computers. This poses several challenges, as illustrated by the following questions: How does one trust the hardware? Even if the hardware is trusted, how does one trust the computer's software, which is changing all the time? Finally, how does one communicate the trustworthiness of a computer to a human? The paper condenses over a hundred references that address the problem of bootstrapping trust in a computer, i.e., that help the user determine what the computer will do with her data. Topics covered include bootstrapping foundations, transmitting bootstrap data, interpreting the information, validating the process, applications of bootstrapping, and the human factors involved in the process. The paper also discusses limitations and future directions for this research area. In the talk, Bryan covered some key points from the paper at a high level. He discussed how trust is currently established in hardware and software.
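One concrete foundation for the "chain of trust" Bryan turned to next is the measured-boot hash chain, sketched below (illustrative values; the extend rule follows the TPM 1.2 convention, PCR_new = SHA1(PCR_old || SHA1(component))):

    # Each boot stage measures the next component into a platform
    # configuration register (PCR) by hashing, TPM-style.
    import hashlib

    def sha1(data):
        return hashlib.sha1(data).digest()

    def extend(pcr, component):
        """Model of the TPM extend operation."""
        return sha1(pcr + sha1(component))

    pcr = b"\x00" * 20                      # register is zeroed at reset
    for stage in [b"bootloader-image", b"kernel-image", b"app-image"]:
        pcr = extend(pcr, stage)

    print(pcr.hex())   # identifies exactly this software stack, in order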
He also explained that even if the software is proven to be correct with respect to properties such as type safety and control flow, establishing code identity via a chain of trust from the hardware is still important, to show which particular piece of software is controlling the computer. Attesting to the run-time execution of the software is difficult, so current approaches typically only perform load-time checks. Run-time guarantees could be provided by static program transformations that enforce a given policy at run time, or by implementing a run-time enforcement layer that enforces the policy. In conclusion, Bryan emphasized that code identity is a critical property when deciding whether to trust a computer. There is a wide variety of hardware roots of trust available, and there are many interesting questions in the area that still need to be answered.

Discussion: Ed Suh asked what the main challenges are for TPMs to be widely used. Bryan replied that some techniques for bootstrapping trust are used more widely than others. For example, cellphones have been able to adopt secure boot because a cellphone is a self-contained application. Another example is Microsoft's BitLocker, which has been widely adopted in part because it is a closed system. However, the idea of remote attestation has not been as widely adopted because it is an open system: it has a chicken-and-egg problem in that both the attestor and the verifier must make changes for the attestation to be useful.

SHORT TALKS AND POSITION PAPERS
Chair: Angelos Stavrou
WRITEUP BY: SB

There were 9 short talks in all. The first short talk was given by Terry Benzel, titled "Advancing Cyber Security Experimental Research", where she talked about the DETECT testbed, the next generation of the DETER testbed, which has been running for six years. She listed several features of DETECT that would allow one to create larger experiments to test systems for security. Carl Landwehr asked what "beyond reality" meant. Terry answered that today's testbeds have limits on realism, and they want to make bigger, more realistic experiments possible using the testbed, thereby taking it "beyond reality". The next talk was by Fred Cohen, titled "Progress on physics of digital systems (information physics)", about properties of digital information systems that can be understood without a detailed understanding of the underlying mathematics, and for which there are calculation methods. He pointed the audience to the site infophys.com for further information about the project. The next talk was by Bryan Parno, titled "Non-interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers". Bryan talked about the challenges of outsourcing computations to the cloud and then trusting the results, and about the proposed solution. The next talk was by Dipankar Dasgupta, titled "Building Self-Monitoring Multi-Core Systems", where he introduced the problem of partitioning multi-core systems in such a way that some cores are allocated to security and can be used to monitor other cores. Dipankar talked about the research issues that need to be addressed for this idea to work. The next talk was by Vivek Nigam, titled "Progressing Collaborative Systems", where Vivek talked about collaborative systems in which people who don't trust each other want to collaborate to perform a common task. He formulated the problem as a planning problem and talked about the different actions in the system and the complexity of the planning algorithms.
The next talk was by Kui Xu, titled "Cognition and Memory in Computer Science", where Kui talked about their goal of building a robust memory-based authentication system, in which the questions would be based on the user's current activity and also past activity over a period of time. The challenge here is to balance users' memory of their recent actions against security. Someone asked whether it would be a problem if a previous transaction is itself compromised and the chain of trust is broken. Kui answered that the attacker only has temporary access to personal information, and questions are generated over different, longer time periods. The next talk, titled "Oakland Citation Graph", was given by Hilarie Orman, who showed several interesting statistics about past Oakland papers' citation patterns. The next talk, titled "Healthcare Data Hemorrhages: Inadvertent disclosure and HITECH", was given by Eric Johnson. Eric discussed the unintentional data leaks that happen in the healthcare industry through lost laptops, paper, and the like. They studied what information people end up leaking and whether the HITECH act, which came into effect on September 23, 2009, slowed the leakage of information; their study did not show much difference. The next talk, titled "Webseclab: Web Application Hacker Testbed", was given by Elie Bursztein, who talked about the testbed created at webseclab.com at Stanford to teach web security. The short talks were followed by 4 lightning talks. The first, titled "VOIP Phishing", was presented by Federico Maggi, who talked about his efforts to build a VOIP honeypot for attracting and analyzing phone phishing attacks; more information can be found at phonephishing.info. The next, titled "Towards a formal foundation of web security", was given by Devdatta Akhawe, who talked about their CSF 2010 paper on developing an abstraction of the web ecosystem that is detailed enough for finding bugs but abstract enough for formal analysis. The next, titled "A Lightweight Approach to Enforcing Security Policies for JavaScript", was given by Phu Phung, who talked about the wrapping library he built to intercept built-in JavaScript methods with policies to prevent malicious access. The last talk in this session, titled "AdJail: Practical Enforcement of Confidentiality and Integrity Policies on Web Advertisements", was given by V. N. Venkatakrishnan, who talked about AdJail, a mechanism to sanitize and confine ads. This work is appearing at the USENIX Security Symposium this year.

SESSION 9: ANALYZING DEPLOYED SYSTEMS
Chair: J. Alex Halderman

PAPER #26: Chip and PIN is Broken
AUTHORS: Steven J. Murdoch, Saar Drimer, Ross Anderson, Mike Bond (University of Cambridge)
WRITEUP BY: BR

Steven Murdoch began Wednesday's program with a visually rich presentation on gaping flaws in the EMV, or "Chip and PIN," system used worldwide -- though not in the US -- for point-of-sale, credit, and debit transactions. (The paper won the Best Practical Paper award the previous day.) The authors' more gently titled previous work on Chip and PIN revealed that it was vulnerable to relay attacks. The present work focuses on a protocol flaw that renders the system easily exploitable via a man-in-the-middle attack: the PIN verification step is never authenticated. The technical consequence is that a PIN-entry terminal can be tricked into believing that the person presenting a card has entered the correct PIN, when in fact she has not. The consequences for consumers are more dire: because banks believe that PIN entry is entirely secure, they shift blame onto the consumers whose cards are stolen. Murdoch described the man-in-the-middle attack in detail. A thief carries a false card bearing contacts identical to a real card's; the underlying circuit is connected to the contacts on a real, stolen card. The thief presents the false card to a PIN-entry terminal and relays a card authentication conversation (personal information, account number, expiration date, and a digital signature over these data) between the terminal and the real card. The terminal asks for a PIN and the thief enters an arbitrary one. The terminal sends the entered PIN to the false card. A real card would check the PIN against the PIN stored in its memory, then reply with a yes or no indicating whether the transaction is allowed, but the false card simply replies yes. The terminal relays the transaction information to the bank and receives the bank's response allowing or denying the transaction. Neither the bank nor the terminal authenticates the PIN verification step, although both believe that a correct PIN was entered.
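A toy model of that wedge (the 0x9000 status word follows the ISO 7816 convention for success; everything else is simplified for illustration):

    # The man-in-the-middle passes everything through except the PIN
    # verification command, which it answers itself with "PIN correct."
    class StolenCard:
        def __init__(self, pin):
            self.pin = pin
        def verify(self, pin):
            return 0x9000 if pin == self.pin else 0x63C0  # wrong-PIN status

    class Wedge:
        """Sits between the terminal and the real (stolen) card."""
        def __init__(self, card):
            self.card = card
        def verify(self, pin):
            return 0x9000          # never consults the card: always "correct"

    def terminal_accepts(card_interface, entered_pin):
        return card_interface.verify(entered_pin) == 0x9000

    real = StolenCard(pin="4427")                 # hypothetical PIN
    print(terminal_accepts(real, "0000"))         # False: wrong PIN rejected
    print(terminal_accepts(Wedge(real), "0000"))  # True: the flaw exploited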
Murdoch presented evidence suggesting that such attacks had been carried out successfully at the expense of card-theft victims. He also described the banks' responses to the authors' findings (mostly denial) and presented evidence that their responses were nonsensical. He concluded his talk by challenging the community to figure out how thieves manage to commit ATM fraud without knowing cards' PINs; ATM transactions are different in that the PIN is sent to the bank rather than the card. In the question-and-answer period, Paolo Comparetti asked whether the authors thought ATM fraud was happening because of protocol vulnerabilities or because cards and PINs are easy to steal; Murdoch replied that it was difficult to tell. Simon Ou remarked that relying on the card to verify the PIN is "just wrong," to which Murdoch responded that while the current design is indeed flawed, on-card verification is appropriate for offline transactions; he pointed the audience to the paper's section on possible solutions. Alex Halderman asked what lessons the authors thought designers should learn about designing financial systems; Murdoch replied that designers should keep specifications simple and short and that those specifications should state all security-critical information and clearly articulate the threat model.

PAPER #27: Experimental Security Analysis of a Modern Automobile
AUTHORS: Karl Koscher, Alexei Czeskis, Franziska Roesner, Shwetak Patel, Tadayoshi Kohno (University of Washington), Stephen Checkoway, Damon McCoy, Brian Kantor, Danny Anderson, Hovav Shacham, Stefan Savage (University of California, San Diego). Karl presented.
WRITEUP BY: SB

Summary: This work performed an extensive security analysis of a car. The motivation for the analysis stems from the fact that modern automobiles depend extensively on networked software components. Most components in a modern automobile, including safety-critical systems, are computer controlled, and these components benefit from talking to each other, which is currently done over a shared network. Moreover, car networks can connect to the outside world through interfaces created for diagnostic and communication purposes. There is also talk of allowing third-party applications to be downloaded into cars.
Some access controls were implemented, but in many cases they were either weak or incorrectly implemented. This provides ample opportunity for an attacker to subvert the different components of the car, which could be fatal. The talk described the typical computing architecture of a car and gave examples of different possible attack scenarios. The speaker also showed video demonstrations of different attacks they could perform on the car. Karl concluded his talk by saying that their research team was surprised by the ease with which the attacks could be created. One challenge standing in the way of better security in cars is that the manufacturers are only integrators who get different components from different vendors and combine them to build the car; in such a scenario, adding security might be costly. Also, it is not advisable to lock down diagnostic tools.

Discussion: Fred Cohen asked how the authors got immunity from prosecution. Karl replied that they talked with lawyers at the University of Washington to make sure they were on the right side of the law. Susan Landau commented that it is easy to look at these problems and say that these guys screwed up, but what are we as a security community doing wrong? How do we communicate with industry properly so that they don't repeat the same mistakes over and over again? Karl agreed that it is a very interesting point. One thing he noticed is that the specification itself addresses what should and should not be done; the third-party manufacturers are the ones who cut corners to get the product out. Peter Neumann, in response to Susan's question, mentioned that the RISKS Forum was started specifically for this purpose, but still, for the last 25 years the same mistakes have been made over and over again. The problem is that minimizing cost is the fundamental argument, and a little money is saved by cutting a safety cost; lack of education is not the reason. Due to this cost-saving mentality, we never see proactive remediation, and later the manufacturers pretend to have done something they didn't intend to do. Rob Cunningham asked whether, when the car logs the interactions between its various components, anything ensures that the logs are not tampered with. Karl answered that they didn't look at black boxes, but he mentioned that the log information is stored in flash and no authentication is done. Someone asked how they went through the IRB process. Karl answered that since the experiments were on cars, not people, they did not have to go through the IRB process.

PAPER #28: On the Incoherencies in Web Browser Access Control Policies
AUTHORS: Kapil Singh (Georgia Institute of Technology), Alexander Moshchuk (Microsoft Research), Helen J. Wang (Microsoft Research), Wenke Lee (Georgia Institute of Technology)
WRITEUP BY: JT

Summary: Kapil presented. Browser policies are developed in an ad hoc manner (e.g., web applications are given unrestricted access to the history object, which leads to user privacy leaks). This talk focused on identifying such incoherencies in today's browsers; once the incoherencies are identified, another goal is to measure the cost of eliminating them. Kapil went on to explain how access control works in browsers. Essentially, a principal (subject) accesses a resource (object), and resources are labeled with principal IDs. The problem then is how access control decisions are made using these labels at runtime.
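A toy model of that label-based decision (illustrative only; real browser policy is far more tangled, which is exactly the paper's point):

    # Every resource carries a principal label, and an access is allowed
    # only if the requesting principal's label matches -- a bare-bones
    # same-origin check.
    class Resource:
        def __init__(self, label, value):
            self.label, self.value = label, value

    def access(principal, resource):
        if principal == resource.label:          # same-origin decision
            return resource.value
        raise PermissionError(f"{principal} may not touch {resource.label} data")

    cookie = Resource("https://bank.example", "session=abc123")
    print(access("https://bank.example", cookie))   # same origin: allowed
    try:
        access("https://evil.example", cookie)       # cross origin: denied
    except PermissionError as e:
        print("blocked:", e)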
The authors identified a number of incoherencies, which fit into three types: (i) the system identifies the website as a principal and ignores the role of the user, (ii) principal labels are inconsistent, and (iii) browsers allow principals to change their labels at runtime. Next, Kapil explained their WebAnalyzer measurement framework. It performs network interposition to examine HTTP requests and user cookies, DOM interposition, and display interposition to analyze the visual layout of elements on a page. Their measurement methodology proceeded as follows: a user was emulated on the 100K most popular websites according to Alexa, and the authors validated their emulation results against manual browsing, concluding that the two are roughly comparable. Regarding the categories of incoherencies: with respect to principal labeling, most objects follow the same-origin policy, and incoherencies occur when resources interplay (e.g., DOM and display, DOM and cookies); DOM policies were also inconsistent with the same-origin policy. In the relationship between DOM objects and display, the authors discovered that a malicious plugin can overlap fields with another plugin's form fields (i.e., overlapping iframes), allowing it to steal credentials; according to the authors, this vulnerability is present in 6% of the analyzed websites. Changing a principal's label at runtime violates the principle of least privilege, and it is possible to access cookies from other domains. Finally, some resources belong to the user of the browser, but APIs allow websites to control them (e.g., browsing history, geolocation). In conclusion, the authors' main contributions are threefold: (i) they provide a systematic principal-driven analysis of browser access control incoherencies, (ii) they have developed a web-centric compatibility measurement framework, and (iii) theirs is the first large-scale measurement of the cost of fixing access control policies. As future work, they plan to use this approach to derive policy guidelines for future browsers and to perform a security analysis of web applications.

Discussion: Charlie asked where one can draw the line. Kapil answered that their purpose was to give browser vendors a view of security versus backward compatibility: if there is a security hole and the cost of fixing it is low, browser vendors could drop the feature; if not, vendors could consider moving to a safer alternative. Someone asked whether Kapil had gotten a chance to see any trends for certain features. Kapil replied that they had measured access to user resources, and that resizeBy, for example, is not used by sites below position 10,000 in the ranking. David Jacobson from Qualcomm asked whether overlapping frames were intentional or accidental. Kapil responded that what they wanted to measure was whether there were genuine uses of some of these features; overlapping frames are mostly used for ads, and in Kapil's opinion, since there are attacks, one should give up some features. Paolo Milani from TU Vienna asked how removing features would affect the sites. Kapil said that they analyzed only some results; their conclusion was that if some features were removed, there would be differences in look and feel, or a site could be left unaffected, but they have not analyzed this completely.
Discussion: Charlie asked where one can draw the line. Kapil answered that their purpose was to give browser vendors a view of the tradeoff between security and backward compatibility: if there is a security hole and the measured cost of fixing it is low, vendors could simply drop the feature; if not, they could consider moving to a safer alternative. Someone asked whether Kapil had had a chance to see any trends for certain features. Kapil replied that they had measured access to user resources, and that resizeBy, for example, is not used by sites ranked below position 10,000. David Jacobson from Qualcomm asked whether overlapping frames were intentional or accidental. Kapil responded that what they wanted to measure was whether there were genuine uses of some of these features; overlapping frames are mostly used for ads, and in Kapil's opinion, since there are attacks, one should give up some features. Paolo Milani from TU Vienna asked how removing features would affect the sites. Kapil said that they analyzed only some results; their conclusion was that removing some features would change a site's look and feel or leave the site unaffected, but they have not analyzed this completely. Finally, Steve Hanna from UC Berkeley asked what the depth of the exploration was, and how the authors could be sure that the measurements accurately reflected the features of the sites. Kapil answered that they measured this by performing a manual analysis of the top 100 sites and comparing those numbers to the ones obtained from user emulation; the two were comparable. Steve then asked whether the manual analysis explored the sites to the same depth. Kapil replied that this is a limitation of their work, and that they plan to address it in the future.

SESSION 10: LANGUAGE-BASED SECURITY
Chair: David Brumley

PAPER #29: ConScript: Specifying and Enforcing Fine-Grained Security Policies for JavaScript in the Browser
AUTHORS: Leo Meyerovich (University of California, Berkeley), Benjamin Livshits (Microsoft Research)
WRITEUP BY: SB

Summary: Leo presented. This work is motivated by the fact that current web pages are built by combining content and JavaScript code from several different web servers. The web server hosting a particular page could therefore be vulnerable to attacks because the JavaScript in the page is either buggy or malicious. Additionally, the code may be constantly evolving, making it hard to obtain any kind of guarantee about how the different scripts in the page interact with each other. This is especially dangerous in the context of JavaScript, which has notorious constructs like eval that can only be checked completely at runtime. The key challenge addressed in the paper is to give the web application developer control over the page content by allowing him to enforce different kinds of policies on the JavaScript code in the page. The paper presents ConScript, a client-side advice implementation for security, which lets the hosting page specify application-specific security policies that ConScript then enforces. An example of a policy is to switch the functionality of eval so that it parses code instead of executing it. ConScript provides deep advice for enforcing a policy and is implemented in the browser. In the talk, Leo described the contributions of the paper and walked through an example of implementing advice for JS functions. He also showed that policies are easy to get wrong, so they enforce the correctness of policies by modifying the JS interpreter to disallow certain accesses to JavaScript objects. Additionally, they propose an ML-like type system to enforce reference isolation and access-path integrity for function calls. ConScript comes with 17 handwritten policies, and it is also possible to automatically generate policies of interest. The ConScript system was implemented on the IE8 JavaScript interpreter with certain constructs disallowed. It was tested with several micro- and macro-benchmarks, and the run-time overhead was less than 1%, much less than the policy enforcement overhead of the JavaScript rewriting approach proposed by DoCoMo.
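ConScript's deep advice is installed inside the JS interpreter, where page scripts cannot tamper with it. Purely as an analogy (in Python, with our own names, not ConScript code), the eval-to-parser policy from the talk amounts to wrapping a dangerous primitive so that a policy function runs in place of every call:

  import ast
  import builtins

  def around(target, policy):
      # Minimal "advice": return a wrapper that lets the policy decide
      # what happens instead of (or around) the original function.
      def wrapper(*args, **kwargs):
          return policy(target, *args, **kwargs)
      return wrapper

  def parse_only_policy(original_eval, source, *rest):
      # The policy from the talk: instead of executing the string,
      # merely parse it, so injected code is inspected but never run.
      return ast.parse(source)

  builtins.eval = around(builtins.eval, parse_only_policy)

  tree = eval("1 + 2")        # now returns a syntax tree, executes nothing
  print(type(tree).__name__)  # -> "Module"

Unlike this sketch, the real system blocks the obvious counterattack (a script reassigning eval back), which is exactly why the enforcement lives in the interpreter rather than in rewritten page code.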
Discussion: David Brumley asked what the obstacles are to a large-scale deployment of ConScript. Leo said that it does not have many obstacles; it is just a matter of putting it into different browsers. Someone observed that ConScript's performance was worst in the case where the DoCoMo approach's performance was best, and asked Leo to comment. Leo said that testing browser performance is very tricky, and that they did not show the error bars, which were huge; error plays a big part. David Brumley also asked whether they had thought about proving the progress and preservation theorems for their type system for policies. Leo said that even though there is some existing work on the operational semantics of JavaScript [Maffeis, Mitchell and Taly - APLAS 08; Guha, Saftoiu, Krishnamurthi - ECOOP 2010], researchers are still building the infrastructure (e.g., core proofs) needed for more interesting proofs.

PAPER #30: TaintScope: A Checksum-Aware Directed Fuzzing Tool for Automatic Software Vulnerability Detection
AUTHORS: Tielei Wang (Peking University), Tao Wei (Peking University), Guofei Gu (Texas A&M University), Wei Zou (Peking University)
WRITEUP BY: BR

Tielei Wang spoke about a novel fuzzing technique the authors used to find numerous zero-day vulnerabilities in popular software. The paper had received the Best Student Paper award the previous day. Fuzz testing involves passing malformed inputs to a program while monitoring the security properties of the system. The authors noticed that current fuzz-testing techniques fall short when dealing with checksum-based integrity checks: altering a well-formed input that is later subjected to such a check causes the check to fail, often causing the program to terminate before it processes the malformed input. The program's early termination leaves open the possibility of flaws in that input's processing that could have been uncovered if the check had succeeded. The authors therefore considered the problem of making execution continue despite the fuzzing of these sensitive inputs. The solution Wang described is to find and bypass checksum checks in executable code. To find checksum checks, the authors used the intuition that checksums are typically computed over large inputs. Their system, TaintScope, looks for conditional jumps that depend on large amounts of input data. When TaintScope finds such a conditional jump, it observes its execution with well-formed and malformed inputs to see whether the condition is true on the well-formed inputs and false on the malformed ones. The authors also considered the problem of repairing checksums: when a malformed input is found to trigger a vulnerability after the bypassed checksum check, TaintScope uses a combination of concrete and symbolic execution (as opposed to the conventional approach of completely symbolic execution) to calculate a new checksum that can be tacked onto the input before it is handed to an un-fuzzed copy of the program. Wang also described the authors' "directed" fuzzing strategy in more detail; TaintScope focuses on the "hot bytes" in an input, those that are accessed more than the others. Wang showed tables from the paper as evidence that TaintScope found checksum checks, identified hot bytes, and correctly repaired checksums. The authors used TaintScope to find 27 previously unknown vulnerabilities in software that was familiar to much of the audience.

In the question-and-answer period, Zhenkai Liang asked whether TaintScope produced any false positives or false negatives. Wang acknowledged that those were possible and suggested that a future version of TaintScope would address the problem. Louis Kruger noted that the checksum algorithms the authors tested against were simple checksums (e.g., CRC) rather than cryptographic ones (e.g., MD5); Wang remarked that TaintScope could support cryptographic checksums if it were extended to recognize string comparisons. Juan Caballero asked why constraint solving was necessary at all: why not just re-run the checksum routine after fuzzing an input? Wang answered that they wanted TaintScope to be agnostic of the checksum function.
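Wang's last answer is best appreciated by contrast with the naive case. If the checksum function is known (the assumption TaintScope deliberately avoids), fuzz-and-repair is trivial, as in this toy Python sketch (ours, not TaintScope's) of a format whose last four bytes are a CRC32 over the payload:

  import random
  import zlib

  def parse(blob: bytes):
      # Toy target: payload || 4-byte big-endian CRC32.  A real program
      # bails out here, before any payload-handling bugs are reached.
      payload, stored = blob[:-4], int.from_bytes(blob[-4:], "big")
      if zlib.crc32(payload) != stored:
          raise ValueError("integrity check failed")  # naive fuzzing dies here
      return payload

  def fuzz_and_repair(blob: bytes, hot_offsets) -> bytes:
      # Mutate one of the "hot bytes" (offsets the target reads most),
      # then recompute the trailing checksum so the check still passes.
      payload = bytearray(blob[:-4])
      payload[random.choice(hot_offsets)] ^= 0xFF
      return bytes(payload) + zlib.crc32(bytes(payload)).to_bytes(4, "big")

  data = b"HELLO, TARGET"
  well_formed = data + zlib.crc32(data).to_bytes(4, "big")
  parse(fuzz_and_repair(well_formed, hot_offsets=[0, 1, 2]))

TaintScope achieves the same effect without knowing the checksum algorithm, by locating the check in the binary, forcing the branch during fuzzing, and later solving for a valid checksum with mixed concrete and symbolic execution.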
PAPER #31: A Symbolic Execution Framework for JavaScript
AUTHORS: Prateek Saxena, Devdatta Akhawe, Steve Hanna, Stephen McCamant, Dawn Song, Feng Mao (University of California, Berkeley)
WRITEUP BY: JT

Summary: Prateek presented the final paper of the symposium. The motivation is the increasing complexity of client-side JavaScript (JS) in rich web applications and the high degree of data exchange among untrusted components; one application of the technique is finding client-side code injection bugs. The authors' goal is to automatically find code injection vulnerabilities in JS. In this context, one of the main challenges is to explore the execution space automatically and to determine whether data is sufficiently sanitized along each path. Previous work has primarily focused on static analysis and taint-enhanced blackbox fuzzing, whose drawbacks include the assumption of an external test suite to explore execution paths. Here the authors' contribution is twofold: (i) a symbolic-analysis approach that addresses the limitations of previous work, and (ii) the Kudzu system, which includes a "theory of strings" that is sufficiently expressive, together with the Kaluza decision procedure component. Kudzu efficiently explores the event space (with a GUI explorer) and the value space with dynamic symbolic execution. Specifically, the authors' approach is to take a program, run it entirely in the browser, and then perform symbolic analysis. Symbolic formulas are fed to the Kaluza decision procedure and, when they are satisfiable, the resulting inputs are fed back to the program. In addition, Kudzu eliminates false positives by specifying an attack grammar and asking whether any of the feasible inputs can be an attack input. In their evaluation, the authors classified JS functions that take integers and return strings; the results show that regular-expression-based operations account for a third of the JS functions, so constraint solvers should deal with regular expressions. For Kudzu's evaluation, 18 live applications were used, and data from cross-domain channels was treated as untrusted. Kudzu automatically found all known vulnerabilities with no false positives. In summary, Kudzu separates the input space analysis into two components (the event space and the value space), and the authors have identified a theory of strings expressive enough for JS.
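The false-positive elimination step can be pictured as a satisfiability question over bounded strings: is there an input that both passes through the program's sanitization and matches the attack grammar at the sink? A toy Python sketch of that question (ours; the sanitizer, alphabet, and pattern are invented, and Kudzu uses the Kaluza string solver rather than enumeration, with strings bounded at 300 characters as noted in the discussion below):

  import itertools
  import re

  ATTACK = re.compile(r"<script")  # toy attack grammar
  ALPHABET = "<script"             # toy alphabet for brute-force search
  BOUND = 8                        # tiny length bound for the demo

  def sanitizer(s):
      # Toy sanitizer under test: strips only the first "<" -- a buggy fix.
      return s.replace("<", "", 1)

  def find_attack_witness():
      # Ask: does any bounded input make the *sanitized* value match the
      # attack grammar?  (A string solver answers this without searching.)
      for n in range(1, BOUND + 1):
          for chars in itertools.product(ALPHABET, repeat=n):
              candidate = "".join(chars)
              if ATTACK.search(sanitizer(candidate)):
                  return candidate
      return None

  print(find_attack_witness())   # -> "<<script": the sanitizer is bypassed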
Discussion: Paolo Milani from TU Vienna asked how much work went into specifying the attack grammar, and how confident the authors were in it. Prateek responded that they used publicly available attacks and that they err on the side of false negatives: they want to make sure that everything they detect is indeed a bug. If one could automatically come up with a grammar, that would be great. Someone from UIC asked what the decision procedure is, and whether the authors support strings of arbitrary length. Prateek answered that they cannot work with arbitrary-length strings, because in that case it is not possible to reason about string concatenation; therefore, they bound the string size to 300 characters to make the constraints solvable. Finally, David Brumley from CMU asked whether there were any cases in which the authors ran out of memory or hit a limit. Prateek said that they could have run it longer, but that they had not done so yet. In addition, he mentioned that they had had several examples in which they ran out of memory; a good fraction of these were infinite loops.

CLOSING REMARKS
WRITEUP BY: BR

Ulf Lindqvist closed the conference by thanking all of the organizers, then passed the mantle to Deb Frincke, next year's general chair.

====================================================================
News Briefs
====================================================================

News briefs from past issues of Cipher are archived at
http://www.ieee-security.org/Cipher/NewsBriefs.html

====================================================================
Listing of academic positions available, by Cynthia Irvine
====================================================================

Recent postings:

Posted May 2010
University of Massachusetts, Computer Science Department
Amherst, Massachusetts
PostDoc
Application materials should be sent to postdoc-spqr@cs.umass.edu by May 28, 2010.
http://www.cs.umass.edu/~kevinfu/postdoc.html

Posted April 2010
Dartmouth College
Hanover, NH, USA
Postdoctoral research fellow
Applications received by June 1 have the best chance.
http://www.cs.dartmouth.edu/~dfk/postdoc.html

Posted March 2010
Radboud University
Nijmegen, the Netherlands
Assistant Professor in Computer Security (0.8 fte)
Deadline for applications: 24 April 2010
http://www.ru.nl/ds/about_ds/vacancies/

Posted March 2010
University of Versailles St-Quentin-en-Yvelines and CNRS
PRiSM Laboratory - "Cryptology and Information Security" group
Versailles, France
Assistant Professor position (Chaire d'Excellence CNRS)
Deadline for applications: 25 March 2010
http://www.prism.uvsq.fr/~logo/MCF-0781944P-0424.htm
English translation:
http://translate.google.com/translate?js=y&prev=_t&hl=en&ie=UTF-8&layout=1&eotf=1&u=http%3A%2F%2Fwww.prism.uvsq.fr%2F~logo%2FMCF-0781944P-0424.htm&sl=auto&tl=en

Full list at: http://cisr.nps.edu/jobscipher.html

This job listing is maintained as a service to the academic community. If you have an academic position in computer security and would like to have it included on this page, send the following information: institution, city, state, position title, date the position announcement closes, and URL of the position description to: irvine@cs.nps.navy.mil

====================================================================
Conference and Workshop Announcements
Upcoming Calls-For-Papers and Events
====================================================================

The complete Cipher Calls-for-Papers is located at
http://www.ieee-security.org/CFP/Cipher-Call-for-Papers.html

The Cipher event Calendar is at
http://www.ieee-security.org/Calendar/cipher-hypercalendar.html

____________________________________________________________________
Cipher Event Calendar
____________________________________________________________________

Calendar of Security and Privacy Related Events
maintained by Hilarie Orman
Date (Month/Day/Year), Event, Locations, web page for more info.
5/24/10: CSET, 3rd Workshop on Cyber Security Experimentation and Test, Washington, DC, USA; http://www.usenix.org/cset10/cfpa/; Submissions are due 5/26/10- 5/28/10: MOBISEC, 2nd International ICST Conference on Security and Privacy in Mobile Information and Communication Systems, Catania, Sicily, http://mobisec.org/ 5/28/10: WOOT, 4th USENIX Workshop on Offensive Technologies, Washington, DC, USA; http://www.usenix.org/woot10/cfpa/; Submissions are due 5/30/10: SIDEUS, 1st International Workshop on Securing Information in Distributed Environments and Ubiquitous Systems, Fukuoka, Japan; http://www.sideus-conf.org; Submissions are due 5/30/10: eCRS, eCrime Researchers Summit, Dallas, Texas, USA; http://www.ecrimeresearch.org/2010/cfp.html; Submissions are due 5/31/10: GameSec, The Inaugural Conference on Decision and Game Theory for Security, Berlin, Germany; http://www.gamesec-conf.org/; Submissions are due 6/ 1/10: NeFX, 2nd Annual ACM Northeast Digital Forensics Exchange, Washington, DC, USA; http://nefx.cs.georgetown.edu/; Submissions are due 6/ 4/10: NPSec, 6th Workshop on Secure Network Protocols, Held in conjunction with ICNP 2010, Kyoto, Japan; http://webgaki.inf.shizuoka.ac.jp/~npsec2010/; Submissions are due 6/ 7/10: SafeConfig, 2nd Workshop on Assurable & Usable Security Configuration, Held in conjunction with ACM CCS 2010, Chicago, Illinois, USA; http://hci.sis.uncc.edu/safeconfig/; Submissions are due 6/ 7/10- 6/ 8/10: WEIS, 9th Workshop on the Economics of Information Security (WEIS), Harvard University, Cambridge, MA, USA; http://weis2010.econinfosec.org/cfp.html 6/ 8/10- 6/10/10: RFIDSec, 6th Workshop on RFID Security, Istanbul, Turkey; http://www.projectice.eu/rfidsec10/index.html 6/ 9/10- 6/11/10: SACMAT, 15th ACM Symposium on Access Control Models and Technologies, Pittsburgh, PA, USA; http://www.sacmat.org 6/10/10: CWECS, 1st International Workshop on Cloud, Wireless and e-Commerce Security, Fukuoka, Japan; http://dblab.csie.thu.edu.tw/CWECS; Submissions are due 6/11/10: Pairing, 4th International Conference on Pairing-based Cryptography, Yamanaka Hot Spring, Japan; http://www.thlab.net/pairing2010/; Submissions are due 6/11/10: TrustCol, 5th International Workshop on Trusted Collaboration, Held in conjunction with the CollaborateCom 2010, Chicago, Illinois, USA; http://scl.cs.nmt.edu/trustcol10/; Submissions are due 6/13/10: SA&PS4CS, 1st International Workshop on Scientific Analysis and Policy Support for Cyber Security, Held in conjunction with the 5th International Conference on Mathematical Methods, Models, and Architectures for Computer Networks Security (MMM-ACNS 2010), St. 
Petersburg, Russia; http://www.comsec.spb.ru/saps4cs10/; Submissions are due 6/13/10: FAST, 7th International Workshop on Formal Aspects of Security & Trust, Pisa, Italy; http://www.iit.cnr.it/FAST2010/; Submissions are due 6/13/10- 6/14/10: HOST, IEEE International Symposium on Hardware-Oriented Security and Trust, Anaheim, California, USA; http://www.engr.uconn.edu/HOST/ 6/14/10: ACSAC, 26th Annual Computer Security Applications Conference, Austin, Texas, USA; http://www.acsac.org; Submissions are due 6/14/10: STC, 5th Annual Workshop on Scalable Trusted Computing, Held in conjunction with ACM CCS 2010, Chicago, Illinois, USA; http://stc2010.trust.rub.de/; Submissions are due 6/14/10: D-SPAN, 1st International Workshop on Data Security and PrivAcy in wireless Networks, Held in conjunction with WoWMoM 2010, Montreal, QC, Canada; http://home.gwu.edu/~nzhang10/DSPAN2010/ 6/14/10- 6/15/10: MIST, 2nd International Workshop on Managing Insider Security Threats, Held in conjunction with IFIPTM 2010, Morioka, Iwate, Japan; http://isyou.hosting.paran.com/mist10/ 6/15/10: ISC, 13th Information Security Conference, Boca Raton, Florida; http://math.fau.edu/~isc2010/; Submissions are due 6/15/10: International Journal of Secure Software Engineering (IJSSE), Special Issue on Software Security Engineering Education; http://www.igi-global.com/ijsse; Submissions are due 6/15/10: DPM, International Workshop on Data Privacy Management, Held in conjunction with the ESORICS 2010, Athens, Greece; http://dpm2010.dyndns.org/; Submissions are due 6/15/10: SETOP, 3rd International Workshop on Autonomous and Spontaneous Security, Held in conjunction with ESORICS 2010, Athens, Greece; http://www.infres.enst.fr/wp/setop2010/; Submissions are due 6/16/10- 6/18/10: IFIP-TM, 4th IFIP International Conference on Trust Management, Morioka, Japan; http://www.ifip-tm2010.org/ 6/20/10: STM, 6th International Workshop on Security and Trust Management, Athens, Greece; http://www.isac.uma.es/stm10; Submissions are due 6/21/10- 6/23/10: Trust, 3rd International Conference on Trust and Trustworthy Computing, Berlin, Germany; http://www.trust2010.org/ 6/21/10- 6/23/10: DBSec, 24th Annual IFIP WG 11.3 Working Conference on Data and Applications Security, Rome, Italy; http://dbsec2010.dti.unimi.it 6/21/10- 6/24/10: OWASP-AppSec-Research, OWASP AppSec Research 2010, Stockholm, Sweden; http://www.owasp.org/index.php/OWASP_AppSec_Research_2010_-_Stockholm,_Sweden 6/22/10- 6/25/10: ACNS, 8th International Conference on Applied Cryptography and Network Security, Beijing, China; http://www.tcgchina.org/acns2010/ 6/25/10: ICDCS-SPCC, 1st International Workshop on Security and Privacy in Cloud Computing (ICDCS-SPCC), Held in conjunction with the IEEE International Conference on Distributed Computing Systems (ICDCS 2010), Genoa, Italy; http://www.ece.iit.edu/~ubisec/workshop.htm. 
6/27/10: PRITS, Workshop on Pattern Recognition for IT Security, Held in conjunction with DAGM 2010, Darmstadt, Germany; http://www.dagm2010.org/ws_prits.html; Submissions are due 6/28/10: PSDML, ECML/PKDD Workshop on Privacy and Security issues in Data Mining and Machine Learning, Barcelona, Spain; http://fias.uni-frankfurt.de/~dimitrakakis/workshops/psdml-2010/; Submissions are due 6/28/10- 6/30/10: IH, 12th Information Hiding Conference, Calgary, Alberta, Canada; http://ih2010.cpsc.ucalgary.ca 6/28/10- 7/ 2/10: SHPCS, 5th Workshop on Security and High Performance Computing Systems, Held in conjunction with the 6th International Wireless Communications and Mobile Computing Conference (IWCMC 2010), Caen, Normandy, France; http://leibniz.diiga.univpm.it/~spalazzi/caen/ 6/29/10- 7/ 1/10: TSP, 3rd IEEE International Symposium on Trust, Security and Privacy for Emerging Applications, Bradford, UK; http://trust.csu.edu.cn/conference/tsp2010/Call_for_Papers.htm 6/30/10: International Journal of Information Technologies and Systems Approach, Special Issue on Privacy and Security Issues in IT; http://www.igi-global.com/journals/details.asp?ID=6720&v=callForPapersSpecial; Submissions are due 6/30/10: Malware, 5th IEEE International Conference on Malicious and Unwanted Software, Nancy, France; http://malware10.loria.fr/; Submissions are due 7/ 2/10: EC2ND, 6th European Conference on Computer Network Defense, Berlin, Germany; http://2010.ec2nd.org; Submissions are due 7/ 8/10- 7/ 9/10: DIMVA, 7th Conference on Detection of Intrusions and Malware & Vulnerability Assessment, Bonn, Germany; http://www.dimva.org/dimva2010 7/ 9/10: TrustCom, IEEE/IFIP International Symposium on Trusted Computing and Communications, Hong Kong SAR, China; http://trust.csu.edu.cn/conference/trustcom2010; Submissions are due 7/14/10- 7/16/10: SOUPS, Symposium On Usable Privacy and Security, Redmond, WA, USA; http://cups.cs.cmu.edu/SOUPS/ 7/16/10: ICISS, 6th International Conference on Information Systems Security, Gandhinagar, India; http://www.cs.wisc.edu/iciss10/; Submissions are due 7/20/10: FCC, 6th Workshop on Formal and Computational Cryptography, Edinburgh, UK; http://research.microsoft.com/~fournet/fcc2010/ 7/21/10- 7/23/10: POLICY, IEEE International Symposium on Policies for Distributed Systems and Networks, Fairfax, Virginia, USA; http://www.ieee-policy.org 7/26/10- 7/28/10: SECRYPT, 5th International Conference on Security and Cryptography, Athens, Greece; http://www.secrypt.icete.org 8/ 1/10: Journal of Network and Computer Applications, Special Issue on Trusted Computing and Communications; http://www.elsevier.com/locate/jnca; Submissions are due 8/ 1/10: INTRUST, International Conference on Trusted Systems, Beijing, China; http://www.tcgchina.org; Submissions are due 8/ 9/10: CSET, 3rd Workshop on Cyber Security Experimentation and Test, Washington, DC, USA; http://www.usenix.org/cset10/cfpa/ 8/ 9/10: WOOT, 4th USENIX Workshop on Offensive Technologies, Washington, DC, USA; http://www.usenix.org/woot10/cfpa/ 8/ 9/10- 8/13/10: LIS, Workshop on Logics in Security, Copenhagen, Denmark; http://lis.gforge.uni.lu/index.html 8/10/10: HealthSec, 1st USENIX Workshop on Health Security and Privacy, Washington DC, USA; http://www.usenix.org/healthsec10/cfpa/ 8/10/10: HotSec, 5th USENIX Workshop on Hot Topics in Security, Washington DC, USA; http://www.usenix.org/events/hotsec10/cfp/ 8/11/10- 8/13/10: USENIX-Security, 19th USENIX Security Symposium, Washington, DC, USA; http://www.usenix.org/events/sec10/cfp/ 8/17/10- 
8/19/10: PST, 8th International Conference on Privacy, Security and Trust, Ottawa, Canada; http://pstnet.unb.ca/pst2010 8/21/10: FDTC, 7th Workshop on Fault Diagnosis and Tolerance in Cryptography, Held in conjunction with the CHES 2010, Santa Barbara, CA, USA; http://conferenze.dei.polimi.it/FDTC10/ 8/30/10- 9/ 3/10: TrustBus, 7th International Conference on Trust, Privacy & Security in Digital Business, Bilbao, Spain; http://www.isac.uma.es/trustbus10 8/31/10: Wiley Security and Communication Networks (SCN), Special Issue on Defending Against Insider Threats and Internal Data Leakage; http://isyou.hosting.paran.com/mist10/SCN-SI-10.pdf; Submissions are due 9/ 6/10- 9/ 9/10: MMM-ACNS, 5th International Conference on Mathematical Methods, Models, and Architectures for Computer Networks Security, St. Petersburg, Russia; http://comsec.spb.ru/mmm-acns10/ 9/ 7/10- 9/10/10: SECURECOMM, 6th International Conference on Security and Privacy in Communication Networks, Singapore; http://www.securecomm.org/ 9/ 7/10- 9/11/10: SIN, 3rd International Conference on Security of Information and Networks, Taganrog, Rostov-on-Don, Russia; http://www.sinconf.org/sin2010/ 9/ 9/10: SA&PS4CS, 1st International Workshop on Scientific Analysis and Policy Support for Cyber Security, Held in conjunction with the 5th International Conference on Mathematical Methods, Models, and Architectures for Computer Networks Security (MMM-ACNS 2010), St. Petersburg, Russia; http://www.comsec.spb.ru/saps4cs10/ 9/13/10: ESSoS, International Symposium on Engineering Secure Software and Systems, Madrid, Spain; http://distrinet.cs.kuleuven.be/events/essos2011/; Submissions are due 9/13/10- 9/14/10: NeFX, 2nd Annual ACM Northeast Digital Forensics Exchange, Washington, DC, USA; http://nefx.cs.georgetown.edu/ 9/13/10- 9/15/10: SCN, 7th Conference on Security and Cryptography for Networks, Amalfi, Italy; http://scn.dia.unisa.it/ 9/13/10- 9/16/10: SCC, 2nd International Workshop on Security in Cloud Computing, Held in Conjunction with ICPP 2010, San Diego, California, USA; http://bingweb.binghamton.edu/~ychen/SCC2010.htm 9/14/10: VizSec, 7th International Symposium on Visualization for Cyber Security, Ottawa, Ontario, Canada; http://www.vizsec2010.org 9/15/10: IEEE Transactions on Information Forensics and Security, Special Issue on Using the Physical Layer for Securing the Next Generation of Communication Systems; http://www.signalprocessingsociety.org/publications/periodicals/forensics/forensics-authors-info/; Submissions are due 9/15/10: CODASPY, 1st ACM Conference on Data and Application Security and Privacy, San Antonio, TX, USA; http://www.codaspy.org/; Submissions are due 9/15/10: WIFS, International Workshop on Information Forensics & Security, Seattle, WA, USA; http://www.wifs10.org; Submissions are due 9/15/10: MetriSec, 6th International Workshop on Security Measurements and Metrics, Held in conjunction with the International Symposium on Empirical Software Engineering and Measurement (ESEM 2010), Bolzano-Bozen, Italy; http://www.cs.kuleuven.be/conference/MetriSec2010/ 9/15/10- 9/17/10: RAID, 13th International Symposium on Recent Advances in Intrusion Detection, Ottawa, Canada; http://www.RAID2010.org 9/16/10- 9/17/10: FAST, 7th International Workshop on Formal Aspects of Security & Trust, Pisa, Italy; http://www.iit.cnr.it/FAST2010/ 9/20/10- 9/22/10: ESORICS, 15th European Symposium on Research in Computer Security, Athens, Greece; http://www.esorics2010.org 9/20/10- 9/23/10: IFIP-TC9-HCC9, IFIP TC-9 HCC-9 Stream on Privacy
and Surveillance, Held in conjunction with the IFIP World Computer Congress 2010, Brisbane, Australia; http://www.wcc2010.org/migrated/HCC92010/HCC92010_cfp.html 9/20/10- 9/24/10: ADBIS, 14th East-European Conference on Advances in Databases and Information Systems, Track on Personal Identifiable Information: Privacy, Ethics, and Security, Novi Sad; http://perun.im.ns.ac.yu/adbis2010/organization.php 9/20/10- 9/21/10: NIST Cryptographic Key Management Workshop http://csrc.nist.gov/groups/ST/key_mgmt/index.html Gaithersburg Maryland 9/21/10: PRITS, Workshop on Pattern Recognition for IT Security, Held in conjunction with DAGM 2010, Darmstadt, Germany; http://www.dagm2010.org/ws_prits.html 9/23/10: DPM, International Workshop on Data Privacy Management, Held in conjunction with the ESORICS 2010, Athens, Greece; http://dpm2010.dyndns.org/ 9/23/10: SETOP, 3rd International Workshop on Autonomous and Spontaneous Security, Held in conjunction with ESORICS 2010, Athens, Greece; http://www.infres.enst.fr/wp/setop2010/ 9/23/10- 9/24/10: STM, 6th International Workshop on Security and Trust Management, Athens, Greece; http://www.isac.uma.es/stm10 9/24/10: PSDML, ECML/PKDD Workshop on Privacy and Security issues in Data Mining and Machine Learning, Barcelona, Spain; http://fias.uni-frankfurt.de/~dimitrakakis/workshops/psdml-2010/ 10/ 4/10: SafeConfig, 2nd Workshop on Assurable & Usable Security Configuration, Held in conjunction with ACM CCS 2010, Chicago, Illinois, USA; http://hci.sis.uncc.edu/safeconfig/ 10/ 4/10: STC, 5th Annual Workshop on Scalable Trusted Computing, Held in conjunction with ACM CCS 2010, Chicago, Illinois, USA; http://stc2010.trust.rub.de/ 10/ 4/10-10/ 8/10: ACM-CCS, 17th ACM Conference on Computer and Communications Security, Chicago, IL, USA; http://www.sigsac.org/ccs/CCS2010/cfp.shtml 10/ 5/10: NPSec, 6th Workshop on Secure Network Protocols, Held in conjunction with ICNP 2010, Kyoto, Japan; http://webgaki.inf.shizuoka.ac.jp/~npsec2010/ 10/ 9/10: TrustCol, 5th International Workshop on Trusted Collaboration, Held in conjunction with the CollaborateCom 2010, Chicago, Illinois, USA; http://scl.cs.nmt.edu/trustcol10/ 10/18/10-10/20/10: ICTCI, 4th International Conference on Trusted Cloud Infrastructure, Shanghai, China; http://ppi.fudan.edu.cn/ictci2010/index.html 10/18/10-10/20/10: eCRS, eCrime Researchers Summit, Dallas, Texas, USA; http://www.ecrimeresearch.org/2010/cfp.html 10/20/10-10/21/10: Malware, 5th IEEE International Conference on Malicious and Unwanted Software, Nancy, France; http://malware10.loria.fr/ 10/25/10-10/28/10: ISC, 13th Information Security Conference, Boca Raton, Florida; http://math.fau.edu/~isc2010/ 10/28/10-10/29/10: EC2ND, 6th European Conference on Computer Network Defense, Berlin, Germany; http://2010.ec2nd.org 11/ 4/10-11/ 6/10: SIDEUS, 1st International Workshop on Securing Information in Distributed Environments and Ubiquitous Systems, Fukuoka, Japan; http://www.sideus-conf.org 11/ 4/10-11/ 6/10: CWECS, 1st International Workshop on Cloud, Wireless and e-Commerce Security, Fukuoka, Japan; http://dblab.csie.thu.edu.tw/CWECS 11/ 8/10-11/10/10: HST, 10th IEEE International Conference on Technologies for Homeland Security, Waltham, MA, USA; http://ieee-hst.org/ 11/18/10-11/19/10: IDMAN, 2nd IFIP WG 11.6 Working Conference on Policies & Research in Identity Management, Oslo, Norway; http://ifipidman2010.nr.no/ifipidman2010/index.php5/Main_Page 11/22/10-11/23/10: GameSec, The Inaugural Conference on Decision and Game Theory for Security, Berlin, 
Germany; http://www.gamesec-conf.org/ 12/ 6/10-12/10/10: ACSAC, 26th Annual Computer Security Applications Conference, Austin, Texas, USA; http://www.acsac.org 12/11/10-12/13/10: TrustCom, IEEE/IFIP International Symposium on Trusted Computing and Communications, Hong Kong SAR, China; http://trust.csu.edu.cn/conference/trustcom2010 12/12/10-12/15/10: WIFS, International Workshop on Information Forensics & Security, Seattle, WA, USA; http://www.wifs10.org 12/13/10-12/15/10: Pairing, 4th International Conference on Pairing-based Cryptography, Yamanaka Hot Spring, Japan; http://www.thlab.net/pairing2010/ 12/13/10-12/15/10: INTRUST, International Conference on Trusted Systems, Beijing, China; http://www.tcgchina.org 12/15/10-12/19/10: ICISS, 6th International Conference on Information Systems Security, Gandhinagar, India; http://www.cs.wisc.edu/iciss10/ 2/ 9/11- 2/10/11: ESSoS, International Symposium on Engineering Secure Software and Systems, Madrid, Spain; http://distrinet.cs.kuleuven.be/events/essos2011/ 2/21/11- 2/23/11: CODASPY, 1st ACM Conference on Data and Application Security and Privacy, San Antonio, TX, USA; http://www.codaspy.org/

____________________________________________________________________
Journal, Conference and Workshop Calls-for-Papers (new since Cipher E87)
____________________________________________________________________

-------------------------------------------------------------------------
CSET 2010 3rd Workshop on Cyber Security Experimentation and Test, Washington, DC, USA, August 9, 2010. (Submissions due 24 May 2010)
http://www.usenix.org/cset10/cfpa/

The workshop invites you to submit papers on the science, design, architecture, construction, operation, and use of cyber security experiments in network testbeds and infrastructures. Topics of interest include but are not limited to:
- Science of security/testbed experimentation (data and tools to achieve realistic experiment setups/scenarios; diagnosis of and methodologies for dealing with experimental artifacts; support for experimentation on a large scale (virtualization, federation, high-fidelity scale-down); tools and methodologies to achieve, and metrics to measure, correctness, repeatability, and sharing of experiments)
- Testbeds and methodologies (tools, methodologies, and infrastructure that support risky experimentation; support for experimentation in emerging security topics (cyber-physical systems, wireless, botnets, etc.); novel experimentation approaches (e.g., coupling of emulation and simulation); experience in designing or deploying secure testbeds; instrumentation and automation of experiments, and their archiving, preservation, and visualization; fair sharing of testbed resources)
- Hands-on security education (experiences teaching security classes that use hands-on security experiments for homework, in-class demonstrations, or class projects; experiences from red team/blue team exercises)

-------------------------------------------------------------------------
WOOT 2010 4th USENIX Workshop on Offensive Technologies, Washington, DC, USA, August 9, 2010. (Submissions due 28 May 2010)
http://www.usenix.org/woot10/cfpa/

Computer security is unique among systems disciplines in that practical details matter and concrete case studies keep the field grounded in practice. WOOT provides a forum for high-quality, peer-reviewed papers discussing tools and techniques for attack.
Submission topics include:
- Vulnerability research (software auditing, reverse engineering)
- Penetration testing
- Exploit techniques and automation
- Network-based attacks (routing, DNS, IDS/IPS/firewall evasion)
- Reconnaissance (scanning, software, and hardware fingerprinting)
- Malware design and implementation (rootkits, viruses, bots, worms)
- Denial-of-service attacks
- Web and database security
- Weaknesses in deployed systems (VoIP, telephony, wireless, games)
- Practical cryptanalysis (hardware, DRM, etc.)

-------------------------------------------------------------------------
SIDEUS 2010 1st International Workshop on Securing Information in Distributed Environments and Ubiquitous Systems, Fukuoka, Japan, November 4-6, 2010. (Submissions due 30 May 2010)
http://www.sideus-conf.org

At the present time, the maturity of research in distributed systems such as P2P, Grid, Cloud, and Internet computing has brought new problems to the fore, among them security. In systems where information flows freely across the network, the task of securing it becomes a real concern, and thus an interesting research challenge. For that reason, security is becoming one of the key issues in evaluating such systems, and it is important to determine which security mechanisms are available and how they fit each particular scenario. The aim of this workshop is to provide a forum for the discussion of ideas regarding the current challenges of, and solutions for, security in rapidly developing environments such as P2P, Grid, Cloud, and Internet computing. The main topics include (but are not limited to):
- Securing the Internet of Things (IoT)
- Membership and access control
- Identity management in distributed systems
- Security in JXTA-based applications
- Privacy and anonymity technologies
- Secure distributed storage
- Security issues in Vehicular Networks (VANETs)
- Securing P2P networks against third-party attacks
- Security and privacy in Delay-Tolerant Networks (DTN)
- Integrating security in protocols
- Assessment of information security

-------------------------------------------------------------------------
eCRS 2010 eCrime Researchers Summit, Dallas, Texas, USA, October 18-20, 2010.
(Submissions due 30 May 2010)
http://www.ecrimeresearch.org/2010/cfp.html

eCRS 2010 will bring together academic researchers, security practitioners, and law enforcement to discuss all aspects of electronic crime and ways to combat it. Topics of interest include (but are not limited to):
- Phishing, rogue-AV, pharming, click-fraud, crimeware, extortion and emerging attacks
- Technical, legal, political, social and psychological aspects of fraud and fraud prevention
- Malware, botnets, ecriminal/phishing gangs and collaboration, or money laundering
- Techniques to assess the risks and yields of attacks and the success rates of countermeasures
- Delivery techniques, including spam, voice mail and rank manipulation; and countermeasures
- Spoofing of different types, and applications to fraud
- Techniques to avoid detection, tracking and takedown; and ways to block such techniques
- Honeypot design, data mining, and forensic aspects of fraud prevention
- Design and evaluation of user interfaces in the context of fraud and network security
- Best practices related to digital forensics tools and techniques, investigative procedures, and evidence acquisition, handling and preservation

-------------------------------------------------------------------------
GameSec 2010 The Inaugural Conference on Decision and Game Theory for Security, Berlin, Germany, November 22-23, 2010. (Submissions due 31 May 2010)
http://www.gamesec-conf.org/

Securing complex and networked systems and managing the associated risks become increasingly important as such systems play an indispensable role in modern life at the turn of the information age. Concurrently, the security of ubiquitous communication, data, and computing poses novel research challenges. Security is a multi-faceted problem due to the complexity of the underlying hardware, software, and network interdependencies, as well as human and social factors. It involves decision making at multiple levels and on multiple time scales, given the limited resources available to both malicious attackers and the administrators defending networked systems. The GameSec conference aims to bring together researchers working to establish a theoretical foundation for making resource allocation decisions that balance available capabilities and perceived security risks in a principled manner. The conference focuses on analytical models based on game, information, communication, optimization, decision, and control theories applied to diverse security topics. At the same time, the connection between theoretical models and real-world security problems is emphasized, to establish the important feedback loop between theory and practice. Observing the scarcity of venues for researchers who are trying to develop a deeper theoretical understanding of the underlying incentive and resource allocation issues in security, we believe that GameSec will fill an important void and serve as a distinguished forum of the highest standards for years to come.
Topics of interest include (but are not limited to):
- Security games
- Security and risk management
- Mechanism design and incentives
- Decentralized security algorithms
- Security of networked systems
- Security of Web-based services
- Security of social networks
- Intrusion and anomaly detection
- Resource allocation for security
- Optimized response to malware
- Identity management
- Privacy and security
- Reputation and trust
- Information security and watermarking
- Physical layer security in wireless networks
- Information theoretic aspects of security
- Adversarial machine learning
- Distributed learning for security
- Cross-layer security
- Usability and security
- Human behavior and security
- Dynamic control of security systems
- Organizational aspects of risk management
- Cooperation and competition in security

-------------------------------------------------------------------------
NeFX 2010 2nd Annual ACM Northeast Digital Forensics Exchange, Washington, DC, USA, September 13-14, 2010. (Submissions due 1 June 2010)
http://nefx.cs.georgetown.edu/

Practitioners in digital forensics face many challenges and problems, be they from law enforcement, the intelligence or government community, or private practice. Criminal activity, system intrusions, and computer misuse are endemic in today's networked world. Today's state-of-the-art digital forensic technology for correlating large amounts of often distributed digital evidence, reconstructing crime scenes, and eventually mapping them to a physical criminal scenario can best be described as ad hoc and fragmented. We have also seen that most criminal investigations involve crime scenes that co-exist in cyberspace and the physical world. There is an urgent need to move the capabilities and foundation of digital forensics from an ad hoc basis to one of science. Digital forensics is an inherently complex cross-disciplinary field that deals with complicated and potentially inconsistent issues and goals cutting across technical, legal, and law enforcement domains. The ACM Northeast Digital Forensics Exchange (NeFX), sponsored in part by the National Science Foundation and the Army Research Office, is designed to foster collaboration on digital forensics and information assurance among federal and state law enforcement, academia, and industry. Our goal is to bring together leading practitioners and academics in order to yield partnerships that advance research on digital forensic science through mutual sharing of the problems of practice and research.
All topic areas related to digital forensics are of interest and in scope, including, but not limited to:
- Imaging/Monitoring
- Network Forensics
- Small-scale and Mobile Device Forensics
- Data Processing and Analytics
- Software Forensics and Malware Analysis
- File Carving and File System Analysis
- Anti-forensics Techniques
- Digital Forensics (from a signal processing perspective)
- Evidence Modeling and Principles
- Live and Memory Analysis
- Multimedia Forensics
- Database, Web, and Cloud Computing System Forensics
- Digital Evidence Storage and Preservation
- Forensic Tool Validation: Methodologies and Principles
- Cyber-crime Strategy Analysis & Modeling
- Advanced search, analysis, and presentation of digital evidence
- Courtroom expert witness and case presentation
- Case studies
- Legal and Sociological Issues
- Intelligence Issues in Forensics

-------------------------------------------------------------------------
NPSec 2010 6th Workshop on Secure Network Protocols, Held in conjunction with ICNP 2010, Kyoto, Japan, October 5, 2010. (Submissions due 4 June 2010)
http://webgaki.inf.shizuoka.ac.jp/~npsec2010/

NPSec 2010 focuses on two general areas. The first focus is the development and analysis of secure or hardened protocols for the operation (establishment and maintenance) of network infrastructure, including such targets as secure multidomain, ad hoc, sensor, or overlay networks, or other related target areas. This can include new protocols, enhancements to existing protocols, protocol analysis, and new attacks on existing protocols. The second focus is on employing such secure network protocols to create or enhance network applications. Examples include collaborative firewalls, incentive strategies for multiparty networks, and deployment strategies to enable secure applications. NPSec 2010 particularly welcomes new ideas on security in the context of future Internet design, such as architectural considerations for future Internet security and new primitives for supporting secure network protocol and application design. Topics of interest include but are not limited to:
- Security in future Internet architectures (role of security in future architectures, integrating security in future protocols and applications)
- Secure and/or resilient network protocols (e.g., internetworking/routing, MANETs, LANs and WLANs, mobile/cellular data networks, P2P and other overlay networks, federated trust systems, sensor networks)
- Vulnerabilities of existing protocols and applications (both theoretical and case studies), including attacks
- Key distribution/management
- Intrusion detection and response
- Incentive systems for P2P systems and MANET routing
- Secure protocol configuration and deployment
- Challenges and security protocols for social networks

-------------------------------------------------------------------------
SafeConfig 2010 2nd Workshop on Assurable & Usable Security Configuration, Held in conjunction with ACM CCS 2010, Chicago, Illinois, USA, October 4, 2010. (Submissions due 7 June 2010)
http://hci.sis.uncc.edu/safeconfig/

A typical enterprise network might have hundreds of security appliances such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. An enterprise network may also have other non-security devices such as routers, name servers, protocol gateways, etc. These must be logically integrated into a security architecture satisfying security goals at and across multiple networks.
Logical integration is accomplished by consistently setting thousands of configuration variables and rules on the devices. The configuration must be constantly adapted to optimize protection and block prospective attacks, and it must be tuned to balance security with usability. These challenges are compounded by the deployment of mobile devices and ad hoc networks. The resulting security configuration complexity places a heavy burden on both regular users and experienced administrators, and dramatically reduces overall network assurability and usability. This workshop will bring together academic as well as industry researchers to exchange experiences, discuss challenges, and propose solutions for offering assurable and usable security.

-------------------------------------------------------------------------
CWECS 2010 1st International Workshop on Cloud, Wireless and e-Commerce Security, Fukuoka, Japan, November 4-6, 2010. (Submissions due 10 June 2010)
http://dblab.csie.thu.edu.tw/CWECS

In the last few years, with the growing number of Cloud computing, Wireless network, and E-Commerce (CWEC) applications and studies, security issues and pivotal challenges, including integrity verification, authentication, access control, and attack prevention, have also been increasing. Security technologies have been booming in recent years; however, achieving the whole security target for Cloud computing, wireless networks, and e-commerce requires much more than the mere application of current core technologies. The main purpose of this workshop is to bring together researchers from academia and industry as well as practitioners to share ideas, problems, and solutions relating to the multifaceted aspects of security technology in Cloud computing, wireless networks, and e-commerce, particularly aiming to promote state-of-the-art research in this area. Topics include, but are not limited to:
- Handover Security
- Network Mobility Security
- VoIP Security
- Mobile Agent Security
- Wireless Grid Security
- RFID Security
- Cell Phone, PDA and Portable Device Security
- Mobile, Ad Hoc and Sensor Network Security
- 3G-4G Communication Security
- Access Control, Authentication and Authorization for CWEC
- Cryptography, Cryptanalysis and Digital Signatures for CWEC
- Key Management and Recovery for CWEC
- Trust Negotiation, Establishment and Management for CWEC
- Network Management for CWEC
- Performance Evaluation for CWEC
- Privacy, Data Integrity and Data Protection for CWEC
- Computer Forensics for CWEC
- Security Threats and Countermeasures for CWEC
- Cross-layer Design for CWEC

-------------------------------------------------------------------------
Pairing 2010 4th International Conference on Pairing-based Cryptography, Yamanaka Hot Spring, Japan, December 13-15, 2010. (Submissions due 11 June 2010)
http://www.thlab.net/pairing2010/

The focus of Pairing 2010 is on all aspects of pairing-based cryptography, including: cryptographic primitives and protocols, mathematical foundations, software and hardware implementation, and applied security.

-------------------------------------------------------------------------
TrustCol 2010 5th International Workshop on Trusted Collaboration, Held in conjunction with the CollaborateCom 2010, Chicago, Illinois, USA, October 9, 2010.
(Submissions due 11 June 2010)
http://scl.cs.nmt.edu/trustcol10/

The ongoing, rapid developments in information systems technologies and networking have enabled significant opportunities for streamlining decision-making processes and maximizing productivity through distributed collaborations that facilitate unprecedented levels of sharing of information and computational resources. Emerging collaborative environments need to provide efficient support for seamless integration of heterogeneous technologies such as mobile devices and infrastructures, web services, grid computing systems, online social networks, various operating environments, and diverse COTS products. Such heterogeneity, however, introduces significant security and privacy challenges for distributed collaborative applications. Balancing the competing goals of collaboration and security is difficult because interaction in collaborative systems is targeted towards making people, information, and resources available to all who need them, whereas information security seeks to ensure the availability, confidentiality, and integrity of these elements while providing them only to those with proper trustworthiness. The key goal of this workshop is to foster active interaction among diverse researchers and practitioners, and to generate added momentum towards research on finding viable solutions to the security and privacy challenges faced by current and future collaborative systems and infrastructures. We solicit unpublished research papers that address theoretical issues and practical implementations/experiences related to security and privacy solutions for collaborative systems. Topics of interest include, but are not limited to:
- Secure dynamic coalition environments
- Secure distributed multimedia collaboration
- Privacy control in collaborative environments
- Secure workflows for collaborative computing
- Policy-based management of collaborative workspace
- Secure middleware for large scale collaborative infrastructures
- Security and privacy issues in mobile collaborative applications
- Security frameworks and architectures for trusted collaboration
- Secure interoperation in multidomain collaborative environments
- Identity management for large scale collaborative infrastructures
- Semantic web technologies for secure collaborative infrastructure
- Trust models, trust negotiation/management for collaborative systems
- Access control models and mechanisms for collaboration environments
- Protection models and mechanisms for peer-to-peer collaborative environments
- Delegation, accountability, and information flow control in collaborative applications
- Intrusion detection, recovery and survivability of collaborative systems/infrastructures
- Security of web services and grid technologies for supporting multidomain collaborative applications
- Security and privacy challenges in cloud-based collaborative applications

-------------------------------------------------------------------------
SA&PS4CS 2010 1st International Workshop on Scientific Analysis and Policy Support for Cyber Security, Held in conjunction with the 5th International Conference on Mathematical Methods, Models, and Architectures for Computer Networks Security (MMM-ACNS 2010), St. Petersburg, Russia, September 9, 2010. (Submissions due 13 June 2010)
http://www.comsec.spb.ru/saps4cs10/

The workshop is dedicated to the methods of scientific analysis and policy support for response to cyber intrusions and attacks.
The main topics of SA&PS4CS 2010 are the detection, discrimination, and attribution of various malefactor activities; responses to cyber intrusions and attacks, including national-level information operations; and the identification of emergent cyber technologies supporting social and political activity management and trans-national distributed computing management.

-------------------------------------------------------------------------
FAST 2010 7th International Workshop on Formal Aspects of Security & Trust, Pisa, Italy, September 16-17, 2010. (Submissions due 13 June 2010)
http://www.iit.cnr.it/FAST2010/

The seventh International Workshop on Formal Aspects of Security and Trust (FAST 2010) aims to continue the successful efforts of the previous FAST workshops, fostering cooperation among researchers in the areas of security and trust. FAST focuses on the formal models of security and trust that are needed to state goals and policies for these interactions. We also seek new and innovative techniques for establishing consequences of these formal models. Implementation approaches for such techniques are also welcome. Suggested submission topics include, but are not limited to:
- Formal models for security, trust and reputation
- Security protocol design and analysis
- Logics for security and trust
- Trust-based reasoning
- Distributed trust management systems
- Digital asset protection
- Data protection
- Privacy and ID management issues
- Information flow analysis
- Language-based security
- Security and trust aspects in ubiquitous computing
- Validation/Analysis tools
- Web/Grid services security/trust/privacy
- Security and risk assessment
- Resource and access control
- Case studies

-------------------------------------------------------------------------
ACSAC 2010 26th Annual Computer Security Applications Conference, Austin, Texas, USA, December 6-10, 2010. (Submissions due 14 June 2010)
http://www.acsac.org

ACSAC is an internationally recognized forum for practitioners, researchers, and developers in information systems security. ACSAC's technical track is well established for presenting academically oriented research results, particularly those that have tangible practical applications. Topics of interest include, but are not limited to:
- access control
- applied cryptography
- audit and audit reduction
- biometrics
- certification and accreditation
- cybersecurity
- database security
- denial of service protection
- distributed systems security
- electronic commerce security
- enterprise security management
- forensics
- identification & authentication
- identity management
- incident response planning
- information survivability
- insider threat protection
- integrity
- intellectual property rights
- intrusion detection
- mobile and wireless security
- multimedia security
- operating systems security
- peer-to-peer security
- privacy and data protection
- product evaluation/compliance
- risk/vulnerability assessment
- securing cloud infrastructures
- security engineering and management
- security in IT outsourcing
- service oriented architectures
- software assurance
- trust management
- virtualization security
- VOIP security
- Web 2.0/3.0 security

-------------------------------------------------------------------------
STC 2010 5th Annual Workshop on Scalable Trusted Computing, Held in conjunction with ACM CCS 2010, Chicago, Illinois, USA, October 4, 2010.
(Submissions due 14 June 2010)
http://stc2010.trust.rub.de/

Built on the continued success of the previous STC workshops (starting with ACM STC'06), this workshop focuses on fundamental technologies of trusted computing (in a broad sense, with or without TPMs) and their applications in large-scale systems -- those involving large numbers of users and parties with varying degrees of trust. The workshop is intended to serve as a forum for researchers as well as practitioners to disseminate and discuss recent advances and emerging issues. Topics of interest include but are not limited to:

Enabling scalable trusted computing
- better approaches to measurement management
- better approaches to attestation
- cryptographic support for trusted computing
- architectural support for trusted computing
- security policies and models of trusted computing
- access control for trusted computing
- architecture and implementation technologies for trusted platforms
- virtualization technology for trusted computing
- establishing trust in software, users, and services
- intrusion tolerance/resilience in trusted computing
- hardware-based approaches to trusted computing
- software-based approaches to trusted computing
- censorship-freeness in trusted computing
- principles and technologies for handling scale
- tackling complexity introduced by scalability

Applications of trusted computing
- sustainable services based on trusted computing
- trusted cloud computing
- trusted embedded computing
- killer applications of trusted computing
- case studies in trusted computing
- scalable trust and services
- large-scale trusted computing

Pushing the limits
- limitations, alternatives and tradeoffs regarding trusted computing
- realizing trustworthy computing via trusted computing
- understanding expectedness of system properties
- understanding system-level trust and trustworthiness
- novel architectures for putting the pieces together for STC

-------------------------------------------------------------------------
International Journal of Secure Software Engineering (IJSSE), Special Issue on Software Security Engineering Education, March/April 2011. (Submissions due 15 June 2010)
http://www.igi-global.com/ijsse

Guest editors: Nancy R. Mead (Carnegie Mellon University, U.S.A.) and Dan Shoemaker (University of Detroit Mercy, U.S.A.)

We can improve software security by improving how we teach software security engineering. However, the problem with teaching correct software security engineering practice is that software security practices could be relevant in a number of places within the Software Engineering Body of Knowledge (SWEBOK). Consequently, secure software assurance content might legitimately fit into many different places in the software engineering education process and, in that respect, could be taught in many different ways. This disjointed approach is not an acceptable method for systematically disseminating secure software engineering practice. There should be coordination and, when possible, standardization of the way that educators promulgate secure software assurance content. Otherwise, we run the risk of producing software engineers with potentially conflicting understandings of the same concepts. Unfortunately, there are two practical barriers to achieving coordinated and standardized software security engineering teaching. First, it is not absolutely clear what specific knowledge and skills should be taught, and in what places.
Second, there are currently no validated methods for delivering that
knowledge once it has been identified. As a result, this special issue
of the journal seeks insights into how to confront the challenges of
ensuring suitable and appropriate teaching of software security
engineering content in higher education. The special issue is designed
for software professionals and educators to explore innovative
approaches to software security engineering education. The following
are some suggested topics, as they relate to software security
engineering:
- curricula (undergraduate, graduate, or training)
- course materials
- model delivery methods
- student capstone projects and practical experience
- model syllabi
- learning models or unique learning interventions
- distance or asynchronous delivery technologies or approaches
- business or assurance cases
- case study methodologies
- cross-disciplinary collaborations
- literature reviews or supporting materials
- PowerPoint presentations

-------------------------------------------------------------------------

DPM 2010 International Workshop on Data Privacy Management, Held in
conjunction with ESORICS 2010, Athens, Greece, September 23, 2010.
(Submissions due 15 June 2010)
http://dpm2010.dyndns.org/

The aim of this workshop is to discuss and exchange ideas related to
privacy data management. We invite researchers and practitioners
working in privacy, security, trustworthy data systems and related
areas to submit their original papers to this workshop. The main
topics include, but are not limited to:
- Privacy Information Administration
- Privacy Policy-based Infrastructures and Architectures
- Privacy-oriented Access Control Languages
- Privacy in Trust Management
- Privacy Data Integration
- Privacy Risk Assessment and Assurance
- Privacy Services
- Privacy Policy Analysis
- Query Execution over Privacy-Sensitive Data
- Privacy-Preserving Data Mining
- Hippocratic and Watermarking Databases
- Privacy for Integrity-based Computing
- Privacy Monitoring and Auditing
- Privacy in Social Networks
- Privacy in Ambient Intelligence (AmI) Applications
- Conciliation of Individual Privacy and Corporate/National Security
- Privacy in Computer Networks
- Privacy and RFIDs
- Privacy in Sensor Networks

-------------------------------------------------------------------------

ISC 2010 13th Information Security Conference, Boca Raton, Florida,
USA, October 25-28, 2010.
(Submissions due 15 June 2010)
http://math.fau.edu/~isc2010/

ISC is an annual international conference covering research (both
theory and applications) in Information Security. The conference seeks
submissions from academia, industry, and government that present novel
research on all theoretical and practical aspects of Information
Security.
Topics of interest include, but are not limited to:
- access control
- accountability
- anonymity and pseudonymity
- applied cryptography
- authentication
- biometrics
- computer forensics
- cryptographic protocols
- database security
- data protection
- data/system integrity
- digital rights management
- economics of security and privacy
- electronic fraud
- formal methods in security
- identity management
- information dissemination control
- information hiding and watermarking
- intrusion detection
- network security
- peer-to-peer security
- privacy
- secure group communications
- security and privacy in pervasive/ubiquitous computing
- security in information flow
- security in IT outsourcing
- security for mobile code
- security of grid computing
- security of eCommerce, eBusiness and eGovernment
- security in location services
- security modeling and architectures
- security models for ambient intelligence environments
- security in social networks
- trust models and trust management policies
- embedded security

-------------------------------------------------------------------------

SETOP 2010 3rd International Workshop on Autonomous and Spontaneous
Security, Held in conjunction with ESORICS 2010, Athens, Greece,
September 23, 2010.
(Submissions due 15 June 2010)
http://www.infres.enst.fr/wp/setop2010/

Security and reliability have become a major concern for service
oriented applications as well as for communication systems and
networks. With the need for evolution, if not revolution, of current
network architectures and the Internet, autonomous and spontaneous
management will be a key feature of future networks and information
systems. In this context, security is an essential property. It must
be considered at the early stages of conception of these systems, and
it must itself be designed to be autonomous and spontaneous. Future
networks and systems must be able to configure themselves
automatically with respect to their security policies. The security
policy specification must be dynamic and must adapt itself to the
changing environment. These networks and systems should interoperate
securely when their respective security policies are heterogeneous and
possibly conflicting. They must be able to autonomously evaluate the
impact of an intrusion in order to spontaneously select the
appropriate and relevant response when an intrusion is detected.
Autonomous and spontaneous security is a major requirement of future
networks and systems. Of course, it is crucial to address this issue
in the different wireless and mobile technologies available today,
such as RFID, Wi-Fi, WiMAX, 3G, etc. Other technologies such as ad hoc
and sensor networks, which introduce new types of services, share
similar requirements for autonomous and spontaneous management of
security. The SETOP Workshop seeks submissions that present research
results on all aspects of spontaneous and autonomous security.
Topics of interest include, but are not limited to, the following:
- Security policy deployment
- Self evaluation of risk and impact
- Distributed intrusion detection
- Autonomous and spontaneous response
- Trust establishment
- Selfish behaviour and collaboration enforcement
- Security in autonomous networks
- Security in ad hoc networks
- Security in sensor/RFID networks
- Security of Next Generation Networks
- Security in Cloud Computing
- Security of Service Oriented Architectures
- Security of opportunistic networks
- Privacy in self-organized networks
- Secure localization
- Context-aware and ubiquitous computing
- Secure interoperability and negotiation
- Self-organization in secure routing
- Identity management
- Modelling and validation of security

-------------------------------------------------------------------------

STM 2010 6th International Workshop on Security and Trust Management,
Athens, Greece, September 23-24, 2010.
(Submissions due 20 June 2010)
http://www.isac.uma.es/stm10

STM (Security and Trust Management) is a working group of ERCIM (the
European Research Consortium for Informatics and Mathematics). Topics
of interest include, but are not limited to:
- access control
- cryptography
- digital rights management
- economics of security
- key management
- ICT for securing digital as well as physical assets
- identity management
- networked systems security
- privacy and anonymity
- reputation systems and architectures
- security and trust management architectures
- semantics and computational models for security and trust
- trust assessment and negotiation
- trust in mobile code
- trust in pervasive environments
- trust models
- trust management policies
- trusted platforms and trustworthy systems
- trustworthy user devices

-------------------------------------------------------------------------

PRITS 2010 Workshop on Pattern Recognition for IT Security, Held in
conjunction with DAGM 2010, Darmstadt, Germany, September 21, 2010.
(Submissions due 27 June 2010)
http://www.dagm2010.org/ws_prits.html

Graphical data, such as images and video streams, are of growing
importance in several disciplines of IT security, ranging from
biometric authentication through digital image forensics to visual
passwords and CAPTCHAs. Consequently, methods of image analysis and
pattern recognition are increasingly used in security-critical
applications. The aim of the workshop is to bring together researchers
from the pattern recognition and security communities in order to
exchange the latest research results. Topics of interest include, but
are not limited to:
- Novel biometric authentication techniques
- Novel information hiding paradigms
- Image authentication and robust hashing
- Digital forensics
- Image and video analysis for security
- Visual passwords
- CAPTCHAs

-------------------------------------------------------------------------

PSDML 2010 ECML/PKDD Workshop on Privacy and Security issues in Data
Mining and Machine Learning, Barcelona, Spain, September 24, 2010.
(Submissions due 28 June 2010)
http://fias.uni-frankfurt.de/~dimitrakakis/workshops/psdml-2010/

Privacy- and security-related aspects of data mining and machine
learning have been the topic of active research during the last few
years, owing to the many applications with privacy and/or security
requirements.
Privacy issues have become a serious concern due to the collection,
analysis and sharing of personal data by privately owned companies and
public sector organizations for various purposes, such as data
publishing or data mining. This has led to the development of
privacy-preserving data mining and machine learning methods. More
general security considerations arise in applications such as
biometric authentication, intrusion detection and response, and
malware classification. This has led to the development of adversarial
learning algorithms, while parallel work in multi-agent settings and
in low-regret learning algorithms has revealed interesting interplays
between learning and game theory. The aim of this workshop is to bring
together scientists and practitioners who conduct cutting-edge
research on privacy and security issues in data mining and machine
learning to discuss the most recent advances in these research areas,
identify open problem domains and research directions, and propose
possible solutions. We invite interdisciplinary research on
cryptography, data mining, game theory, machine learning, privacy,
security and statistics. Moreover, we invite mature contributions as
well as interesting preliminary results and descriptions of open
problems on emerging research domains and applications of privacy and
security in data mining and machine learning.

-------------------------------------------------------------------------

International Journal of Information Technologies and Systems
Approach, Special Issue on Privacy and Security Issues in IT, 2011.
(Submission Due 30 June 2010)
http://www.igi-global.com/journals/details.asp?ID=6720&v=callForPapersSpecial

Guest editors: Frank Stowell (University of Portsmouth, England) and
Vasilis Katos (Democritus University of Thrace, Greece)

The topic of this special issue is motivated by the ease of
collection, processing and dissemination of personal data, and by the
concern about the unintended use or misuse of these data. Monitoring
technologies are a fundamental component of IS security, serving as a
mechanism for detecting policy violations; however, the expanding
scope of ICT means that it is no longer just the client who is
affected, but often the wider community (e.g., through CCTV
monitoring), as what may have been designed for specific end-users now
impinges on the majority. Monitoring has turned into systematic
surveillance of email, telephone usage and, through CCTV, the general
activities of citizens. In a society where privacy is a fundamental
human right, the antagonism between privacy and security is a research
issue of significance to IS researchers, since IS itself constitutes
the means of feeding that antagonism between security and the privacy
of the individual. This special issue invites a range of topics
related to privacy and the associated security issues created by
technology.
Topics to be discussed in this special issue include (but are not
limited to) the following:
- Privacy preservation technologies for the citizen
- Methodologies for analysing the privacy requirements of an
  Information System
- Protection of biometric data
- Analysis and development of a systems view of security and its
  impact upon individual privacy
- The economics of security and privacy
- The behavioural impact of monitoring and surveillance technologies
- Opportunities and threats in emerging applications utilizing
  personal data
- Privacy-centric systems

-------------------------------------------------------------------------

Malware 2010 5th IEEE International Conference on Malicious and
Unwanted Software, Nancy, France, October 20-21, 2010.
(Submissions due 30 June 2010)
http://malware10.loria.fr/

The conference is designed to bring together experts from industry,
academia, and government to present and discuss, in an open
environment, the latest advances and discoveries in the field of
malicious and unwanted software. Techniques, economics, and legal
issues surrounding the topic of malware, and the methods to detect and
control it, will be discussed. This year's conference will pay
particular attention to the pressing topic of "Malware and Cloud
Computing," which will also be discussed extensively in a panel
session. As low-cost netbooks become popular, Google's Chrome OS
enters the mainstream, and social networks (Facebook, YouTube,
Twitter, LinkedIn, and so forth) become ubiquitous, the security
dangers associated with the new computing paradigm increase
exponentially. In effect, "Cloud Computing" -- multi-tenant,
single-schema, single-server platforms (C2S3P) -- increases
vulnerabilities by providing a single point of failure and attack for
organized criminal networks. Critical/sensitive/private information is
at risk, and, very much as in previous technology adoption trends such
as wireless networks, the dash for success is trumping the need for
security.

-------------------------------------------------------------------------

EC2ND 2010 6th European Conference on Computer Network Defense,
Berlin, Germany, October 28-29, 2010.
(Submissions due 2 July 2010)
http://2010.ec2nd.org

EC2ND 2010 invites submissions presenting novel ideas in the areas of
network defense, intrusion detection and systems security. Topics for
submission include but are not limited to:
- Intrusion Detection
- Malicious Software
- Web Security
- Machine Learning for Security
- Peer-to-Peer and Grid Security
- Wireless and Mobile Security
- Network Forensics
- Network Discovery and Mapping
- Incident Response and Management
- Privacy Protection
- Cryptography
- Legal and Ethical Issues

-------------------------------------------------------------------------

TrustCom 2010 IEEE/IFIP International Symposium on Trusted Computing
and Communications, Hong Kong SAR, China, December 11-13, 2010.
(Submissions due 9 July 2010)
http://trust.csu.edu.cn/conference/trustcom2010

With the rapid development and increasing complexity of computer and
communications systems and networks, traditional security technologies
and measures cannot meet the demand for integrated and dynamic
security solutions. As a challenging and innovative research field,
trusted computing and communications target computer and
communications systems and networks that are available, secure,
reliable, controllable, dependable, and so on. In a word, they must be
trustworthy.
If we view traditional security as identity trust, the broader field
of trusted computing and communications also includes the behavior
trust of systems and networks. In fact, trusted computing and
communications have become essential components of various distributed
services, applications, and systems, including ad-hoc networks,
peer-to-peer networks, social networks, semantic webs, e-commerce,
e-government, and pervasive, ubiquitous, and cyber-physical systems.
TrustCom-10 is an international forum for presenting and discussing
emerging ideas and trends in trusted computing and communications in
computer systems and networks, drawing on both the research community
and industry. Topics of interest include, but are not limited to:
- Trust semantics, metrics, and models
- Trust inference and computation
- Trusted computing platforms
- Trusted network computing
- Trusted operating systems
- Trusted software
- Trusted databases
- Trusted services and applications
- Trusted communications
- Trust in e-commerce and e-government
- Trust in mobile and wireless networks
- Reliable and fault-tolerant computer systems/networks
- Survivable computer systems/networks
- Cryptography and security protocols
- Authentication in computer systems/networks
- Access control in computer systems/networks
- Key management in computer systems/networks
- Anonymity and privacy in computer systems/networks
- Trust in emerging applications
- Miscellaneous trust issues

-------------------------------------------------------------------------

ICISS 2010 6th International Conference on Information Systems
Security, Gandhinagar, India, December 15-19, 2010.
(Submissions due 16 July 2010)
http://www.cs.wisc.edu/iciss10/

ICISS 2010 encourages submissions addressing theoretical and practical
problems in information and systems security and related areas. We
especially encourage papers in domains that have not been well
represented at the conference in the past, such as database
security/privacy, usability aspects of security, operating systems
security, and sensor network security. Papers that introduce and
address unique security challenges or present thought-provoking ideas
are also welcome.

-------------------------------------------------------------------------

Journal of Network and Computer Applications, Special Issue on Trusted
Computing and Communications, 2nd Quarter, 2011.
(Submission Due 1 August 2010)
http://www.elsevier.com/locate/jnca

Guest editors: Laurence T. Yang (St. Francis Xavier University,
Canada) and Guojun Wang (Central South University, China)

With the rapid development and increasing complexity of computer and
communications systems and networks, traditional security technologies
and measures cannot meet the demand for integrated and dynamic
security solutions. As a challenging and innovative research field,
trusted computing and communications target computer and
communications systems and networks that are available, secure,
reliable, controllable, dependable, and so on. In a word, they must be
trustworthy. If we view traditional security as identity trust, the
broader field of trusted computing and communications also includes
the behavior trust of systems and networks. In fact, trusted computing
and communications have become essential components of various
distributed services, applications, and systems, including
self-organizing networks, social networks, semantic webs, e-commerce,
and e-government.
Research areas of relevance therefore include, but are not limited to,
the following topics:
- Trusted computing platforms and paradigms
- Trusted systems and architectures
- Trusted operating systems
- Trusted software
- Trusted databases
- Trusted services and applications
- Trust in e-commerce and e-government
- Trust in mobile and wireless networks
- Trusted communications and networking
- Reliable and fault-tolerant computer systems/networks
- Survivable computer systems/networks
- Autonomic and dependable computer systems/networks

-------------------------------------------------------------------------

INTRUST 2010 International Conference on Trusted Systems, Beijing,
China, December 13-15, 2010.
(Submissions due 1 August 2010)
http://www.tcgchina.org

The INTRUST 2010 conference focuses on the theory, technologies and
applications of trusted systems. It is devoted to all aspects of
trusted computing systems, including trusted modules, platforms,
networks, services and applications, from their fundamental features
and functionalities to design principles, architecture and
implementation technologies. The goal of the conference is to bring
academic and industrial researchers, designers, and implementers
together with end-users of trusted systems, in order to foster the
exchange of ideas in this challenging and fruitful area. INTRUST 2010
solicits original papers on any aspect of the theory, advanced
development and applications of trusted computing, trustworthy systems
and general trust issues in modern computing systems. The conference
will have an academic track and an industrial track; this call for
papers covers both. Submissions to the academic track should emphasize
theoretical and practical research contributions to general trusted
system technologies, while submissions to the industrial track may
focus on experiences in the implementation and deployment of
real-world systems.

-------------------------------------------------------------------------

Wiley Security and Communication Networks (SCN), Special Issue on
Defending Against Insider Threats and Internal Data Leakage, 2011.
(Submission Due 31 August 2010)
http://isyou.hosting.paran.com/mist10/SCN-SI-10.pdf

Guest editors: Elisa Bertino (Purdue University, USA), Gabriele
Lenzini (SnT-Univ. of Luxembourg, Luxembourg), Marek R. Ogiela (AGH
University of Science & Technology, Poland), and Ilsun You (Korean
Bible University, Korea)

This special issue collects scientific studies and works reporting on
the most recent challenges and advances in security technologies and
management systems for protecting an organization's information from
malicious insider activity. It aims to be a showcase for researchers
who address the problem of preventing the leakage of organizations'
information caused by insiders. Contributions to this special issue
may include state-of-the-art surveys and case analyses of practical
significance which, we hope, will support and foster further research
and technology improvements related to this important subject. Papers
on practical as well as theoretical topics are invited.
Topics include (but are not limited to):
- Theoretical foundations and algorithms for addressing insider threats
- Insider threat assessment and modeling
- Security technologies to prevent, detect and avoid insider threats
- Validating the trustworthiness of staff
- Post-insider-threat incident analysis
- Data breach modeling and mitigation techniques
- Authentication and identification
- Certification and authorization
- Database security
- Device control systems
- Digital forensic systems
- Digital rights management systems
- Fraud detection
- Network access control systems
- Intrusion detection
- Keyboard information security
- Information security governance
- Information security management systems
- Risk assessment and management
- Log collection and analysis
- Trust management
- Secure information splitting and sharing algorithms
- Steganography and subliminal channels
- IT compliance (audit)
- Continuous auditing
- Socio-technical engineering attacks on security and privacy

-------------------------------------------------------------------------

ESSoS 2011 International Symposium on Engineering Secure Software and
Systems, Madrid, Spain, February 9-10, 2011.
(Submissions due 13 September 2010)
http://distrinet.cs.kuleuven.be/events/essos2011/

Trustworthy, secure software is a core ingredient of the modern world.
Unfortunately, so is the Internet. Hostile, networked environments
like the Internet allow vulnerabilities in software to be exploited
from anywhere. To address this, high-quality security building blocks
(e.g., cryptographic components) are necessary but insufficient.
Indeed, the construction of secure software is challenging because of
the complexity of modern applications, the growing sophistication of
security requirements, the multitude of available software
technologies, and the progress of attack vectors. Clearly, a strong
need exists for engineering techniques that scale well and that
demonstrably improve the software's security properties. The Symposium
seeks submissions on subjects related to its goals, covering a
diversity of topics including (but not limited to):
- scalable techniques for threat modeling and analysis of
  vulnerabilities
- specification and management of security requirements and policies
- security architecture and design for software and systems
- model checking for security
- specification formalisms for security artifacts
- verification techniques for security properties
- systematic support for security best practices
- security testing
- security assurance cases
- programming paradigms, models and DSLs for security
- program rewriting techniques
- processes for the development of secure software and systems
- security-oriented software reconfiguration and evolution
- security measurement
- automated development
- trade-offs between security and other non-functional requirements
- support for assurance, certification and accreditation

-------------------------------------------------------------------------

IEEE Transactions on Information Forensics and Security, Special Issue
on Using the Physical Layer for Securing the Next Generation of
Communication Systems, June 1, 2011.
(Submission Due 15 September 2010)
http://www.signalprocessingsociety.org/publications/periodicals/forensics/forensics-authors-info/

Guest editors: Vincent Poor (Princeton University, USA), Wade Trappe
(Rutgers University, USA), Aylin Yener (Pennsylvania State University,
USA), Hisato Iwai (Doshisha University, Japan), Joao Barros
(University of Porto, Portugal), and Paul Prucnal (Princeton
University, USA)

Communication technologies are undergoing a renaissance as there is a
movement to explore new, clean-slate approaches for building
communication networks. Although future Internet efforts promise to
bring new perspectives on protocol designs for high-bandwidth,
access-anything-from-anywhere services, ensuring that these new
communication systems are secure will also require a re-examination of
how we build secure communication infrastructures. Traditional
approaches to building and securing networks are tied tightly to the
concept of protocol layer separation. For network design, routing is
typically considered separately from link layer functions, which are
considered independently of transport layer phenomena or even the
applications that utilize such functions. Similarly, in the security
arena, MAC-layer security solutions (e.g., WPA2 for 802.11 devices)
are typically considered as point solutions to address threats facing
the link layer, while routing and transport layer security issues are
dealt with in distinct, non-integrated protocols like IPsec and TLS.
The inherent protocol separation involved in security solutions is
only further highlighted by the fact that the physical layer is
generally absent from consideration. This special issue seeks to
provide a venue for the ongoing research in physical layer security
across all varieties of communication media, ranging from wireless
networks at the edge to optical backbones at the core of the network.
The scope of this special issue will be interdisciplinary, involving
contributions from experts in the areas of cryptography, computer
security, information theory, signal processing, communications
theory, and propagation theory. In particular, the areas of interest
include, but are not limited to, the following:
- Information-theoretic formulations for confidentiality and
  authentication
- Generalizations of Wyner's wiretap problem to wireless and optical
  systems
- Physical layer techniques for disseminating information
- Techniques to extract secret keys from channel state information
- Secrecy of MIMO and multiple-access channels
- Physical layer methods for detecting and thwarting spoofing and
  Sybil attacks
- Techniques to achieve covert or stealthy communication at the
  physical layer
- Quantum cryptography
- Modulation recognition and forensics
- Security and trustworthiness in cooperative communication
- Fast encryption using physical layer properties
- Attacks and threat analyses targeted at subverting physical layer
  communications

-------------------------------------------------------------------------

CODASPY 2011 1st ACM Conference on Data and Application Security and
Privacy, San Antonio, TX, USA, February 21-23, 2011.
(Submissions due 15 September 2010)
http://www.codaspy.org/

Data and the applications that manipulate data are the crucial assets
in today's information age. With the increasing drive towards
availability of data and services anytime and anywhere, security and
privacy risks have increased.
New applications such as social networking and social computing
provide value by aggregating input from numerous individual users
and/or the mobile devices they carry with them, and by computing new
information of value to society and individuals. Data and application
security and privacy has rapidly expanded as a research field with
many important challenges to be addressed. The goal of the conference
is to discuss novel, exciting research topics in data and application
security and privacy and to lay out directions for further research
and development in this area. The conference seeks submissions from
diverse communities, including corporate and academic researchers,
open-source projects, standardization bodies, governments, system and
security administrators, software engineers and application domain
experts.

-------------------------------------------------------------------------

WIFS 2010 International Workshop on Information Forensics & Security,
Seattle, WA, USA, December 12-15, 2010.
(Submissions due 15 September 2010)
http://www.wifs10.org

WIFS is an avenue for knowledge exchange that encompasses a broad
range of disciplines and facilitates the flow of ideas between the
various disparate communities that constitute information security.
With this focus, we hope that researchers will identify new
opportunities for collaboration across disciplines and gain new
perspectives. The conference will feature prominent keynote speakers,
tutorials, and lecture sessions. Appropriate topics of interest
include, but are not limited to:
- Computer Security
- Forensics Analysis
- Biometrics
- Network Security
- Cryptography for Multimedia Content
- Usability Aspects of Security
- Information Theory and Security
- Privacy
- Data Hiding
- Surveillance
- Digital Rights Management
- Secure Applications
- Hardware Security

-------------------------------------------------------------------------

====================================================================
Information on the Technical Committee on Security and Privacy
====================================================================

____________________________________________________________________
Information for Subscribers and Contributors
____________________________________________________________________

SUBSCRIPTIONS: Two subscription types, each with two sign-up methods:

1. To receive the full ASCII CIPHER issues as e-mail, send e-mail to
cipher-admin@ieee-security.org (which is NOT automated) with subject
line "subscribe". OR send a note to
cipher-request@mailman.xmission.com with the subject line "subscribe"
(this IS automated; thereafter you can manage your subscription
options, including unsubscribing, yourself).

2. To receive a short e-mail note announcing when a new issue of
CIPHER is available for Web browsing, send e-mail to
cipher-admin@ieee-security.org (which is NOT automated) with subject
line "subscribe postcard".
OR send a note to cipher-postcard-request@mailman.xmission.com with
the subject line "subscribe" (this IS automated; thereafter you can
manage your subscription options, including unsubscribing, yourself).

To remove yourself from the subscription list, send e-mail to
cipher-admin@ieee-security.org with subject line "unsubscribe" or
"unsubscribe postcard" or, if you have subscribed directly to the
xmission.com mailing list, use your password (sent monthly) to
unsubscribe per the instructions at
http://mailman.xmission.com/cgi-bin/mailman/listinfo/cipher or
http://mailman.xmission.com/cgi-bin/mailman/listinfo/cipher-postcard

Those with access to hypertext browsers may prefer to read Cipher that
way. It can be found at URL
http://www.ieee-security.org/cipher.html

CONTRIBUTIONS: to cipher @ ieee-security.org are invited. Cipher is a
NEWSletter, not a bulletin board or forum. It has a fixed set of
departments, defined by the Table of Contents. Please indicate in the
subject line for which department your contribution is intended.
Calendar and Calls-for-Papers entries should be sent to cipher-cfp @
ieee-security.org, and they will be automatically included in both
departments. To facilitate the semi-automated handling, please send
either a text version of the CFP or a URL from which a text version
can be easily obtained. For Calendar entries, please include a URL
and/or e-mail address for the point of contact. For Calls for Papers,
please submit a one-paragraph summary. See this and past issues for
examples.

ALL CONTRIBUTIONS ARE CONSIDERED PERSONAL COMMENTS; USUAL DISCLAIMERS
APPLY. All reuses of Cipher material should respect stated copyright
notices and should cite the sources explicitly; as a courtesy,
publications using Cipher material should obtain permission from the
contributors.

____________________________________________________________________
Recent Address Changes
____________________________________________________________________

Address changes from past issues of Cipher are archived at
http://www.ieee-security.org/Cipher/AddressChanges.html

_____________________________________________________________________
How to become a member of the IEEE Computer Society's TC on
Security and Privacy
_____________________________________________________________________

You may easily join the TC on Security & Privacy by completing the
on-line form at IEEE at
http://www.computer.org/TCsignup/index.htm

______________________________________________________________________
TC Publications for Sale
______________________________________________________________________

IEEE Security and Privacy Symposium

The 2009 hardcopy proceedings are not available. The DVD with all
technical papers from all years of the SP Symposium and the CSF
Symposium is $12, plus shipping and handling.

The 2008 hardcopy proceedings are $10 plus shipping and handling; the
29-year CD is $10.00.

The 2007 proceedings are available in hardcopy for $10.00; the 28-year
CD is $10.00, plus shipping and handling.

The 2006 Symposium proceedings and 11-year CD are sold out. The 2005,
2004, and 2003 Symposium proceedings are available for $10 plus
shipping and handling.

Shipping is $5.00/volume within the US, overseas surface mail is
$8/volume, and overseas airmail is $14/volume, based on an order of 3
volumes or less. The shipping charge for a CD is $1 per CD (no charge
if included with a hard copy order).
Send a check made out to the IEEE Symposium on Security and Privacy to
the 2010 treasurer (below) with the order description, including
shipping method and shipping address.

    Al Shaffer
    Treasurer, IEEE Symposium on Security and Privacy 2010
    Glasgow East Annex, Rm. 218 (GE-218)
    1411 Cunningham Rd.
    Naval Postgraduate School
    Monterey, CA 93943
    831/656-3319 (voice)
    oakland10-treasurer@ieee-security.org

IEEE CS Press

You may order some back issues from IEEE CS Press at
http://www.computer.org/cspress/catalog/proc9.htm

Computer Security Foundations Symposium

Copies of the proceedings of the Computer Security Foundations
Workshop (now Symposium) are available for $10 each. Copies of
proceedings are available starting with year 10 (1997). Photocopy
versions of year 1 are also $10. Contact Jonathan Herzog
(jherzog@alum.mit.edu) if interested in a purchase.

____________________________________________________________________________
TC Officer Roster
____________________________________________________________________________

Chair:
    Hilarie Orman
    Purple Streak, Inc.
    500 S. Maple Dr.
    Woodland Hills, UT 84653
    ieee-chair@purplestreak.com

Security and Privacy Symposium Chair Emeritus:
    Ulf Lindqvist
    SRI
    Menlo Park, CA
    (650) 859-2351 (voice)
    ulf.lindqvist@sri.com

Vice Chair:
    Sven Dietrich
    Department of Computer Science
    Stevens Institute of Technology
    +1 201 216 8078 (voice)
    spock AT cs.stevens.edu

Chair, Subcommittee on Academic Affairs:
    Prof. Cynthia Irvine
    U.S. Naval Postgraduate School
    Computer Science Department, Code CS/IC
    Monterey, CA 93943-5118
    (831) 656-2461 (voice)
    irvine@nps.edu

Treasurer:
    Terry Benzel
    USC Information Sciences Institute
    4676 Admiralty Way, Suite 1001
    Los Angeles, CA 90292
    (310) 822-1511 (voice)
    tbenzel @ isi.edu

Chair, Subcommittee on Security Conferences:
    Jonathan Millen
    The MITRE Corporation, Mail Stop S119
    202 Burlington Road Rte. 62
    Bedford, MA 01730-1420
    781-271-51 (voice)
    jmillen@mitre.org

Newsletter Editor:
    Hilarie Orman
    Purple Streak, Inc.
    500 S. Maple Dr.
    Woodland Hills, UT 84653
    cipher-editor@ieee-security.org

Security and Privacy Symposium, 2010 Chair:
    Deborah Frincke
    Pacific Northwest National Laboratory
    deborah.frincke@pnl.gov

________________________________________________________________________

BACK ISSUES: Cipher is archived at
http://www.ieee-security.org/cipher.html
Cipher is published 6 times per year.