 _/_/_/_/   _/_/_/   _/_/_/_/   _/    _/   _/_/_/_/   _/_/_/_/
_/           _/     _/     _/   _/    _/   _/         _/     _/
_/           _/     _/_/_/_/    _/_/_/_/   _/_/       _/_/_/_/
_/           _/     _/          _/    _/   _/         _/   _/
 _/_/_/_/   _/_/_/  _/          _/    _/   _/_/_/_/   _/     _/

============================================================================
Newsletter of the IEEE Computer Society's TC on Security and Privacy
Electronic Issue 97                                   July 25, 2010

Hilarie Orman, Editor                    Sven Dietrich, Assoc. Editor
cipher-editor @ ieee-security.org        cipher-assoc-editor @ ieee-security.org

Yong Guan, Book Review Editor            Calendar Editor
cipher-bookrev @ ieee-security.org       cipher-cfp @ ieee-security.org
============================================================================
The newsletter is also at http://www.ieee-security.org/cipher.html

Cipher is published 6 times per year

Contents:

* Letter from the Editor
* News:
  o Trust HUB, Resources for Hardware Security Researchers
  o NIST Announcement, Password-Based Key Derivation
  o DNS Root Zone: Signed and Delivered
* Commentary and Opinion
  o Review of the Web 2.0 Security and Privacy Workshop
    (Claremont Hotel, Berkeley, CA, May 20, 2010)
    by Sruthi Bandhakavi
  o Review of the Conference on Detection of Intrusions and Malware &
    Vulnerability Assessment (Bonn, Germany, July 8-9, 2010)
    by Asia Slowinska and Johannes Hoffmann
  o Richard Austin's review of "The Failure of Risk Management: Why It's
    Broken and How to Fix It" by Douglas Hubbard
  o Book reviews, conference reports, commentary, and news items from
    past Cipher issues are available at the Cipher website
* Conference and Workshop Announcements and Calendar of Events
* List of Computer Security Academic Positions, by Cynthia Irvine
* Staying in Touch
  o Information for subscribers and contributors
  o Recent address changes
* Links for the IEEE Computer Society TC on Security and Privacy
  o Becoming a member of the TC
  o TC Officers
  o TC publications for sale

====================================================================
Letter from the Editor
====================================================================

Dear Readers:

The IEEE Computer Society will hold its elections soon, and for those of
you who are members, I encourage you to vote. The Computer Society is a
wonderful organization with dedicated staff members who make it possible
for us to conduct conferences, to publish papers, and to influence the
computer industry. However, the Society has many challenges ahead:
financial stress, modernization of its Information Technology
infrastructure, and reaching out to more students and practitioners. In
my opinion, new executive management is a necessity, and the choice of
Computer Society President is crucial. Please read the candidate
statements when they are announced, and I recommend voting for change.

This issue of Cipher, coming in mid-summer, probably finds many of you on
vacation or combining conference-going with vacation. For your light
reading, be it on a cellphone at the beach or on an electronic book, we
have our usual penetrating book review by Richard Austin and two reviews
of recent conferences, casting light on current directions in computer
security research. Photographs from the 30th anniversary celebration of
the Security and Privacy Symposium are now online at
http://ieee-security.org/TC/SP2010 .

Should BP be put in charge of spam clean-up?
Hilarie Orman
cipher-editor @ ieee-security.org

====================================================================
News Briefs
====================================================================

------------------------------------------------------------------
Trust HUB, Resources for Hardware Security Researchers
Announcement from Mohammad Tehranipoor, University of Connecticut
------------------------------------------------------------------
A group of NSF-funded researchers has started a new resource for hardware
security researchers.

"Our goal in developing trust-HUB is to provide the community with a
forum to exchange ideas, circuits, platforms, tools, and resources. This
web site is a resource for research, education, and collaboration in your
science area. It hosts various resources which will help you learn about
your science area, including online presentations, courses, learning
modules, animations, teaching materials, and more. It also includes
information about community activities such as technical events,
workshops, seminars, and news stories. It also provides members with the
capability of scheduling online blogs and presentations and of forming
working groups. Members will be able to broadcast their success stories,
tools, benchmarks, and more. These resources come from contributors in
our scientific community and are used by visitors from all over the
world. Most importantly, trust-HUB offers simulation tools which you can
access from your web browser, so you can not only learn about but also
simulate your science area."

------------------------------------------------------------------
NIST Announcement
Password-Based Key Derivation
Mon, 28 Jun 2010 13:51:34 -0400
------------------------------------------------------------------
DRAFT Recommendation for Password-Based Key Derivation - Part 1: Storage
Applications. NIST announces the release of draft Special Publication
800-132, Recommendation for Password-Based Key Derivation - Part 1:
Storage Applications. This Recommendation specifies techniques for the
derivation of master keys from passwords to protect electronic data in a
storage environment. Please submit comments to
draft-sp800-132-comments@nist.gov with "Comments on Draft SP800-132" in
the subject line. The comment period closes on July 28, 2010.
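For a concrete picture of what such a recommendation covers: a
password-based key derivation function of the PBKDF2 kind stretches a
password and a random salt into a fixed-length master key through many
iterations of a keyed hash. The sketch below, in TypeScript using Node's
built-in crypto module, is only an illustration; the salt length,
iteration count, and key length are placeholder choices of ours, not
values taken from the draft.

    import { pbkdf2Sync, randomBytes } from "node:crypto";

    // Derive a 256-bit master key from a password.
    // All parameters here are illustrative placeholders.
    const password = "correct horse battery staple"; // example only
    const salt = randomBytes(16);    // random per-password salt
    const iterations = 100_000;      // work factor: higher = slower guessing
    const masterKey = pbkdf2Sync(password, salt, iterations, 32, "sha256");

    console.log(masterKey.toString("hex"));

The iteration count is the tunable work factor: it slows brute-force
guessing of weak passwords while costing the legitimate user only a
single derivation.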
------------------------------------------------------------------
ICANN Announcement
Internet "Root" Signed and Delivered
July 15, 2010
------------------------------------------------------------------
It has been nearly 20 years in the making, but this summer marks the
start of a new era in Internet security with the implementation of public
key authentication technology for the Internet "root" zone. This is
arguably the most important aspect of security for mapping domain names
to Internet addresses, and hundreds of people have been involved in the
long process of defining the Domain Name System Security Extensions
(DNSSEC), implementing them, and setting in place the contractual and
physical procedures for signing the fundamental data area for DNS, the
root zone. From the ICANN announcement (at
http://www.icann.org/en/announcements/announcement-2-07jun10-en.htm):
"ICANN publishes the root zone trust anchor and root operators begin to
serve the signed root zone with actual keys -- The signed root zone is
available." There is a web page (http://www.root-dnssec.org) for DNSSEC
status information.

------------------------------------------------------------------
News briefs from past issues of Cipher are archived at
http://www.ieee-security.org/Cipher/NewsBriefs.html

====================================================================
Commentary and Opinion
====================================================================

Book reviews from past issues of Cipher are archived at
http://www.ieee-security.org/Cipher/BookReviews.html, and conference
reports are archived at
http://www.ieee-security.org/Cipher/ConfReports.html

____________________________________________________________________

Review of the Web 2.0 Security and Privacy Workshop
(Claremont Hotel, Berkeley, CA, May 20, 2010)
by Sruthi Bandhakavi
____________________________________________________________________

W2SP 2010: WEB 2.0 SECURITY AND PRIVACY 2010

The workshop started with opening remarks by Larry Koved, the workshop
co-chair.

Keynote: Jeremiah Grossman was the keynote speaker. He talked about his
experience as the CTO of WhiteHat Security, where he meets with
enterprises on a day-to-day basis to discuss web security challenges. He
said that ten years ago he could not get anyone to listen to concerns
about web security, which is very different from the atmosphere now.

In his keynote speech, Jeremiah presented the WhiteHat website security
statistics report on the topic "Which Web programming languages are most
secure?". The statistical analysis covered nearly 1700 websites under
WhiteHat Sentinel management from 2006 to 2010; some of them required
human testing. They tested for about 24,000 uniquely identified
vulnerabilities and examined several questions: which classes of attacks
is each programming language most prone to, how do the languages fare
against popular vulnerabilities, and are all the programming languages
similar in their secure or insecure behavior?

Jeremiah presented several interesting statistics about the different
programming languages. For example, Perl has the largest attack surface
(inputs, POST data, cookies), while .NET is twice as good as Perl by this
metric. 73-88% of all the websites had at least one serious vulnerability
at some point in time, and 53-75% of all websites currently have one
serious vulnerability. He also presented statistics about the average
time it takes developers to fix a vulnerability; in his experience,
syntactic issues are fixed more quickly than business-logic bugs. He
concluded the talk by saying that for companies to take web application
security seriously, security should be opt-in, not opt-out, and there
should be legal and financial liabilities to force companies to
prioritize security.

In the discussion session, an audience member asked why companies that
hire the services of vulnerability scanners like WhiteHat Security don't
do anything when the bugs are found. Jeremiah said that for most
companies revenue generation is more important than security, especially
when they don't know whether actual attacks are happening. Another reason
could be that the developers don't know how to fix the bugs. WhiteHat
also implements training programs for employees and provides consulting
to such companies.

Session 1: Privacy (Session Chair: Ben Adida)

The first paper in this session, "Enforcing User Privacy in Web
Applications using Erlang," was written by Ioannis Papagiannis, Matteo
Migliavacca, David Eyers, Brian Shand, Jean Bacon, and Peter Pietzuch,
and presented by Ioannis.
Ioannis motivated their work with an example use case: a micro-blogging
application with several publishers and subscribers and a centralized
dispatcher. The dispatcher's task is to process the different messages
and control their flow so that only authorized publishers and subscribers
get the messages. The motivating example shows how information flow
control tagging can be used to ensure that the dispatcher enforces the
privacy policy. Ioannis also showed how Erlang's features, such as
isolation, asynchronous message passing, and scalability, can be used to
enforce the privacy requirements practically.

During the discussion, an audience member asked if their work was meant
to show that a language with strong guarantees can be used to enforce
security properties. Sruthi Bandhakavi asked how the authors dealt with
label creep: if a subscriber is also a publisher and indirectly sends
information it received back toward the original publisher, the labels
could quickly become too complex. Ioannis replied that this is a
limitation of their work; the policies and the tagging of information
should be designed carefully to prevent label creep. Leo Meyerovich asked
how their system interacts with the database and whether the tags are
stored along with the data. Ioannis replied that the tags do indeed need
to be stored; otherwise the flow information would be lost. Kapil Singh
asked where the policy checks are done. Ioannis replied that the checks
are done at the server side, where the policy is enforced.
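To make the dispatcher idea concrete, here is a toy sketch of label-based
dispatch in TypeScript. This is our own construction, not the authors'
Erlang code: every message carries a set of tags, and the dispatcher
delivers a message only to subscribers whose clearance covers all of its
tags.

    // Toy information-flow-tagged dispatcher; a sketch, not the paper's system.
    type Label = Set<string>;

    interface Message { body: string; label: Label; }
    interface Subscriber {
      name: string;
      clearance: Label;
      deliver(m: Message): void;
    }

    // A clearance covers a label when it contains every tag on the message.
    const covers = (clearance: Label, label: Label): boolean =>
      [...label].every(tag => clearance.has(tag));

    class Dispatcher {
      private subscribers: Subscriber[] = [];
      subscribe(s: Subscriber): void { this.subscribers.push(s); }
      publish(m: Message): void {
        // The privacy policy is enforced here, in one central place.
        for (const s of this.subscribers) {
          if (covers(s.clearance, m.label)) s.deliver(m);
        }
      }
    }

    // Usage: alice's "friends" post reaches bob but not the public feed.
    const d = new Dispatcher();
    d.subscribe({ name: "bob", clearance: new Set(["alice:friends"]),
                  deliver: m => console.log("bob got:", m.body) });
    d.subscribe({ name: "public-feed", clearance: new Set(),
                  deliver: m => console.log("feed got:", m.body) });
    d.publish({ body: "hello friends", label: new Set(["alice:friends"]) });

The label-creep problem raised in the discussion shows up in this model
as labels accumulating tags when messages are combined, until no
clearance covers them.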
The next paper, "RT @IWantPrivacy: Widespread Violation of Privacy
Settings in the Twitter Social Network," was written by Brendan Meeder,
Jennifer Tam, and Patrick Gage Kelley, and presented by Brendan. His talk
was about how privacy is violated by re-tweeting on Twitter.

In the discussion session, an audience member asked how re-tweeting is
different from sending a video to a friend and the friend posting it
online. David Evans asked if there is any check to verify that the people
who are tagged as originators of a tweet are actually the originators.
Brendan replied that the original author's privacy is violated during
re-tweets because the original user's policy about tweeting is not
checked when the tweet is re-sent. The next version of Twitter introduces
a mechanism whereby official re-tweets are displayed differently. Charlie
Reis asked if the people who are re-tweeting know that the content they
are re-tweeting is protected. Brendan said that they do. Travis observed
that in the case of non-official re-tweets, people could believe that
there is a legitimate tweet from the genuine person. Brendan said that it
is not possible to verify this. Andy Steingruebl said that there is no
way of knowing what the user expects: Twitter accounts can be made
private to prevent follower spam, but those people might still want to
publicize their tweets (which account protection does not allow). Brendan
said that Twitter needs to understand what the user expectations are and
then use this information to provide appropriate account protection. Leo
Meyerovich asked if the authors have any intuition about the harm caused
by privacy violations in this case, for example the number of people
whose home was broken into due to a privacy violation through Twitter.
Brendan said that he does not have the information to answer the
question, and that this kind of information is also hard to find. Leo
also asked why Twitter cannot fix this technically, since it has
information about public and private accounts. Brendan replied that
Twitter does not want to solve it in a centralized manner; it wants to be
hands-off.

The next paper, "Feasibility and Real-World Implications of Web Browser
History Detection," was written by Artur Janc and Lukasz Olejnik and
presented by Artur. Artur talked about attacks on user privacy using the
CSS :visited pseudo-class, which can be used to inspect users' web
browsing history. He gave some background on this issue, presented an
analysis of what can be detected and the performance of the detection
mechanism, gave insights on how a history detection system could be
built, and described current work and countermeasures.

In the discussion session, Sergio Maffeis asked whether their attack
technique requires sending a lot of data to the client (a list of all the
interesting websites that might have been visited), and consequently
whether it requires a lot of bandwidth. Artur replied that an attack need
not test 20K webpages; the attacker just needs to send some important
websites. In their paper they show that the network performance and the
data-transfer performance are comparable. An audience member asked why
the page-load performance shown in the presentation decreases over time.
Artur replied that browsers become slow when loading a large page because
they must build a large DOM tree. An audience member asked why, in the
graph of the number of users visiting the top 5K websites, a quarter of
the users visited none of the top websites. Artur said that they might
have visited the websites in private browsing mode, or there could have
been a script error. Sergio asked if a malicious user could use this
technique for a mass attack against a website with a large user base, and
whether there is any good example of a mass attack. Artur replied that
one example could be an attacker who finds out that a certain website is
very popular and adds features to his own site that target its users. An
audience member asked if there were any client-side mitigation techniques
that would make the client aware that the user is being attacked. Artur
said that he was not aware of any; a client-side mitigation technique may
be possible, but he thinks it is easier and better to fix the underlying
problem. An audience member asked how the authors advertised their study
to users. Artur said that they advertised the study on reddit, and only
interested people came to the website where the attack was hosted.
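For readers unfamiliar with the attack, the core trick is tiny. The
sketch below is our own illustration, not the authors' code: it styles
visited links with a sentinel color and reads the computed style back.
Browsers have since been restricting what getComputedStyle reports for
:visited, precisely because of work like this.

    // Classic CSS :visited history sniff (illustrative; now being mitigated).
    function probablyVisited(url: string): boolean {
      const style = document.createElement("style");
      style.textContent = "a.probe:visited { color: rgb(1, 2, 3); }";
      document.head.appendChild(style);

      const link = document.createElement("a");
      link.className = "probe";
      link.href = url;
      document.body.appendChild(link);

      // If the browser reports the sentinel color, the URL is in the history.
      const visited = getComputedStyle(link).color === "rgb(1, 2, 3)";

      link.remove();
      style.remove();
      return visited;
    }

    // Usage: probe a candidate list -- the "send some important websites" idea.
    const candidates = ["https://example.org/", "https://example.com/login"];
    console.log(candidates.filter(probablyVisited));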
The morning session was followed by lunch and an invited talk by Kurt
Opsahl from the Electronic Frontier Foundation on "Social Networking
Privacy Issues." Kurt started his talk by outlining why privacy is
important and showed how Facebook's privacy policies evolved over time;
however, the privacy measures offered by Facebook are still not enough.
At the end of the talk he proposed a Bill of Privacy Rights for social
network users: the right to informed decision-making, the right to
control, and the right to leave.

An interesting discussion ensued after the talk. An audience member
commented that even with a good user interface, sharing contexts are
sufficiently subjective that people don't agree on them. Kurt agreed and
said that defining an appropriate sharing context is extremely difficult;
however, care can be taken to define contexts so that people can see that
there are limitations, and technical solutions can be provided to prevent
individual users from making mistakes. If users limit information to
friends, there is no way to guarantee that the friends don't share this
information further in a different context; this is a hard problem. Kapil
Singh commented that once users are on a social networking site, opting
out gives no advantage; for example, in Facebook, opting out of using an
application makes no difference, since the application already had access
to all the user's data. Kurt said that Facebook had an old rule that data
had to be deleted by third-party applications within 24 hours of
obtaining it; however, that rule has since been repealed. He agreed that
this is a difficult problem. David Evans commented that in this context
privacy policies are entirely meaningless. Most social networking sites
give detailed messages about changed options and give the user an option
to leave the site. Although this is misleading in some sense, because the
user's data is already known to the website, there are some legal
regulations that prevent websites from breaking their promises about
private information. Charlie Reis asked if there were any promising
technical directions for solving the problem of deleting data. Kurt said
that the problem of deleting data is challenging, and deactivating an
account does not mean that the information is deleted; he does not know
of any current technical solution to the problem. An audience member
commented that making something online disappear completely is a losing
battle, and any guarantee is impossible. Kurt replied that even though a
guarantee is not possible, even small measures could have a positive
effect on privacy.

Session 2: Mobile Web (Session Chair: Charlie Reis)

The first paper in this session, "What You See is What They Get:
Protecting users from unwanted use of microphones, cameras, and other
sensors," was written by Jon Howell and Stuart Schechter. Jon presented.
The talk was about privacy considerations for built-in cameras and
microphones. Jon discussed how permission to access sensor data, granted
at one point in time, can have consequences at another time, when the
data could be transmitted to unauthorized parties. Jon described their
proposed solution, in which a "sensor-access widget" gives feedback about
what the sensors are doing at each point in time with respect to each
running application. For example, if the webcam is active and an
application is accessing it, the live video is displayed on the screen
along with the application. They also defined a policy, called SWAID,
that governs the feedback given through the sensor-access widget
mechanism.

Sruthi asked whether this wouldn't be too restrictive if the image has to
be shown for every application displayed on the screen or on a web page,
for example in the case of mashups. Jon said that mashups are a hairy
case. An audience member asked how big the image should be, and how the
information is displayed when there is more than one kind of sensor
(camera, microphone, geo-location, accelerometer, etc.). Jon replied that
the image size, and the simultaneous handling of different kinds of
sensors, still need thought. Kapil Singh asked if the authors were
trusting the application to enforce the SWAID policy. Jon replied that
the policy is enforced by a TCB such as the browser, the operating
system, etc. Leo Meyerovich asked how the authors handle the mashup or
delegation case.
Jon replied that this is a big can of worms, where one doesn't know which
application owns which chunk of the display; this is a difficult problem
to make progress on. David Evans commented that the most obvious
alternative for protecting video streams is a lens cover, and he asked
how the proposed approach compares to that one. Jon said that a lens
cover is an excellent approach but does not work when there are multiple
applications, only some of which are allowed access to the video streams.
Dominique Devriese asked whether the authors have implemented the
proposed techniques. Jon replied that they implemented them and studied
the appropriateness of the approach. An audience member asked Jon to
contrast the SWAID policy with a policy where the user is asked to opt in
for each application that requires the information. Jon said that the
opt-in policy assumes that the user always wants to allow or disallow the
information flow; this is not the case in SWAID.

The next paper was a position paper, "Empowering Browser Security for
Mobile Devices Using Smart CDNs," written by Ben Livshits and David
Molnar. David presented. The talk was intended to generate discussion
about improving security in mobile browsers. This area is different from
making changes to desktop web browsers because it is difficult to push
changes to mobile clients -- not everybody upgrades at the same time. One
solution is to add security primitives to CDNs in the middle tier, so
that the computation on the mobile devices is secure. There are two main
research directions for making this work. First, we need to think about
what kinds of security services can be provided: real-time phish
tracking, URL reputation, XSS filters, etc. are some examples of security
services that smart CDNs could offer. Second, what if the middle tier is
not trustworthy? There are multiple vendors and operators and multiple
web applications: how do these work together, and what are the privacy
considerations?

In the discussion session, Leo Meyerovich commented on the role of the
corporate world; at Microsoft, for example, using arbitrary Internet
connections is not allowed, and there could be differences between
managed IT networks and home networks. Arvind Narayanan and Charlie Reis
asked in what way deployment in the middle tier is different from
deployment at the client. David replied that this is the way the industry
is going -- 40% of content goes through CDNs. Given this, it is
imperative to ask what kinds of services we can provide; if we get an
opportunity to change websites to integrate better, it will be useful. An
audience member asked how people could trust the middle tier to render
pages, since the middle tier is like an ISP, which people are less likely
to trust. David cited the example of Opera Mini, which people voluntarily
use because of its power and performance benefits; however, people don't
understand the implications of using Opera Mini. David Evans asked if
there was a situation where people trust the middle tier but not the
client. David gave the example of coffee shops that cache most URLs;
maybe people trust AT&T but not the local Starbucks. An audience member
asked for an exact definition of trustworthiness. David replied that
there are several perspectives on trustworthiness depending on whom one
asks; the consumer's perspective could differ from the content
provider's. The challenge is to account for these different perspectives.
Susan Landau commented that this is not just a technical issue; legal
issues could also be involved, and the issues vary by region. David
agreed and said that this could also be a social issue: middle tiers can
span countries, so it is important to consider regional differences too.
An audience member commented that a distinction needs to be kept between
forcing clients to use this and the ISP offering it as a service; the ISP
should not be able to force the user to use a middle-tier component. An
audience member asked if the authors were thinking of examples such as
transcoding videos to the resolution of the device. David agreed and said
that one such example is the iPad. We security people should think about
how to do this the right way. Android uses third-party libraries to
render graphics and the like, but these seem to be out of date; the CDN
could be used to patch libraries against errors like buffer overflows.

Session 3: Measuring Security (Session Chair: Adam Barth)

The first paper in this session, "Hunting Cross-Site Scripting Attacks in
the Network," was written by Elias Athanasopoulos, Antonis Krithinakis,
and Evangelos P. Markatos. Elias presented. In this talk Elias presented
xHunter, a tool to detect suspicious URLs. The main idea of the tool is
to identify URLs that contain JavaScript. xHunter scans a URL for
fragments that produce a valid JavaScript syntax tree, and assigns a
higher weight to any URL containing a fragment whose valid syntax tree
has a high depth. The weights are assigned using heuristics such as
reversibility, giving more weight to certain nodes in the parse tree, and
so on.

In the discussion, David Molnar asked if the performance of xHunter would
improve with hardware support. Elias said he could not comment on that at
the moment. Dominique Devriese asked how the tool compares to intrusion
detection systems. Elias replied that their tool is like Snort but is not
based on static signatures. An audience member asked how essential the
reversibility heuristic is. Elias replied that the reversibility
heuristic could be used to subvert the tool. An audience member asked how
the tool compares to browser-based XSS filters. Elias replied that their
main aim is to have something that can run in the network; if one creates
a browser-based filter, it would be good to compare xHunter's accuracy
and speed with it, but currently they are not looking at host-based
systems. Adam Barth asked what prevents JavaScript from being parsed in
reverse. Elias replied that this heuristic holds for any language: it is
hard to parse a language in reverse order because the syntax is not
defined for that direction. Phu Phung asked how they deal with
document.write, where a string is assembled character by character. Elias
replied that document.write itself gets a high score. An audience member
asked what the highest score is. Elias replied that the score is about 6;
the setup was chosen to produce fewer false positives and false
negatives. An audience member commented that scanning URLs does not tell
whether an XSS exploit works, and is not even a measure of whether the
attack can really happen. Elias replied that an actual attack may be more
significant, but merely trying to exploit a website is also bad.
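As a rough illustration of the parse-probing idea, here is a toy of our
own making, far simpler than xHunter (which weights parse-tree depth and
other heuristics): score a URL by the longest decoded fragment that
parses as JavaScript. The Function constructor parses its body without
executing it, which makes it a convenient syntax check.

    // Toy JS-in-URL scorer: length of the longest substring that parses as JS.
    // Illustrative only; xHunter uses syntax-tree depth and more heuristics.
    function jsSuspicionScore(url: string): number {
      let text: string;
      try { text = decodeURIComponent(url); } catch { text = url; }
      let best = 0;
      for (let start = 0; start < text.length; start++) {
        // Shrink the window until the fragment parses; record the longest hit.
        for (let end = text.length; end - start > best; end--) {
          try {
            new Function(text.slice(start, end)); // SyntaxError if not JS
            best = end - start;
            break;
          } catch { /* not parseable; keep shrinking */ }
        }
      }
      return best;
    }

    // A URL carrying an injected script scores far higher than a benign one.
    console.log(jsSuspicionScore("http://a.example/?q=hello%20world"));
    console.log(jsSuspicionScore(
      "http://a.example/?q=<script>document.write('x'+document.cookie)</script>"));

This quadratic brute-force probe is nothing like a production scanner,
but it shows why a fragment such as document.write('x'+document.cookie)
stands out inside an otherwise ordinary URL.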
The second paper in this session was a position paper, "Critical
Vulnerability in Browser Security Metrics," written by Mustafa Acer and
Collin Jackson. Mustafa presented. The talk was about which metrics to
use to evaluate browser security. A widely used metric is the
distribution of the number of known vulnerabilities. Mustafa contends
that this metric is meaningless and actively harmful, because it ignores
patch deployment, discourages disclosure, and ignores plug-in
vulnerabilities. The authors propose a new vulnerability metric: the
percentage of users who have at least one unpatched critical or
high-severity vulnerability on an average day. The authors publish live
statistics based on this metric at browserstats.appspot.com.

Jon Howell pointed out a small disparity in the graphs: the old metrics
scale to 100%, while the new metrics generated by the authors don't.
Mustafa replied that this is a disadvantage of their metric, because
evaluation is done on a per-browser basis; however, if the results are
normalized, a similar trend is seen. Jon Howell also commented that Flash
player vulnerabilities are not orthogonal to the browser version, because
whether Flash is vulnerable depends on what browser is underneath it.
Brandon Sterne replied that the metrics considered the versions of the
Flash player that users of that particular browser have. Adrian Mettler
commented that this metric depends on how quickly users update their
browsers and also on the users' technical expertise; for example, the
website used to collect these metrics might attract users with more
technical expertise. Is it fair to blame Firefox for users who don't
update their browsers? Mustafa replied that this metric opens a
discussion about which updating mechanisms are good and which are bad. An
older browser population might have more vulnerable Flash players, and
the mechanism a browser uses to push updates also contributes: in
Firefox, the mechanism is not automated, while Chrome updates silently.
Artur Janc asked how the authors collect vulnerability and usage data.
Mustafa replied that usage data is collected by showing a JavaScript
advertisement on web pages, and vulnerability data by maintaining a
manual database of vulnerabilities for each browser and plugin. The
authors have an automated solution for Firefox, but it won't work for
other browsers; there is no standard API for collecting all the
vulnerabilities a browser has. Vendors don't provide this information,
and such an interface might be a valuable tool. Artur Janc asked about
zero-day vulnerabilities known to the vendor. Mustafa said that the
metric itself has a temporal dimension: there is a spike in the risk
score when a browser update is released, and if the spike subsides in a
short period of time, that reflects well on the browser's security. David
Evans asked if the metric is skewed positively toward new browsers like
Chrome versus older browsers like Firefox. Mustafa replied that there are
not many users running very old versions such as Firefox 2.0, so such
data points do not change the risk score much; most browsers in the
sample are recent versions. However, the data might be biased because it
is collected only at certain times of day; they do need more samples.
Devdatta Akhawe asked whether, if Chrome keeps its security bugs secret
and discloses them only after a patch, while Firefox always keeps its bug
reports public and takes less time to patch, the authors' approach would
give more weight to Chrome than to Firefox. Mustafa replied that the old
metrics already have the same problem; this metric is an improvement over
the old ones but does not solve everything.
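The proposed metric is easy to state as code. The sketch below is our own
back-of-the-envelope rendering, and the vulnerable-version entries and
sample data are made up: it computes the fraction of observed users with
at least one unpatched high-severity component.

    // Sketch of the "users at risk on an average day" metric; hypothetical data.
    interface UserAgent {
      browser: string;               // e.g. "firefox"
      version: string;               // e.g. "3.0"
      plugins: Map<string, string>;  // plugin name -> version
    }

    // Hypothetical set of component versions with unpatched high-severity bugs.
    const vulnerable = new Set(["firefox/3.0", "flash/9.0"]);

    function atRiskFraction(users: UserAgent[]): number {
      const atRisk = users.filter(u =>
        vulnerable.has(`${u.browser}/${u.version}`) ||
        [...u.plugins].some(([name, v]) => vulnerable.has(`${name}/${v}`)));
      return atRisk.length / users.length;
    }

    // A fully patched browser still counts as at-risk if its plugin is stale.
    const sample: UserAgent[] = [
      { browser: "firefox", version: "3.6",
        plugins: new Map([["flash", "9.0"]]) },
      { browser: "chrome", version: "5.0",
        plugins: new Map([["flash", "10.1"]]) },
    ];
    console.log(atRiskFraction(sample)); // 0.5

This captures the paper's point: merely counting known vulnerabilities
would say nothing about the first user's stale Flash plugin.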
Session 4: Usage of Existing Browser APIs (Session Chair: Helen Wang)

The first paper in this session, "Busting Frame Busting: a Study of
Clickjacking Vulnerabilities at Popular Sites," was written by Gustav
Rydstedt, Elie Bursztein, Dan Boneh, and Collin Jackson. Gustav
presented. The paper is an extensive survey of the framebusting code
present in 500 popular websites taken from Alexa. In this talk Gustav
introduced the term "frame busting," which denotes the code websites use
to prevent clickjacking attacks. The authors found that almost all the
framebusting code in the wild is broken, and they gave several examples
of such code and of how it can be circumvented. Some browsers have
recently introduced options such as X-Frame-Options (IE8) and Content
Security Policy (Firefox 3.7) that could be used to solve this problem.
Gustav ended the talk by noting that mobile versions of websites do no
framebusting at all, which makes them highly vulnerable to attack.

Devdatta Akhawe asked if Twitter wasn't already doing proper
framebusting. Gustav replied that Twitter does regular framebusting and
also has three or four backup mechanisms; however, a reflective XSS
filter can still defeat it. Collin Jackson said that the authors reported
these vulnerabilities to Twitter, and they were fixed as a result of the
authors' suggestions. An audience member asked how websites behave if the
referrer header doesn't exist, and what the failure behaviors are.
Mustafa replied that if a website fails the check, it can try to
framebust the main page.

The next talk, "The Emperor's New APIs: On the (In)Secure Usage of New
Client-Side Primitives," was written by Steve Hanna, Richard Shin,
Devdatta Akhawe, Prateek Saxena, Arman Boehm, and Dawn Song. Steve
presented. The web landscape is changing: users are demanding more
functionality from web applications and expect them to perform like
desktop applications. To this end, new primitives such as postMessage and
localStorage have been introduced. In this talk Steve discussed how
secure these two client-side primitives are and gave examples of several
attacks on their use. One fix for postMessage would be to provide an
origin whitelist in the Content Security Policy. Steve also proposed
enhancements to the design of new primitives, using "economy of
liabilities" as the guiding principle, and suggested specific
enhancements to postMessage and localStorage. Steve said that the vendors
have currently switched off the use of the postMessage primitive until
the vulnerabilities can be fixed.

An audience member asked, since the fragment-identifier approach is more
vulnerable than postMessage, what the advantage of switching off
postMessage is. Steve replied that even though the fragment identifier is
vulnerable, application developers might be doing other checks on it,
which is not the case for postMessage. Dominique Devriese commented that
another guiding principle should be that applications are secure by
default; Steve agreed and said that this is part of their design too.
Dominique also asked what would happen if the whitelist contained a "*".
Steve said that he would recommend broadcast over multicast, and
therefore "*" is fine. Dominique asked how the authors reverse-engineer
code. Steve replied that they use a tool called Kudzu to run the
applications and collect the path constraints, which can then be used to
reverse-engineer the code.
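The recurring postMessage mistake the paper targets is the omitted origin
check. Here is a minimal sketch of the receive-and-send discipline; this
is our own example, and "https://partner.example" is a placeholder
origin, not one taken from the paper.

    // Sketch of disciplined postMessage use; the partner origin is a placeholder.
    const PARTNER_ORIGIN = "https://partner.example";

    // Receiver: drop any message that does not come from the expected origin.
    window.addEventListener("message", (event: MessageEvent) => {
      if (event.origin !== PARTNER_ORIGIN) return; // untrusted frame; ignore
      console.log("trusted message:", event.data);
    });

    // Sender: name the intended recipient instead of broadcasting with "*",
    // so the data cannot be delivered to a window that has since navigated.
    function sendToPartner(frame: HTMLIFrameElement, payload: string): void {
      frame.contentWindow?.postMessage(payload, PARTNER_ORIGIN);
    }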
The final paper in this session was a position paper, "Why Aren't
HTTP-only Cookies More Widely Deployed?" written by Yuchen Zhou and David
Evans. Yuchen presented. In this talk Yuchen discussed why the HTTP-only
attribute on cookies is not widely deployed. The HTTP-only attribute
prevents a cookie from being read via document.cookie. In their survey of
the top 50 sites on alexa.com, only 50% of the sites use HTTP-only, and
there are two known attacks that circumvent it. The authors hypothesize
that HTTP-only cookies provide modest benefits but carry some deployment
costs, and that this is why the mechanism has not been widely deployed;
even so, it is better to use HTTP-only than not.

Adrian Mettler asked whether using HTTP-only provides a false sense of
security, and whether some people prefer to focus on other, more
comprehensive solutions. Yuchen agreed that HTTP-only does not offer
complete protection. Jeremiah Grossman commented that HTTP-only prevents
long-term session theft; application developers don't use it because they
don't know about it. Kapil Singh commented that 40% of websites mark
their authentication cookies as secure, so they don't feel the need to
use HTTP-only.
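Deploying the attribute is essentially a one-line change on the server,
which is part of the paper's puzzle. A sketch of our own, using Node's
built-in http module, with a placeholder session value:

    // Setting a session cookie that page scripts cannot read via
    // document.cookie. A sketch only; "abc123" is a placeholder identifier.
    import { createServer } from "node:http";

    createServer((req, res) => {
      // HttpOnly hides the cookie from page JavaScript; Secure keeps it off
      // plaintext HTTP -- the combination Kapil Singh's comment alludes to.
      res.setHeader("Set-Cookie", "session=abc123; Path=/; Secure; HttpOnly");
      res.end("ok");
    }).listen(8080);

Note that with the attribute set, an XSS payload can still act as the
user while the page is open; this is why the talk called the benefit
modest. It blocks cookie exfiltration, not session abuse.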
Session 5: Next Generation Browser APIs (Session Chair: Thomas Roessler)

The first paper in this session was a position paper, "No Web Site Left
Behind: Are We Making Web Security Only for the Elite?" written by Terri
Oda and Anil Somayaji. Terri presented. Many web programmers come from
artistic or other non-programming backgrounds; they want to include a lot
of functionality in their sites and usually do so by cutting and pasting
code. For such people, understanding web security and protecting their
sites is an uphill task. In this talk Terri urged security professionals
to think about presenting security in a visual way, to make it easier for
these people to adopt. Another proposed solution is to separate site
design from security, so that the relevant people can handle security, or
so that security can be outsourced.

Thomas Roessler asked whether Terri was referring to the several patches
provided for the different vulnerabilities. Terri agreed and said that
the sheer volume of material is overwhelming. Devdatta Akhawe commented
that not many people care if small websites get XSSed. Terri disagreed,
saying that website defacement is becoming worthwhile to attackers: many
are interested in sending spam, SEO, evading blacklists, and so on, all
of which can make use of smaller sites. An audience member commented that
the use cases of small websites don't require sophisticated services.
Terri said that it may be more appropriate to use a limited set of
services in that case. Adrian Mettler asked what the applications
actually need, since they seem simple. Terri said that small-website
developers want flashy features and copy and paste code from different
places; as an example, some e-commerce websites were compromised to relay
spam and thereby indirectly affected everyone.

The second paper in this session was a position paper, "The Need for
Coherent Web Security Policy Framework(s)," written by Jeff Hodges and
Andy Steingruebl. Andy presented. In this talk Andy emphasized the need
for an integrated effort to create standards for implementing web
security primitives. Currently, web security is implemented in patches,
differs between browsers, and is spread across the code. Andy proposed a
unified mechanism in which security is expressed as an easily
configurable declaration rather than as code.

Leo Meyerovich wondered how effective this mechanism would be if the
JavaScript is exposed to all users and policies are implemented through a
special mechanism. In answer, Andy cited the example of Microsoft
deployment wizards, which step through the generation of code; we need
something similar for web application development. Andy said that
providing security via programming constructs is wrong, and that it needs
to be configuration-driven. An audience member asked how to ensure that
any new mechanism is uniformly adopted by every implementation. Andy said
that for any new security mechanism there should be a way to create a
configuration file from which an implementation can generate the extra
headers or code. David Evans commented that a lot of this can be
implemented without extra configuration. Brandon Sterne asked where the
right venue for the unification effort is. Andy said that standards
bodies like the IETF have been working on it, but warned that getting any
new approach adopted will be extremely hard.

The last talk in this session was a position paper, "Secure Cooperative
Sharing of JavaScript, Browser, and Physical Resources," written by Leo
A. Meyerovich, David Zhu, and Benjamin Livshits, and presented by Leo. In
this talk Leo underlined the need for new primitives for sharing between
different web applications in a mashup. He argued for a new "mashup
manifesto" in which it is understood that sharing requires control,
sharing must be natural, and sharing must be cheap. Leo also presented a
few such primitives that they propose in the paper.

Prateek Saxena asked how the proposed primitives compare to the approach
of Google Caja. Leo said that security in Caja is based on source
rewriting, which is fragile; additionally, Caja is based on the notion of
an object view, and it cannot talk about browser or physical resources.
Thomas Roessler asked if the authors want to support the use case of
moving between arbitrary websites; Leo agreed. Thomas Roessler commented
that Caja outsources JavaScript to JavaScript compilers and uses
mechanisms that stop short of JavaScript rewriting; ideally one would
take inspiration from such mechanisms. Leo said that they looked at
static analysis of JavaScript code, but JavaScript as naturally written
precludes object sharing; we need to find minimal changes to JavaScript
that make gadget sharing comfortable without rewriting.

____________________________________________________________________

Review of the Conference on Detection of Intrusions and Malware &
Vulnerability Assessment
(Bonn, Germany, July 8-9, 2010)
by Asia Slowinska and Johannes Hoffmann
____________________________________________________________________

The Program Chair, Christian Kreibich, gave some background on the
program. There were 35 submissions from 19 countries; 12 papers were
accepted, giving an acceptance ratio of 34%. Eighty-eight people
participated in the conference.

-----------------------------------------------------------------------------
Keynote: Jose' Nazario gave the first talk, titled "Trends in
Malevolence," about the strategies used in the fight against malware and
their consequences. Jose' examined the question "Are we breeding
'superbugs'?" and compared the actions taken against malware to
antibiotics.
Later on, he concluded that every action has (possibly unintended)
consequences for security, and illustrated this with an example: before
Windows XP SP2, the services running on servers were the main target of
attacks; after the introduction of SP2, which enabled the firewall by
default, a heavy shift toward attacks against client applications took
place. At the end, he reminded the audience to evaluate the consequences
of any actions that might be taken, and to think about a sound overall
concept and the right order for those actions. Is it reasonable and fair
to ask normal users to do more for (their own) security -- for example,
not to click on every link? Reasonable: no; fair: yes. People do not
freely give their credit card numbers away when asked, and they do lock
their homes, but computer systems are too complex.

-----------------------------------------------------------------------------
The first session, Host Security, was chaired by Christian Kreibich
(International Computer Science Institute, Berkeley, CA).

The first paper was "HookScout: Proactive Binary-Centric Hook Detection"
by Heng Yin, Pongsin Poosankam, Steve Hanna, and Dawn Song. It was
presented by Lok Yan. The talk began with a general introduction to
kernel hooking mechanisms. The authors motivate the need for HookScout by
the enormous attack vector space: their study shows that there are almost
20K function pointers in the Windows kernel, 7K of which are fixed and do
not change once initialized. In principle, any of these pointers could be
altered when hooks are installed. Their hook detection approach is based
on dynamic analysis implemented on top of the QEMU processor emulator.
First, HookScout monitors kernel execution and looks for function
pointers with constant values. Their locations within kernel objects are
used to generate a hook detection policy for a given operating system
binary. Next, the hook detection subsystem enforces the policy by
checking that pointers previously identified as invariant are never
changed at runtime; a change would suggest that a function pointer has
been altered by a hook installer. The detection engine is currently
implemented as a kernel module and incurs approximately 6.5% performance
overhead. The authors recognize the risk of HookScout being turned off by
a skilled attacker and leave moving it into a hypervisor as future work.
So far only a fairly modest evaluation of the system has been performed,
and few false positives were reported.

One question was whether the user is to be entrusted with running
HookScout. Lok suggested that the detection engine should be included in,
e.g., antivirus software. The next question challenged the issue of false
positives, since it is very difficult to perform exhaustive testing and
exercise enough paths of the kernel under analysis to properly map all
function pointers. One problem is caused by device drivers and
dynamically loaded modules, which could benevolently install new function
pointers in the kernel that have not been seen before. Another difficulty
stems from the unions prevalent in (at least) the Windows kernel: when
these are stored in arrays, their order can change, which would again
cause discrepancies. Lok did not have a good answer to this question and
admitted that thorough testing of the system is required and that false
positives might occur more often.
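HookScout's two phases -- learning which pointer slots stay constant,
then flagging any later change -- can be caricatured in a few lines. The
sketch below is our own user-space toy in TypeScript, nothing like the
real kernel-level implementation, but it shows the split between policy
generation and enforcement.

    // Toy version of HookScout's idea: learn invariant "function pointer"
    // slots from clean snapshots, then report slots that later change.
    type PointerTable = Record<string, number>; // slot name -> pointer value

    // Phase 1: keep only slots whose value is constant across all snapshots.
    function learnInvariants(clean: PointerTable[]): Map<string, number> {
      const invariant = new Map(Object.entries(clean[0]));
      for (const snap of clean.slice(1)) {
        for (const [slot, value] of [...invariant]) {
          if (snap[slot] !== value) invariant.delete(slot); // varies legitimately
        }
      }
      return invariant;
    }

    // Phase 2: an invariant slot that now differs is a suspected hook.
    function detectHooks(current: PointerTable,
                         invariant: Map<string, number>): string[] {
      return [...invariant]
        .filter(([slot, value]) => current[slot] !== value)
        .map(([slot]) => slot);
    }

    // Usage with made-up tables: slot "NtOpenFile" has been redirected.
    const policy = learnInvariants([
      { NtOpenFile: 0x8054, Timer: 0x1111 },
      { NtOpenFile: 0x8054, Timer: 0x2222 }, // Timer varies, so it is dropped
    ]);
    console.log(detectHooks({ NtOpenFile: 0xdead, Timer: 0x3333 }, policy));

The false-positive worry from the Q&A is visible even in this toy: a slot
that merely never happened to vary during learning will be flagged if it
later changes for a legitimate reason.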
The next paper was "Conqueror: Tamper-Proof Code Execution on Legacy
Systems" by Lorenzo Martignoni, Roberto Paleari, and Danilo Bruschi.
Roberto presented. This work tackles the problem of software-based code
attestation, i.e., verifying the integrity of a piece of code executing
on an untrusted system. Code attestation can also be used to execute an
arbitrary piece of code with the guarantee that the code runs unmodified
and in an untampered environment. The scheme is based on a
challenge-response protocol: the verifier issues a challenge to the
untrusted system, which uses the challenge to compute a function. The
verifier relies on time to determine whether the function result is
genuine or could have been forged. Conqueror improves on Pioneer, the
state-of-the-art attestation scheme for this class of systems, and is not
vulnerable to the attacks designed to defeat Pioneer. The significant
shortcoming of Pioneer is that the function computed by the untrusted
system is known in advance, giving attackers the chance and the time to
analyze it deeply and find its weaknesses. Conqueror instead generates
the function on demand (so that attackers cannot analyze it offline) and
sends it encrypted and obfuscated to the untrusted machine. Moreover,
Conqueror replaces the interrupt descriptor table with a custom one; if
executed, interrupt handlers corrupt the result of the checksum function
being computed. This ensures that an attacker cannot emulate or intercept
the function's execution and remain hidden. A prototype of the system was
implemented and tested on 32-bit systems running Windows XP.

Someone raised the issue of relying on timing when unrelated network
traffic occupies the system. Roberto commented that this does not cause
difficulties, since the verifier and the untrusted machine are both on
the local network; only if the network switch were overloaded could one
expect problems. Another question was about ensuring that the function
result is not forged by another machine. Roberto explained that this is
unlikely: thanks to the custom IDT, the attacker has no means of silently
interrupting or monitoring the execution of the function computing the
checksum.

The last paper of this session was "dAnubis - Dynamic Device Driver
Analysis Based on Virtual Machine Introspection" by Matthias
Neugschwandtner, Christian Platzer, Paolo Milani Comparetti, and Ulrich
Bayer. It was presented by Matthias. dAnubis is an extension of the
Anubis (http://anubis.iseclab.org/) dynamic malware analysis system for
the analysis of malicious Windows drivers. Matthias motivated their work
by the observation that it is quite convenient for an attacker to inject
malicious code into the kernel by loading and installing a device driver.
Indeed, Windows XP machines are often operated with Administrator
privileges and provide APIs that allow loading an unsigned driver without
any user interaction. In light of that, dAnubis aims to provide a
human-readable report of a device driver's behaviour. This includes
information on the driver's interaction with other drivers and the
interface it offers to userspace, in addition to information on the use
of call hooking, kernel patching, or Direct Kernel Object Manipulation
(DKOM). dAnubis is built on top of the QEMU processor emulator. It
monitors the execution of the guest OS and observes events such as the
execution of the driver's code, invocations of kernel functions, and
accesses to the guest's (virtual) hardware. Using dAnubis the authors
analyzed over 400 rootkit samples, and the paper provides details of the
results of this analysis.
One question was whether dAnubis is able to tell the type of malware,
e.g., a keylogger. It is not: it aims to provide a picture of a driver's
behaviour, but detection (distinguishing malicious drivers from benign
ones) and classification are outside the scope of this work. Another
question was about the performance penalty incurred by the kernel memory
monitoring. Matthias explained that the monitoring starts only once the
driver is loaded, and then the overhead is between 13% and 14%. (However,
that is the overhead on top of the QEMU processor emulator.) Further,
somebody asked whether the authors had noticed malware that tries to
avoid their analysis. Not for dAnubis specifically, but malware may try
to detect that it is running in Anubis, which is based on the popular
QEMU.

-----------------------------------------------------------------------------
Carel van Straaten gave the first invited talk, about "Modern Spammer
Infrastructure." The audience got a deep look into the professional
business of spammers. Carel described the highly skilled people behind it
and how they keep their distributed networks up and running. He also
talked about the countermeasures that Spamhaus provides to hinder the
flow of spam, and presented several spam-filtering features that help
keep in check the 1-2 million unique computers spitting out spam each
day. More information about Spamhaus can be found on their webpage,
http://www.spamhaus.org/.

-----------------------------------------------------------------------------
The second session, Trends, was chaired by Sven Dietrich (Stevens
Institute of Technology).

Kapil Singh presented the paper "Evaluating Bluetooth as a Medium for
Botnet Command and Control" by Kapil Singh, Samrit Sangal, Nehil Jain,
Patrick Traynor, and Wenke Lee. This talk evaluated the capabilities of
Bluetooth as a medium for botnet command-and-control structures. Since
Bluetooth is becoming more and more pervasive, it provides a stealthy
alternative to currently used command-and-control infrastructures. The
authors assume that the devices are already infected with malware, and
they evaluated the movements of the devices in a trace-based simulation.
They conclude that it is possible to control a botnet over Bluetooth,
since most people visit the same places each day on a very regular basis.
In their scenarios, command propagation enabled the botnet master to get
most messages to the devices within a day. The talk ended with a few
defense mechanisms: software updates pushed to the devices by the service
provider; the fact that the time when most commands are sent is known (in
the morning hours, when people go to work) and may be used to mitigate
the risks; and desktop software that tests devices for malicious code.

Q: What happens if the vulnerable devices have set a Bluetooth PIN?
A: That is not part of the research; all devices are assumed to be
already infected with some kind of malware.

Somebody commented that it would be interesting to evaluate the approach
using real traces and not only simulations. He mentioned that some
researchers in Italy built a similar tool, but when they wanted to test
it in a real-world environment they ran into serious problems, possibly
caused by too-brief contact with the devices being attacked. Kapil
explained that they use only the aforementioned simulated traces, and
that they assume two devices were able to communicate if they were close
enough at the beginning and at the end of a 5-minute interval.
The questioner commented further that they really don't know what happens
in between, and suggested a more thorough evaluation.

Then Antonio Nappa presented the paper "Take a Deep Breath: a Stealthy,
Resilient and Cost-Effective Botnet Using Skype" by Antonio Nappa,
Aristide Fattori, Marco Balduzzi, Matteo Dell'Amico, and Lorenzo
Cavallaro. The talk proposed a botnet model based on Skype. Since Skype
uses p2p technology, is encrypted, offers NAT and firewall traversal
techniques, and also offers an API, it is an ideal botnet
command-and-control network. The author described how a parasitic overlay
is built on top of the normal Skype network, and how hard it is to get
hold of the botnet master.

Q: What lessons did you learn during this research?
A: That the Skype API is dangerous if used in this way.
Q: Have you contacted the company behind Skype regarding the dangerous
potential behind their API?
A: No.
Q: How do you react when someone blames you for malicious actions that
build on your research?
A: Malicious software is already using these kinds of technologies, at
least for messaging.
Q: Although it is p2p-based, could it be shut down?
A: Legitimate user accounts may be used to perform the malicious actions.

The session ended with the paper "Covertly Probing Underground Economy
Marketplaces" by Hanno Fallmann, Gilbert Wondracek, and Christian
Platzer. Hanno Fallmann presented the results. This talk explained how
cyber criminals trade their goods over publicly available channels on the
Internet. A tool was introduced that automatically monitors these
activities in IRC networks and on web platforms. The evaluation took
place over 11 months.

Q: How good is your bot? It looks similar to Eliza.
A: It started as an idea, but it turned out to be able to elicit specific
information about payment and the like.
Q: Why not switch to a human after a chat is initiated?
A: It runs all the time, and we do not have the resources.
Q: Do you come across discussions about botnets/malware?
A: That is not part of this research, but it exists elsewhere.
Q: What about automatically joining new channels/forums?
A: That is difficult, because some kind of reputation is needed, and that
is not part of this research.
Q: What about bogus sales (rippers)?
A: That is a hot topic in the community, and you can find them
everywhere. The ratio of real to bogus offers is unknown.
Q: What about the risks of doing this from a university IP address?
A: We started with a university IP address and got DDoSed by a botnet;
now we use a different network.
Q: Do you have contact with law enforcement agencies?
A: Not really. We want to analyze the data first.

-----------------------------------------------------------------------------
The third session, Vulnerabilities, was chaired by Michael Meier
(University of Dortmund).

Adam Doupe' began with the presentation of the paper "Why Johnny Can't
Pentest: An Analysis of Black-box Web Vulnerability Scanners" by Adam
Doupe', Marco Cova, and Giovanni Vigna. Adam told the audience about the
challenges that modern web vulnerability scanners face and how they
perform. Eleven scanners were tested, ranging from free open-source
scanners to products costing more than $30,000. After a short summary of
their methods of operation, he presented the results of tests executed
against a vulnerable web application written for this purpose. Out of a
total of 16 vulnerabilities (including XSS, reflected XSS, SQL injection,
weak passwords, ...), no scanner found more than 8.
On the other hand, students found 15 of the vulnerabilities. Adam
concluded his talk by noting that good crawling behaviour is as important
as a good detection engine.

Q: Were there any inconsistencies between two runs of the same scanner?
A: I have heard of inconsistencies, but we did not test that; all
scanners were run only once.
Q: What about the runtime of the scanners?
A: Some are pretty fast and take only 4-7 minutes; others are slow. The
slowest scanner ran for 4 hours.
Q: How did you choose the scanners?
A: We chose most of the big players.
Q: Was there any infinite crawling because of links "directing back"?
A: Yes, one scanner was fooled, and we had to remove the corresponding
page for this scanner.
Q: How many different SQL injections did you implement?
A: One normal SQL injection and one stored SQL injection.

The session ended with Bryce Boe presenting the paper "Organizing Large
Scale Hacking Competitions" by Nick Childers, Bryce Boe, Lorenzo
Cavallaro, Ludovico Cavedon, Marco Cova, Manuel Egele, and Giovanni
Vigna. This talk gave a quick overview of the challenges that arise when
hosting hacking competitions and explained why these competitions are
useful for the participants. The speaker described the scoring system and
explained the goals of the different types of challenges. The talk ended
with three tips: keep it simple, keep it cost-effective, and stress-test
the installation before going live.

Q: Why change from combined defense-and-attack challenges to attack-only
challenges?
A: Because of the knowledge and tools that teams reuse in later
competitions to gain an unfair advantage over new players.
Q: What about attack statistics, e.g., the runtime of some attacks?
A: We did not analyze the data, but it is available online on our
webpage. Some teams focus on side quests to earn many points in less
time, and other teams focus on the main quests.

-----------------------------------------------------------------------------
Invited Talk

Marc Dacier gave the second invited talk, about "TRIAGE: the WOMBAT
attack attribution approach." The talk introduced a new method for
"attack attribution," which aims at explaining which new attacks
contribute to which (new) phenomena. TRIAGE is an acronym for
"atTRIbution of Attacks using Graph-based Event clustering," a
multicriteria clustering method funded by the WOMBAT project. The project
uses many different criteria (IP addresses, email addresses used for
registering domains, dates of registration, domain names, ...) to build a
"contextual fingerprint," which is used for the clustering and to draw a
graphical representation of the collected data. The bottom line of the
talk: use the collected data and search within it, but do not search for
a problem to fit an already-known solution. The collected data is
available for research purposes (NDA required).

Q: Are you correlating data from honeypots and other shady services?
A: No.
Q: Why an NDA? Is there private data that must be protected?
A: The NDA is 8 years old and protects companies from accusations by
other companies that their data was misused. A nice side effect of the
NDA is the creation of a group of people who share data for analysis
purposes.

-----------------------------------------------------------------------------
The fourth session, Intrusion Detection, was chaired by Robin Sommer
(International Computer Science Institute, Berkeley, CA).
Ali Ghorbani started with his talk on "An Online Adaptive Approach to Alert Correlation" by Hanli Ren, Natalia Stakhanova and Ali Ghorbani. The talk addressed the problem that alert correlation must cope with an enormous volume of alerts, many of them false positives. Ali explained that not all features of an alert are relevant, and that the authors build so-called "hyper alerts", which consist of several "atomic alerts". In further steps, their proposed system extracts relevant features and calculates a correlation probability for alert types by leveraging a Bayesian probability analysis. Their adaptive online correlation module is then able to analyse new alerts and correlate them with hyper alerts. This approach provides an unsupervised training method and is able to show why two alerts are correlated.

Sasa Mrdovic presented the results of "KIDS - Keyed Intrusion Detection System". Sasa began his talk with a simple statement: the better IDSs become, the better the attacks will be. When attackers know how the underlying model for attack recognition in an IDS works, they will build packets that mimic "normal" packets in order to get past the IDS. KIDS uses random delimiters for network packet data instead of known delimiters to prevent such mimicry attacks. This yields random words from the payload, which are used to identify attacks. The random delimiters represent the key, which is kept secret. As long as an attacker does not know the key, he will not be able to mimic normal packets. The authors tested their system with HTTP traffic from a university and used the Metasploit framework for attacks, but it should work with other protocols. The detection rate with random delimiters was slightly lower, but still acceptable. The benefit, however, is that mimicry attacks become infeasible. Sasa concluded that this is a novel approach, that it needs a better implementation, and that the key selection process needs a proper evaluation. Q: Have you tested how hard it is to attack the system with random key selection? A: This was tested in a different paper with HTTP traffic. Q: What is the individual influence of the two scores on the detection rate? A: It turned out that multiplying the two scores improves the false positive rate; earlier tests used only one score.
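The keyed-tokenization idea at the heart of KIDS -- split payloads on a secret set of delimiter bytes so an attacker cannot predict which "words" the detector sees -- can be illustrated in a few lines. This is only a sketch, not the authors' implementation; the delimiter-set size, the toy training traffic, and the single word-based score are simplifying assumptions (KIDS combines two scores):

    # Sketch of keyed payload tokenization in the spirit of KIDS
    # (illustration only). The secret "key" is a random set of
    # delimiter bytes; the same payload splits into different words
    # under different keys.
    import random
    import re

    def make_key(num_delimiters=8, seed=None):
        """Pick a random secret set of delimiter bytes (the key)."""
        rng = random.Random(seed)
        return rng.sample(range(256), num_delimiters)

    def tokenize(payload, key):
        """Split the payload on the secret delimiters, dropping empties."""
        pattern = b"[" + b"".join(re.escape(bytes([d])) for d in key) + b"]"
        return [w for w in re.split(pattern, payload) if w]

    # A toy "model": the set of words seen in (assumed clean) training traffic.
    key = make_key(seed=42)
    training = [b"GET /index.html HTTP/1.1", b"GET /images/logo.png HTTP/1.1"]
    known_words = {w for p in training for w in tokenize(p, key)}

    def anomaly_score(payload):
        """Fraction of words never seen in training (single-score toy)."""
        words = tokenize(payload, key)
        if not words:
            return 0.0
        return sum(w not in known_words for w in words) / len(words)

    print(anomaly_score(b"GET /index.html HTTP/1.1"))        # low score
    print(anomaly_score(b"GET /cgi-bin/;rm -rf / HTTP/1.1"))  # higher score

Because the attacker does not know which bytes act as delimiters, crafting a payload whose induced words all look "normal" becomes guesswork.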
-----------------------------------------------------------------------------
Rump session

Sven Dietrich pointed the audience to the IEEE Cipher webpage. Michael Meier announced the upcoming conferences and events this year. Armin Buscher demonstrated Monkeywrench, which crawls webpages for malicious content; the URL is http://www.monkeywrench.de. Felix Leder introduced a new Python-based sandbox. Herbert Bos advertised DIMVA 2011, to be held next year in Amsterdam, and gave a quick overview of Dutch culture.

-----------------------------------------------------------------------------
The final session, on Web Security, was chaired by Herbert Bos (Vrije Universiteit Amsterdam).

"Modeling and Containment of Search Worms Targeting Web Applications" by Jingyu Hua and Kouichi Sakurai was presented by Jingyu. Unlike traditional worms, which scan the IP address space or use hitlists to locate vulnerable servers, search worms feed specially crafted queries to search engines (e.g., Google) and quickly create lists of attack targets based on the search results. Such worms might be looking for server pages with default titles, error pages generated by software, etc. These pages are called "eigenpages". For example, to exploit a vulnerability in the phpBB bulletin service, a worm could search for keywords including "allinurl: "viewtopic.php"". The work presented aims to model search worms in order to understand how quickly they can spread; the authors also propose a solution to contain such worms. The analysis part considers two propagation models, depending on the distribution of eigenpages. The conclusion is that to maximize the spreading speed, eigenpages should be uniformly distributed over servers. To contain search worms, the authors propose to inject honey pages among search results. These are fake pages that point to honeypots luring malicious attackers; further queries from infected machines are then rejected. The approach was tested using simulations, which showed that inserting 2 honey pages into every 100 search results is enough to contain the well-known Santy search worm. Somebody asked how honey web pages are built: after all, they must appear attractive and worth attacking to a worm. Jingyu replied that doing this automatically is left as future work. Another issue raised was that the approach proposes to block worms based on the IP addresses used to issue the malicious queries; this way, however, all benign users with infected machines are simply blocked. The presenter admitted that this is a problem.

The final presentation, "HProxy: Client-side detection of SSL stripping attacks" by Nick Nikiforakis, Yves Younan and Wouter Joosen, was given by Nick. Nick started his talk with some background information on stripping attacks. This technique is not based on any specific error, but on the observation that users never explicitly request SSL-protected websites (they usually do not type the https prefix in the browser), but rely on servers to redirect them to the appropriate version of a particular website. If attackers can launch a man-in-the-middle (MITM) attack, they can suppress such redirections and provide users with a "stripped" version of the requested website, forcing them to communicate over an insecure channel. The approach described in the paper proposes to use the browser's existing history as a detection mechanism. A client-side proxy creates a unique profile for each secure website visited by the user. This profile contains information about the usage of SSL, based on the security characteristics of a website and not on its contents; this enables the system to operate correctly with both static and dynamic webpages. HProxy uses the profile of a website to identify when a page has been maliciously modified by a MITM. No server-side support or trusted third parties are required for the system to work.
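The underlying detection idea -- remember how a site used SSL before and flag an unexplained downgrade -- can be sketched as follows. This is an illustration only, not HProxy's actual profile format; recording just two characteristics per host is a simplifying assumption:

    # Sketch of history-based SSL-stripping detection in the spirit of
    # HProxy (illustration only). The simplified profile records whether
    # a host was previously served over HTTPS and whether its forms
    # posted to HTTPS URLs.
    from urllib.parse import urlparse

    profiles = {}  # host -> {"https_seen": bool, "form_targets_https": bool}

    def observe(url, form_action):
        """Update a host's profile from a trusted earlier session."""
        host = urlparse(url).hostname
        p = profiles.setdefault(host, {"https_seen": False,
                                       "form_targets_https": False})
        if urlparse(url).scheme == "https":
            p["https_seen"] = True
        if urlparse(form_action).scheme == "https":
            p["form_targets_https"] = True

    def looks_stripped(url, form_action):
        """Return True if the current page looks like a stripped version."""
        host = urlparse(url).hostname
        p = profiles.get(host)
        if p is None:
            return False  # no profile yet; first visits need separate handling
        return ((p["https_seen"] and urlparse(url).scheme == "http") or
                (p["form_targets_https"] and
                 urlparse(form_action).scheme == "http"))

    observe("https://bank.example/login", "https://bank.example/session")
    print(looks_stripped("http://bank.example/login",
                         "http://bank.example/session"))  # True

The appeal of the design is visible even in this toy version: all state lives on the client, so no server cooperation is needed.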
-----------------------------------------------------------------------------
All slides will be available on the DIMVA webpage (http://www.dimva.org/dimva2010/). The next DIMVA will be held in Amsterdam in 2011.

____________________________________________________________________
Book Review By Richard Austin
July 22, 2010
____________________________________________________________________

The Failure of Risk Management: Why It's Broken and How to Fix It
by Douglas Hubbard
Wiley 2009. ISBN 978-0-470-38795-5
Amazon.com USD 29.70

We've told generations of information security students that "security" is all about managing risk, but we continue to struggle with exactly how to do that in a field that lacks a long history of actuarial data on just how frequently which losses can be expected to occur and just how bad they might be when they do. We commonly use qualitative methods based on high/medium/low scales, compliance with best practices, etc., to attempt to demonstrate that we're doing the right things with the scarce resources at our disposal. Douglas Hubbard takes the rather heretical view that we're doing it wrong and substituting a "consensus of ignorance" for quantitative assessment.

Hubbard presents his argument in three parts: "Introduction to the Crisis", "Why It's Broken" and "How to Fix It". In the introduction to the crisis he asks some really unpleasant questions about just how much we know about how well the risk management methods we use actually work. Too often we accept the observation that "nothing bad happened" as equivalent to "successfully managing risk" and never ask the impolitic question of whether we did the right things or were just lucky that time around (to see how "lucky" you can be, spend some time playing with the Binomial distribution in your favorite spreadsheet program to see how many trials it takes for a risk to be realized at a given probability -- the results will likely surprise you).
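For the spreadsheet-averse, the same exercise takes a few lines of Python; the 5% annual probability below is an invented example figure, not one of Hubbard's:

    # How "lucky" can you be? For a risk with per-period probability p,
    # the Binomial distribution gives P(no occurrence in n trials) = (1-p)^n.
    # The figures below are illustrative, not from Hubbard's book.
    p = 0.05  # assumed 5% chance per year that the risk is realized

    for years in (1, 5, 10, 14, 20, 30):
        none = (1 - p) ** years
        print(f"{years:2d} years: P(no event) = {none:.2f}, "
              f"P(at least one) = {1 - none:.2f}")

With these numbers it takes about 14 years before seeing at least one event becomes more likely than not -- so a decade of "nothing bad happened" says remarkably little about whether the risk was managed or merely dormant.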
He reviews the current stable of risk management methodologies (beware -- many "naked emperors" will be revealed) and sets the stage for the following sections by describing how you might know whether a methodology actually worked (assuming you actually looked).

"Why It's Broken" delves into how we came to be in such a fix despite the efforts of many bright and conscientious people. With honesty and a dash of humor here and there, the history of the risk management discipline is reviewed and the various approaches are dissected. Everyone from management consultants to subject-matter experts is placed under Hubbard's microscope and found wanting. He meets the common complaint "but we have no data" head on with a rather snide "you have more data than you think". He notes that we often equate "having data" with "having all the exact numbers", and they are not necessarily the same. In his first book, "How to Measure Anything: Finding the Value of Intangibles in Business", Hubbard makes the point that you always know something about the situation, even if it's just a general idea of what the world would look like if whatever you are assessing were true. Once you have this initial stake in the sand, you can begin to think about taking measurements -- not necessarily to give you the exact value, but to reduce the uncertainty in your knowledge. This incremental, probabilistic approach to gathering the necessary data underlies the approach presented in the final section.

By this point in the book, you will probably have seen your favorite approach to risk management lambasted and marked by Hubbard for the scrap heap. In the "How to Fix It" section he identifies three key factors for improving the practice of risk management (p. 202):

* Adapt the language and philosophy of modeling uncertain systems
* Be a scientist
* Build the community as well as the organization

Hubbard believes the way forward for risk management lies firmly in the hands of Monte Carlo models, but tempers this advice with the second admonition: always evaluate the effectiveness of the model against the actual results observed. The third factor emphasizes that the more widely the approach is applied, the better the results. At the conclusion of part three, Hubbard has built an excellent case that we need to bring quantitative methods back into risk management. We do have issues with the quantity of data, but there is more data out there than we think, and if we take the empirical, quantitative view of data as the input to a model whose results will be calibrated against reality, we will gain insight into where measurements can be taken to reduce the uncertainty in the data we have.

Though "The Failure of Risk Management" can be read alone, "How to Measure Anything" would be an excellent preface. Though not primarily focused on the information security profession, this book holds much solid advice on how we can elevate our approach to risk assessment and management beyond high-medium-low rating scales to an empirically based, defensible basis for decision making. Definitely a recommended read.

------------------------
Before beginning life as an itinerant university instructor and security consultant, Richard Austin (http://cse.spsu.edu/raustin2/) was the storage network security architect for a Fortune 25 company. He welcomes your thoughts and comments at raustin2 at spsu dot edu.

====================================================================
Conference and Workshop Announcements
Upcoming Calls-For-Papers and Events
====================================================================

The complete Cipher Calls-for-Papers is located at
http://www.ieee-security.org/CFP/Cipher-Call-for-Papers.html

The Cipher event Calendar is at
http://www.ieee-security.org/Calendar/cipher-hypercalendar.html

____________________________________________________________________
Cipher Event Calendar
(you can get calendar updates on Twitter; follow "ciphernews")
____________________________________________________________________

Calendar of Security and Privacy Related Events
maintained by Hilarie Orman

Date (Month/Day/Year), Event, Locations, web page for more info.
 7/26/10: WESS, 5th Workshop on Embedded Systems Security, Scottsdale, AZ, USA; http://www.wess-workshop.org/; Submissions are due
 7/26/10- 7/28/10: SECRYPT, 5th International Conference on Security and Cryptography, Athens, Greece; http://www.secrypt.icete.org
 8/ 1/10: Journal of Network and Computer Applications, Special Issue on Trusted Computing and Communications; http://www.elsevier.com/locate/jnca; Submissions are due
 8/ 1/10: IEEE Software, Special Issue on Software Protection; http://www.computer.org/portal/web/computingnow/swcfp2; Submissions are due
 8/ 1/10: INTRUST, International Conference on Trusted Systems, Beijing, China; http://www.tcgchina.org; Submissions are due
 8/ 6/10: NDSS, Network & Distributed System Security Symposium, San Diego, CA, USA; http://hotcrp.cylab.cmu.edu/ndss11/; Submissions are due
 8/ 9/10: CSET, 3rd Workshop on Cyber Security Experimentation and Test, Washington, DC, USA; http://www.usenix.org/cset10/cfpa/
 8/ 9/10: WOOT, 4th USENIX Workshop on Offensive Technologies, Washington, DC, USA; http://www.usenix.org/woot10/cfpa/
 8/ 9/10- 8/13/10: LIS, Workshop on Logics in Security, Copenhagen, Denmark; http://lis.gforge.uni.lu/index.html
 8/10/10: HealthSec, 1st USENIX Workshop on Health Security and Privacy, Washington, DC, USA; http://www.usenix.org/healthsec10/cfpa/
 8/10/10: HotSec, 5th USENIX Workshop on Hot Topics in Security, Washington, DC, USA; http://www.usenix.org/events/hotsec10/cfp/
 8/11/10- 8/13/10: USENIX-Security, 19th USENIX Security Symposium, Washington, DC, USA; http://www.usenix.org/events/sec10/cfp/
 8/17/10- 8/19/10: PST, 8th International Conference on Privacy, Security and Trust, Ottawa, Canada; http://pstnet.unb.ca/pst2010
 8/20/10: CPSRT, International Workshop on Cloud Privacy, Security, Risk & Trust, Held in conjunction with the 2nd IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2010), Indianapolis, IN, USA; http://cpsrt.cloudcom.org/; Submissions are due
 8/20/10: CT-RSA, RSA Conference, The Cryptographers' Track, San Francisco, CA, USA; http://ct-rsa2011.di.uoa.gr; Submissions are due
 8/21/10: FDTC, 7th Workshop on Fault Diagnosis and Tolerance in Cryptography, Held in conjunction with CHES 2010, Santa Barbara, CA, USA; http://conferenze.dei.polimi.it/FDTC10/
 8/24/10: SAC-TRECK, 26th ACM Symposium on Applied Computing, Track: Trust, Reputation, Evidence and other Collaboration Know-how (TRECK), TaiChung, Taiwan; http://www.trustcomp.org/treck/; Submissions are due
 8/30/10- 9/ 3/10: TrustBus, 7th International Conference on Trust, Privacy & Security in Digital Business, Bilbao, Spain; http://www.isac.uma.es/trustbus10
 8/31/10: Wiley Security and Communication Networks (SCN), Special Issue on Defending Against Insider Threats and Internal Data Leakage; http://isyou.hosting.paran.com/mist10/SCN-SI-10.pdf; Submissions are due
 9/ 1/10: IEEE Internet Computing, Special Issue on Security and Privacy in Social Networks; http://www.public.asu.edu/~gahn1/icsn2011.htm; Submissions are due
 9/ 1/10: In-Bio-We-Trust, International Workshop on Bio-Inspired Trust Management for Information Systems, Held in conjunction with the Bionetics 2010, Boston, MA, USA; http://inbiowetrust.org; Submissions are due
 9/ 6/10- 9/ 9/10: MMM-ACNS, 5th International Conference on Mathematical Methods, Models, and Architectures for Computer Networks Security, St. Petersburg, Russia; http://comsec.spb.ru/mmm-acns10/
 9/ 7/10- 9/10/10: SECURECOMM, 6th International Conference on Security and Privacy in Communication Networks, Singapore; http://www.securecomm.org/
 9/ 7/10- 9/11/10: SIN, 3rd International Conference on Security of Information and Networks, Taganrog, Rostov-on-Don, Russia; http://www.sinconf.org/sin2010/
 9/ 9/10: SA&PS4CS, 1st International Workshop on Scientific Analysis and Policy Support for Cyber Security, Held in conjunction with the 5th International Conference on Mathematical Methods, Models, and Architectures for Computer Networks Security (MMM-ACNS 2010), St. Petersburg, Russia; http://www.comsec.spb.ru/saps4cs10/
 9/10/10: SecIoT, 1st Workshop on the Security of the Internet of Things, Held in conjunction with the Internet of Things 2010, Tokyo, Japan; http://www.isac.uma.es/seciot10; Submissions are due
 9/13/10: ESSoS, International Symposium on Engineering Secure Software and Systems, Madrid, Spain; http://distrinet.cs.kuleuven.be/events/essos2011/; Submissions are due
 9/13/10- 9/14/10: NeFX, 2nd Annual ACM Northeast Digital Forensics Exchange, Washington, DC, USA; http://nefx.cs.georgetown.edu/
 9/13/10- 9/15/10: SCN, 7th Conference on Security and Cryptography for Networks, Amalfi, Italy; http://scn.dia.unisa.it/
 9/13/10- 9/16/10: SCC, 2nd International Workshop on Security in Cloud Computing, Held in conjunction with ICPP 2010, San Diego, California, USA; http://bingweb.binghamton.edu/~ychen/SCC2010.htm
 9/14/10: VizSec, 7th International Symposium on Visualization for Cyber Security, Ottawa, Ontario, Canada; http://www.vizsec2010.org
 9/15/10: IEEE Transactions on Information Forensics and Security, Special Issue on Using the Physical Layer for Securing the Next Generation of Communication Systems; http://www.signalprocessingsociety.org/publications/periodicals/forensics/forensics-authors-info/; Submissions are due
 9/15/10: CODASPY, 1st ACM Conference on Data and Application Security and Privacy, San Antonio, TX, USA; http://www.codaspy.org/; Submissions are due
 9/15/10: MetriSec, 6th International Workshop on Security Measurements and Metrics, Held in conjunction with the International Symposium on Empirical Software Engineering and Measurement (ESEM 2010), Bolzano-Bozen, Italy; http://www.cs.kuleuven.be/conference/MetriSec2010/
 9/15/10- 9/17/10: RAID, 13th International Symposium on Recent Advances in Intrusion Detection, Ottawa, Canada; http://www.RAID2010.org
 9/16/10- 9/17/10: FAST, 7th International Workshop on Formal Aspects of Security & Trust, Pisa, Italy; http://www.iit.cnr.it/FAST2010/
 9/20/10- 9/22/10: ESORICS, 15th European Symposium on Research in Computer Security, Athens, Greece; http://www.esorics2010.org
 9/20/10- 9/23/10: IFIP-TC9-HCC9, IFIP TC-9 HCC-9 Stream on Privacy and Surveillance, Held in conjunction with the IFIP World Computer Congress 2010, Brisbane, Australia; http://www.wcc2010.org/migrated/HCC92010/HCC92010_cfp.html
 9/20/10- 9/24/10: ADBIS, 14th East-European Conference on Advances in Databases and Information Systems, Track on Personal Identifiable Information: Privacy, Ethics, and Security, Novi Sad, Serbia; http://perun.im.ns.ac.yu/adbis2010/organization.php
 9/21/10: PRITS, Workshop on Pattern Recognition for IT Security, Held in conjunction with DAGM 2010, Darmstadt, Germany; http://www.dagm2010.org/ws_prits.html
 9/23/10: DPM, International Workshop on Data Privacy Management, Held in conjunction with ESORICS 2010, Athens, Greece; http://dpm2010.dyndns.org/
 9/23/10: SETOP, 3rd International Workshop on Autonomous and Spontaneous Security, Held in conjunction with ESORICS 2010, Athens, Greece; http://www.infres.enst.fr/wp/setop2010/
 9/23/10- 9/24/10: STM, 6th International Workshop on Security and Trust Management, Athens, Greece; http://www.isac.uma.es/stm10
 9/24/10: PSDML, ECML/PKDD Workshop on Privacy and Security issues in Data Mining and Machine Learning, Barcelona, Spain; http://fias.uni-frankfurt.de/~dimitrakakis/workshops/psdml-2010/
10/ 1/10: FC, 15th International Conference on Financial Cryptography and Data Security, Bay Gardens Beach Resort, St. Lucia; http://ifca.ai/fc11/; Submissions are due
10/ 4/10: SafeConfig, 2nd Workshop on Assurable & Usable Security Configuration, Held in conjunction with ACM CCS 2010, Chicago, Illinois, USA; http://hci.sis.uncc.edu/safeconfig/
10/ 4/10: STC, 5th Annual Workshop on Scalable Trusted Computing, Held in conjunction with ACM CCS 2010, Chicago, Illinois, USA; http://stc2010.trust.rub.de/
10/ 4/10-10/ 8/10: ACM-CCS, 17th ACM Conference on Computer and Communications Security, Chicago, IL, USA; http://www.sigsac.org/ccs/CCS2010/cfp.shtml
10/ 5/10: NPSec, 6th Workshop on Secure Network Protocols, Held in conjunction with ICNP 2010, Kyoto, Japan; http://webgaki.inf.shizuoka.ac.jp/~npsec2010/
10/ 9/10: TrustCol, 5th International Workshop on Trusted Collaboration, Held in conjunction with the CollaborateCom 2010, Chicago, Illinois, USA; http://scl.cs.nmt.edu/trustcol10/
10/15/10: IFIP-DF, 7th Annual IFIP WG 11.9 International Conference on Digital Forensics, Orlando, FL, USA; http://www.ifip119.org; Submissions are due
10/18/10-10/20/10: ICTCI, 4th International Conference on Trusted Cloud Infrastructure, Shanghai, China; http://ppi.fudan.edu.cn/ictci2010/index.html
10/18/10-10/20/10: eCRS, eCrime Researchers Summit, Dallas, Texas, USA; http://www.ecrimeresearch.org/2010/cfp.html
10/20/10-10/21/10: Malware, 5th IEEE International Conference on Malicious and Unwanted Software, Nancy, France; http://malware10.loria.fr/
10/24/10: WESS, 5th Workshop on Embedded Systems Security, Scottsdale, AZ, USA; http://www.wess-workshop.org/
10/25/10-10/28/10: ISC, 13th Information Security Conference, Boca Raton, Florida; http://math.fau.edu/~isc2010/
10/28/10-10/29/10: EC2ND, 6th European Conference on Computer Network Defense, Berlin, Germany; http://2010.ec2nd.org
10/31/10: SESOC, 3rd International Workshop on Security and Social Networking, Held in conjunction with the PerCom 2011, Seattle, WA, USA; http://www.sesoc.org; Submissions are due
11/ 4/10-11/ 6/10: SIDEUS, 1st International Workshop on Securing Information in Distributed Environments and Ubiquitous Systems, Fukuoka, Japan; http://www.sideus-conf.org
11/ 4/10-11/ 6/10: CWECS, 1st International Workshop on Cloud, Wireless and e-Commerce Security, Fukuoka, Japan; http://dblab.csie.thu.edu.tw/CWECS
11/ 8/10-11/10/10: HST, 10th IEEE International Conference on Technologies for Homeland Security, Waltham, MA, USA; http://ieee-hst.org/
11/15/10: IEEE Network, Special Issue on Network Traffic Monitoring and Analysis; http://dl.comsoc.org/livepubs/ni/info/cfp/cfpnetwork0511.htm; Submissions are due
11/18/10-11/19/10: IDMAN, 2nd IFIP WG 11.6 Working Conference on Policies & Research in Identity Management, Oslo, Norway; http://ifipidman2010.nr.no/ifipidman2010/index.php5/Main_Page
11/22/10-11/23/10: GameSec, The Inaugural Conference on Decision and Game Theory for Security, Berlin, Germany; http://www.gamesec-conf.org/
11/29/10: SecIoT, 1st Workshop on the Security of the Internet of Things, Held in conjunction with the Internet of Things 2010, Tokyo, Japan; http://www.isac.uma.es/seciot10
11/30/10-12/ 3/10: CPSRT, International Workshop on Cloud Privacy, Security, Risk & Trust, Held in conjunction with the 2nd IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2010), Indianapolis, IN, USA; http://cpsrt.cloudcom.org/
12/ 1/10-12/ 3/10: In-Bio-We-Trust, International Workshop on Bio-Inspired Trust Management for Information Systems, Held in conjunction with the Bionetics 2010, Boston, MA, USA; http://inbiowetrust.org
12/ 6/10-12/10/10: ACSAC, 26th Annual Computer Security Applications Conference, Austin, Texas, USA; http://www.acsac.org
12/11/10-12/13/10: TrustCom, IEEE/IFIP International Symposium on Trusted Computing and Communications, Hong Kong SAR, China; http://trust.csu.edu.cn/conference/trustcom2010
12/12/10-12/15/10: WIFS, International Workshop on Information Forensics & Security, Seattle, WA, USA; http://www.wifs10.org
12/13/10-12/15/10: Pairing, 4th International Conference on Pairing-based Cryptography, Yamanaka Hot Spring, Japan; http://www.thlab.net/pairing2010/
12/13/10-12/15/10: INTRUST, International Conference on Trusted Systems, Beijing, China; http://www.tcgchina.org
12/15/10-12/19/10: ICISS, 6th International Conference on Information Systems Security, Gandhinagar, India; http://www.cs.wisc.edu/iciss10/
 1/30/11- 2/ 2/11: IFIP-DF, 7th Annual IFIP WG 11.9 International Conference on Digital Forensics, Orlando, FL, USA; http://www.ifip119.org
 2/ 6/11- 2/ 9/11: NDSS, Network & Distributed System Security Symposium, San Diego, CA, USA; http://hotcrp.cylab.cmu.edu/ndss11/
 2/ 9/11- 2/10/11: ESSoS, International Symposium on Engineering Secure Software and Systems, Madrid, Spain; http://distrinet.cs.kuleuven.be/events/essos2011/
 2/14/11- 2/18/11: CT-RSA, RSA Conference, The Cryptographers' Track, San Francisco, CA, USA; http://ct-rsa2011.di.uoa.gr
 2/21/11- 2/23/11: CODASPY, 1st ACM Conference on Data and Application Security and Privacy, San Antonio, TX, USA; http://www.codaspy.org/
 2/28/11- 3/ 4/11: FC, 15th International Conference on Financial Cryptography and Data Security, Bay Gardens Beach Resort, St. Lucia; http://ifca.ai/fc11/
 3/21/11: SESOC, 3rd International Workshop on Security and Social Networking, Held in conjunction with the PerCom 2011, Seattle, WA, USA; http://www.sesoc.org
 3/21/11- 3/25/11: SAC-TRECK, 26th ACM Symposium on Applied Computing, Track: Trust, Reputation, Evidence and other Collaboration Know-how (TRECK), TaiChung, Taiwan; http://www.trustcomp.org/treck/

____________________________________________________________________
Journal, Conference and Workshop Calls-for-Papers (new since Cipher E96)
____________________________________________________________________

-------------------------------------------------------------------------
WESS 2010 5th Workshop on Embedded Systems Security, Scottsdale, AZ, USA, October 24, 2010. (Submissions due 26 July 2010)
http://www.wess-workshop.org/

Embedded computing systems are widely found in application areas ranging from safety-critical systems to vital information management. This introduces a large number of security issues. Embedded systems are vulnerable to remote intrusion, local intrusion, fault-based and power/timing-based attacks, intellectual-property theft, subversion, hijacking and more.
Due to their strong link to software engineering and hardware engineering, these security issues differ from the traditional security problems found on personal computers. For example, embedded devices are resource-constrained in power and performance, which requires them to use computationally efficient solutions. They have a very weak physical trust boundary, which enables many different implementation-oriented attacks. They use an intimate connection between hardware and software, often without the shielding of an operating system. This workshop provides a forum for researchers to present novel ideas on addressing security issues that arise in the design, the operation, and the testing of secure embedded systems. Of particular interest are security topics that are unique to embedded systems. Topics of Interest:
- Trust models for secure embedded hardware and software
- Isolation techniques for secure embedded hardware, hyperware, and software
- System architectures for secure embedded systems
- Metrics for secure design of embedded hardware and software
- Security concerns for medical and other applications of embedded systems
- Support for intellectual property protection and anti-counterfeiting
- Specialized components for authentication, key storage and key generation
- Support for secure debugging and troubleshooting
- Implementation attacks and countermeasures
- Design tools for secure embedded hardware and software
- Hardware/software codesign for secure embedded systems
- Specialized hardware support for security protocols

-------------------------------------------------------------------------
Journal of Network and Computer Applications, Special Issue on Trusted Computing and Communications, 2nd Quarter, 2011. (Submission Due 1 August 2010)
http://www.elsevier.com/locate/jnca
Guest editors: Laurence T. Yang (St. Francis Xavier University, Canada) and Guojun Wang (Central South University, China)

With the rapid development and increasing complexity of computer and communications systems and networks, traditional security technologies and measures cannot meet the demand for integrated and dynamic security solutions. As a challenging and innovative research field, trusted computing and communications target computer and communications systems and networks that are available, secure, reliable, controllable, dependable, and so on; in a word, they must be trustworthy. If we view traditional security as identity trust, the broader field of trusted computing and communications also includes the behavior trust of systems and networks. In fact, trusted computing and communications have become essential components of various distributed services, applications, and systems, including self-organizing networks, social networks, semantic webs, e-commerce, and e-government. Relevant research areas therefore include, but are not limited to, the following topics:
- Trusted computing platform and paradigm
- Trusted systems and architectures
- Trusted operating systems
- Trusted software
- Trusted database
- Trusted services and applications
- Trust in e-commerce and e-government
- Trust in mobile and wireless networks
- Trusted communications and networking
- Reliable and fault-tolerant computer systems/networks
- Survivable computer systems/networks
- Autonomic and dependable computer systems/networks

-------------------------------------------------------------------------
IEEE Software, Special Issue on Software Protection, March, 2011.
(Submission Due 1 August 2010)
http://www.computer.org/portal/web/computingnow/swcfp2
Guest editors: Paolo Falcarin (University of East London, UK), Christian Collberg (University of Arizona, USA), Mikhail Atallah (Purdue University, USA), and Mariusz Jakubowski (Microsoft Research)

Software protection is an area of growing importance in software engineering and security: leading-edge researchers have developed several pioneering approaches for preventing or resisting software piracy and tampering, building a heterogeneous body of knowledge spanning different topics: obfuscation, information hiding, reverse engineering, source/binary code transformation, operating systems, networking, encryption, and trusted computing. IEEE Software seeks submissions for a special issue on software protection. We seek articles that present proven mechanisms and strategies to mitigate one or more of the problems faced by software protection. These strategies should offer practitioners appropriate methods, approaches, techniques, guidelines, and tools to support the evaluation and integration of software protection techniques into their software products. Possible topics include:
- Analysis of legal, ethical, and usability aspects of software protection
- Best practices and lessons learned while dealing with different relevant threats
- Case studies on success and/or failure in applying software protections
- Code obfuscation and reverse-engineering complexity
- Computing with encrypted functions and data
- Protection of authorship: watermarking and fingerprinting
- Remote attestation and network-based approaches
- Security evaluation of software protection's effectiveness
- Software protection methods used by malware (viruses, rootkits, worms, and botnets)
- Source and binary code protections
- Tamper-resistant software: mobile, self-checking, and self-modifying code
- Tools to implement or defeat software protections
- Trusted computing or other hardware-assisted protection
- Virtualization and protections based on operating systems

-------------------------------------------------------------------------
INTRUST 2010 International Conference on Trusted Systems, Beijing, China, December 13-15, 2010. (Submissions due 1 August 2010)
http://www.tcgchina.org

The INTRUST 2010 conference focuses on the theory, technologies and applications of trusted systems. It is devoted to all aspects of trusted computing systems, including trusted modules, platforms, networks, services and applications, from their fundamental features and functionalities to design principles, architecture and implementation technologies. The goal of the conference is to bring academic and industrial researchers, designers, and implementers together with end-users of trusted systems, in order to foster the exchange of ideas in this challenging and fruitful area. INTRUST 2010 solicits original papers on any aspect of the theory, advanced development and applications of trusted computing, trustworthy systems and general trust issues in modern computing systems. The conference will have an academic track and an industrial track; this call for papers covers both. Submissions to the academic track should emphasize theoretical and practical research contributions to general trusted system technologies, while submissions to the industrial track may focus on experiences in the implementation and deployment of real-world systems.
-------------------------------------------------------------------------
NDSS 2011 Network & Distributed System Security Symposium, San Diego, California, USA, February 6-9, 2011. (Submissions due 6 August 2010)
http://hotcrp.cylab.cmu.edu/ndss11/

The Network and Distributed System Security Symposium fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available network and distributed systems security technology. Special emphasis will be placed on accepting papers in the core theme of network and distributed systems security. Consequently, papers that cover networking protocols and distributed systems algorithms are especially invited, and practical papers in these areas are also very welcome. Submissions are solicited in, but not limited to, the following areas:
- Integrating security in Internet protocols: routing, naming, network management
- High-availability wired and wireless networks
- Security for Cloud Computing
- Future Internet architecture and design
- Security of Web-based applications and services
- Anti-malware techniques: detection, analysis, and prevention
- Security for future home networks, Internet of Things, body-area networks
- Intrusion prevention, detection, and response
- Combating cyber-crime: anti-phishing, anti-spam, anti-fraud techniques
- Privacy and anonymity technologies
- Security for emerging technologies: sensor networks, wireless/mobile (and ad hoc) networks, and personal communication systems
- Security for Vehicular Ad-hoc Networks (VANETs)
- Security for peer-to-peer and overlay network systems
- Security for electronic commerce: e.g., payment, barter, EDI, notarization, timestamping, endorsement, and licensing
- Implementation, deployment and management of network security policies
- Intellectual property protection: protocols, implementations, metering, watermarking, digital rights management
- Public key infrastructures, key management, certification, and revocation
- Special problems and case studies: e.g., tradeoffs between security and efficiency, usability, reliability and cost
- Security for collaborative applications: teleconferencing and video-conferencing
- Security for large-scale systems and critical infrastructures (e.g., electronic voting, smart grid)
- Applying Trustworthy Computing mechanisms to secure network protocols and distributed systems

-------------------------------------------------------------------------
CPSRT 2010 International Workshop on Cloud Privacy, Security, Risk & Trust, Held in conjunction with the 2nd IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2010), Indianapolis, IN, USA, November 30 - December 3, 2010. (Submissions due 20 August 2010)
http://cpsrt.cloudcom.org/

Cloud computing has emerged to address an explosive growth of web-connected devices and to handle massive amounts of data. It is defined and characterized by massive scalability and new Internet-driven economics. Yet privacy, security, and trust for cloud computing applications are lacking in many instances, and the risks need to be better understood.
Privacy in cloud computing may appear straightforward, since one might conclude that as long as personal information is protected, it shouldn't matter whether the processing is in a cloud or not. However, there may be hidden obstacles, such as conflicting privacy laws between the location of processing and the location of data origin. Cloud computing can exacerbate the problem of reconciling these locations, since the geographic location of processing can be extremely difficult to find out, due to cloud computing's dynamic nature. Another issue is user-centric control, which can be a legal requirement and is also something consumers want. However, in cloud computing, consumers' data is processed in the cloud, on machines they don't own or control, and there is a threat of theft, misuse or unauthorized resale. Thus, it may even be necessary in some cases to provide adequate trust for consumers to switch to cloud services. In the case of security, some cloud computing applications simply lack adequate security protection, such as fine-grained access control and user authentication (e.g. Hadoop). Since enterprises are attracted to cloud computing by potential savings in IT outlay and management, it is necessary to understand the business risks involved. If cloud computing is to be successful, it is essential that it is trusted by its users. Therefore, we also need studies on cloud-related trust topics, such as what the components of such trust are and how trust can be achieved, for security as well as for privacy. The CPSRT workshop will bring together a diverse group of academics as well as government and industry practitioners in an integrated state-of-the-art analysis of privacy, security, risk, and trust in the cloud. The workshop will address cloud issues specifically related to (but not limited to) the following topics of interest:
- Access control and key management
- Security and privacy policy management
- Identity management
- Remote data integrity protection
- Secure computation outsourcing
- Secure data management within and across data centers
- Secure distributed data storage
- Secure resource allocation and indexing
- Intrusion detection/prevention
- Denial-of-Service (DoS) attacks and defense
- Web service security, privacy, and trust
- User requirements for privacy
- Legal requirements for privacy
- Privacy enhancing technologies
- Privacy aware map-reduce framework
- Risk or threat identification and analysis
- Risk or threat management
- Trust enhancing technologies
- Trust management

-------------------------------------------------------------------------
CT-RSA 2011 RSA Conference, The Cryptographers' Track, San Francisco, CA, USA, February 14-18, 2011. (Submissions due 20 August 2010)
http://ct-rsa2011.di.uoa.gr

The RSA Conference is the largest annual computer security event, with over 350 vendors and thousands of attendees. The Cryptographers' Track (CT-RSA) is a research conference within the RSA Conference. CT-RSA began in 2002 and has become an established venue for presenting cryptographic research papers. Original research papers pertaining to all aspects of cryptography are solicited.
Submissions may present applications, techniques, theory, and practical experience on topics including, but not limited to:
- public-key encryption
- symmetric-key encryption
- cryptanalysis
- digital signatures
- hash functions
- cryptographic protocols
- tamper-resistance
- fast implementations
- elliptic-curve cryptography
- lattice-based cryptography
- quantum cryptography
- formal security models
- network security
- hardware security
- e-commerce

-------------------------------------------------------------------------
SAC-TRECK 2011 26th ACM Symposium on Applied Computing, Track: Trust, Reputation, Evidence and other Collaboration Know-how (TRECK), TaiChung, Taiwan, March 21-25, 2011. (Submissions due 24 August 2010)
http://www.trustcomp.org/treck/

The goal of the ACM SAC 2011 TRECK track remains to review the set of applications that benefit from the use of computational trust and online reputation. Computational trust has been used in reputation systems, risk management, collaborative filtering, social/business networking services, dynamic coalitions, virtual organisations, and even in combination with trusted computing hardware modules. The TRECK track covers all computational trust/reputation applications, especially those used in real-world applications. The topics of interest include, but are not limited to:
- Trust management, reputation management and identity management
- Pervasive computational trust and use of context-awareness
- Mobile trust, context-aware trust
- Web 2.0 reputation and trust
- Trust-based collaborative applications
- Automated collaboration and trust negotiation
- Trade-off between privacy and trust
- Trust/risk-based security frameworks
- Combined computational trust and trusted computing
- Tangible guarantees given by formal models of trust and risk
- Trust metrics assessment and threat analysis
- Trust in peer-to-peer and open source systems
- Technical trust evaluation and certification
- Impacts of social networks on computational trust
- Evidence gathering and management
- Real-world applications, running prototypes and advanced simulations
- Applicability in large-scale, open and decentralised environments
- Legal and economic aspects related to the use of trust and reputation engines
- User studies and user interfaces of computational trust and online reputation applications

-------------------------------------------------------------------------
Wiley Security and Communication Networks (SCN), Special Issue on Defending Against Insider Threats and Internal Data Leakage, 2011. (Submission Due 31 August 2010)
http://isyou.hosting.paran.com/mist10/SCN-SI-10.pdf
Guest editors: Elisa Bertino (Purdue University, USA), Gabriele Lenzini (SnT-Univ. of Luxembourg, Luxembourg), Marek R. Ogiela (AGH University of Science & Technology, Poland), and Ilsun You (Korean Bible University, Korea)

This special issue collects scientific studies and works reporting on the most recent challenges and advances in security technologies and management systems for protecting an organization's information from malicious insider activity. It aims to be a showcase for researchers who address the problem of how to prevent the leakage of an organization's information caused by insiders. Contributions may include state-of-the-art surveys and case analyses of practical significance, which, we hope, will support and foster further research and technology improvements related to this important subject.
Papers on practical as well as theoretical topics are invited. Topics include (but are not limited to):
- Theoretical foundations and algorithms for addressing insider threats
- Insider threat assessment and modeling
- Security technologies to prevent, detect and avoid insider threats
- Validating the trustworthiness of staff
- Post-insider-threat incident analysis
- Data breach modeling and mitigation techniques
- Authentication and identification
- Certification and authorization
- Database security
- Device control systems
- Digital forensic systems
- Digital rights management systems
- Fraud detection
- Network access control systems
- Intrusion detection
- Keyboard information security
- Information security governance
- Information security management systems
- Risk assessment and management
- Log collection and analysis
- Trust management
- Secure information splitting and sharing algorithms
- Steganography and subliminal channels
- IT compliance (audit)
- Continuous auditing
- Socio-technical engineering attacks on security and privacy

-------------------------------------------------------------------------
IEEE Internet Computing, Special Issue on Security and Privacy in Social Networks, May/June 2011. (Submission Due 1 September 2010)
http://www.public.asu.edu/~gahn1/icsn2011.htm
Guest editors: Gail-Joon Ahn (Arizona State University, USA), Mohamed Shehab (UNC Charlotte, USA), and Anna Squicciarini (Penn State University, USA)

Social networks, where people exchange personal and public information, have enabled users to connect with their friends, coworkers, colleagues, family and even with strangers. Several social networking sites have been developed over the past several years to facilitate such social interactions and sharing activities on the Internet. The popularity of social networking sites introduces the use of mediated communication into the relationship development process. Online social networks have also recently emerged as a promising area of research with a vast reach and application space. Users post information on their profiles to share and interact with their other friends in the social network. Social networks are not limited to simple entertainment applications; several critical businesses have adopted social networks to attract new customer spaces and to provide new services. Current trends in social networks are indirectly requiring users to become system and policy administrators in order to protect their content in this social setting. This is further complicated by the rapid growth rate of social networks and by the continuous adoption of new services on them. Furthermore, the use of personal information in social networks raises entirely new privacy concerns and requires new insights into security problems. Several studies and recent news items have highlighted the increasing risk of misuse of personal data processed by online social networking applications and the lack of awareness among the user population. The security needs of social networks are still not well understood and are not fully defined. Nevertheless, it is clear these will be quite different from classic security requirements. It is important to bring a depth of security experience from multiple security domains and technologies to this field, as well as depth and breadth of knowledge about social networks. The aim of this special issue is to encompass research advances in all areas of security and privacy in social networks.
We welcome contributions relating to novel technologies and methodologies for securely building and managing social networks and relevant secure applications, as well as to cross-cutting issues. Topics of interest include, but are not limited to:
- Access control and identity management
- Delegation and secure collaboration
- Information flow, diffusion and auditing
- Malware analysis in social networks
- Privacy challenges and mechanisms
- Risk assessment and management
- Secure social-network application development and methodologies
- Secure object tagging, bookmarking and annotations
- Trust and reputation management
- Usability-driven security mechanisms

-------------------------------------------------------------------------
In-Bio-We-Trust 2010 International Workshop on Bio-Inspired Trust Management for Information Systems, Held in conjunction with the Bionetics 2010, Boston, MA, USA, December 1-3, 2010. (Submissions due 1 September 2010)
http://inbiowetrust.org

Traditional security mechanisms fall short of what new information systems need. To fix this problem, two research communities have recently proposed new security mechanisms. One of those communities, "bio-inspired systems", is increasingly borrowing ideas from nature to make information systems more effective and robust. The other, "trust management systems", has been proposing and scrutinizing algorithms for information systems that mimic how people manage trust in society. Increasingly, the two communities are working on similar research problems but, alas, they are doing so separately. Although there is an enormous number of potentially useful bio-inspired mechanisms that could be exploited in trust management, it comes as a surprise that bio-inspired trust management has not received any attention at all. Clearly, the dialog between researchers in bio-inspired systems and in trust management should widen. The workshop seeks to bring together the world's experts in both communities, and to stimulate and disseminate interesting research ideas and results. Contributions are solicited in all aspects of bio-inspired and trust management systems, including:
- Bio-inspired models for managing trust in any information system: virtual organizations, grid and cloud computing, mobile ad-hoc/opportunistic/delay-tolerant networks, service-oriented architectures, self-organizing networks and communities, mobile cooperative systems, mobile platforms, recommender systems
- Fixed and mobile architectures and protocols for distributed trust management
- Identity management in trust models
- Security attacks on trust systems and adaptive bio-inspired defenses
- Incorporation of bio-inspired algorithms into security communication protocols and computing architectures
- Descriptions of pilot programs, case studies, applications, work-in-progress, surveys, and experiments integrating biological designs or trust and security aspects into information systems

-------------------------------------------------------------------------
SecIoT 2010 1st Workshop on the Security of the Internet of Things, Held in conjunction with the Internet of Things 2010, Tokyo, Japan, November 29, 2010. (Submissions due 10 September 2010)
http://www.isac.uma.es/seciot10

While there are many definitions of the Internet of Things (IoT), all of them revolve around the same central concept: a world-wide network of interconnected objects.
These objects will make use of multiple technological building blocks, such as wireless communication, sensors, actuators, and RFID, in order to allow people and things to be connected anytime and anyplace, with anything and anyone. However, before this new vision takes its first steps, it is essential to consider the security implications of billions of intelligent things cooperating with other real and virtual entities over the Internet. SecIoT'10 wants to bring together researchers and professionals from universities, private companies and public administrations interested or involved in all security-related heterogeneous aspects of the Internet of Things. We invite research papers, work-in-progress reports, R&D project results, surveys and industrial experience reports describing significant security advances in the following (non-exclusive) areas of the Internet of Things:
- New security problems in the context of the IoT
- Privacy risks and data management problems
- Identifying, authenticating, and authorizing entities
- Development of trust frameworks for secure collaboration
- New cryptographic primitives for constrained "things"
- Connecting heterogeneous ecosystems and technologies
- Legal challenges and governance issues
- Resilience to external and internal attacks
- Context-aware security
- Providing protection to an IP-connected IoT
- Web services security and other application-layer issues

-------------------------------------------------------------------------
ESSoS 2011 International Symposium on Engineering Secure Software and Systems, Madrid, Spain, February 9-10, 2011. (Submissions due 13 September 2010)
http://distrinet.cs.kuleuven.be/events/essos2011/

Trustworthy, secure software is a core ingredient of the modern world. Unfortunately, the Internet is too. Hostile, networked environments like the Internet allow vulnerabilities in software to be exploited from anywhere. To address this, high-quality security building blocks (e.g., cryptographic components) are necessary but insufficient. Indeed, the construction of secure software is challenging because of the complexity of modern applications, the growing sophistication of security requirements, the multitude of available software technologies and the progress of attack vectors. Clearly, a strong need exists for engineering techniques that scale well and that demonstrably improve the software's security properties. The Symposium seeks submissions on subjects related to its goals.
This includes a diversity of topics including (but not limited to):
- scalable techniques for threat modeling and analysis of vulnerabilities
- specification and management of security requirements and policies
- security architecture and design for software and systems
- model checking for security
- specification formalisms for security artifacts
- verification techniques for security properties
- systematic support for security best practices
- security testing
- security assurance cases
- programming paradigms, models and DSLs for security
- program rewriting techniques
- processes for the development of secure software and systems
- security-oriented software reconfiguration and evolution
- security measurement
- automated development
- trade-offs between security and other non-functional requirements
- support for assurance, certification and accreditation

-------------------------------------------------------------------------
IEEE Transactions on Information Forensics and Security, Special Issue on Using the Physical Layer for Securing the Next Generation of Communication Systems, June 1, 2011. (Submission Due 15 September 2010)
http://www.signalprocessingsociety.org/publications/periodicals/forensics/forensics-authors-info/
Guest editors: Vincent Poor (Princeton University, USA), Wade Trappe (Rutgers University, USA), Aylin Yener (Pennsylvania State University, USA), Hisato Iwai (Doshisha University, Japan), Joao Barros (University of Porto, Portugal), and Paul Prucnal (Princeton University, USA)

Communication technologies are undergoing a renaissance as there is a movement to explore new, clean-slate approaches for building communication networks. Although future Internet efforts promise to bring new perspectives on protocol designs for high-bandwidth, access-anything-from-anywhere services, ensuring that these new communication systems are secure will also require a re-examination of how we build secure communication infrastructures. Traditional approaches to building and securing networks are tied tightly to the concept of protocol layer separation. In network design, routing is typically considered separately from link-layer functions, which are considered independently of transport-layer phenomena or even the applications that use these functions. Similarly, in the security arena, MAC-layer security solutions (e.g. WPA2 for 802.11 devices) are typically point solutions addressing threats facing the link layer, while routing and transport-layer security issues are dealt with in distinct, non-integrated protocols like IPsec and TLS. The protocol separation inherent in security solutions is only further highlighted by the fact that the physical layer is generally absent from consideration. This special issue seeks to provide a venue for the ongoing research area of physical-layer security across all varieties of communication media, ranging from wireless networks at the edge to optical backbones at the core of the network. The scope of this special issue will be interdisciplinary, involving contributions from experts in the areas of cryptography, computer security, information theory, signal processing, communications theory, and propagation theory.
In particular, the areas of interest include, but are not limited to, the following:
- Information-theoretic formulations for confidentiality and authentication
- Generalizations of Wyner's wiretap problem to wireless and optical systems
- Physical layer techniques for disseminating information
- Techniques to extract secret keys from channel state information
- Secrecy of MIMO and multiple-access channels
- Physical layer methods for detecting and thwarting spoofing and Sybil attacks
- Techniques to achieve covert or stealthy communication at the physical layer
- Quantum cryptography
- Modulation recognition and forensics
- Security and trustworthiness in cooperative communication
- Fast encryption using physical layer properties
- Attacks and threat analyses targeted at subverting physical layer communications

-------------------------------------------------------------------------
CODASPY 2011 1st ACM Conference on Data and Application Security and Privacy, San Antonio, TX, USA, February 21-23, 2011. (Submissions due 15 September 2010)
http://www.codaspy.org/

Data and the applications that manipulate data are the crucial assets in today's information age. With the increasing drive towards availability of data and services anytime and anywhere, security and privacy risks have increased. New applications such as social networking and social computing provide value by aggregating input from numerous individual users and/or the mobile devices they carry with them and computing new information of value to society and individuals. Data and applications security and privacy has rapidly expanded as a research field with many important challenges to be addressed. The goal of the conference is to discuss novel, exciting research topics in data and application security and privacy and to lay out directions for further research and development in this area. The conference seeks submissions from diverse communities, including corporate and academic researchers, open-source projects, standardization bodies, governments, system and security administrators, software engineers and application domain experts.

-------------------------------------------------------------------------
FC 2011 15th International Conference on Financial Cryptography and Data Security, Bay Gardens Beach Resort, St. Lucia, February 28 - March 4, 2011. (Submissions due 1 October 2010)
http://ifca.ai/fc11/

Financial Cryptography and Data Security is a major international forum for research, advanced development, education, exploration, and debate regarding information assurance, with a specific focus on commercial contexts. The conference covers all aspects of securing transactions and systems. Original works focusing on both fundamental and applied real-world deployments on all aspects surrounding commerce security are solicited. Submissions need not be exclusively concerned with cryptography. Systems security and inter-disciplinary efforts are particularly encouraged.

-------------------------------------------------------------------------
IFIP-DF 2011 7th Annual IFIP WG 11.9 International Conference on Digital Forensics, Orlando, Florida, USA, January 30 - February 2, 2011. (Submissions due 15 October 2010)
http://www.ifip119.org

The IFIP Working Group 11.9 on Digital Forensics (www.ifip119.org) is an active international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in the emerging field of digital forensics.
The Seventh Annual IFIP WG 11.9 International Conference on Digital
Forensics will provide a forum for presenting original, unpublished
research results and innovative ideas related to the extraction,
analysis and preservation of all forms of electronic evidence. Papers
and panel proposals are solicited. All submissions will be refereed by
a program committee comprising members of the Working Group. Papers
and panel submissions will be selected based on their technical merit
and relevance to IFIP WG 11.9. The conference will be limited to
approximately sixty participants to facilitate interactions between
researchers and intense discussions of critical research issues.
Keynote presentations, revised papers and details of panel discussions
will be published as an edited volume, the seventh in the series
Research Advances in Digital Forensics (Springer), in the summer of
2011. Revised and/or extended versions of selected papers from the
conference will be published in special issues of one or more
international journals. Technical papers are solicited in all areas
related to the theory and practice of digital forensics. Areas of
special interest include, but are not limited to:
 - Theories, techniques and tools for extracting, analyzing and
   preserving digital evidence
 - Network forensics
 - Portable electronic device forensics
 - Digital forensic processes and workflow models
 - Digital forensic case studies
 - Legal, ethical and policy issues related to digital forensics

-------------------------------------------------------------------------
SESOC 2011 3rd International Workshop on Security and Social
Networking, Held in conjunction with PerCom 2011, Seattle, WA, USA,
March 21, 2011. (Submissions due 31 October 2010)
http://www.sesoc.org

Future pervasive communication systems aim at supporting social and
collaborative communications: the evolving topologies are expected to
resemble the actual social networks of the communicating users, and
information on their characteristics can be a powerful aid for any
network operation. New emerging technologies that use information on
the social characteristics of their participants raise entirely new
privacy concerns and require new reflections on security problems such
as trust establishment, cooperation enforcement, or key management.
The aim of this workshop is to encompass research advances in all
areas of security, trust and privacy in pervasive communication
systems, integrating the social structure of the network as well.
Topics of interest include:
 - all types of emerging privacy concerns
 - new aspects of trust
 - decentralized social networking services
 - availability and resilience
 - community-based secure communication
 - data confidentiality, data integrity
 - anonymity, pseudonymity
 - new key management approaches
 - secure bootstrapping
 - security issues in forwarding and routing
 - security aspects of cooperation
 - new approaches to reputation
 - new attack paradigms
 - social engineering and phishing
 - new requirements for software security
 - malware

-------------------------------------------------------------------------
IEEE Network, Special Issue on Network Traffic Monitoring and
Analysis, May 2011.
(Submission Due 15 November 2010)
http://dl.comsoc.org/livepubs/ni/info/cfp/cfpnetwork0511.htm
Guest editors: Wei Wang (University of Luxembourg, Luxembourg),
Xiangliang Zhang (University of Paris-Sud 11, France), Wenchang Shi
(Renmin University of China, China), Shiguo Lian (France Telecom R&D
Beijing, China), and Dengguo Feng (Chinese Academy of Sciences, China)

Modern computer networks are increasingly complex and ever-evolving.
Understanding and measuring such networks is a difficult yet vital
task for network management and diagnosis. Network traffic monitoring,
analysis and anomaly detection provide useful tools for understanding
network behavior and for determining network performance and
reliability, so that problems can be effectively troubleshot and
resolved in practice. They also provide a basis for prevention and
reaction in network security, as intrusions, attacks, worms, and other
kinds of malicious behavior can be detected through traffic analysis
and anomaly detection. This special issue seeks original articles
examining the state of the art, open issues, research results, tool
evaluations, and future research directions in network monitoring,
analysis and anomaly detection. Possible topics include:
 - Network traffic analysis and classification
 - Traffic sampling and signal processing methods
 - Network performance measurements
 - Network anomaly detection and troubleshooting
 - Network security threats and countermeasures
 - Network monitoring and traffic measurement systems
 - Real environment experiments and testbeds

______________________________________________________________________

The complete Cipher Calls-for-Papers is located at
http://www.ieee-security.org/CFP/Cipher-Call-for-Papers.html

The Cipher event Calendar is at
http://www.ieee-security.org/Calendar/cipher-hypercalendar.html

====================================================================
Listing of academic positions available by Cynthia Irvine
====================================================================

Posted June 2010
George Mason University
Department of Applied Information Technology
Fairfax, VA
Review of applications will continue until positions are filled
http://jobs.gmu.edu, Position number F9379z
--------------
Full list: http://cisr.nps.edu/jobscipher.html

This job listing is maintained as a service to the academic community.
If you have an academic position in computer security and would like
to have it included on this page, send the following information:
institution, city, state, position title, date the position
announcement closes, and URL of the position description to:
irvine@cs.nps.navy.mil

====================================================================
Information on the Technical Committee on Security and Privacy
====================================================================
____________________________________________________________________
Information for Subscribers and Contributors
____________________________________________________________________

SUBSCRIPTIONS: There are two subscription options, each with two ways
to sign up:

1. To receive the full ASCII CIPHER issues as e-mail, send e-mail to
   cipher-admin@ieee-security.org (which is NOT automated) with
   subject line "subscribe". OR send a note to
   cipher-request@mailman.xmission.com with the subject line
   "subscribe" (this IS automated - thereafter you can manage your
   subscription options, including unsubscribing, yourself)
2. To receive a short e-mail note announcing when a new issue of
   CIPHER is available for Web browsing, send e-mail to
   cipher-admin@ieee-security.org (which is NOT automated) with
   subject line "subscribe postcard". OR send a note to
   cipher-postcard-request@mailman.xmission.com with the subject line
   "subscribe" (this IS automated - thereafter you can manage your
   subscription options, including unsubscribing, yourself)

To remove yourself from the subscription list, send e-mail to
cipher-admin@ieee-security.org with subject line "unsubscribe" or
"unsubscribe postcard" or, if you have subscribed directly to the
xmission.com mailing list, use your password (sent monthly) to
unsubscribe per the instructions at
http://mailman.xmission.com/cgi-bin/mailman/listinfo/cipher or
http://mailman.xmission.com/cgi-bin/mailman/listinfo/cipher-postcard

Those with access to hypertext browsers may prefer to read Cipher that
way. It can be found at URL
http://www.ieee-security.org/cipher.html

CONTRIBUTIONS: to cipher @ ieee-security.org are invited. Cipher is a
NEWSletter, not a bulletin board or forum. It has a fixed set of
departments, defined by the Table of Contents. Please indicate in the
subject line for which department your contribution is intended.
Calendar and Calls-for-Papers entries should be sent to
cipher-cfp @ ieee-security.org and they will be automatically included
in both departments. To facilitate the semi-automated handling, please
send either a text version of the CFP or a URL from which a text
version can be easily obtained. For Calendar entries, please include a
URL and/or e-mail address for the point-of-contact. For Calls for
Papers, please submit a one-paragraph summary. See this and past
issues for examples.

ALL CONTRIBUTIONS ARE CONSIDERED PERSONAL COMMENTS; USUAL DISCLAIMERS
APPLY. All reuses of Cipher material should respect stated copyright
notices and should cite the sources explicitly; as a courtesy,
publications using Cipher material should obtain permission from the
contributors.

____________________________________________________________________
Recent Address Changes
____________________________________________________________________

Address changes from past issues of Cipher are archived at
http://www.ieee-security.org/Cipher/AddressChanges.html

_____________________________________________________________________
How to become a member of the IEEE Computer Society's TC on
Security and Privacy
_____________________________________________________________________

You may easily join the TC on Security & Privacy by completing the
on-line form at IEEE at
http://www.computer.org/TCsignup/index.htm

______________________________________________________________________
TC Publications for Sale
______________________________________________________________________

IEEE Security and Privacy Symposium

The 2009 hardcopy proceedings are not available. The DVD with all
technical papers from all years of the SP Symposium and the CSF
Symposium is $12, plus shipping and handling.

The 2008 hardcopy proceedings are $10 plus shipping and handling; the
29-year CD is $10.00.

The 2007 proceedings are available in hardcopy for $10.00; the
28-year CD is $10.00, plus shipping and handling.

The 2006 Symposium proceedings and 11-year CD are sold out. The 2005,
2004, and 2003 Symposium proceedings are available for $10 plus
shipping and handling.

Shipping is $5.00/volume within the US, overseas surface mail is
$8/volume, and overseas airmail is $14/volume, based on an order of 3
volumes or less.
The shipping charge for a CD is $1 per CD (no charge if included with
a hardcopy order).

Send a check made out to the IEEE Symposium on Security and Privacy to
the 2010 treasurer (below) with the order description, including
shipping method and shipping address.

    Al Shaffer
    Treasurer, IEEE Symposium on Security and Privacy 2010
    Glasgow East Annex, Rm. 218 (GE-218)
    1411 Cunningham Rd.
    Naval Postgraduate School
    Monterey, CA 93943
    831/656-3319 (voice)
    oakland10-treasurer@ieee-security.org

IEEE CS Press

You may order some back issues from IEEE CS Press at
http://www.computer.org/cspress/catalog/proc9.htm

Computer Security Foundations Symposium

Copies of the proceedings of the Computer Security Foundations
Workshop (now Symposium) are available for $10 each. Copies of
proceedings are available starting with year 10 (1997). Photocopy
versions of year 1 are also $10. Contact Jonathan Herzog if interested
in a purchase.

    Jonathan Herzog
    jherzog@alum.mit.edu

____________________________________________________________________________
TC Officer Roster
____________________________________________________________________________

Chair:
Hilarie Orman
Purple Streak, Inc.
500 S. Maple Dr.
Woodland Hills, UT 84653
ieee-chair@purplestreak.com

Security and Privacy Symposium Chair Emeritus:
Ulf Lindqvist
SRI
Menlo Park, CA
(650) 859-2351 (voice)
ulf.lindqvist@sri.com

Vice Chair:
Sven Dietrich
Department of Computer Science
Stevens Institute of Technology
+1 201 216 8078 (voice)
spock AT cs.stevens.edu

Chair, Subcommittee on Academic Affairs:
Prof. Cynthia Irvine
U.S. Naval Postgraduate School
Computer Science Department, Code CS/IC
Monterey, CA 93943-5118
(831) 656-2461 (voice)
irvine@nps.edu

Treasurer:
Terry Benzel
USC Information Sciences Institute
4676 Admiralty Way, Suite 1001
Los Angeles, CA 90292
(310) 822-1511 (voice)
tbenzel @isi.edu

Chair, Subcomm. on Security Conferences:
Jonathan Millen
The MITRE Corporation, Mail Stop S119
202 Burlington Road Rte. 62
Bedford, MA 01730-1420
781-271-51 (voice)
jmillen@mitre.org

Newsletter Editor:
Hilarie Orman
Purple Streak, Inc.
500 S. Maple Dr.
Woodland Hills, UT 84653
cipher-editor@ieee-security.org

Security and Privacy Symposium, 2011 Chair:
Deborah Frincke
Pacific Northwest National Laboratory
deborah.frincke@pnl.gov

________________________________________________________________________
BACK ISSUES: Cipher is archived at
http://www.ieee-security.org/cipher.html

Cipher is published 6 times per year