Review of the
Conference on Detection of Intrusions and Malware & Vulnerability Assessment,
Bonn, Germany
July 8-9, 2010

Review by Asia Slowinska and Johannes Hoffmann
July 22, 2010

The Program Chair, Christian Kreibich, gave some background on the program. There were 35 submissions from 19 countries; 12 papers were accepted, which gives an acceptance rate of 34%. Eighty-eight people participated in the conference.


Keynote: José Nazario gave the first talk, titled "Trends in Malevolence".

This talk was about the strategies and consequences involved in the fight against malware. José explored the question 'Are we breeding "superbugs"?' and compared the actions taken against malware to antibiotics. He went on to conclude that every action has (unintended) consequences for security and illustrated this with an example: before Windows XP SP2, the services running on servers were the main target of attacks; after the introduction of SP2, which enabled the firewall by default, a heavy shift towards attacks against client applications took place. At the end, he reminded the audience to evaluate the consequences of any actions before taking them, and to think about a sound overall concept and the right order in which those actions should be taken.

Is it reasonable and fair to ask normal users to do more for (their own) security, for example, not to click on every link? Reasonable: no; fair: yes. People do not freely give their credit card number away when asked, and they do lock their homes, but computer systems are simply too complex.


The first session, Host Security, was chaired by Christian Kreibich (International Computer Science Institute, Berkeley, CA).

The first paper was "HookScout: Proactive Binary-Centric Hook Detection" by Heng Yin, Pongsin Poosankam, Steve Hanna and Dawn Song. It was presented by Lok Yan.

The talk began with a general introduction to kernel hooking mechanisms. The authors motivate the need for HookScout by the enormous attack surface: their study shows that there are almost 20K function pointers in the Windows kernel, 7K of which are fixed and do not change once initialized. In principle, all of these pointers could be altered when hooks are installed. Their hook detection approach is based on dynamic analysis implemented on top of the QEMU processor emulator. First, HookScout monitors kernel execution and searches for function pointers that have constant values. Their locations within kernel objects are used to generate a hook detection policy for a given operating system binary. Next, the hook detection subsystem enforces the policy by checking that pointers previously identified as invariant never change at runtime; a change suggests that a function pointer has been altered by a hook installer. The detection engine is currently implemented as a kernel module and incurs approximately 6.5% performance overhead. However, the authors recognize the risk of HookScout being turned off by a skilled attacker, and leave moving it to a hypervisor as future work. So far only a fairly modest evaluation of the system has been performed, and only a few false positives were reported.
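To illustrate the core idea, learning which function-pointer slots stay constant and flagging later changes, here is a minimal Python sketch. It is not the authors' QEMU/kernel-module implementation; the snapshot format, object types and field offsets are invented for illustration.

```python
# Minimal sketch of the invariance idea behind HookScout-style hook detection.
from collections import defaultdict

def learn_policy(training_snapshots):
    """Phase 1: find function-pointer slots whose observed value never changes.
    Each snapshot maps (object_type, field_offset) -> pointer value."""
    seen = defaultdict(set)
    for snap in training_snapshots:
        for slot, value in snap.items():
            seen[slot].add(value)
    return {slot: vals.pop() for slot, vals in seen.items() if len(vals) == 1}

def check_snapshot(policy, snapshot):
    """Phase 2: report slots that violate the learned invariants (possible hooks)."""
    return [slot for slot, expected in policy.items()
            if slot in snapshot and snapshot[slot] != expected]

# Toy usage: the pointer at offset 0x38 is invariant during training, so a
# later change to it is flagged as a potential hook.
train = [{("DRIVER_OBJECT", 0x38): 0xF100, ("IRP", 0x60): 0xAAA0},
         {("DRIVER_OBJECT", 0x38): 0xF100, ("IRP", 0x60): 0xBBB0}]
policy = learn_policy(train)
print(check_snapshot(policy, {("DRIVER_OBJECT", 0x38): 0xDEAD}))
```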

One question was whether the user is to be entrusted with running HookScout. Lok suggested that the detection engine should be included in, e.g., antivirus software. The next question challenged the issue of false positives, since it is very difficult to perform exhaustive testing and exercise enough paths of the kernel under analysis to properly map all function pointers. The first problem is caused by device drivers or dynamically loaded modules: these could benignly install new function pointers in the kernel which have not been seen before. Another difficulty stems from unions, which are prevalent in (at least) the Windows kernel. When they are stored in arrays, their order can be altered, which would again cause discrepancies. Lok did not have a good answer to this question and admitted that thorough testing of the system is required, and that false positives might occur more often.

The next paper was "Conqueror: Tamper-Proof Code Execution on Legacy Systems" by Lorenzo Martignoni, Roberto Paleari and Danilo Bruschi. Roberto presented. This work tackles the problem of software-based code attestation, i.e., verifying the integrity of a piece of code executing in an untrusted system. Code attestation is also used to execute an arbitrary piece of code with the guarantee that the code runs unmodified and in an untampered environment. The scheme is based on a challenge-response protocol: the verifier issues a challenge to the untrusted system, which uses the challenge to compute a function. The verifier relies on time to determine whether the function result is genuine or whether it could have been forged. Conqueror enhances Pioneer, the state-of-the-art attestation scheme for this class of systems, and is not vulnerable to the attacks designed against Pioneer. The significant shortcoming of Pioneer is that the function computed by the untrusted system is known in advance, giving attackers the chance and the time to analyze it in depth and find its weaknesses. Conqueror instead proposes to generate the function on demand (so that attackers cannot analyze it offline), and to send it encrypted and obfuscated to the untrusted machine. Moreover, Conqueror replaces the interrupt descriptor table with a custom one whose handlers, if executed, corrupt the result of the checksum function. This ensures that an attacker cannot emulate or intercept the function's execution and remain hidden. A prototype of the system was implemented and tested on 32-bit systems running Windows XP.
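The timing side of such a challenge-response protocol can be sketched as follows. This is only an illustration of the general Pioneer/Conqueror idea: the checksum function (which Conqueror generates on demand, obfuscates and sends encrypted) is reduced to a toy keyed hash, and the time bound is an invented constant.

```python
# Minimal sketch of time-bounded, software-based attestation.
import hashlib
import time

TIME_BOUND = 0.5   # hypothetical upper bound (seconds) for an untampered device

def expected_checksum(challenge: bytes, memory_image: bytes) -> str:
    # The verifier knows what should be in the untrusted machine's memory and
    # can compute the expected result for the nonce it just issued.
    return hashlib.sha256(challenge + memory_image).hexdigest()

def attest(send_challenge, memory_image: bytes) -> bool:
    """send_challenge(nonce) asks the untrusted machine for its checksum."""
    nonce = str(time.time_ns()).encode()
    start = time.monotonic()
    reply = send_challenge(nonce)
    elapsed = time.monotonic() - start
    # Accept only if the result is correct AND arrived before an attacker
    # could plausibly have simulated or forged the computation.
    return reply == expected_checksum(nonce, memory_image) and elapsed < TIME_BOUND

# Toy usage: an "honest" device computes the checksum over the same image.
image = b"\x90" * 64
print(attest(lambda n: expected_checksum(n, image), image))   # True
```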

Someone raised the issue of relying on timing in the presence of unrelated network traffic loading the system. Roberto commented that it does not cause difficulties, since the verifier and the untrusted machine are both on the local network; only if the network switch were overloaded would one expect problems. Another question was about ensuring that the function result is not forged by another machine. Here Roberto explained that this is unlikely, since due to the custom IDT the attacker has no means of silently interrupting or monitoring execution of the function computing the checksum.

The last paper of this session was "dAnubis - Dynamic Device Driver Analysis Based on Virtual Machine Introspection" by Matthias Neugschwandtner, Christian Platzer, Paolo Milani Comparetti and Ulrich Bayer. It was presented by Matthias.

dAnubis is an extension of the Anubis (http://anubis.iseclab.org/) dynamic malware analysis system for the analysis of malicious Windows drivers. Matthias motivated the work by the observation that it is quite convenient for an attacker to inject malicious code into the kernel by loading and installing a device driver. Indeed, Windows XP machines are often operated with Administrator privileges and provide APIs that allow loading an unsigned driver without any user interaction. In the light of that, dAnubis aims at providing a human-readable report of a device driver's behaviour. This includes information on the driver's interaction with other drivers and the interface it offers to userspace, in addition to information on the use of call hooking, kernel patching or Direct Kernel Object Manipulation (DKOM). dAnubis is built on top of the QEMU processor emulator. It monitors the execution of the guest OS and observes events such as the execution of the driver's code, the invocation of kernel functions, or access to the guest's (virtual) hardware. Using dAnubis the authors analyzed over 400 rootkit samples; the paper provides some details on the results of this analysis.

One question was whether dAnubis is able to tell the type of malware, e.g., a keylogger. It is not: it aims to provide a picture of a driver's behaviour, but detection (distinguishing malicious drivers from benign ones) or classification is outside the scope of this work. Another question was about the performance penalty incurred by the kernel memory monitoring. Matthias explained that the monitoring starts only once the driver is loaded, and then the overhead is between 13% and 14% (however, that is the overhead on top of the QEMU processor emulator). Further, somebody asked whether the authors noticed malware that tries to avoid their analysis. They had not seen evasion aimed at dAnubis specifically, but malware may try to detect that it is running in Anubis, which is based on the popular QEMU.
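As a rough illustration of the kind of output such a system produces, the following sketch aggregates a stream of monitored driver events into a small textual report. The event names and report layout are hypothetical; the real system gathers these events through virtual machine introspection on top of QEMU.

```python
# Minimal sketch of turning a stream of monitored driver events into a
# human-readable behaviour report, in the spirit of dAnubis.
from collections import Counter, defaultdict

def build_report(events):
    """events: iterable of (category, detail) tuples, e.g.
    ("kernel_call", "IoCreateDevice") or ("hook", "SSDT entry 0x47")."""
    by_category = defaultdict(Counter)
    for category, detail in events:
        by_category[category][detail] += 1
    lines = []
    for category in sorted(by_category):
        lines.append(f"== {category} ==")
        for detail, count in by_category[category].most_common():
            lines.append(f"  {detail} (x{count})")
    return "\n".join(lines)

print(build_report([("kernel_call", "IoCreateDevice"),
                    ("hook", "SSDT entry 0x47"),
                    ("kernel_call", "IoCreateDevice")]))
```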


Carel van Straaten gave the first invited talk, about "Modern Spammer Infrastructure".

In this talk the audience got a deep insight into the professional business of spammers. Carel talked about the highly skilled people behind it and how they keep their distributed network up and running. Of course he also talked about the countermeasures that Spamhaus provides to hinder the flow of spam. Several spam filtering features were presented that help to keep the 1-2 million unique computers spitting out spam each day in check. More information about Spamhaus can be found on their webpage, http://www.spamhaus.org/.


The second session, Trends, was chaired by Sven Dietrich (Stevens Institute of Technology).

Kapil Singh presented the paper "Evaluating Bluetooth as a Medium for Botnet Command and Control" by Kapil Singh, Samrit Sangal, Nehil Jain, Patrick Traynor and Wenke Lee.

This talk evaluated the capabilities of Bluetooth as a medium for command and control structures for botnets. Since BT is becoming more and more pervasive, it provides a stealthy alternative to currently used command and control infrastructures. The authors assume that the devices are already infected with malware, and they evaluated device movements using a trace-based simulation. They conclude that it is possible to control a botnet over BT, since most people visit the same places each day on a very regular basis. In their scenarios, command propagation allowed the botnet master to get most messages to the devices within a day. The talk ended with a few defense mechanisms: software updates pushed to the devices by the service provider; the fact that commands mostly spread at known times (in the morning hours when people commute to work), which may be used to mitigate the risk; and desktop software that tests the devices for malicious code.
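A trace-driven propagation simulation of this kind can be sketched in a few lines. The contact-trace format and the propagation rule below are simplified assumptions, not the authors' simulator.

```python
# Minimal sketch of simulating command propagation over Bluetooth contact traces.
def propagate(contacts, seed, start_time):
    """contacts: list of (time, device_a, device_b) co-location events.
    A command issued to `seed` at `start_time` spreads to any device that
    meets an already-commanded device afterwards.
    Returns {device: time_it_received_the_command}."""
    received = {seed: start_time}
    for t, a, b in sorted(contacts):
        if t < start_time:
            continue
        if a in received and b not in received:
            received[b] = t
        elif b in received and a not in received:
            received[a] = t
    return received

# Toy trace: three phones meeting during a morning commute (times in minutes).
trace = [(480, "A", "B"), (485, "B", "C"), (490, "A", "C")]
print(propagate(trace, seed="A", start_time=475))
# {'A': 475, 'B': 480, 'C': 485}
```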

Q: What happens if the vulnerable devices have a BT PIN set?
A: This is not part of the research. All devices are assumed to be already infected with some kind of malware.

Somebody commented that it would be interesting to evaluate the approach using real traces and not only simulations. He mentioned that some researchers in Italy built a similar tool, but when they wanted to test it in a real-world environment, they ran into serious problems, possibly caused by contact times with the target device being too short. Kapil explained that they only use the aforementioned simulated traces, and that they assume two devices were able to communicate if they were close enough at the beginning and at the end of a 5-minute interval. The questioner commented further that one really does not know what happens in between, and suggested a more thorough evaluation.

Then Antonio Nappa presented the paper "Take a Deep Breath: a Stealthy, Resilient and Cost-Effective Botnet Using Skype" by Antonio Nappa, Aristide Fattori, Marco Balduzzi, Matteo Dell'Amico and Lorenzo Cavallaro.

In this talk a botnet model based on Skype was proposed. Since Skype uses P2P technology, is encrypted, offers NAT and firewall traversal techniques, and also provides an API, it is an ideal botnet command and control network. The author described how a parasitic overlay on top of the normal Skype network is built and how hard it is to get hold of the botnet master.

Q: What are the lessons you learned during the research?
A: That the Skype API is dangerous if used in this way.

Q: Have you contacted the company behind Skype regarding the dangerous potential behind their API?
A: No.

Q: How do you react if someone blames you for malicious actions that are based on your research?
A: Malicious software is already using these kinds of technologies, at least for messaging.

Q: Even though it is P2P-based, could it be shut down?
A: Legitimate user accounts may be used to perform the malicious actions.

The session ended with the paper "Covertly Probing Underground Economy Marketplaces" by Hanno Fallmann, Gilbert Wondracek and Christian Platzer. Hanno Fallmann presented the results.

This talk explained how cyber criminals trade their goods over publicly available channels on the Internet. A tool was introduced that automatically monitors this activity in IRC networks and on web platforms. The evaluation took place over 11 months.

Q: How good is your bot? It looks similar to Eliza.
A: It started as a simple idea but turned out to be able to obtain specific information, about payments and the like.

Q: Why not switch to a human after a chat is initialized?
A: It runs all the time and we do not have the resources.

Q: Do you come across discussions about botnets/malware?
A: That is not part of the research, but it exists elsewhere.

Q: What about automatically joining new channels/forums?
A: That is difficult because some kind of reputation is needed and that is not part of the research.

Q: What about bogus sales (rippers)?
A: That is a hot topic in the community and you can find them everywhere. The ratio of real to bogus offers is unknown.

Q: What about the risks of doing this from a university IP?
A: We started with a university IP and got DDoSed by a botnet; now we use a different network.

Q: Do you have contact with law enforcement agencies?
A: Not really. We want to analyze the data first.


The third session, Vulnerabilities, was chaired by Michael Meier (University of Dortmund).

Adam Doupé began with the presentation of the paper "Why Johnny Can't Pentest: An Analysis of Black-box Web Vulnerability Scanners" by Adam Doupé, Marco Cova and Giovanni Vigna.

Adam told the audience about the challenges that modern web scanners face and how they perform. 11 scanners were tested, ranging from free open-source scanners to products costing more than $30,000. After a short summary of how they operate, he presented the results of tests executed against a purpose-built vulnerable web application. Out of a total of 16 vulnerabilities (including XSS, reflected XSS, SQL injection, weak passwords, ...), no scanner found more than 8. Students, on the other hand, found 15 of the vulnerabilities. Adam concluded that good crawling behaviour is as important as a good detection engine.

Q: Were there any inconsistencies between two runs of the same scanner?
A: I have heard of inconsistencies, but we did not test that. All scanners were run only once.

Q: What about the runtime of the scanners?
A: Some are pretty fast and take only 4-7 minutes, others are slow. The slowest scanner ran for 4 hours.

Q: How did you choose the scanners?
A: We chose most of the big players.

Q: Was there any infinite crawling because of links "directing back"?
A: Yes, one scanner was fooled and we had to remove the corresponding page for this scanner.

Q: How many different SQL injections have you implemented?
A: One normal SQL injection and one stored SQL injection.

The session ended with Bryce Boe presenting the paper "Organizing Large Scale Hacking Competitions" by Nick Childers, Bryce Boe, Lorenzo Cavallaro, Ludovico Cavedon, Marco Cova, Manuel Egele and Giovanni Vigna.

This talk gave a quick overview of the challenges that arise when hosting hacking competitions and why these competitions are useful for the participants. The speaker described the scoring system and explained the goals of the different types of challenges. The talk ended with three tips: keep it simple, keep it cost effective, and stress-test the installation before going live.

Q: Why change from defense-and-attack based challenges to attack-based only?
A: Because of the knowledge and tools that teams reuse in later competitions to gain an unfair advantage over new players.

Q: What about attack statistics, e.g. the runtime of some attacks?
A: We did not analyse the data, but it is available online on our webpage. Some teams focus on side quests to earn many points in less time, while other teams focus on the main quests.


Invited Talk

Marc Dacier held the second invited talk about "TRIAGE: the WOMBAT attack attribution approach".

In this talk a new method for "attack attribution" was introduced. Attack attribution aims at explaining which new attacks contribute to which (new) phenomena. TRIAGE is an acronym for "atTRIbution of Attacks using Graph based Event clustering", a multi-criteria clustering method developed within the WOMBAT project. The project uses many different criteria (IP addresses, e-mail addresses used for registering domains, registration dates, domain names, ...) to build a "contextual fingerprint", which in turn is used for the clustering and to draw a graphical representation of the collected data. The bottom line of the talk was to use the collected data and explore it, but not to go looking for a problem whose solution is already known. The collected data is available for research purposes (NDA required).
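The multi-criteria flavour of the approach can be illustrated with a small sketch that builds a per-event "contextual fingerprint" and links events whose aggregated feature similarity exceeds a threshold. The features, weights and threshold are illustrative assumptions, not TRIAGE's actual criteria.

```python
# Minimal sketch of multi-criteria linking of attack events.
def similarity(a, b, weights):
    """Per-feature similarity (1 if equal, 0 otherwise), aggregated by weights."""
    score = sum(w for feature, w in weights.items()
                if a.get(feature) is not None and a.get(feature) == b.get(feature))
    return score / sum(weights.values())

def cluster(events, weights, threshold=0.5):
    """Greedy single-linkage clustering over the similarity graph."""
    clusters = []
    for event in events:
        for c in clusters:
            if any(similarity(event, other, weights) >= threshold for other in c):
                c.append(event)
                break
        else:
            clusters.append([event])
    return clusters

events = [
    {"ip": "198.51.100.7", "whois_email": "x@example.org", "domain": "a.example"},
    {"ip": "198.51.100.7", "whois_email": "x@example.org", "domain": "b.example"},
    {"ip": "203.0.113.9",  "whois_email": "y@example.net", "domain": "c.example"},
]
weights = {"ip": 0.4, "whois_email": 0.4, "domain": 0.2}
print(len(cluster(events, weights)))   # 2: the first two events are linked
```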

Q: Are you correlating data from honeypots and other shady services?
A: No.

Q: Why an NDA? Is there private data that must be protected?
A: The NDA is 8 years old and prevents companies from accusing other companies of misusing their data. A nice side effect of the NDA is the creation of a group of people who share data for analysis purposes.


The fourth session, Intrusion Detection, was chaired by Robin Sommer (International Computer Science Institute, Berkeley, CA).

Ali Ghorbani started the session with his talk on "An Online Adaptive Approach to Alert Correlation" by Hanli Ren, Natalia Stakhanova and Ali Ghorbani.

This talk addressed the problem of the very large number of alerts, many of them false positives, that arises in alert correlation. Ali explained that not all features of an alert are relevant and that they build so-called "hyper alerts", which consist of several "atomic alerts". In further steps, their proposed system extracts relevant features and calculates a correlation probability for alert types by leveraging a Bayesian probability analysis. Their adaptive online correlation module is then able to analyse new alerts and correlate them to hyper alerts. This approach provides an unsupervised training method and is able to show why two alerts are correlated.
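One simple way to estimate such correlation probabilities from past alert streams is sketched below; the window size, stream format and frequency-based estimate are illustrative assumptions rather than the authors' exact Bayesian model.

```python
# Minimal sketch of estimating how strongly one alert type is followed by another.
from collections import Counter

def correlation_probabilities(alert_stream, window=3):
    """alert_stream: list of alert type names in arrival order.
    Returns an estimate of P(type_b follows type_a within `window` alerts)."""
    follows = Counter()
    occurs = Counter()
    for i, a in enumerate(alert_stream):
        occurs[a] += 1
        for b in set(alert_stream[i + 1:i + 1 + window]):
            follows[(a, b)] += 1
    return {pair: count / occurs[pair[0]] for pair, count in follows.items()}

stream = ["PortScan", "ExploitAttempt", "PortScan", "ExploitAttempt", "DNSQuery"]
probs = correlation_probabilities(stream)
print(probs[("PortScan", "ExploitAttempt")])   # 1.0 in this toy stream
```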

Sasa Mrdovic presented his results on "KIDS - Keyed Intrusion Detection System".

Sasa began his talk with a simple statement: the better IDSs get, the better the attacks will become. When attackers know how the underlying model for attack recognition in an IDS works, they will build packets that mimic "normal" packets in order to get past the IDS. KIDS uses random delimiters for network packet data instead of known delimiters to prevent such mimicry attacks. This yields random words from the payload which are used to identify attacks. The random delimiters represent the key, which is kept secret; as long as an attacker does not know the key, he will not be able to mimic normal packets. The authors tested their system with HTTP traffic from a university and used the Metasploit framework for attacks, but it should work with other protocols as well. The detection rate with random delimiters was slightly lower, but still acceptable; the benefit, however, is that mimicry attacks become infeasible. Sasa concluded that this is a novel approach, that it needs a better implementation, and that the key selection process needs a proper evaluation.
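The keyed-tokenization idea can be illustrated with a short sketch: the payload is split into words using a secret, randomly chosen set of delimiter characters, and a packet is scored by how many of its words were seen during training. The scoring below is a toy placeholder, not the paper's anomaly model.

```python
# Minimal sketch of keyed payload tokenization in the spirit of KIDS.
import random
import re

def make_key(num_delimiters=4, seed=None):
    """The secret key: a random set of delimiter characters."""
    rng = random.Random(seed)
    return rng.sample([chr(c) for c in range(32, 127)], num_delimiters)

def tokenize(payload: str, key):
    pattern = "[" + re.escape("".join(key)) + "]"
    return [w for w in re.split(pattern, payload) if w]

def train(payloads, key):
    return {w for p in payloads for w in tokenize(p, key)}

def score(payload, model, key):
    """Fraction of payload words already seen in training (toy normality score)."""
    words = tokenize(payload, key)
    return sum(w in model for w in words) / len(words) if words else 1.0

key = make_key(seed=42)                       # the secret part of the system
model = train(["GET /index.html HTTP/1.1", "GET /login HTTP/1.1"], key)
print(score("GET /index.html HTTP/1.1", model, key))     # 1.0 (seen in training)
print(score("GET /etc/passwd%00 HTTP/1.1", model, key))  # typically lower
```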

Q: Have you tested how hard it is to attack the system with random key selection?
A: This was tested in a different paper with HTTP traffic.

Q: What is the individual influence of the two scores on the detection rate?
A: It turned out that multiplying the scores improves the false positive rate. Earlier tests used only one score.

Rump session

Sven Dietrich pointed the audience to the IEEE Cipher webpage.

Michael Meier announced the next conferences and events this year.

Armin Büscher demonstrated Monkeywrench, which crawls webpages for malicious content. The URL is www.monkeywrench.de.

Felix Leder introduced a new Python based sandbox.

Herbert Bos advertised DIMVA 2011, to be held next year in Amsterdam, and gave a quick overview of Dutch culture.


The final session, on Web Security, was chaired by Herbert Bos (Vrije Universiteit Amsterdam).

"Modeling and Containment of Search Worms Targeting Web Applications" by Jingyu Hua and Kouichi Sakurai was presented by Jingyu.

Unlike traditional worms, which scan the IP address space or use hitlists to locate vulnerable servers, search worms feed specially crafted queries to search engines (e.g., Google) and quickly create lists of attack targets based on the search results. Such worms might be looking for server pages with default titles, error pages generated by software, etc. These pages are called "eigenpages". For example, to exploit a vulnerability in the phpBB bulletin board service, a worm could search for keywords including allinurl:"viewtopic.php". The work presented aims to model search worms in order to understand how quickly they can spread; the authors also propose a solution to contain such worms. The analysis part considers two propagation models, depending on the distribution of eigenpages. The conclusion is that to maximize the spreading speed, eigenpages should be uniformly distributed over servers. In order to contain search worms, the authors propose to inject honey pages among the search results. These are fake pages that point to honeypots luring malicious attackers; further queries from infected machines are then rejected. The approach was tested using simulations, which show that it is enough to insert 2 honey pages in every 100 search results to contain the well-known Santy search worm.
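The containment idea can be sketched as follows: a few honey pages are mixed into every batch of search results, and any client that requests one of them is considered infected and has its further queries rejected. The 2-per-100 rate is taken from the talk; the data structures and honeypot URLs are illustrative assumptions.

```python
# Minimal sketch of honey-page injection for containing search worms.
import random

HONEY_RATE = 2          # honey pages per 100 results, as reported in the talk
blocked_clients = set()

def serve_results(client_ip, real_results):
    if client_ip in blocked_clients:
        return []                               # contained: reject the query
    honey = [f"http://honeypot.example/decoy/{random.randrange(10**6)}"
             for _ in range(len(real_results) * HONEY_RATE // 100)]
    results = real_results + honey
    random.shuffle(results)
    return results

def on_page_request(client_ip, url):
    if url.startswith("http://honeypot.example/"):
        blocked_clients.add(client_ip)          # only worms chase honey pages

results = serve_results("192.0.2.10", [f"http://site{i}.example/viewtopic.php"
                                       for i in range(100)])
on_page_request("192.0.2.10", next(u for u in results if "honeypot" in u))
print(serve_results("192.0.2.10", ["http://siteX.example/"]))   # [] -> blocked
```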

Somebody asked how honey web pages are built; after all, they must appear attractive and worth attacking to a worm. Jingyu replied that doing this automatically is left as future work. Another issue raised was that the approach proposes to block worms based on the IP addresses used to issue the malicious queries; however, this way all benign users with infected machines are simply blocked. The presenter admitted that this is a problem.

The final presentation, "HProxy: Client-side detection of SSL stripping attacks" by Nick Nikiforakis, Yves Younan and Wouter Joosen was given by Nick.

Nick started his talk with some background information on SSL stripping attacks. This technique does not exploit any specific bug, but rather the observation that users never explicitly request SSL-protected websites (they usually do not type the https prefix in the browser) and instead rely on servers to redirect them to the appropriate secure version of a particular website. Now, if attackers can launch a man-in-the-middle (MITM) attack, they can suppress such redirects and serve users a "stripped" version of the requested website, forcing them to communicate over an insecure channel. The approach described in the paper proposes to use the browser's existing history as a detection mechanism. A client-side proxy creates a unique profile for each secure website visited by the user. This profile contains information about the usage of SSL, based on the security characteristics of a website and not on its contents; this enables the system to operate correctly with both static and dynamic webpages. HProxy uses the profile of a website to identify when a page has been maliciously modified by a MITM. No server-side support or trusted third parties are required for the system to work.
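A history-based check in this spirit can be sketched as follows: a local proxy remembers security characteristics of previously visited sites (for example, that the login form posts to HTTPS) and warns when a later visit no longer matches. The profile fields and checks are simplified assumptions, not HProxy's actual profile format.

```python
# Minimal sketch of a history-based SSL-stripping check.
profiles = {}   # hostname -> recorded security characteristics

def record_visit(host, redirected_to_https, https_form_targets):
    profiles[host] = {
        "redirects_to_https": redirected_to_https,
        "https_form_targets": set(https_form_targets),
    }

def check_response(host, redirected_to_https, form_targets):
    """Return warnings if the page looks 'stripped' compared to the stored
    profile; an empty list means the response is consistent with history."""
    profile = profiles.get(host)
    if profile is None:
        return []                      # first visit: nothing to compare against
    warnings = []
    if profile["redirects_to_https"] and not redirected_to_https:
        warnings.append("site no longer redirects to HTTPS")
    for target in form_targets:
        if target.startswith("http://"):
            https_version = "https://" + target[len("http://"):]
            if https_version in profile["https_form_targets"]:
                warnings.append(f"form target downgraded: {target}")
    return warnings

record_visit("bank.example", True, {"https://bank.example/login"})
print(check_response("bank.example", False, ["http://bank.example/login"]))
```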


All slides will be available on the DIMVA webpage. The next DIMVA will be held in Amsterdam in 2011.