Review of DIMVA 2009,
the Sixth International Conference on Detection of Intrusions and Malware & Vulnerability Assessment,
Como, Italy, July 9-10, 2009
Review by Martin Apel and Michael Meier
July 16, 2009
The conference dinner was held at a remote lakeside restaurant, where we were taken by a small boat and shown the sights (mostly villas owned by the rich and famous) on the way.
Without further ado, the conference notes of Martin Apel and Michael Meier ...
The conference was opened by General Chair Danilo Bruschi, and Program Chair Ulrich Flegel presented statistics on submissions, selection, and attendance, broken down by country and sector.
The first session on Malware and SPAM was chaired by Toralv Dirro.
A Case Study on Asprox Infection Dynamics
How good are malware detectors at remediating infected systems?
Emanuele Passerini presented a fully automated testing methodology for evaluating the remediation capabilities of malware detectors. They used their method to evaluate six well-known commercial malware detectors and found that none of them remediates all modifications made to the system by the malware.
After the talk one attendee remarked that the command-line versions of the malware detection tools, which were used for the evaluation, often provide limited remediation functionality compared to the GUI versions.
Q: Why did you choose different samples for each malware detector?
A: Our goal was not to compare the six malware detectors, but to survey how well malware detectors manage to remediate.
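The evaluation idea behind such a methodology can be sketched as a three-way state comparison: snapshot the system before infection, after infection, and after remediation, then check which malware-induced changes survive. The following is a minimal illustrative sketch (the function names and the file-hash state model are assumptions for illustration, not the authors' actual tooling):

```python
import hashlib
import os

def snapshot(root):
    """Map each file path under root to a hash of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    state[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable file: skip
    return state

def remediation_gaps(clean, infected, remediated):
    """Modifications introduced by the malware that survive remediation."""
    malware_changes = {p for p in infected
                       if infected.get(p) != clean.get(p)}
    return {p for p in malware_changes
            if remediated.get(p) == infected.get(p)}
```

A detector that "remediates all modifications" would leave `remediation_gaps` empty for every sample; the paper's finding is that none of the six evaluated products achieves this.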
Towards Proactive Spam Filtering
In his talk Jan Goebel presented a methodology to infer the email templates used by spam bots from the mails sent by the bots; these templates can later be used to filter spam. They monitored the bots in a SandNet and used an approach based on determining longest common substrings and regular expressions.
Q: What is the "proactive" part?
A: The mails are collected when they are sent and not when they are received.
Q: Have you compared your results to SpamAssassin?
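The template-inference idea can be illustrated with a toy sketch: extract the longest common substrings across captured spam instances and join them with wildcards to form a filtering regex. This sketch uses Python's difflib and is an assumption about the general approach, not the authors' implementation:

```python
import re
from difflib import SequenceMatcher

def template_regex(mails):
    """Derive a regex template from spam instances: keep text blocks
    common to all mails, replace the variable parts with wildcards."""
    blocks = SequenceMatcher(None, mails[0], mails[1]).get_matching_blocks()
    common = [mails[0][b.a:b.a + b.size] for b in blocks if b.size >= 4]
    # Keep only blocks that occur in every captured mail
    common = [c for c in common if all(c in m for m in mails)]
    return ".*".join(re.escape(c) for c in common)

# Two spam instances generated from the same (hypothetical) template
mails = [
    "Buy cheap pills now at http://a.example/x123 today!",
    "Buy cheap pills now at http://b.example/y9 today!",
]
pattern = template_regex(mails)
```

Because the template is learned from mails as the bots send them, the resulting filter can be deployed before the campaign reaches recipients, which is the "proactive" aspect discussed in the Q&A above.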
The session Emulation-based Detection was moderated by Peter Szor.
Shepherding Loadable Kernel Module through On-demand Emulation
As the authors (Chaoting Xuan, John Copeland, and Raheem Beyah) could not attend the conference, this talk was given by John McHugh using their slides.
The talk introduced DARK, a rootkit prevention system that combines program monitoring (using on-demand emulation) with rootkit detection techniques. Suspicious kernel code is monitored, and its interactions with the rest of the kernel are checked against a set of well-selected, manually crafted security policies. DARK was evaluated on 13 kernel rootkits and 20 benign kernel modules, showing no false negatives and one false positive (5%). The runtime performance penalty was measured on only one module (iptable_filter) and is around 10%.
Q: When is the emulation started, meaning when is the code regarded suspicious?
A: Static analysis is used to decide whether code is suspicious.
Yataglass: Network-level Code Emulation for Analyzing Memory Scanning
Makoto Shimamura discussed exploit/shellcode that uses instructions from the victim's memory image, so that analyzing the shellcode requires that memory image. He presented Yataglass, a network-level code emulator that emulates the victim's memory image by responding to the shellcode's memory scan requests. It uses symbolic execution to infer the instructions that are scanned for. It cannot infer instructions if the shellcode scans for a value in a range or for a function signature.
Q: How do you determine the bytes in the stream, where you start the emulation?
A: Every possible position is tested.
Q: Can the stream be crafted in a way that makes Yataglass fork very often and thus enables a denial-of-service attack?
A: Yataglass takes some countermeasures; a detailed answer will be given offline.
Defending Browsers against Drive-by Downloads: Mitigating Heap-spraying Code Injection Attacks
Q: CaptureHPC is based on Internet Explorer and your approach uses Firefox. Wouldn't this lead to a problem with malicious webpages that use browser fingerprinting?
A: To circumvent this problem, ActiveX was implemented for Firefox to make it look like Internet Explorer.
In his interesting and entertaining talk, Richard Kemmerer told the story of the Torpig botnet, which was controlled by researchers of his group for ten days. This was done by reverse engineering the domain generation algorithm and registering the domains for the Torpig C&C server. He gave an interesting overview of the insights gained during these ten days.
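The takeover trick rests on determinism: bot and researcher can both compute the same date-dependent domain list offline, so whoever registers an upcoming domain first controls the rendezvous point. A toy, date-seeded sketch of this idea (illustrative only, not Torpig's actual algorithm):

```python
import hashlib
from datetime import date

def dga_domains(day: date, count: int = 3):
    """Toy domain generation algorithm: derive a daily list of
    rendezvous domains deterministically from the calendar date."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # Map the hex digest to letters to form a plausible domain label
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
        domains.append(label + ".com")
    return domains
```

Since the algorithm takes only the date as input, defenders who recover it from the binary can pre-register tomorrow's domains before the botmasters do.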
The session Software Diversity was chaired by John McHugh.
Polymorphing Software by Randomizing Data Structure Layout
Zhiqiang Lin presented a technique for polymorphing software that randomizes the data-structure layout of programs. This can be used to thwart attacks that rely on knowledge of the data-structure layout, but also to evade signature-based detection. The technique has been implemented for GCC. The software is licensed under the GPL and available at http://www.cs.purdue.edu/homes/zlin/dimva09.html
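The core idea can be sketched in a few lines: permute field order with a build-specific seed, so that exploit code relying on fixed field offsets (or on byte signatures over the layout) no longer works. This is a simplified Python illustration, ignoring padding and the compiler integration described in the paper:

```python
import random

def randomize_layout(fields, seed):
    """Toy sketch of struct layout randomization: a build-specific
    seed permutes the field order, so hard-coded field offsets in
    an exploit no longer hold for that build."""
    layout = list(fields)
    random.Random(seed).shuffle(layout)
    return layout

def offsets(layout, sizes):
    """Byte offset of each field under a given layout (no padding)."""
    off, table = 0, {}
    for field in layout:
        table[field] = off
        off += sizes[field]
    return table

# Hypothetical kernel-style structure used only for illustration
fields = ["uid", "gid", "cred_ptr", "name"]
sizes = {"uid": 4, "gid": 4, "cred_ptr": 8, "name": 16}
```

Two builds with different seeds yield different offset tables for the same source definition, which is exactly what defeats layout-dependent attacks and signatures.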
On the Effectiveness of Software Diversity: A Systematic Study on Real-World Vulnerabilities
Systems that utilize diverse off-the-shelf software usually assume that these software products are diverse enough not to be compromised simultaneously by the same exploit. The work presented by Jin Han investigates whether this assumption is valid, focusing on the following questions:
How many software products have substitutes with the same functionality that cannot be exploited by the same attack?
Can vulnerabilities of one software product be exploited on different operating systems simultaneously?
Results of an analysis of the vulnerabilities published in 2007 show that more than 98.5% of vulnerable applications have substitutes with a very low chance of being compromised by the same attack. 50% of the applications can run on multiple operating systems, and different OS distributions of the same application have a more than 80% chance of suffering from the same vulnerability, but their attack code is quite different.
Q: Are the system calls issued by IIS on a Windows machine really comparable to those of Apache running on Linux?
A: How the comparison is made is not part of the paper, but has been described elsewhere.
The program of day 1 ended with an open meeting of SIG SIDAR chaired by Michael Meier.
In the first session of day 2, Henry Stern (Cisco IronPort Systems LLC) delivered a keynote address on "A New Era in Security Collaboration: Turning the Tables on Botnets". He introduced a reputation-based collaboration framework for routers. It can be used to defeat attacks (even distributed ones) and is being used successfully by Cisco.
The 2nd session of day 2 on Harnessing Context was moderated by Engin Kirda.
Using Contextual Information for IDS Alarm Classification
Since intrusion detection systems are known to generate many non-critical alerts, context information may be exploited to classify the alerts.
In his talk François Gagnon presented results of a study on the effectiveness of incorporating context information on the target operating system and application configuration. They also analyzed whether existing tools are good enough to gather such context information automatically. Based on their experimental results, they conclude that target information is valuable for alert classification, but also that existing operating-system discovery tools are not adequate for IDS context gathering.
Q: How many packets/alarms can the system handle?
A: There is no testing data yet.
Browser Fingerprinting from Coarse Traffic Summaries: Techniques and Implementations
In her talk Ting-Fang Yen presented an approach for browser fingerprinting that does not rely on payload data but on behavioral features evidenced in flows. She also discussed two applications of the fingerprinting approach. First, extending a network IDS with browser platform characteristics allows detecting a broader range of malware by finding similar traffic. Second, browser fingerprints can be used to deanonymize websites in flow records that have been anonymized.
Q: Have different browser versions and configurations been evaluated?
A: Only one version has been tested.
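The flow-based approach can be caricatured as: summarize a session into coarse features computable without payload, then attribute the session to the nearest browser profile. A minimal sketch with made-up features and profiles (the real system uses richer behavioral features than this):

```python
def flow_features(flows):
    """Summarize a browsing session's flows into coarse behavioral
    features (no payload needed): flow count, mean bytes, mean duration."""
    n = len(flows)
    return (
        n,
        sum(f["bytes"] for f in flows) / n,
        sum(f["duration"] for f in flows) / n,
    )

def classify(features, profiles):
    """Attribute the session to the browser profile whose feature
    centroid is closest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda name: dist(features, profiles[name]))
```

The Q&A above hints at a limitation such a sketch shares with the real system: each browser version and configuration would need its own profile.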
A Service Dependency Modeling Framework for Policy-based Response Enforcement
Nizar Kheir presented a modeling framework for services and their dependencies, which allows formally defining dependency attributes. He also demonstrated the use of the service dependency framework for providing appropriate candidates for intrusion responses.
Q: Is there any tool support?
Sven Dietrich announced a workshop on ethics in computer security research, to take place in Tenerife, Canary Islands, Spain, in January 2010, co-located with FC'10.
Marko Jahnke (general chair of DIMVA 2010) announced that DIMVA 2010 will take place in Bonn, Germany.
The session on Anomaly Detection was chaired by Pavel Laskov.
Learning SQL for Database Intrusion Detection using Context-Sensitive Modeling
In his talk Christian Bockermann started with a motivation showing the prevalence of SQL injection attacks. He then presented an approach for modeling SQL statements for machine learning, in order to detect malicious behavior at the database transaction level. The approach incorporates the parse-tree structure of SQL queries as a characteristic feature. The presented experimental results demonstrate and compare the separation capabilities of different feature models.
Q: Hasn't this been "solved" already?
A: Yes, there are methods, but this vulnerability is still out there.
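The parse-tree idea can be approximated at the token level: strip literal values so that only the statement's structure remains; an injected clause then changes the skeleton and stands out against learned profiles. A simplified regex-based sketch (not the full SQL parser used in the paper):

```python
import re

def sql_skeleton(query):
    """Reduce a SQL statement to its structural skeleton by replacing
    string and numeric literals with placeholders; a learned model can
    then flag statements whose structure deviates from the profile."""
    s = re.sub(r"'[^']*'", "?", query)   # string literals -> ?
    s = re.sub(r"\b\d+\b", "?", s)       # numeric literals -> ?
    return re.sub(r"\s+", " ", s).strip().lower()

normal = sql_skeleton("SELECT * FROM users WHERE id = 42")
injected = sql_skeleton("SELECT * FROM users WHERE id = 42 OR '1'='1'")
```

Different literal values map to the same skeleton, while an injection that adds clauses produces a skeleton the profile has never seen.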
Selecting and Improving System Calls Models for Anomaly Detection
Federico Maggi started his presentation with an introduction to system-call-based anomaly detection and an analysis of two detectors based on different approaches: a deterministic one (FSA-DF) and a stochastic one (S2A2DE). He then discussed a combination of the two complementary approaches that incorporates deterministic as well as stochastic models. An experimental comparison of two combined detectors with the two original ones showed that all detectors have the same detection rate, but that the combined versions have significantly lower false positive rates.
Q: Why have you used SOMs (which are performance-intensive) to group the string arguments, and not simpler measures like edit distance?
A: Edit distance also has its problems, and SOMs are an interesting technique we wanted to try.
In the last session of DIMVA 2009, Lexi Pimenidis presented the results of the fifth CIPHER CTF ("Capture the Flag"), which took place in parallel to the DIMVA conference. Results are available at http://www.cipher-ctf.org/cipher5/
Proceedings of DIMVA 2009 were published as Springer LNCS 5587 and are available online at http://www.springerlink.com/content/978-3-642-02917-2
Slides of the DIMVA 2009 presentations will soon be available at http://www.dimva.org/dimva2009
See you next year in Bonn!