The first "Workshop on Issues in the Theory of Security" (WITS '00) was held in Geneva on 7-8 July 2000, co-located with the ICALP 2000 conference. It was the first workshop of the "IFIP Working Group 1.7 on Theoretical Foundations of Security Analysis and Design". This Working Group was recently founded to investigate the theoretical foundations of security. Its aim is to promote the discovery of new theoretical techniques in computer security and new ways to apply formal techniques systematically to the development of security-related applications. More information can be found at the IFIP WG 1.7 home page: www.dsi.unive.it/IFIPWG1_7/
The workshop was an open event, and all researchers working on the
theory of computer security were invited to participate. The extended
abstracts presented at the workshop represent ongoing work, and the
program was designed to encourage discussion among attendees, both
during and after the scheduled presentations. The Program Committee
selected 20 of the 30 submitted papers. The main topics of the
selected papers include:
- formal definition and verification of the various aspects of security:
confidentiality, integrity, authentication and availability;
- new theoretically-based techniques for formally analyzing cryptographic
protocols, and for designing them and their manifold applications;
- information flow modeling and its application to the theory of
confidentiality policies, composition of systems, and covert channel
analysis;
- formal techniques for analyzing and verifying mobile code.
There were 30 participants from 8 different countries: Belgium, Denmark, France, Germany, Italy, Switzerland, United Kingdom, USA.
The collection of extended abstracts presented at WITS '00 is available at the workshop home page: www.dsi.unive.it/IFIPWG1_7/wits2000.html
"Extending Formal Cryptographic Protocol Analysis Techniques for Group Protocols and Low-Level Cryptographic Primitives", Catherine Meadows. Group protocols, i.e., protocols involving communication among a set of principals that could be arbitrarily large, are usually difficult to analyze, as even legal executions can take an unbounded number of steps. Moreover, the introduction of low-level features of cryptographic algorithms (e.g., the commutativity and distributivity of RSA) considerably complicates the analysis. This paper presents an extension of the NRL Protocol Analyzer language, called a parameterized language, which makes it possible to prove properties of group protocols that use non-standard cryptographic primitives, thus allowing a formal analysis. In particular, a parameterized language allows conditions to be placed on the encryption keys, on the encrypted messages, and on the inputs to any operation defined in the language.
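The kind of low-level algebraic feature referred to above can be made concrete with a small sketch (my illustration, not from the paper): with textbook RSA under a shared toy modulus, encryption with two public exponents commutes, since (m^e1)^e2 = (m^e2)^e1 = m^(e1*e2) (mod n), a property invisible to analyses that treat encryption as a black box.

```python
# Toy illustration of RSA commutativity (textbook RSA, shared modulus).
n = 3233          # toy modulus: 61 * 53
e1, e2 = 17, 7    # two toy public exponents
m = 42            # a toy "message"

c12 = pow(pow(m, e1, n), e2, n)   # encrypt with e1, then with e2
c21 = pow(pow(m, e2, n), e1, n)   # encrypt with e2, then with e1
assert c12 == c21                 # the order of encryption does not matter
```

An attacker can exploit such identities to relate ciphertexts in ways a "perfect encryption" model would rule out, which is why the analysis must be extended to capture them.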
"Paths through Well-Behaved Bundles", Joshua D. Guttman and F. Javier Thayer Fábrega, reports a new method for reasoning about cryptographic protocols within the strand space model. The method is based on two main ideas: paths and well-behaved bundles. A path is a sequence of nodes in which two consecutive nodes represent either a message transmission between two principals or the sequential execution of one principal (starting with a message reception and ending with a message transmission). Some interesting paths are identified: pedigree paths, which are useful for tracking the origin of a value, and penetrator paths, which correspond to penetrator activity. Well-behaved bundles are induced by an equivalence relation that identifies bundles differing only in their penetrator nodes. This makes it possible to define normal bundles, in which every destructive penetrator action precedes every constructive one. Every bundle is proved to be equivalent to a normal one.
"Protocol Composition and Correctness", Nancy Durgin, John Mitchell and Dusko Pavlovic, introduces a new model, called Cords, for reasoning about cryptographic protocols. Cords essentially extend the strand space model by introducing variables that are instantiated upon receipt of a message. The aim is to better model the dynamics of communication, which is now reflected in the substitution mechanism, similarly to, e.g., process-algebraic approaches. One of the main motivations for this work is to obtain a method for assembling protocols from previously identified parts, and for compositionally assembling the corresponding correctness proofs.
"Hardest Attackers", Hanne R. Nielson and Flemming Nielson, points out two different causes of undecidability of security properties: (i) the presence of infinitely many malicious attackers; (ii) the fact that, even in the presence of a single attacker, the property to be checked is likely to be undecidable. The second point can be addressed by approximating the property in a way that always "errs on the safe side", as done in typical static analysis techniques. The first point is approached in this paper by identifying a hardest attacker: intuitively, one that is as hard to protect against as any of the infinitely many attackers. An important distinction is made between the hardest attacker and a complete attacker, as the former is identified with respect to the static analysis rather than the semantics, i.e., it might not be a successful attacker with respect to the semantics.
"Security Analysis for Mobile Ambients", Francesca Levi and Chiara Bodei. Safe Ambients (SA) is an extension of Mobile Ambients (MA) in which a movement can take place only if the involved ambient agrees. Here, a control flow analysis previously proposed for MA is extended to SA. The technique is then applied to prove a secrecy property: ambients are classified as trustworthy and untrustworthy, and secrecy is guaranteed only if untrustworthy ambients can never open trustworthy ones. A most hostile context (matching certain requirements) is defined. Similarly to the previous talk, if a process P passes the test with this most hostile context E, then P will also pass it when plugged into any other context represented by E.
"Secure Safe Ambients and JVM Security", Michele Bugliesi and Giuseppe Castagna, presents a typed variant of SA (see also the previous talk): Secure Safe Ambients (SSA). The type system makes it possible to express and verify behavioural invariants of ambients. In particular, the types of ambient names are protection domains that group ambients sharing common security policies. As an example, it is possible to specify that a protection domain D should not be entered by any ambient, by stating that no ambient outside D may exercise the capability "in d" (which moves the executing ambient into ambient d) for any d of type D. A distributed version of the calculus is also studied, in which no assumption is initially made on the well-typedness of external ambients, and a form of proof-carrying-code ambient is introduced to allow the verification of entering ambients.
"Mobile Functions and Secure Information Flow", Dilsun Kirli. This work focuses on the problem of secrecy within the framework of Mobile-l (Mobile-lambda), a higher-order mobile code language. In this language, constructs for the transmission and reception of values are added to the functional core, and mobile processes are represented by mobile functions. An information flow property for Mobile-l is identified, and a suitable type system to enforce it is proposed. As this type system follows the type-and-effect discipline, providing ML-style polymorphism becomes a natural extension of earlier work in this field.
"Considering Non-Malleability in Formal Models for Cryptographic Protocols", Carsten Rudolph. Many models for the specification of cryptographic protocols assume that cryptography is perfect, i.e., that encryption and decryption are possible only with the correct keys. However, there exist attacks that exploit particular weaknesses of cryptographic algorithms. Thus, it can be of interest to consider properties of encryption schemes inside the abstract specification model. This paper considers the non-malleability property of encryption schemes, which ensures the attacker's inability, given a ciphertext y, to output a different ciphertext y' such that the corresponding plaintexts x, x' are related in some way. Two examples are given to show that non-malleability is indeed relevant in proofs of protocol correctness.
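Malleability is easy to exhibit with textbook RSA; the following sketch (my illustration with toy parameters, not an example from the paper) shows an attacker producing, from a ciphertext alone, a different ciphertext whose plaintext is related to the original one.

```python
# Textbook RSA is malleable: toy parameters n = 61 * 53, e = 17,
# d = 2753 (so that e * d = 1 mod phi(n)).
n, e, d = 3233, 17, 2753
m = 123                        # plaintext known only to the sender
c = pow(m, e, n)               # ciphertext observed by the attacker

# Without knowing m or d, the attacker builds a ciphertext of 2*m,
# using the identity c * 2^e = (2*m)^e (mod n):
c2 = (c * pow(2, e, n)) % n
assert pow(c2, d, n) == (2 * m) % n   # decrypts to a related plaintext
```

A non-malleable scheme rules out exactly this kind of ciphertext manipulation, which is why the property matters in correctness proofs.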
"On the Perfect Encryption Assumption", Olivier Pereira and Jean-Jacques Quisquater. This work, too, is about weakening the perfect cryptography assumption. A model is proposed that takes into account the multiplicative structure of RSA and is able to identify a particular kind of attack on the fixed Needham-Schroeder public key protocol. The model has also been specified in Promela (the specification language of SPIN), thus allowing automatic detection of attacks similar to the one reported in the paper.
"Decorrelation: a New Theory for the Security of Conventional Encryption", Serge Vaudenay. The security of conventional cryptographic primitives (e.g., block ciphers) is often approached by showing that some particular kind of cryptanalysis cannot work. Decorrelation theory follows a different, more general idea, as it makes it possible to prove security against a distinguisher that models an idealized attacker. In other words, a conventional cryptographic primitive is secure if no Turing machine can distinguish it from a canonically idealized primitive. Decorrelation theory essentially defines a measure of the distance of a cryptographic primitive from the idealized one. Through the results of decorrelation theory (e.g., multiplicative properties of the distance) it is thus possible to define explicit block ciphers with a low distance from idealized primitives.
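The distinguisher viewpoint can be illustrated with a deliberately crude sketch (my illustration, not Vaudenay's construction): a toy "cipher" that never outputs 0 is easily told apart from a uniformly random function by the distinguisher "query a random input; output 1 iff the answer is 0".

```python
# Toy distinguishing experiment between a biased primitive and an ideal one.
import random

def toy_cipher(x):
    return (x * 7 + 3) % 15 + 1        # range is 1..15: never outputs 0

def ideal(x):
    return random.randrange(16)        # uniform on 0..15

def prob_distinguisher_says_one(oracle, trials=2000):
    # Estimate Pr[distinguisher outputs 1] over single-query experiments.
    return sum(oracle(random.randrange(16)) == 0 for _ in range(trials)) / trials

random.seed(0)
advantage = abs(prob_distinguisher_says_one(toy_cipher)
                - prob_distinguisher_says_one(ideal))
# A noticeable advantage means the primitive is "far" from ideal;
# decorrelation theory bounds the best achievable advantage by a distance.
assert advantage > 0.01
```

In decorrelation terms, designing a good cipher amounts to driving the best such advantage, over all distinguishers of a given class, provably close to zero.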
"On the Reachability Problem in Cryptographic Protocols", Roberto Amadio, Denis Lugiez and Vincent Vanackere, studies secrecy and authentication properties of (symmetric key based) cryptographic protocols. The verification task is formulated as a reachability problem, i.e., the problem of determining whether the protocol (model) can reach a certain point while interacting with the environment. The proposed decision procedure is based on a symbolic reduction system. Finally, a prototype has been implemented in which the symbolic method is also extended to public keys and hash functions.
"A B Automaton for Authentication Process", Stéphanie Motré. This work focuses on authentication policies and processes. It proposes using security automata to model an authentication process, i.e., all the possible states of the process are considered and the (authentication) events determine transitions from one state to another. The main motivation is to give a simple way of specifying a particular process that can be useful for proving its correctness with respect to the corresponding authentication policy. This model is integrated in the B method. During the talk, some examples of application of the method were presented.
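The security-automaton idea can be sketched as follows (hypothetical states and events of my own choosing, not the paper's B model): the automaton advances on authentication events and rejects any event for which the current state has no transition.

```python
# A toy security automaton for an authentication process.
# Any event with no transition from the current state is a policy violation.
TRANSITIONS = {
    ("logged_out", "submit_credentials"): "pending",
    ("pending", "credentials_ok"): "logged_in",
    ("pending", "credentials_bad"): "logged_out",
    ("logged_in", "logout"): "logged_out",
}

def run(events, state="logged_out"):
    for e in events:
        if (state, e) not in TRANSITIONS:
            return None          # violation: the automaton halts the process
        state = TRANSITIONS[(state, e)]
    return state

assert run(["submit_credentials", "credentials_ok"]) == "logged_in"
assert run(["credentials_ok"]) is None   # cannot skip the credential check
```

Proving correctness with respect to a policy then amounts to showing that every run the process can exhibit is accepted by the automaton.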
"Discretionary Access Control with Code Migration", Chiara Polloni and Corrado Priami. Code migration introduces new security issues, e.g., mobile agents have to be identified by the hosts, which establish capabilities and resource consumption limits. The paper proposes a method for discretionary access control for mobile agents, in which hosts and sites decide the access rights to their resources. This is formalized together with the conditions for agent migration. The original contribution is the direct handling of multi-hop code migration.
"A New Definition of Multilevel Security", Riccardo Focardi, Roberto Gorrieri and Roberto Segala. Information flow security is usually modelled through Non-Interference, i.e., if no interference is possible from one group of users (high level) to another (low level), then no information flow is present. This indirectly solves the problem of modeling information flow. This paper proposes a reversed (more direct) approach: a particular process is identified that represents an explicit (forbidden) information channel from the high level to the low one; this induces a family of properties which guarantee the absence of that particular information flow. As there are different possible channels corresponding to different notions of information flow, the aim is to classify existing information flow properties with respect to which channels they are able to rule out, i.e., with respect to which kinds of information flow they indeed avoid.
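The underlying non-interference idea can be made concrete with a minimal sketch (my illustration, with hypothetical example programs): a program is interference-free when its low-level output does not depend on its high-level input.

```python
# Toy non-interference check over finite input domains.
def leaky(high, low):
    return low + (1 if high > 0 else 0)   # low output depends on high input

def safe(high, low):
    return low * 2                        # low output ignores high input

def noninterfering(program, highs, lows):
    # For every low input, all high inputs must yield the same low output.
    return all(len({program(h, l) for h in highs}) == 1 for l in lows)

highs, lows = range(-2, 3), range(3)
assert not noninterfering(leaky, highs, lows)   # an information flow exists
assert noninterfering(safe, highs, lows)        # no high-to-low flow
```

The paper's approach inverts this perspective: instead of forbidding all high-to-low dependence, it pins down a specific forbidden channel and asks which properties exclude it.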
"Towards Automatic Synthesis of Systems without Information Flows", Fabio Martinelli. This work is about the automatic synthesis of secure systems, i.e., systems with no forbidden information flows. The main idea is that the analysis of secure systems is treated as the analysis of open systems, i.e., systems with some unspecified components. As a matter of fact, the potential attackers (e.g., intruders on networks and Trojan horses in systems) are not known a priori, and it is thus convenient not to specify them. In this way, the resulting open system can be proved secure with respect to every potential attacker. This approach indeed allows the synthesis of secure systems; as an example, it is shown how to modify an existing system to make it secure, e.g., by inserting new components.
"Multiple Security Policies in Mob_adtl", Gianluigi Ferrari, Carlo Montangero, Laura Semini and Simone Semprini. Mob_adtl is a temporal-logic-based model for the specification and design of networking applications in which different security requirements coexist. In this model a system is based on a set of elaboration nodes (called neighborhoods), each having internal security requirements enforced by authorities called guardians. This work exploits the Mob_adtl framework to specify security policies able to control the movements of agents among neighborhoods. In particular, it describes how to give guardians the ability to control agent movements among neighborhoods according to the agents' history.
"Probabilistic Security Analysis in a Declarative Framework", Alessandra Di Pierro, Chris Hankin and Herbert Wiklicky. This work develops a methodology for the security analysis of programs similar to classical program analysis methods. Both Shannon's and Scott's notions of information are considered by choosing a probabilistic extension of a declarative programming language, the Probabilistic Concurrent Constraint Programming (PCCP) developed by the authors in previous papers. A property of probabilistic confinement is presented which is related to previous work by, e.g., Volpano and Smith and also Sands and Sabelfeld. The main difference is that here a declarative programming paradigm is considered instead of an imperative one.
"Dolev-Yao is no better than Machiavelli", Paul Syverson, Catherine Meadows and Iliano Cervesato. This work shows that attacks mounted by a traditional Dolev-Yao intruder can be enacted by an (apparently weaker) "Machiavellian" adversary, in which compromised principals do not share long-term secrets and do not send arbitrary messages, i.e., messages whose structure differs from the legitimate messages of the protocol. It is also shown that a Dolev-Yao adversary composed of multiple compromised principals is attack-equivalent to an adversary consisting of a single dishonest principal who is only willing to produce messages in valid protocol form. These results hold for common authentication protocols, in which long-term keys are never transmitted. Establishing the equivalence of intruder models is non-trivial and is useful when security properties have to be checked against the "hardest attacker" (see also the talk by Nielson and Nielson above).
"A Game Approach to the Verification of Exchange Protocols - Application to Non-Repudiation Protocols", Steve Kremer and Jean-François Raskin. Exchange protocols usually require non-repudiation guarantees, i.e., the parties should not be able to deny having sent or received a message (e.g., a payment or a receipt). When these protocols are analyzed, the usual model of an external intruder (with two honest parties) cannot be directly applied, as one of the parties could cheat in order to gain an advantage in the exchange. This work proposes to model such protocols by considering the possible actions with no predefined order of execution. Each entity and each communication channel is then viewed as a player. In this context the required properties become "strategies", and both adversarial and cooperative behaviours between entities are taken into account.
"Possibilistic Information Flow is Safe for Probabilistic Non-Interference", David Clark, Chris Hankin, Sebastian Hunt and Rajagopal Nagarajan, studies the problem of analyzing and detecting probabilistic covert channels. In previous papers, Sabelfeld and Sands gave a semantic characterization of probabilistic non-interference and used it to give correctness proofs of the type-based systems proposed by Volpano and Smith. Here, it is shown that possibilistic information flow analysis can be used to check probabilistic non-interference, i.e., the absence of (possibilistic) information flow implies the absence of probabilistic interference. The technique is applied to the analysis of Probabilistic Idealised Algol, and is based on a control flow analysis which makes it possible to construct (an analogue of) the higher-order flowcharts introduced by one of the authors in previous work.