SecReT 2010
5th International Workshop on Security and Rewriting Techniques
Valencia (Spain), June 18-20
Accepted papers
-
Sergiu Bursuc, Hubert Comon-Lundh and Stephanie Delaune
Title: Deducibility constraints
Date: Friday June 18
Abstract:
A deducibility constraint is a sequence of proofs with gaps, in which some
formulas have been replaced with variables. Solving such constraints consists
in finding assignments to the variables in such a way that the gaps can be filled.
Deducibility constraints have been introduced in the context of security
protocol verification. Their interpretation depends on an inference
system that reflects the intruder's capabilities.
Deciding the existence of a solution amounts
to deciding whether some secret is disclosed. Simplifying the constraints
to solved forms allows one to decide more security properties (trace properties).
There are several results showing decidability of deducibility constraints
for various intruder systems. Here, we provide a
simple relationship between the properties of the intruder's inference system
and the decidability of deducibility constraints.
We show that local inference systems (actually a slight
modification of such systems) yield not only a tractable deduction
problem, but also decidable deducibility constraints.
A typical example of a local inference system is the so-called Dolev-Yao
intruder deduction system. For the Dolev-Yao system, the existence of an attack on
secrecy is NP-complete [Rusinowitch & Turuani, 2001]. Our constraint solving
technique also yields a decision algorithm. If one is interested in security properties other
than secrecy (or authentication, which can be encoded), the constraint solving
technique is a way to represent symbolically the infinite set
of possible traces of a security protocol in a hostile environment. This
was used for instance in [Comon-Lundh, Cortier & Zalinescu, 2010]
in order to decide more complex trace properties,
such as the existence of key cycles. Our procedure subsumes
this constraint solving procedure, since we are not restricted to the Dolev-Yao
inference system.
Our deducibility constraint solving procedure is correct and complete
for infinite local inference systems as well. When the inference system
is infinite, however, the procedure is not effective.
Our next step is to show, on two relevant examples, how to deal with some infinite
recursive and local inference systems.
We first consider the blind signatures that are used in some e-voting
protocols. The inference system is not local.
However, we may saturate the system, adding all non-redundant inferences
that can be derived. The limit system is saturated, hence
local. For blind signatures, this yields an infinite, yet recursive,
local inference system. We then use our deducibility constraint solving
procedure, representing in a symbolic way the (possibly infinite) set
of successors of a deducibility constraint. In other words, we enrich
the constraint system with new predicates in such a way that
our non-deterministic constraint solving procedure is now finitely branching.
Our second example is homomorphic encryption:
encrypting a pair of messages yields the same result as pairing the
encryptions of the components. Some encryption schemes have this property,
which is used in some protocols. As for blind signatures, we get a non-local
inference system, that can be saturated, yielding an infinite local system.
Then we show how to solve the associated constraints, relying on an additional
predicate symbol.
A part of this work has been published at ASIAN'09.
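
To make the objects concrete, here is a minimal Python sketch (an illustration under simplifying assumptions, not the paper's procedure: keys are atomic, and the constructors pair and enc are invented) of the Dolev-Yao deduction problem that underlies deducibility constraints. Locality is what makes the two-phase strategy complete here: first saturate the knowledge under the analysis rules, then check pure composition.

# Minimal sketch of Dolev-Yao intruder deduction (illustrative only).
# Terms: atoms are strings, ('pair', a, b) is pairing, and
# ('enc', m, k) is symmetric encryption of m under the atomic key k.

def analyze(knowledge):
    # Saturate under the analysis rules: project known pairs, and decrypt
    # when the (atomic) key is already known.
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(know):
            new = set()
            if isinstance(t, tuple) and t[0] == 'pair':
                new = {t[1], t[2]} - know
            elif isinstance(t, tuple) and t[0] == 'enc' and t[2] in know:
                new = {t[1]} - know
            if new:
                know |= new
                changed = True
    return know

def synthesize(target, know):
    # Check whether target is composable from know by pairing and encryption.
    if target in know:
        return True
    if isinstance(target, tuple) and target[0] in ('pair', 'enc'):
        return all(synthesize(s, know) for s in target[1:])
    return False

def deducible(target, knowledge):
    return synthesize(target, analyze(knowledge))

# Knowing k and {<s1,s2>}_k, the intruder deduces s1; without k it cannot.
assert deducible('s1', {('enc', ('pair', 's1', 's2'), 'k'), 'k'})
assert not deducible('s1', {('enc', ('pair', 's1', 's2'), 'k')})
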
-
Santiago Escobar, Catherine Meadows, Jose Meseguer and Sonia Santiago
Title: Sequential Protocol Composition in Maude-NPA
Date: Friday June 18
Abstract: We propose a syntax and operational semantics for sequential protocol composition
in Maude-NPA, a protocol analysis tool based on unification and narrowing-based backwards
search. Sequential composition, in which one or more child protocols make use of information
obtained from running a parent protocol, is the most common use of composition in
cryptographic protocol analysis. The original Maude-NPA syntax has been extended to provide
a more expressive notation supporting many technical
details necessary for protocol specification, analysis, and composition that are not always
made explicit in other formulations. We provide semantics to sequential composition via a
natural extension to the rule-based operational semantics of the Maude-NPA. However,
instead of implementing a new version of the Maude-NPA that generates such transition rules for
each protocol composition, we have defined a protocol
transformation that achieves the same effect using the current Maude-NPA tool.
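
As a loose illustration of the composition pattern (a toy in Python with invented names, not Maude-NPA syntax or the paper's transformation), the following shows a child protocol consuming a parent's output, and the sequential pair being packaged as one ordinary protocol, which is the spirit of the transformation mentioned above.

def parent_handshake():
    # Parent protocol: returns the data that child protocols may reuse.
    return {"session_key": "k_ab"}   # stand-in for a freshly agreed key

def child_transfer(ctx, payload):
    # Child protocol: encrypts its payload under the key the parent set up.
    return ("enc", payload, ctx["session_key"])

def compose(parent, child):
    # Rather than adding new transition rules for each composition, rewrite
    # (parent; child) into a single protocol that threads the parent's
    # output into the child.
    def composed(payload):
        return child(parent(), payload)
    return composed

transfer = compose(parent_handshake, child_transfer)
assert transfer("hello") == ("enc", "hello", "k_ab")
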
-
Santiago Escobar, Catherine Meadows and Jose Meseguer
Title: Maude-NPA Tool Demo
Date: Friday June 18
Abstract:
In this demo we plan to give demonstrations of Maude-NPA searches over protocols
satisfying several different equational theories. We will show how one specifies a
theory, and how one presents a query to Maude-NPA. We will also show how different
equational theories affect the outcome of Maude-NPA searches. The goal of the demo is both
to give the audience an idea of how Maude-NPA works, and also to demonstrate how an
accurate representation of the equational theory satisfied by a
cryptographic algorithm can aid in protocol analysis.
-
Graham Steel
Title: Abstractions for Verifying Key Management APIs
Date: Friday June 18
Abstract:
Key management APIs are used to control the use of tamper-resistant
cryptographic hardware devices such as smartcards. Such APIs are
designed to preserve some security properties, such as secrecy of
sensitive keys, no matter what sequence of API calls is
made. Multiset rewriting has been used successfully to model and analyse APIs such
as the RSA standard PKCS#11 [2] and the SeVeCoM API for
cryptographic devices on vehicles [4]. A challenge for such formal
modelling is that APIs typically contain commands that generate fresh
keys and fresh pointers to keys on the device (known as handles). It
is not clear a priori whether a proof of security for n keys and
handles is valid if n+1 fresh keys are generated.
In a previous paper [3], we proposed an abstraction for keys and key
handles for the PKCS#11 API that permits proofs of security for
unbounded fresh handles and keys to be found by automatic tools. Since
that paper, we have experimented with several commercial PKCS#11
devices, and discovered that in fact they almost exclusively implement
the standard in quite a different way from the one we considered. We had
assumed that the device would allow keys to be generated and their
attributes or metadata set by subsequent commands, as is described in
the standard. The setting of the attributes for a key governs which
commands in the API can be used with that key, and a careful
regulation of these attributes is vital for the security of the
interface. In fact, most devices disable the commands for changing the
attributes of a key, but instead allow the key generation command to
be called with a variety of different templates, which set the
attributes for the key. We can adapt our abstraction to this scenario,
but the number of possible templates is typically very large, resulting
in a model that is impractical for automated tools.
In this talk I will describe a new abstraction algorithm that computes
a set of templates that is sufficient for proving security for a given
API model. The abstraction applies to PKCS#11 and other similar
APIs. I will explain the proof of soundness, and demonstrate the
abstraction in action on some models of real devices. Using the
abstraction we are able to prove security for some new PKCS#11
configurations which, unlike those previously proposed [1,3], do not
require any changes to the standard.
[1] C. Cachin and N. Chandran, A secure cryptographic token
interface. In CSF '09, pages 141-153. IEEE Computer Society Press,
2009.
[2] S. Delaune, S. Kremer and G. Steel. Formal Analysis of PKCS#11.
In CSF'08, pages 331-344. IEEE Computer Society Press, 2008.
[3] S. Fröschle and G. Steel. Analysing PKCS#11 Key Management APIs
with Unbounded Fresh Data. In ARSPA-WITS'09, LNCS 5511, pages
92-106. Springer, 2009.
[4] G. Steel. Towards a Formal Security Analysis of the Sevecom
API. In Proceedings of ESCAR '09.
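
To illustrate the template mechanism discussed above, here is a toy Python model (heavily simplified; the method names are invented and only the attribute vocabulary loosely follows PKCS#11) in which attributes are fixed at key generation time by a template drawn from a finite set, and govern which commands apply to the key.

class Token:
    def __init__(self, allowed_templates):
        self.allowed_templates = allowed_templates  # the finite abstraction
        self.keys = {}                              # handle -> (value, attrs)
        self.next_handle = 0

    def generate_key(self, template):
        # No command changes attributes later: the template is final.
        if template not in self.allowed_templates:
            raise ValueError("template not permitted by this configuration")
        handle, self.next_handle = self.next_handle, self.next_handle + 1
        self.keys[handle] = ("k%d" % handle, template)
        return handle

    def wrap(self, wrapping_handle, target_handle):
        # Export a key encrypted under another: allowed only if the
        # wrapping key was generated with the 'wrap' attribute.
        wrap_value, attrs = self.keys[wrapping_handle]
        if "wrap" not in attrs:
            raise PermissionError("key attributes forbid wrapping")
        value, _ = self.keys[target_handle]
        return ("enc", value, wrap_value)

# A configuration with two templates: wrapping keys and sensitive
# usage-only keys that are never allowed to wrap.
token = Token({frozenset({"wrap"}), frozenset({"sensitive"})})
kw = token.generate_key(frozenset({"wrap"}))
ks = token.generate_key(frozenset({"sensitive"}))
blob = token.wrap(kw, ks)   # permitted: kw carries the wrap attribute
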
-
Tony Bourdier, Horatiu Cirstea, Mathieu Jaume and Helene Kirchner
Title: Rule-based Specification and Analysis of Security Policies
Date: Friday June 18
-
Clément Hurlin and Helene Kirchner
Title: Semi-Automatic Synthesis of Security Policies by Invariant-Guided Abduction
Date: Friday June 18
Abstract: We propose a specification approach and an abduction algorithm to generate
possible security policies for transition-based systems. As we use invariants to guide
abduction, we generate security policies that enforce security properties specified by these
invariants. We show how our domain of application - transition systems guarded by policies -
allows us to tune abduction in two ways: (1) how to filter out bad security policies and (2) how
to generate additional possible security policies. This approach is illustrated on role-based
access control systems.
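
As a rough illustration of invariant-guided abduction (a toy in Python; the state space, atoms, and predicate names are all invented), the sketch below enumerates candidate guards for a single "grant role" transition, keeps those under which the invariant stays inductive, and filters out guards that are safe only because they forbid everything:

from itertools import combinations

# States: which principals currently hold a sensitive role.
STATES = [frozenset(), frozenset({"alice"}), frozenset({"bob"}),
          frozenset({"alice", "bob"})]

def invariant(state):
    # Property to enforce: alice and bob never hold the role together.
    return not {"alice", "bob"} <= state

# Candidate guard atoms for the transition "grant the role to u".
ATOMS = {
    "requester_is_alice": lambda s, u: u == "alice",
    "role_is_empty":      lambda s, u: len(s) == 0,
}

def abduce_policies():
    good = []
    for r in range(1, len(ATOMS) + 1):
        for names in combinations(sorted(ATOMS), r):
            def guard(s, u, names=names):
                return all(ATOMS[n](s, u) for n in names)
            # (1) every grant the guard permits preserves the invariant;
            # (2) the guard still permits at least one grant.
            safe = all(invariant(s | {u})
                       for s in STATES if invariant(s)
                       for u in ("alice", "bob") if guard(s, u))
            useful = any(guard(s, u) for s in STATES if invariant(s)
                         for u in ("alice", "bob"))
            if safe and useful:
                good.append(" and ".join(names))
    return good

# Yields 'role_is_empty' and 'requester_is_alice and role_is_empty';
# 'requester_is_alice' alone is filtered out (alice may join bob).
print(abduce_policies())
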
-
Rohit Chadha, Stefan Ciobaca and Steve Kremer
Title: Deciding trace equivalence for finite cryptographic process calculi
(work in progress)
Date: Saturday June 19
-
Vincent Cheval, Hubert Comon-Lundh and Stephanie Delaune
Title: Automating security analysis: symbolic equivalence of constraint systems
Date: Saturday June 19
Abstract:
Security protocols are small distributed programs aiming at some security
goal while relying on untrusted communication media. Formally proving
that such a protocol satisfies a security property (or finding an attack)
is an important issue, in view of the economic and social impact of a
failure. Until recently, most efforts and successes only concerned trace properties,
i.e., security properties that can be checked on each individual sequence of messages
corresponding to an execution of the protocol.
There are however several security properties that cannot be stated as properties
of a single trace. Consider for instance a voter casting her vote, encrypted
with a public key of some server. Since there is only a fixed, known number of possible
plaintexts, confidentiality is not an issue. A more relevant property is whether an attacker can relate
the voter's identity to the plaintext of the message. This is a property in the family of
privacy (or anonymity) properties. Another example is the strong secrecy property: roughly,
m is strongly secret if replacing m with any m' in the protocol would yield another protocol
that is indistinguishable from the first one: not only
m itself cannot be deduced, but the protocol also does not leak any piece of m. These two
examples of properties are not trace properties, but equivalence properties: they can be
stated as the indistinguishability of two processes by an
attacker.
In the present work, we consider security properties
of cryptographic protocols that are either trace properties (such as
confidentiality or authenticity) or equivalence properties
(such as anonymity or strong secrecy). More precisely,
we are interested in automating the proofs of equivalence properties.
Infinite sets of possible traces are symbolically represented using deducibility
constraints. We give a new algorithm that decides trace equivalence for the traces
represented by such constraints, in the case of signatures,
symmetric and asymmetric encryption.
The main idea of our method is to simultaneously solve pairs of
constraints, instead of solving each constraint separately and comparing the
solutions. These pairs are successively split into several pairs of systems, while preserving
symbolic equivalence: roughly, the parent pair is in the relation if, and only if, all the child
pairs are in the relation.
This is not fully correct, since, for termination purposes, we need to keep track of
some earlier splitting, using additional predicates. Such predicates, together with the
constraint systems, yield another notion of equivalence, which is preserved upwards, while
the former is preserved downwards.
When a pair of constraints cannot be split any further, the equivalence can be trivially
checked.
A preliminary version of the algorithm has been implemented and works well
(within a few seconds) on all
benchmarks. This is the main advantage over the works by M. Baudet and
Chevalier et al. Note also that the same implementation can be used for
checking static equivalence and for checking constraint satisfiability.
To our knowledge, this is the first implemented algorithm deciding symbolic trace
equivalence.
This work has been accepted at IJCAR'10.
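
The following schematic Python toy (invented tests; a "constraint system" is reduced to a finite set of admissible traces, so this is not the paper's algorithm) conveys the shape of the pairwise strategy: split the pair simultaneously on the same test and decide equivalence at the leaves, where it is a trivial check.

TESTS = [lambda t: "enc" in t, lambda t: t.endswith("ok")]

def split(pair, tests):
    # The split preserves the key property: the parent pair is equivalent
    # if, and only if, all child pairs are.
    if not tests:
        return [pair]
    test, rest = tests[0], tests[1:]
    left, right = pair
    yes = ({t for t in left if test(t)}, {t for t in right if test(t)})
    no = ({t for t in left if not test(t)}, {t for t in right if not test(t)})
    return split(yes, rest) + split(no, rest)

def equivalent(pair):
    # Leaf check: both sides simultaneously satisfiable or unsatisfiable.
    return all(bool(l) == bool(r) for l, r in split(pair, TESTS))

assert equivalent(({"enc-ok"}, {"enc-ok"}))
assert not equivalent(({"enc-ok"}, {"enc-ok", "plain-ko"}))
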
-
Yannick Chevalier and Mounira Kourjieh
Title: Finitary Deduction Systems
Date: Saturday June 19
Abstract: Our goal is to define a notion of finitary deduction systems such that trace (also
called symbolic) equivalence is decidable for such systems. We hope this will provide a simple
criterion for later proving decidability for various deduction systems, thereby going beyond the
seminal work of Baudet for subterm deduction systems. This work is currently very preliminary.
-
Zhiqiang Liu and Christopher Lynch
Title: Efficient XOR Unification
Date: Sunday June 20
Abstract: We give efficient sets of inference rules for unification with uninterpreted function
symbols modulo each of the following theories: XOR, XOR with homomorphism, abelian
group, and abelian group with homomorphism.
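
As a naive illustration of what unification modulo XOR must handle (not the paper's efficient inference rules), the Python sketch below normalizes XOR terms using associativity, commutativity, nilpotence (a + a = 0) and unit (a + 0 = a), and solves an equation for a top-level variable by cancellation:

from collections import Counter

def normalize(atoms):
    # XOR normal form: keep the atoms that occur an odd number of times.
    return frozenset(a for a, n in Counter(atoms).items() if n % 2 == 1)

def solve_for(x, lhs, rhs):
    # Solve lhs = rhs for a variable x occurring once at the top level:
    # x equals the XOR of both sides with x itself cancelled out.
    both = normalize(list(lhs) + list(rhs))
    assert x in both, "x must occur on exactly one side"
    return both - {x}

# x + a = b + c  has the solution  x = a + b + c:
assert solve_for("x", ["x", "a"], ["b", "c"]) == {"a", "b", "c"}
# x + a + b = b + c  simplifies first (b cancels):  x = a + c
assert solve_for("x", ["x", "a", "b"], ["b", "c"]) == {"a", "c"}
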
-
Daniel Dougherty and Joshua Guttman
Title: Geometric Logic and Strand Spaces
Date: Sunday June 20
-
Mauricio Alba-Castro, María Alpuente and Santiago Escobar
Title: Automated Abstract Certification of Global Non-interference in Rewriting Logic
Date: Sunday June 20
Abstract: A non-interference policy allows programs to manipulate and modify confidential
data as long as the observable data generated by those programs do not improperly reveal
information about the confidential data. In this paper, we provide a detailed description of our
abstract non-interference certification methodology for analyzing global non-interference of
entire Java programs. We formulate the observational capabilities of an attacker and the
conditions required on Java programs for ensuring the correctness of the analysis. We
ascertain the conditions for the safety of extended Java computations to entail program
non-interference. Our methodology is an instance of Proof-carrying code (PCC), a mechanism
originated by Necula for ensuring the secure behavior of programs. In our methodology,
certificates are encoded
as (abstract) rewriting sequences that (together with an encoding of the abstraction in
Maude) can be checked by standard reduction.
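
As a tiny illustration of the non-interference property itself (an invented Python example, not the paper's Java analysis or its Maude certificates), the observable low output must not vary when only the confidential high input varies:

def leaky(high_pin, low_amount):
    # Violates non-interference: the low output branches on high data.
    return low_amount + (1 if high_pin > 5000 else 0)

def secure(high_pin, low_amount):
    # Satisfies non-interference: the low output ignores high data.
    return low_amount * 2

# Varying only the high input changes leaky's observable output...
assert leaky(1234, 10) != leaky(9999, 10)
# ...but never secure's, which is what a certificate establishes.
assert secure(1234, 10) == secure(9999, 10)
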