Project Proposal

Theoretical Adverse Computations, and Safety

ThEorie des Calculs AdveRses, et sécurité

CARTE

Scientific leader: Jean-Yves Marion

INRIA scientific and technological challenges (1): 3 and 1

Keywords: Algorithmics, Complexity, Complex Systems, Algorithms, Viruses, Malware, Networks, Computational Models, Resource Analysis, Program Properties, Termination, Formal Systems, Logic

(1) The seven INRIA scientific and technological challenges are: 1. Designing and mastering the future network and communication services infrastructures; 2. Developing multimedia data and information processing; 3. Guaranteeing the reliability and security of software-prevalent systems; 4. Coupling models and data to simulate and control complex systems; 5. Combining simulation, visualization and interaction; 6. Modeling living beings; 7. Fully integrating ICST into medical technology.

1 Project-team composition

1.1 Permanent positions

• Full-time researchers

– Olivier Bournez, Chargé de recherche INRIA
– Johanne Cohen, Chargée de recherche CNRS
– Isabelle Gnaedig, Chargée de recherche INRIA

• Professors and assistant professors

– Guillaume Bonfante, Maître de conférences, École des Mines de Nancy, INPL
– Jean-Yves Marion, Professeur, École des Mines de Nancy, INPL

1.2 PhD and Post-Docs

• Emmanuel Hainry, ATER, UHP

• Romain Péchoux, PhD MENRT

• Matthieu Kaczmarek, BDI CNRS-Région

• Marco Gaboardi, joint PhD with Turin University

• Octave Boussaton, PhD, Henri Poincaré University

2 Main objectives

Algorithmic and programming theory define implicitly the semantics of an algorithm or a program. Thus, when an algorithm asks for a value, the request has a meaning, like the weight of an arc, but this meaning is external to the algorithm. This characteristic is not shared by other sciences, whose semantics are, in general, not defined externally by the scientist. Since the environment is made of autonomous systems that have their own objectives, and since we try to take real-world constraints into account, an algorithm cannot rely on "a priori" knowledge about its environment. A network routing computation has to consider the economic importance of each independent agent. An antivirus has to determine the intentions of a (potentially hostile) program to do its task correctly. A program is asked to perform well even if good classical properties are not ensured.

One of the main objectives of the CARTE project is to take adversity into account as a full-fledged component, linked to actors of a computation whose behavior is unknown or unclear. We call this notion adversary computation. One of the main objectives is then to predict the behavior of adversary computations, and to construct robust, i.e. fault-tolerant, algorithms and systems that take divergent interests into account, resist viral attacks, and perform well even if certain correctness properties are missing.

One of the original aspects of the project is to combine the two following approaches. The first one is the analysis of the behavior of a wide-scale system, using tools coming from both continuous computation theory and game theory. Simply put, this is a macroscopic approach. In the second approach, the tools we envisage using to build defenses would come rather from logic, rewriting and, more generally, from programming theory. We will complete these two approaches with the control of computations in antagonistic environments, which requires studying and verifying fundamental properties of programs and systems, like termination, stability or access to resources.

3 Scientific foundations

We present the fundamental notions, formalisms and computational models we will use for our purpose. This brief presentation is not exhaustive; it looks at these areas from a particular angle, emphasizing the aspects that seem most relevant to the goals fixed in our research directions.

3.1 Continuous Computation Theory

Today's classical computability and complexity theory deals with discrete time and space models of computation. But discrete time models of machines working on a continuous space can also be considered: consider for example the Blum, Shub and Smale machines of [6]. Recursive analysis, introduced by Turing [81], Grzegorczyk [50], and Lacombe [56], can also be considered as a continuous space, discrete time computation theory. Models of machines working with a continuous time and space can also be considered: see for example the General Purpose Analog Computer of Claude Shannon [76], defined as a model of the Differential Analysers [20].

At its beginning, continuous time computation theory was mainly concerned with analog machines. Determining which systems can actually be considered as computational models is a very intriguing question. This relates to the philosophical discussion about what a programmable machine is, which is beyond the scope of this discussion. Nonetheless, there are some early examples of built analog devices that are generally accepted as programmable machines. They include Bush's landmark 1931 Differential Analyzer [20], as well as Bill Phillips' Financephalograph, Hermann's 1814 Planimeter, Pascal's 1642 Pascaline, or even the 87 B.C. Antikythera mechanism: see [25]. Continuous time computational models also include neural networks and systems that can be built using electronic analog devices. Since continuous time systems are conducive to modeling huge populations, one might speculate that they will have a prominent role in analyzing massively parallel systems such as the Internet [72].

The first true model of a universal continuous time machine was proposed by Shannon [76], who introduced it as a model of the Differential Analyzer. During the 1950s and 60s an extensive body of literature was published about the programming of such machines (2). There were also a number of significant publications on how to use analog devices to solve discrete or continuous problems: see e.g. [82] and the references therein. However, most of this early literature is now only marginally relevant given the ways in which our current understanding of computability and complexity theory has developed.

The research on artificial neural networks, despite the fact that it mainly focused on discrete time analog models, has motivated a change of perspective due to its many shared concepts and goals with today's standard computability and complexity theory [70], [69]. Another line of development of continuous time computation theory has been motivated by hybrid systems, particularly

(2) See for example Doug Coward's very instructive web Analog Computer Museum [25] and its bibliography.

by questions related to the hardness of their verification and control: see for example [18] and [4].

In recent years there has also been a surge of interest in alternatives to classical digital models other than continuous time systems. Those alternatives include discrete-time analog-space models like artificial neural networks [70], optical models [87], signal machines [29] and the Blum, Shub and Smale model [7]. More generally, there have also been many recent developments in non-classical and more-or-less realistic or futuristic models, such as exotic cellular automata models [48], molecular or natural computations [51], [2], [57], [73], relativistic computations [53], or quantum computations [28], [49], [77], [54].

The computational power of discrete time models is fairly well known and understood, thanks in large part to the Church-Turing thesis, which states that all reasonable and sufficiently powerful models are equivalent. For continuous time computation, the situation is far from being so clear, and there has not been a significant effort toward unifying concepts. Nonetheless, some recent results establish the equivalence between apparently distinct models [47], [46], [45], [16], which gives us hope that a unified theory of continuous time computation may not be too far in the future.

A survey on continuous-time computation theory co-authored with Manuel Campagnolo can be found in [15]. This survey includes open problems and research directions; extended discussions can also be found in [14]. This presentation is extracted from [15]. A monograph on complexity theory for discrete-time computation over the reals and over arbitrary structures in the sense of [6] can be found in [7] and [74] respectively.

3.2 Rewriting

The rewriting paradigm, initially developed to mechanize the word problem, i.e. to decide whether an equality between terms holds modulo a set of axioms, has been extensively studied in the context of automated deduction, especially since the seventies. Rewriting a term with a set of rules, which are oriented axioms, consists in deciding through matching whether a part of the term to be rewritten is an instance of the left-hand side of a rule, and then in replacing this instance in the term by the corresponding instance of the right-hand side.

Properties of this deduction mechanism, like confluence, sufficient completeness, consistency, or various notions of termination, have been described. For these generally undecidable properties, many proof methods have been developed for basic rewriting and, to a lesser extent, for extensions: equational extensions, consisting of rewriting modulo a set of axioms, conditional extensions, where rules are applied only under certain conditions, and typed or constrained extensions. Completeness test algorithms, a great number of methods to prove termination, most of them founded on orderings or on decreasingness arguments, and completion procedures to ensure confluence are now at our disposal [23, 26, 5, 27, 79].

Rewriting has reached some maturity and the rewriting paradigm is now widely used. It allows for easily expressing deduction systems in a declarative

way, for expressing complex relations on infinite sets of states in a finite way, provided they are countable. It also provides a semantics for programming languages and environments. Let us cite ASF+SDF [17], Maude [21], CafeOBJ [42], ELAN [13], Stratego [83], or Tom [65], now currently used for all kinds of applications.

An interesting aspect of this paradigm is that it allows automatable or semi-automatable correctness proofs on the systems or programs. Indeed, properties of rewriting systems such as those cited above translate to the deduction systems or programs they formalize, and the proof techniques may directly apply to them. Another interesting aspect of rewriting is that it allows characteristics or properties of the modeled systems to be expressed as equational theorems, often automatically provable using the rewriting mechanism itself or inductionless induction techniques based on completion [75]. Note also that the rewriting and completion mechanisms allow transformation and simplification of formal systems or programs.

Nevertheless, an important stake in the domain is now to adapt and complete the previous techniques to the real needs of modeling and programming. The proof techniques for the above-cited rewriting properties or the equational theorems still need to be refined to take into account the extensions of rewriting that give it a realistic description power. The need is especially important in the adversary case, for example when key properties like termination are not ensured.
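To make the matching-and-replacement step described above concrete, here is a minimal, illustrative sketch in Python, unrelated to any of the systems cited above: terms are nested tuples, variables are plain strings, and a rule fires when its left-hand side matches a subterm. The Peano addition rules are a standard textbook example.

# Minimal first-order term rewriting sketch (illustrative only).
# Terms are tuples ('f', arg1, ..., argn); variables are plain strings.
# A rule is an oriented pair (lhs, rhs).

def is_var(t):
    return isinstance(t, str)

def match(pattern, term, subst=None):
    """Return a substitution making `pattern` equal to `term`, or None."""
    subst = dict(subst or {})
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def apply_subst(term, subst):
    if is_var(term):
        return subst.get(term, term)
    return (term[0],) + tuple(apply_subst(a, subst) for a in term[1:])

def rewrite_once(term, rules):
    """Rewrite the leftmost-outermost redex; return None if `term` is in normal form."""
    if not is_var(term):
        for lhs, rhs in rules:
            subst = match(lhs, term)
            if subst is not None:
                return apply_subst(rhs, subst)
        for i, arg in enumerate(term[1:], start=1):
            reduced = rewrite_once(arg, rules)
            if reduced is not None:
                return term[:i] + (reduced,) + term[i + 1:]
    return None

def normalize(term, rules, max_steps=100):
    """Iterate rewriting; termination is NOT guaranteed in general, hence the bound."""
    for _ in range(max_steps):
        nxt = rewrite_once(term, rules)
        if nxt is None:
            return term
        term = nxt
    return term

# Oriented axioms for Peano addition: add(0, Y) -> Y, add(s(X), Y) -> s(add(X, Y)).
rules = [
    (('add', ('0',), 'Y'), 'Y'),
    (('add', ('s', 'X'), 'Y'), ('s', ('add', 'X', 'Y'))),
]
one = ('s', ('0',))
two = ('s', ('s', ('0',)))
print(normalize(('add', one, two), rules))   # ('s', ('s', ('s', ('0',)))), i.e. 1 + 2 = 3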

3.3 Algorithmic game theory

Game theory aims at discussing situations of competition between rational players [71]. After the seminal works of Émile Borel and John von Neumann, one key event was the publication in 1944 of the book [85] by John von Neumann and Oskar Morgenstern. Game theory then spent a long period in the doldrums; much effort was devoted at that time to the mathematics of two-person, zero-sum games. For general games, the key concept of Nash equilibrium was proposed in the early 50s by John Nash in [67], but it was not until the early 70s that it was fully realized what a powerful tool Nash had provided in formulating this concept. It is now a central concept in economics, biology, sociology and psychology to discuss general situations of competition, as attested for example by several Nobel prizes in Economics.

Even if the theory of repeated games allows some dynamical aspects to be modeled, one can say that this remains a theory for discussing static situations of equilibria between rational players [71]. Evolutionary game theory originated as an application of the mathematical theory of games to biological contexts, arising from the realization that frequency-dependent fitness introduces a strategic aspect to evolution. This theory, introduced by [62], can be considered as the application of population dynamics methods to game theory [52, 86].
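As a minimal illustration of the Nash equilibrium concept recalled above, the following sketch enumerates the pure strategy profiles of a classic two-player game (the Prisoner's Dilemma, with textbook payoffs) and keeps those from which no player can gain by deviating unilaterally. It is only a toy check, not a tool of the project.

# Pure Nash equilibria of a 2x2 bimatrix game by exhaustive enumeration.
from itertools import product

# payoffs[(i, j)] = (payoff of player 1, payoff of player 2)
# strategies: 0 = cooperate, 1 = defect (Prisoner's Dilemma)
payoffs = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}

def pure_nash_equilibria(payoffs, n_strategies=2):
    equilibria = []
    for i, j in product(range(n_strategies), repeat=2):
        u1, u2 = payoffs[(i, j)]
        # no unilateral deviation of player 1 (resp. player 2) improves its payoff
        best_for_1 = all(payoffs[(k, j)][0] <= u1 for k in range(n_strategies))
        best_for_2 = all(payoffs[(i, k)][1] <= u2 for k in range(n_strategies))
        if best_for_1 and best_for_2:
            equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(payoffs))   # [(1, 1)]: mutual defection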

Algorithmic game theory differs from game theory by taking into account algorithmic and complexity aspects, as the main developments of classical game theory were historically carried out in a mathematical context, without true consideration of the effectiveness of constructions. Game theory and algorithmic game theory have large domains of application, even in theoretical computer science: they have been used to understand the complexity of problems linked to equilibria [63], the loss of performance due to individual behavior in distributed algorithmics [72, 3, 24], the design of incentive mechanisms [68], problems related to the pricing of services in some protocols [30], auction problems [19], etc.

3.4 Computer virology

The literature discussing practical issues is quite extensive; see for example Ludwig's book [58] or Szor's [78]. But we think that the best references are Filiol's two books [31] (English translation [32]) and [34]. However, there are only a few theoretical or scientific studies which attempt to give a model of computer viruses. This situation is improving with the new Journal of Computer Virology and the workshop TCV that we are organizing. (Indeed, other conferences are either purely technical, like SSTIC in France, or organized by anti-virus companies.)

From a historical point of view, the first official virus appeared in 1983 on a Vax-PDP 11. At the very same time, a series of papers was published which still remains a reference in computer virology: Thompson [80], Cohen [22] and Adleman [1]. In a seminal work, Cohen defines viruses with respect to Turing machines. For this, he considers viral sets. A viral set is a pair (M, V) consisting of a Turing machine M and a set of viruses V. When one runs on M a virus v of V, it duplicates or mutates to another virus of V on M's tape. Adleman [1] adopts a more abstract formulation of computer viruses, based on computability theory, in order to have a definition independent of a particular computational model. The most important contribution of Adleman is certainly to develop scenarios which describe and classify how a virus behaves with respect to some environment. From this study, and from current points of view like the ones found in Filiol's book [31] or Szor's [78], a virus is characterized as follows:

1. A virus infects programs by modifying them.

2. A virus copies itself and can mutate.

3. A virus spreads throughout a system.

So, a virus is essentially a self-replicating program inside an adversary environment. More precisely, we establish in [10] that Kleene's second recursion theorem [55] is the cornerstone from which viruses and infection scenarios can be defined and classified. It is essential to understand that computer virology has a strong theoretical foundation, which is based on self-replicating systems

and evolving fixed points, that is, fixed points which modify their code at each step. Thus, replacing "automaton" by "virus", the following citation of von Neumann is inspiring [84]: "Can an automaton be constructed, i.e., assembled and built from appropriately defined "raw material", by another automaton? [...] Can the construction of automata by automata progress from simpler types to increasingly complicated types?"

The above scientific foundation justifies our position of using the word virus as a generic word for self-replicating malware. (There is however a difference: a malware has a payload, while a virus may not have one.) For example, worms are autonomous self-replicating malware and so fall into our definition. In fact, the current malware taxonomy (viruses, worms, trojans, ...) is unclear and subject to debate.

The crucial question is how to detect viruses or self-replicating malware. Cohen demonstrated that this question is undecidable. Anti-virus heuristics are based on two methods. The first one consists in searching for virus signatures. A signature is a regular expression which identifies a family of viruses. There are obvious defects: for example, an unknown virus, like one related to a 0-day exploit, will not be detected. We strongly suggest having a look at the independent audit [33] in order to understand the limits of this method. The second method consists in analysing the behaviour of a program by monitoring it. Following [35], this kind of method is not yet really implemented; moreover, the large number of false positives makes it barely usable. To end this short survey, intrusion detection encompasses virus detection. However, unlike computer virology, which has a solid scientific foundation as we have seen, the IDS notion of "malware" with respect to some security policy is not well defined. The interested reader may consult [66].
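The self-reference that Kleene's second recursion theorem guarantees, and that the discussion above takes as the cornerstone of viral behaviour, can be illustrated by a classical quine: a benign program whose only effect is to print its own source code. The two code lines below are a standard Python quine; there is no infection and no payload here, only the fixed-point mechanism.

# A classical quine.  The two code lines below, taken on their own, print
# exactly their own source text: a benign instance of the self-replication
# that, as argued in [10], underlies the definition of viruses as (evolving)
# fixed points given by Kleene's second recursion theorem.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))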

4 Research directions

The scientific project is divided into three parts. The first part focuses on computation models, and in particular on continuous systems and rewriting systems; it provides the scientific foundations on which the two other parts are based. The two other parts consider adversity as a full-fledged component, linked to actors of a computation whose behavior is unknown or indistinct. The objective of the second part is to study robust and distributed algorithms; our approach will use algorithmic game theory and thus continuous dynamical systems. The last part concerns computer virology and needs both of the first parts. Indeed, our approach tries to use formal methods in order to prevent viruses, and also to develop algorithms over large distributed systems in order to understand the evolution of a virus attack with respect to agent policies.

4.1 Computations by Dynamical Systems

Participants: Olivier Bournez, Guillaume Bonfante, Isabelle Gnaedig, Jean-Yves Marion

The dynamical system model is a very general mathematical model. In general, a dynamical system can be defined as the action of a subgroup T of R on a space X, i.e. by a function (a flow) φ : T × X → X satisfying the following two equations:

φ(0, x) = x    (1)
φ(t, φ(s, x)) = φ(t + s, x)    (2)

It is well known that subgroups T of R are either dense in R or isomorphic to the integers. In the first case, the time is termed continuous, in the latter case, discrete. A dynamical system whose space is discrete and that evolves discretely is termed digital, otherwise it is analog.

Since flows obtained as solutions of ordinary differential equations satisfy equations (1) and (2), they correspond to specific continuous time and space dynamical systems. Although not all continuous time and space dynamical systems can be put in the form of a differential equation, ordinary differential equations are sufficiently general to cover a very wide class of such systems. In particular, if φ is continuously differentiable, then y′ = f(y), with f(y) = (d/dt) φ(t, y)|t=0, describes the dynamical system. For discrete time systems, we can assume without loss of generality that T is the integers. The analog of the differential equation dy/dt = f(y) is the recurrence equation y(t + 1) = f(y(t)).

Most of the continuous time models described above have a continuous dynamics described by differential equations, hence by continuous time and space dynamical systems. Rewriting theory can be considered as a tool for describing discrete space and time dynamical systems over the set of terms of a given signature. Models from evolutionary game theory are also particular dynamical system models.

It follows that understanding computational properties of these systems, as well as the hardness of the analysis and the verification of properties for these

systems turn out to be very closely related problems, which can be formulated uniformly as understanding the algorithmic complexity of decision problems for dynamical systems. For example, as an undecidability proof is often obtained by simulating a Turing machine, an undecidability proof for the verification of a given property often shows that the considered class of dynamical systems has at least the power of universal Turing machines. Conversely, most of the results motivated by computation theory for dynamical systems yield direct lower bounds on the hardness of their verification.

Dynamical systems in our sense are formal entities allowing to model or to compute, evolving over time in an explicit or implicit way. We would like to study in particular models of dynamical systems having a continuous behavior, and properties of dynamical systems having an adversary behavior.
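A small numerical sketch, with an arbitrarily chosen f, relating the two presentations above: the flow of the ordinary differential equation y′ = f(y), approximated here by explicit Euler steps, and the discrete-time recurrence y(t + 1) = f(y(t)). Both are dynamical systems in the sense of equations (1) and (2); for the approximated flow the semi-group property (2) holds up to numerical error.

# Continuous-time flow (Euler approximation) versus discrete-time recurrence.

def f(y):
    return -0.5 * y          # a simple contracting dynamics, chosen for illustration

def continuous_flow(y0, t, dt=1e-3):
    """Approximate phi(t, y0) for y' = f(y) by explicit Euler integration."""
    y = y0
    for _ in range(round(t / dt)):
        y += dt * f(y)
    return y

def discrete_orbit(y0, n):
    """Iterate the recurrence y(t+1) = f(y(t)) for n steps."""
    y = y0
    for _ in range(n):
        y = f(y)
    return y

# Semi-group property (2): phi(1.0, phi(2.0, y0)) should be close to phi(3.0, y0).
y0 = 4.0
print(continuous_flow(continuous_flow(y0, 2.0), 1.0))
print(continuous_flow(y0, 3.0))
print(discrete_orbit(y0, 3))    # the digital counterpart after three discrete steps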

Continuous systems computation theory. Some systems carrying out computations are intrinsically continuous. This is the case, for example, of some more or less futuristic models such as quantum models of computation, or models based on space-time curves, but also of some quite real mechanical or electronic machines. In particular, the progress of analog electronics opens the possibility of reintroducing purely analog machines. By the way, the underlying models of all the components of today's computers are analog, even if, for technological reasons and also historical choices, digital electronics imposed itself to the detriment of analog electronics.

Furthermore, even for intrinsically discrete models, as soon as the number of individuals is high, natural models become continuous. To characterize, for example, a population of processors in a given state at a given time, it is more natural to talk about proportions of individuals with a given property than to talk directly about the individuals (i.e. in terms of statistics or probabilities). One can then talk about the dynamics of these populations in a natural way (a small numerical sketch of this population view is given after the objectives below). Observe that this constitutes an abstraction of huge-sized systems, which is quite unusual in classical complexity, computability and algorithmics, but natural and abundantly used in many other sciences, like biology or physics.

Our long-term objectives are to try to answer the following questions:

1. How can we model, in a robust and satisfactory way, continuous systems or systems close to continuous systems?
2. How can we characterize the computing power, in terms of complexity and computability, of the systems obtained?
3. Is there a robust theory of computability for continuous machines?
4. Is there a robust theory of complexity for continuous machines?
5. How can we use the power of continuous machines for the resolution of digital problems?

For the middle term, our objectives are the following:

• compare the classes of algebraic models resulting from the R-recursive functions [64] and from Shannon's GPAC model [76] with recursive analysis,
• present results about the complexity of solving differential equations, allowing upper bounds on the power of some models to be extracted,
• use continuous time computation theory to discuss the computational properties of macroscopic population models.
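As announced above, here is a small numerical sketch of the macroscopic population view: instead of tracking individual agents, we follow the proportion p of a large population playing the first strategy of a symmetric two-strategy game, and let it evolve under standard replicator dynamics. The payoff matrix is an arbitrary illustrative choice (a hawk-dove-like game), not data from the project.

# Replicator dynamics for the proportion p of a population playing strategy 1.

A = [[0.0, 3.0],     # payoff of strategy 1 against strategies 1, 2
     [1.0, 2.0]]     # payoff of strategy 2 against strategies 1, 2

def replicator_step(p, dt=0.01):
    u1 = A[0][0] * p + A[0][1] * (1 - p)      # expected payoff of strategy 1
    u2 = A[1][0] * p + A[1][1] * (1 - p)      # expected payoff of strategy 2
    avg = p * u1 + (1 - p) * u2               # average payoff in the population
    return p + dt * p * (u1 - avg)            # Euler step of p' = p (u1 - avg)

p = 0.1
for _ in range(10_000):
    p = replicator_step(p)
print(round(p, 3))   # converges near 0.5, the mixed equilibrium of this game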

Analysis and verification of adversary systems. The other research direction we are interested in on dynamical systems is the study of properties of systems or programs in the context of adversary computations. We would like to offer proof and verification tools for properties of systems and programs whose behavior is unknown or indistinct, to guarantee the correct behavior of these systems or programs. We have been working for several years on the analysis of systems and the properties of programs, and we intend to carry on with our efforts in this direction, to complete the previous objectives.

We are interested in continuous and hybrid systems. In a mathematical sense, a hybrid system can be seen as a dynamical system whose transition function does not satisfy the classical regularity hypotheses, like continuity, or continuity of its derivative. The properties to be verified are often expressed as reachability properties. For example, a safety property is often equivalent to the (non-)reachability of a subset of unsafe states from an initial configuration, or to stability (with its numerous variants like asymptotic stability, local stability, mortality, etc.). Thus we will essentially focus on the verification of these properties in various classes of dynamical systems.

We are also interested in analysis methods for rewriting properties. For more than five years, we have been developing specific procedures for property proofs of rewriting, for the sake of programming, in particular with inductive techniques, already applied with success to termination under strategies [42, 43, 44, 46], to sufficient completeness [50], and to probabilistic termination [44]. The three last results take place in the context of adversary computations, since they allow proving that even a divergent program, in the sense that it does not terminate, can give the expected results. We also study the links between the different properties of programs, to coordinate and optimize the use of the tools allowing to prove them. We will continue this study, as well as the development of our inductive approach, which seems to be general enough to be extended to different aspects of adversary computations. We especially intend to continue to study the termination property, and its impact on other properties, since it seems to be central in several research directions retained by the project:

• until now it has mainly been used for deciding properties like confluence, completeness or consistency; its weakening or its absence, very frequent in real cases, throws us into the context of adversity,

• it is a tool for the analysis of resources,
• thoroughly analyzing divergence through non-termination phenomena can be an approach for modeling the continuous.

A crucial element of the safety and security of software systems is the problem of resources. We are working in the field of Implicit Computational Complexity. Interpretation-based methods like quasi-interpretations (QI) or sup-interpretations are the approach we have been developing for the last five years, see [6, 52, 9, 24, 20] and the ACI project CRISS.

• Interpretation-based methods, like QI, provide a static analysis of the complexity of programs (a small illustrative sketch is given after the objectives below). These approaches apply to functional, reactive or object programming languages.
• There are quite efficient heuristics to find the QI or SI of programs. We are currently developing the CROCUS system (following the former experimental software ICAR) to synthesize QIs.

Our middle-term objectives are the following:

• to carry on with our study of property proof coordination, and to extend our inductive approach to other properties like confluence, and to other deduction mechanisms besides rewriting, in the context of adversary computations,
• to carry on with some theoretical aspects of implicit computational complexity, more precisely on ramified recursion and light logics,
• to carry on with our efforts on quasi-interpretations; in particular, we will make every effort to build a system able to automatically infer the resources of a program,
• to analyze the work linking the termination proofs of rule-based systems to the complexity of derivation chains, for a modeling of the continuous.
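The sketch announced above: a candidate quasi-interpretation for the usual append rules, together with a naive check that it does not increase along the rules. The assignment and the sampling-based check are purely illustrative; they are neither the synthesis nor the verification procedures implemented in CROCUS.

# Quasi-interpretation sketch for the rules
#   append(nil, ys)         -> ys
#   append(cons(x, xs), ys) -> cons(x, append(xs, ys))
# Candidate assignment: [nil] = 0, [cons](a, b) = a + b + 1, [append](a, b) = a + b.
# A QI must satisfy [lhs] >= [rhs] for every valuation of the variables; the check
# below only samples valuations, it is not a proof.

from itertools import product

QI = {
    'nil':    lambda: 0,
    'cons':   lambda a, b: a + b + 1,
    'append': lambda a, b: a + b,
}

def value(term, env):
    """Evaluate the QI of a term; variables (plain strings) are looked up in `env`."""
    if isinstance(term, str):
        return env[term]
    head, *args = term
    return QI[head](*(value(a, env) for a in args))

rules = [
    (('append', ('nil',), 'ys'), 'ys'),
    (('append', ('cons', 'x', 'xs'), 'ys'), ('cons', 'x', ('append', 'xs', 'ys'))),
]

samples = range(0, 5)
ok = all(
    value(lhs, dict(zip(('x', 'xs', 'ys'), vals)))
    >= value(rhs, dict(zip(('x', 'xs', 'ys'), vals)))
    for lhs, rhs in rules
    for vals in product(samples, repeat=3)
)
print('candidate QI respected on all sampled valuations:', ok)   # True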

4.2 Robust and distributed algorithms, algorithmic game theory

Participants: Olivier Bournez, Johanne Cohen, Jean-Yves Marion

One of the problems related to distributed algorithmics corresponds to the minimization of resources (time of transit, quality of service) in problems of transmitting information (routing problems, group telecommunications) in telecommunication networks.

Each type of network gives rise to natural constraints on models. For example, a network is generally modeled by a graph. The material and physical constraints on each component of the network (routers, communication media, topology, etc.) result in different models. One natural objective is then to build algorithms to solve these types of problems on various types of models.

One can also require proposed solutions to offer certain guarantees: for example the property of self-stabilization, which expresses that the system must end in a correct state whatever its initial state; or certain guarantees of robustness: even in the presence of a small proportion of Byzantine actors, the final result will remain correct; even in the presence of rational actors with divergent interests, the final result will remain acceptable.

Algorithms of traditional distributed algorithmics were designed under the strong assumption that the interest of each actor does not differ from the interest of the group. For example, in a routing problem, classical distributed algorithms do not take into account the economic interests of the various autonomous systems, and only try to minimize criteria such as shortest distances, completely ignoring the economic consequences of decisions for the involved agents. If one wants more realistic models, and wants to take into account the way the different agents behave, one gets more complex models. However, today, one gets models on which it is rather hard to reason. For example,

• models of dynamism are missing: e.g., how to model a negotiation in a distributed auction mechanism for the access to a telecommunications service,

• only a few methods are known to guarantee that the equilibrium reached by such systems remains in some domains that could be qualified as safe or reasonable,

• there is almost no method discussing the speed of convergence, when there is convergence,

• little is known about the time and space resources necessary for techniques that guarantee correct behavior.

Thus, it is important to reconsider the algorithms of distributed algorithmics from the angle of the competing interests that the involved agents can have (adversary computation). This requires understanding well how to reason on these types of models. The long-term objectives are the following:

• characterize the associated mechanism theory. The objective is to understand which properties can be guaranteed, for a given concept of equilibrium,

• understand the limits of these theories,

• reconsider the problems of optimization of resources in telecommunication networks, taking into account the fact that each involved agent can have an interest that diverges from that of the group.

At middle-term, our objectives are the following:

• Present a model of protocol, such as inter-domain routing, taking economic aspects into account, with proven guarantees.

• Produce relevant concepts of equilibrium for these models, and relevant concepts for their dynamics.

• Produce classes of strategies that provably lead to these notions of equilibrium, and discuss the associated proof techniques.

4.3 Computer Virology

Participants: Guillaume Bonfante, Isabelle Gnaedig, Jean-Yves Marion

From an epistemological point of view, it is legitimate to wonder why there are only a few fundamental studies on computer viruses, while they are one of the important flaws in software engineering. The lack of theoretical studies perhaps explains the weakness in the anticipation of computer diseases and the difficulty of improving defenses. For these reasons, we do think that it is worth exploring fundamental aspects.

Following the introductory part, a virus is a program which is the solution of a fixed point equation. Solutions are provided by variants of Kleene's recursion theorem. In [8], we have tried to show how to build "highly" metamorphic virus compilers. We think that this line of research should be pursued in order (i) to define what a computer virus is in different programming languages and settings, and (ii) to take into consideration resources like time and space. The next objective should be to define security policies and the mechanisms that implement them in order to prevent attacks. This goal requires a good understanding of points (i) and (ii) above. We think that formal methods like rewriting, type theory, logic, or formal languages should help to define the notion of a formal immune system, which defines a certified protection, that we try to develop. Lastly, we have constructed tools to analyze system resources [11], which might be useful to avoid attacks that take control of a system through memory overflows.

We are also working on virus detection. We have already explained the current methods. We are developing a detection algorithm based on an abstraction of the control flow graph of programs. The idea is to compare the control flow graphs of malware with the one of a target program. The experiments that we have made are quite successful. From a theoretical point of view, this approach leads us to work with tree regular languages whose symbols have non-constant arity.

In the last direction, we try to change the scale of our point of view. Viruses spread through networks, which might be seen as very large distributed systems. A virus epidemic could then be represented as a game where agents have different interests. So, the question is whether we can define reasonable policies in such a setting which integrate the interest of each partner. Related to this, we could also look at biological immunology.

The long-term objectives are the following:

• Define security policies and mechanisms against malware by analyzing data-flow by means of type systems.

• Consider defenses based on an analogy with biological immune systems.

At middle-term, our objectives are the following:

• To study the foundations of computer virology.

• To design detection methods based on tree regular languages, by flow graph matching (a crude illustrative sketch is given below).
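The sketch announced in the objective above: a deliberately crude illustration of flow-graph matching for detection, where a program's control flow graph is abstracted into nodes labelled by instruction kinds, and we ask whether the (small) graph extracted from a known malware embeds into the graph of a target program. The graphs and labels below are invented for illustration; the approach actually developed in the project relies on tree regular languages rather than on this brute-force search.

# Crude flow-graph matching: look for an injective, label- and edge-preserving
# embedding of a small "signature" graph into a target control flow graph.

from itertools import permutations

def embeds(pattern, target):
    """Is there an injective map pattern -> target preserving labels and edges?"""
    p_nodes, t_nodes = list(pattern), list(target)
    for image in permutations(t_nodes, len(p_nodes)):
        mapping = dict(zip(p_nodes, image))
        labels_ok = all(pattern[n][0] == target[mapping[n]][0] for n in p_nodes)
        edges_ok = all(mapping[s] in target[mapping[n]][1]
                       for n in p_nodes for s in pattern[n][1])
        if labels_ok and edges_ok:
            return True
    return False

# node -> (label, set of successor nodes); labels are invented instruction kinds
malware_sig = {
    'a': ('open', {'b'}),
    'b': ('write', {'c'}),
    'c': ('exec', set()),
}
target_cfg = {
    0: ('open', {1, 2}),
    1: ('write', {3}),
    2: ('read', set()),
    3: ('exec', {0}),
}
print(embeds(malware_sig, target_cfg))   # True: a->0, b->1, c->3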

This study of computer virology leads us to propose and construct a "high security lab" in which experiments can be carried out in compliance with French law. This "high security lab" project is one of the main projects of the CPER 2007-2013.

5 Applications

Our project aims at building the theoretical approach necessary for applications linked to computer security. We have chosen to present applications either relying on current contractual research, or carrying on from completed contractual research.

• The ARA SOGEA and the CPER SSS operation TATA. The ARA SOGEA aims to contribute to algorithmic game theory, by studying problems linked to adversity, to game dynamics and to complexity problems linked to games. The SSS operation TATA is a starting collaboration with economists on algorithmic game theory.

• The ARA VIRUS. Computer virology will focus our efforts in the context of the ARA SSIA Virus, in collaboration with Eric Filiol (ESAT). Today, the definition of defense policies against viral attacks is an application emerging from this recent research subject. The CPER SSS "High security laboratory" aims to build the conditions, with respect to the law, for making experiments on attacks and defences.

6 Software

CARIBOO

In the context of our study of rule-based program proof and validation, we develop and distribute CARIBOO (http://protheo.loria.fr/softwares/cariboo/), an environment dedicated to specific termination proofs under strategies like the innermost, the outermost or local strategies. Written in Elan and Java, it has a reflexive aspect, since Elan is itself a rule-based language. CARIBOO was partially developed in the Toundra QSL project, and reinforced in the framework of the Modulogic ACI [1, 2].

CROCUS

The CROCUS software aims at synthesizing quasi-interpretations. It takes programs as input and returns the corresponding quasi-interpretation. In doing so, it can guarantee some bounds on the memory used along computations by the input program. The currently analyzed programs are written in a subset of the CAML language, more precisely a first-order functional subset of CAML.

7 Expected results and success criteria

We are conscious of the ambition of the proposed project, all the more so because some research subjects are new for us. Several of us have thus changed disciplines, which does not come without risk. But taking risks is stimulating and motivates our activities. The usual success criteria apply to this project, and can be measured by the number of publications, application developments, or patents. However, for the short term, it seems to us that the first challenge is to make the gears of the described research subjects mesh so as to mutually enrich them. It is clear that the network-inspired models have to do with software infections; in the same way, the rewriting-inspired tools have applications in the study of the described computation models; also, taking the continuous into account in computation models has to do with adversary computations. We have emphasized four general objectives for our project:

• to propose a robust computation theory for continuous systems,

• to propose methods against viruses,

• to understand the limits of distributed algorithms with possibly antagonistic interests,

• to realize property proofs for robust computation in antagonistic environments.

To reach them, we will rely on the research subjects contained in the project which we master well, and on others which we are just beginning to address, like virology. For this last subject, a success criterion for us would be to combine the efforts of the research community on the topic.

References

[1] L. Adleman. An abstract theory of computer viruses. In Advances in Cryptology — CRYPTO'88, volume 403 of Lecture Notes in Computer Science, 1988.

[2] L. M. Adleman. Molecular computation of solutions to combinatorial problems. Science, 266:1021–1024, November 11, 1994.

[3] Elliot Anshelevich, Anirban Dasgupta, Eva Tardos, and Tom Wexler. Near-optimal network design with selfish agents. In Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, pages 511–520. ACM Press, 2003.

[4] Eugene Asarin, Oded Maler, and Amir Pnueli. Reachability analysis of dynamical systems having piecewise-constant derivatives. Theoretical Computer Science, 138(1):35–65, February 1995.

[5] Franz Baader and Tobias Nipkow. Term rewriting and all that. Cambridge University Press, New York, NY, USA, 1998.

[6] L. Blum, M. Shub, and S. Smale. On a theory of computation and complexity over the real numbers; NP completeness, recursive functions and universal machines. Bulletin of the American Mathematical Society, 21(1):1–46, July 1989.

[7] Lenore Blum, Felipe Cucker, Michael Shub, and Steve Smale. Complexity and Real Computation. Springer-Verlag, 1998.

[8] G. Bonfante, M. Kaczmarek, and J.-Y. Marion. A classification of viruses through recursion theorems. In CiE, volume 4583 of LNCS. Springer, 2007.

[9] G. Bonfante, J.-Y. Marion, and J.-Y. Moyen. On termination methods with space bound certifications. In Andrei Ershov Fourth International Conference "Perspectives of System Informatics", Lecture Notes in Computer Science. Springer, 2001.

[10] Guillaume Bonfante, Matthieu Kaczmarek, and Jean-Yves Marion. On abstract computer virology: from a recursion-theoretic perspective. Journal of computer virology, 1(3-4), 2006.

[11] Guillaume Bonfante, Jean-Yves Marion, and Jean-Yves Moyen. On complexity analysis by quasi-interpretation. In 2nd Appsem II workshop - APPSEM'04, pages 85–95, April 2004.

[12] Guillaume Bonfante, Jean-Yves Marion, and Jean-Yves Moyen. Quasi-interpretations, a way to control resources. Theoretical Computer Science (revision).

[13] P. Borovanský, C. Kirchner, H. Kirchner, P.-E. Moreau, and Ch. Ringeissen. An Overview of ELAN. In C. Kirchner and H. Kirchner, editors, Proc. Second Intl. Workshop on Rewriting Logic and its Applications, Electronic Notes in Theoretical Computer Science, Pont-à-Mousson (France), September 1998. Elsevier Science Publishers B. V. (North-Holland).

[14] Olivier Bournez. Modèles Continus. Calculs. Algorithmique Distribuée. Habilitation à diriger les recherches, Institut National Polytechnique de Lorraine, 7 décembre 2006.

[15] Olivier Bournez and Manuel Campagnolo. New Computational Paradigms, chapter A Survey on Continuous Time Computation. Springer-Verlag, 2007. To appear.

[16] Olivier Bournez, Manuel Lameiras Campagnolo, Daniel S. Graça, and Emmanuel Hainry. Polynomial differential equations compute all real computable functions on computable compact intervals. Journal of Complexity, 2007. To appear.

[17] M.G.J. van den Brand, A. van Deursen, J. Heering, H.A. de Jong, M. de Jonge, T. Kuipers, P. Klint, L. Moonen, P.A. Olivier, J. Scheerder, J.J. Vinju, E. Visser, and J. Visser. The ASF+SDF Meta-Environment: a Component-Based Language Development Environment. In R. Wilhelm, editor, Compiler Construction (CC '01), volume 2027 of Lecture Notes in Computer Science, pages 365–370. Springer, 2001.

[18] M. S. Branicky. Universal computation and other capabilities of hybrid and continuous dynamical systems. Theoretical Computer Science, 138(1):67–100, 6 February 1995.

[19] Patrick Briest, Piotr Krysta, and Berthold Vöcking. Approximation techniques for utilitarian mechanism design. In Harold N. Gabow and Ronald Fagin, editors, Proceedings of the 37th Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, May 22-24, 2005, pages 39–48. ACM, 2005.

[20] V. Bush. The differential analyser. Journal of the Franklin Institute, 212(4):447–488, 1931.

[21] Manuel Clavel, Francisco Durán, Steven Eker, Patrick Lincoln, Narciso Martí-Oliet, José Meseguer, and Carolyn Talcott. The Maude 2.0 system. In Robert Nieuwenhuis, editor, Proceedings of the 14th International Conference on Rewriting Techniques and Applications, volume 2706 of Lecture Notes in Computer Science, pages 76–87. Springer, June 2003.

[22] F. Cohen. Computer Viruses. PhD thesis, University of Southern California, January 1986.

[23] H. Comon. Inductionless induction. In A. Robinson and A. Voronkov, editors, Handbook of Automated Reasoning, volume I, chapter 14, pages 913–962. Elsevier Science, 2001.

[24] José R. Correa, Andreas S. Schulz, and Nicolás E. Stier Moses. Selfish routing in capacitated networks. Math. Oper. Res., 29(4):961–976, 2004.

[25] Doug Coward. Doug coward’s analog computer museum, 2006. http://dcoward.best.vwh.net/analog/.

[26] N. Dershowitz and J.-P. Jouannaud. Handbook of Theoretical Computer Science, volume B, chapter 6: Rewrite Systems, pages 244–320. Elsevier Science Publishers B. V. (North-Holland), 1990. Also as: Research report 478, LRI.

[27] N. Dershowitz and D.A. Plaisted. Rewriting. In A. Robinson and A. Voronkov, editors, Handbook of Automated Reasoning, volume I, chapter 9, pages 535–610. Elsevier Science, 2001.

[28] D. Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society (London), Serie A, 400:97–117, 1985.

[29] Jérôme Durand-Lose. Abstract geometrical computation: Turing-computing ability and undecidability. In S. Barry Cooper, Benedikt Löwe, and Leen Torenvliet, editors, New Computational Paradigms, First Conference on Computability in Europe, CiE 2005, Amsterdam, The Netherlands, June 8-12, 2005, Proceedings, volume 3526 of Lecture Notes in Computer Science, pages 106–116. Springer, 2005.

[30] Joan Feigenbaum, Christos H. Papadimitriou, and Scott Shenker. Sharing the cost of multicast transmissions. Journal of Computer and System Sciences, 63(1):21–41, 2001.

[31] E. Filiol. Les virus informatiques: théorie, pratique et applications. Springer-Verlag France, 2004. Translation [32].

[32] E. Filiol. Computer Viruses: from Theory to Applications. Springer-Verlag, 2005.

[33] E. Filiol. Malware pattern scanning schemes secure against black-box analysis. Journal in Computer Virology, 2(1):35–50, 2006.

[34] E. Filiol. Techniques virales avancées. Springer, 2007.

[35] E. Filiol, G. Jacob, and M. Le Liard. Evaluation methodology and theoretical model for antiviral behavioural detection strategies. Journal in Computer Virology, 3(1):23–37, 2007.

[36] O. Fissore, I. Gnaedig, and H. Kirchner. Termination of rewriting with local strategies. In M. P. Bonacina and B. Gramlich, editors, Selected papers of the 4th International Workshop on Strategies in Automated Deduction, volume 58 of Electronic Notes in Theoretical Computer Science. Elsevier Science Publishers B. V. (North-Holland), 2001.

[37] O. Fissore, I. Gnaedig, and H. Kirchner. CARIBOO: An induction-based proof tool for termination with strategies. In Proceedings of the Fourth International Conference on Principles and Practice of Declarative Programming, pages 62–73, Pittsburgh (USA), October 2002. ACM Press.

[38] O. Fissore, I. Gnaedig, and H. Kirchner. Outermost ground termination. In Proceedings of the Fourth International Workshop on Rewriting Logic and Its Applications, volume 71 of Electronic Notes in Theoretical Computer Science, Pisa, Italy, September 2002. Elsevier Science Publishers B. V. (North-Holland).

[39] O. Fissore, I. Gnaedig, and H. Kirchner. Cariboo, a termination proof tool for rewriting-based programming languages with strategies, Version 1.0. Free GPL Licence, APP registration IDDN.FR.001.170013.000.R.P.2005.000.10600, August 2004.

[40] O. Fissore, I. Gnaedig, and H. Kirchner. A proof of weak termination providing the right way to terminate. In First International Colloquium on Theoretical Aspects of Computing, volume 3407 of Lecture Notes in Computer Science, pages 356–371, Guiyang, China, September 2004. Springer-Verlag.

[41] O. Fissore, I. Gnaedig, H. Kirchner, and L. Moussa. Cariboo, a termination proof tool for rewriting-based programming languages with strategies, Version 1.1. Free GPL Licence, APP registration IDDN.FR.001.170013.000.S.P.2005.000.10600, December 2005. Available at http://protheo.loria.fr/softwares/cariboo/.

[42] K. Futatsugi and A. Nakagawa. An overview of CAFE specification environment – an algebraic approach for creating, verifying, and maintaining formal specifications over networks. In Proceedings of the 1st IEEE Int. Conference on Formal Engineering Methods, 1997.

[43] I. Gnaedig and H. Kirchner. Computing Constructor Forms with Non Terminating Rewrite Programs. In Proceedings of the Eighth ACM-SIGPLAN International Symposium on Principles and Practice of Declarative Programming, pages 121–132, Venice, Italy, July 2006. ACM Press.

[44] Isabelle Gnaedig. Induction for Positive Almost Sure Termination. In Proceedings of the Ninth ACM-SIGPLAN International Symposium on Principles and Practice of Declarative Programming, Wroclaw, Poland, July 2007. ACM Press.

[45] Daniel Graça, Manuel Campagnolo, and Jorge Buescu. Robust simulations of Turing machines with analytic maps and flows. In B. Cooper, B. Loewe, and L. Torenvliet, editors, Proceedings of CiE'05, New Computational Paradigms, volume 3526 of Lecture Notes in Computer Science, pages 169–179. Springer-Verlag, 2005.

[46] Daniel S. Graça. Some recent developments on Shannon's general purpose analog computer. Mathematical Logic Quarterly, 50(4–5):473–485, 2004.

[47] Daniel S. Graça and José Félix Costa. Analog computers and recursive functions over the reals. Journal of Complexity, 19(5):644–664, 2003.

[48] Serge Grigorieff and Maurice Margenstern. Register cellular automata in the hyperbolic plane. Fundamenta Informaticae, 1(61):19–27, 2004.

[49] Jozef Gruska. Foundations of computing. International Thomson Publishing, 1997.

[50] A. Grzegorczyk. On the definitions of computable real continuous functions. Fundamenta Mathematicae, 44:61–71, 1957.

[51] T. Head. Formal language theory and DNA: An analysis of the generative capacity of specific recombinant behaviors. Bulletin of Mathematical Biology, 49:737–759, 1987.

[52] J. Hofbauer and K. Sigmund. Evolutionary game dynamics. Bulletin of the American Mathematical Society, 4:479–519, 2003.

[53] Mark L. Hogarth. Does general relativity allow an observer to view an eternity in a finite time? Foundations of Physics Letters, 5:173–181, 1992.

[54] Tien D. Kieu. Hypercomputation with quantum adiabatic processes. Theoretical Computer Science, 317(1-3):93–104, 2004.

[55] Steven C. Kleene. Introduction to Metamathematics. Van Nostrand, New York, 1952.

[56] Daniel Lacombe. Extension de la notion de fonction récursive aux fonctions d'une ou plusieurs variables réelles III. Comptes Rendus de l'Académie des Sciences Paris, 241:151–153, 1955.

[57] R. J. Lipton. DNA solution of hard computational problems. Science, pages 542–545, 1994.

[58] M.A. Ludwig. The Giant Black Book of Computer Viruses. American Eagle Publications, 1998.

[59] J.-Y. Marion. Complexité implicite des calculs, de la théorie à la pratique. Habilitation à diriger les recherches, Université Nancy 2, 2000.

[60] J.-Y. Marion and J.-Y. Moyen. Efficient first order functional program interpreter with time bound certifications. In Michel Parigot and Andrei Voronkov, editors, Logic for Programming and Automated Reasoning, 7th International Conference, LPAR 2000, Reunion Island, France, volume 1955 of Lecture Notes in Computer Science, pages 25–42. Springer, Nov 2000.

[61] Jean-Yves Marion. Analysing the implicit complexity of programs. Information and Computation, 183(1):2–18, 2003.

[62] J. Maynard-Smith. Evolution and the Theory of Games. Cambridge University Press, Cambridge, 1981.

[63] R. McKelvey and A. McLennan. Computation of equilibria in finite games. In Handbook of Computational Economics. Elsevier, 1996.

[64] Christopher Moore. Recursion theory on the reals and continuous-time computation. Theoretical Computer Science, 162(1):23–44, 5 August 1996.

[65] Pierre-Etienne Moreau, Christophe Ringeissen, and Marian Vittek. A Pattern Matching Compiler for Multiple Target Languages. In G. Hedin, editor, 12th Conference on Compiler Construction, Warsaw (Poland), volume 2622 of LNCS, pages 61–76. Springer-Verlag, May 2003.

[66] B. Morin and L. Mé. Intrusion detection and virology: an analysis of differences, similarities and complementariness. Journal in Computer Virology, 3(1):33–49, 2007.

[67] John F. Nash. Equilibrium points in n-person games. Proc. of the National Academy of Sciences, 36:48–49, 1950.

[68] Noam Nisan and Amir Ronen. Algorithmic mechanism design (extended abstract). In Proceedings of the thirty-first annual ACM symposium on Theory of computing, pages 129–140. ACM Press, 1999.

[69] Pekka Orponen. Computational complexity of neural networks: a survey. Nordic Journal of Computing, 1(1):94–110, Spring 1994.

[70] Pekka Orponen. A survey of continuous-time computation theory. In Advances in Algorithms, Languages, and Complexity, pages 209–224. Kluwer Academic Publishers, 1997.

[71] Osborne and Rubinstein. A Course in Game Theory. MIT Press, 1994.

[72] Christos Papadimitriou. Algorithms, games, and the Internet. In ACM, editor, Proceedings of the 33rd Annual ACM Symposium on Theory of Computing: Hersonissos, Crete, Greece, July 6–8, 2001, pages 749–753, New York, NY, USA, 2001. ACM Press.

[73] G. Păun. Membrane Computing. An Introduction. Springer-Verlag, Berlin, 2002.

[74] Bruno Poizat. Les petits cailloux. Aléas, 1995.

[75] A. Robinson and A. Voronkov, editors. Handbook of Automated Reasoning, volume 1. North-Holland, 2001.

[76] C. E. Shannon. Mathematical theory of the differential analyser. Journal of Mathematics and Physics MIT, 20:337–354, 1941.

[77] P. W. Shor. Algorithms for quantum computation: Discrete logarithms and factoring. In Shafi Goldwasser, editor, Proceedings of the 35th Annual Symposium on Foundations of Computer Science, pages 124–134, Los Alamitos, CA, USA, November 1994. IEEE Computer Society Press.

[78] P. Szor. The Art of Computer Virus Research and Defense. Addison-Wesley Professional, 2005.

[79] Terese. Term Rewriting Systems. Number 55 in Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 2003.

[80] K. Thompson. Reflections on trusting trust. Communications of the ACM, 27:761–763, August 1984. Also appears in ACM Turing Award Lectures: The First Twenty Years 1965-1985.

[81] A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42(2):230–265, 1936.

[82] Anastasios Vergis, Kenneth Steiglitz, and Bradley Dickinson. The complexity of analog computation. Mathematics and Computers in Simulation, 28(2):91–113, 1986.

[83] E. Visser. Stratego: A Language for Program Transformation based on Rewriting Strategies. System Description for Stratego 0.5. In Aart Middeldorp, editor, Proceedings of the 12th International Conference on Rewriting Techniques and Applications, volume 2051 of Lecture Notes in Computer Science, pages 357–361. Springer, 2001.

[84] J. von Neumann. Theory of Self-Reproducing Automata. University of Illinois Press, Urbana, Illinois, 1966. Edited and completed by A. W. Burks.

[85] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, Princeton, New Jersey, 1st edition, 1944.

[86] Jörgen W. Weibull. Evolutionary Game Theory. The MIT Press, 1995.

[87] Damien Woods and Thomas J. Naughton. An optical model of computation. Theoretical Computer Science, 334(1-3):227–258, 2005.

8 Collaborations

8.1 At INRIA

• Calligramme - INRIA Lorraine: As several people involved in this project come from this group, we still have some relations. The objective of project Calligramme is the development of tools and methods stemming from proof theory, especially linear logic, with a strong application in computational linguistics. We share methods but our research goals and applications are different. In particular, we focus on complexity and adverse computations, which are no longer a Calligramme subject.

• Protheo - INRIA Lorraine: As several people involved in this project come from this group, we still have some relations. The Protheo project aims at designing and implementing tools for program specification and verification. We share methods but our research goals and applications are different. Indeed, we focus on program analysis in the context of adversary computations, and also on new models of computation.

• Grand-Large - FUTURS: Several people of the Grand Large INRIA project are involved in the SOGEA project that we are leading. Project GRAND LARGE works in the framework of large scale computing and the Globalization of Computer Resources and Data (GRID) initiative. Our point of view is more fundamental and we focus on complexity, on game theory, and on models of computation.

8.2 National Collaborations

• LITA - Metz University: We have regular research discussions with Maurice Margenstern's team in this lab. Part of the LITA lab focuses on properties of molecular models of computation (DNA models), on tiling problems, and on cellular automata.

• LIPN - Université Paris 13: Jean-Yves Marion and Guillaume Bonfante have close relations with P. Baillot, V. Mogbil, and P. de Naurois about the control of resources. Their work involves the light fragment of linear logic.

• Laboratoire de virologie et de cryptologie - École Supérieure et d'Application des Transmissions: We have started a collaboration with Eric Filiol.

• Laboratoire PRiSM - Université de Versailles, St-Quentin en Yvelines, Equipe ALCAAP: We collaborate with several members of this lab, in particular since they are involved in the SOGEA project. Their focus is on graph algorithmics and combinatorial analysis.

• LRI, Paris XI, Orsay: We collaborate with several members of this lab, in particular since they are involved in the SOGEA project. We collaborate

with people from the "Algorithms and Complexity" group and with the "Parallelism" group.

• LIRMM, Montpellier: We collaborate with several members of this lab, in particular with Rodolphe Giroudeau and Jean-Claude Konig. Our collaborations are related to approximation algorithms for scheduling problems.

• Paris VII: We are working with several people, such as R. Amadio on resource analysis and S. Grigorieff on various topics around computational models and complexity.

8.3 International Collaborations

• Computability in Europe Network. Several members of this project are members of this network. Olivier Bournez is leading the French node. It focuses on complexity and computability aspects, and has recently created a conference series, with conferences planned until 2010.

• APPSEM II Network. We collaborate with this network. Its objective is to promote research on semantics-oriented applications of programming languages. Jean-Yves Marion is the LORIA leader of this network.

• ICC/LCC group. Jean-Yves Marion is a member of the steering committee of the group "Logic and Computational Complexity". Jean-Yves Marion and Guillaume Bonfante collaborate with people in this group: D. Leivant (USA), J. Royer (USA), N. Jones (Denmark-DIKU), M. Hofmann (Germany-Munich), Niggl (Germany-Ilmenau), Terui (Japan).

• City University, Hong Kong. Paulin de Naurois defended a PhD thesis co-supervised by Olivier Bournez, Jean-Yves Marion and Felipe Cucker at City University.

• Portugal, Integrated Action Program Pessoa

– Olivier Bournez is leading a PAI with the Instituto Superior Técnico in Lisbon. This program concerns the investigation of the complexity and computability of continuous-time models. Daniel Graça, a PhD student there, spent more than one year with us in Nancy.

– Jean-Yves Marion is leading a PAI with Lisbon University. This program concerns characterizations of parallel complexity classes.

• Politecnico de Turin : We collaborate with several people in Turin, in particular with Simona Ronchi della Rocca. Jean-Yves Marion co-supervises M. Gaboardi's PhD thesis. Jean-Yves Marion is the LORIA leader of the Franco-Italian project "Geometry of Computations".

8.4 Industrial collaborations

We have regular contact with industrial partners. For instance, France Telecom and CRIL technologie are partners of the RNTL project. Two PhDs have also been carried out in collaboration with industrial partners. Florent Garnier's PhD, co-directed by Olivier Bournez, was funded by France Telecom through the RNTL project. Liliana Ibanescu's PhD, also co-directed by Olivier Bournez, was realized in partnership with PSA. Concerning the verification of hybrid systems (with LIAFA and Verimag) and network algorithms, we have informal industrial collaborations.

8.5 Participation in national projects

• ACI Modulogic : Modulogic is an ACISI project whose objective is to build an integrated toolbox for asserted software. This toolbox allows for writing modules built on declarations, definitions, statements and proofs. Declarations can be refined into definitions and statements into proofs, by progressively migrating from the specification to the implementation, with inheritance and redefinition mechanisms, and by instantiating parameters. Isabelle Gnaedig is the head of the project for Nancy.

• ACI "sécurité" Criss : The objective of this project is to analyze the resources of reactive programs. The main theoretical notion for such analysis is based on quasi-interpretations. This project is related to the former IST project MRG (Mobile Resource Guarantees). Jean-Yves Marion is the local head of this project and Guillaume Bonfante participates in it.

• ACI "nouvelles interfaces des mathématiques" Geocal : The objective of Geocal is to study links between logic and the different fields of computer science. In particular, we focus on logic and complexity. Jean-Yves Marion is the local head of this project and Guillaume Bonfante participates in it.

• Pôle Rescom - GDR ASR : This national research group focuses on algorithmics for telecommunication networks. We are involved in it, and Johanne Cohen has participated in its annual meetings over the last few years.

• Network "Complexité, Modèles Finis, Bases de Données" of GDR IM : Jean-Yves Marion, Guillaume Bonfante, Olivier Bournez and Emmanuel Hainry are members of this network. It mainly focuses on complexity theory in a broad sense. Olivier Bournez and Emmanuel Hainry participate in its meetings and presented talks at the annual meetings of 2004, 2005 and 2006. Olivier Bournez and Emmanuel Hainry co-organized the 2007 annual meeting with Didier Galmiche. This network is a working group of the GDR Informatique Mathématique.

• Group LAC of GDR IM : We all participate in this GDR group.

• Réseau RNTL "Averroes" : We participate in the RNTL project Averroes, whose objective is the analysis and verification of the reliability of embedded systems, in particular concerning their quantitative and functional properties.

• ARA Sogea : Olivier Bournez is the national scientific coordinator of the ARA Sogea (Security Of Games Equilibria and Distributed Algorithms). More details on this action can be found on http://sogea.loria.fr. Johanne Cohen and Olivier Bournez are members of this ARA. Several other national teams, in Paris II, Paris XI and Versailles University, are involved in this project.

• ARA Virus : Jean-Yves Marion is the head of the ARA Virus, whose goal is to establish a theory of computer viruses and to build secure protection systems against them. Guillaume Bonfante also participates in the project.

9 Positioning Within the Scientific Community

9.1 Models of Computation, Computability, Complexity

On a worldwide scale, many research teams focus on questions related to computability and complexity theory. For example, we are involved in the Computability in Europe (CiE) Network, which is a network of people interested in computability theory. In France, several research teams are explicitly concerned with complexity theory, for example in LRI at Orsay, in Greyc in Caen, in LIP in Lyon, in LITA in Metz, and in LIPN in Paris XIII. Compared to classical approaches, we mainly focus on non-classical models of computation, in particular on models of computation over continuous domains, models inspired by massive parallelism, or models inspired by viruses. In that spirit, our closest competitor in France is the MC2 team in Lyon, which focuses on models of computation. However, they mainly focus on cellular automata, on quantum computation, or on the Blum-Shub-Smale and Valiant models. Our models are distinct. Concerning continuous-time models, at the worldwide level, the models on which we focus are mainly studied by people involved in the CiE network. In particular, several people from Lisbon, with whom we collaborate closely, focus on continuous-time models of computation. The new CIM research team at ENS Paris also studies continuous dynamical systems and their properties, with a distinct point of view. Concerning the models inspired by networks, the closest models considered in the literature are mainly studied by people from Yale University in the USA with the population protocol model. More classical models for massive parallelism include cellular automata. Several research teams worldwide focus explicitly on cellular automata.

9.2 Robust and adversary algorithmics

Adverse Computations The term adverse computation is a terminology of our own. One classical approach to modeling competition is game theory. Game theory was invented by mathematicians and is mainly used today by economists and biologists. As such, a huge number of research teams in these fields work on questions related to game theory. Algorithmic game theory differs from game theory in that algorithmic aspects are taken into account. Algorithmic game theory, in full development for about seven years, is mainly studied by people working on complexity aspects. Recent years have seen the emergence of its applications in various research fields, including algorithmics for networks. Compared to the most common approach, we focus on dynamic aspects. Only a few papers have been devoted to this subject in the computer science and algorithmics communities. This is one of the research goals of the European DELIS project, and of the SOGEA project. The goal of DELIS is to develop methods, techniques and tools to cope with challenges imposed by the size and dynamics of today's and

especially future information systems, in an interdisciplinary effort of Computer Science, Physics, Biology, and Economics. Compared to DELIS, we focus on algorithmic aspects, and on computation-theoretic aspects with models coming from classical and continuous computation theory. Another approach is given by computer virology, whose positioning is explained below.

Distributed Algorithms At a worldwide level, several research teams focus on problems related to algorithmics for networks. At the French level, we can mention research teams in LIRMM in Montpellier, LIP and INSA in Lyon, LRI in Orsay, PRiSM in Versailles, LIAFA in Paris VII, IBISC in Evry, and LABRI in Bordeaux. All of these teams are involved in the ResCom network, in which we participate. Classical approaches are mainly based on properties of graphs and protocols. The current project proposes to use game-theoretic considerations in the construction of algorithms for networks. At the French level, people that also consider similar arguments include some members of the IBISC lab in Evry, mainly focusing on scheduling problems, the LIA in Avignon, mainly focusing on problems related to pricing, PRiSM, with whom we collaborate, and IRISA in Rennes. Most of these people are involved in the EURONGI European project. We are also involved in the French ARA SSIA SOGEA project with some of them. The SOGEA project focuses on dynamic aspects and complexity questions, in relation with applications to telecommunication and sensor networks. At a worldwide level, the closest project is the European DELIS project, in which no French computer scientist is involved.

Analysis and Verification of Adversary Systems Our work related to the analysis and proof of formal systems mainly focuses on models inspired by rewriting, and on the verification of hybrid and continuous systems. Rewriting theory has been developed over the last 30 years by many research teams worldwide. One of our specificities is to focus on rewriting controlled by strategies, and in particular on the proof of properties of rewriting systems under strategies. We have developed inductive approaches for that, in close relation with some people in the PROTHEO research group, with whom we still collaborate in an active way. In the international community, we can mention some teams in Aachen, Germany, in Valencia, Spain, and in Urbana, Illinois, which are also working on rewriting proof tools for programming. Our work related to the verification of hybrid and continuous systems is close to problems under study in the model-checking community. Many research teams worldwide focus on verification or on the control theory of hybrid systems, as can be measured by the Hybrid Systems: Computation and Control or the Conference on Decision and Control conference series. In particular, in France, we can mention some teams in LIAFA in Paris VII, in LSV in Cachan, in IRISA in Rennes, in VERIMAG in Grenoble, and LORIA in Nancy.

Compared to more common approaches, we focus on complexity- or computability-theoretic questions, and we collaborate with several people of these teams on such questions. On implicit computational complexity, there are several teams with slightly different directions. Martin Hofmann in Munich develops type systems. Karl-Heinz Niggl uses matrix algebras in Ilmenau. There are several groups which work on light linear logic: P. Baillot (Paris 13), S. Ronchi della Rocca (U. Turin), . . . In North America, Stephen Cook, Daniel Leivant, Harry Mairson and Jim Royer are well-known researchers who are very active in this field.

Computer Virology Surprisingly, there is very little academic work on this subject. Of course, anti-virus companies presumably do research in this area, as do military labs all over the world. However, only about twenty available papers consider computer virology from a scientific point of view. Most of the papers are descriptions of viruses or worms published on blogs. Our research is conducted in close collaboration with Eric Filiol (ESAT), and we are starting to collaborate with the Universities of Luxembourg, Athens and Milan. Moreover, we try to federate efforts through the TCV conference initiative.

Project's bibliography

PhD and "Habilitations à diriger des recherches" From 2000

[1] Jérôme Besombes. Un modèle algorithmique de la généralisation de structures dans le processus d'acquisition du langage. PhD thesis, Université Nancy 1, Dec 2003.

[2] Guillaume Bonfante. Construction d'ordres, analyse de la complexité. PhD thesis, INPL, 2000.

[3] Paulin Jacobé de Naurois. Résultats de Complétude et Caractérisations Syntaxiques de Classes de Complexité sur des Structures Arbitraires. PhD thesis, INPL and City University (Hong Kong), Dec 2004. Joint thesis.

[4] Olivier Fissore. Terminaison de la réécriture sous stratégies. Thèse d'université, UHP Nancy I, Dec 2003.

[5] Liliana Ibanescu. Programmation par règles et stratégies pour la génération automatique de mécanismes de combustion d'hydrocarbures polycycliques. Thèse de Doctorat d'Université, Institut National Polytechnique de Lorraine, Nancy, France, June 2004.

[6] J.-Y. Marion. Complexité implicite des calculs, de la théorie à la pratique. Habilitation à diriger les recherches, Université Nancy 2, 2000.

[7] Jean-Yves Moyen. Analyse de la complexité et transformation de programmes. PhD thesis, Université Nancy 2, 2003.

Books, scientific edition From 2000

[1] Jean-Yves Marion. Editorial: Implicit computational complexity (ICC). Theoretical Computer Science, 318(1-2):1, Jun 2004. Invited scientific editor.

Articles in international journals From 2000

[1] Lali Barrière, Johanne Cohen, and Margarita Mitjana. Gossiping in chordal rings under the line model. Theoretical Computer Science, 264(1):53–64, 2001.

[2] Dominique Barth, Pascal Berthomé, and Johanne Cohen. The Eulerian stretch of a digraph and the ending guarantee of a convergence routing. Journal of Interconnection Networks (JOIN), 5(2):93–109, Jun 2004.

[3] Dominique Barth, Johanne Cohen, and Corentin Durbach. Multicast tree allocation algorithms for distributed interactive simulation. International Journal of High Performance Computing and Networking (IJHPCN), 4(3/4):137–151, 2006.

[4] Dominique Barth, Johanne Cohen, and Touafik Faik. On the b-continuity property of graphs. Discrete Applied Mathematics, 2006.

[5] Dominique Barth, Johanne Cohen, Lynda Gastal, Thierry Mautor, and Stéphane Rousseau. A comparison of splittable and non-splittable packet models in an optical ring network. Journal of Optical Networking (JON), 2006.

[6] Jérôme Besombes and Jean-Yves Marion. Learning tree languages from positive examples and membership queries. Theoretical Computer Science, 2006.

[7] Vincent Blondel, Olivier Bournez, Pascal Koiran, Christos Papadimitriou, and John Tsitsiklis. Deciding stability and mortality of piecewise affine dynamical systems. Theoretical Computer Science A, 255(1–2):687–696, 2001.

[8] Vincent D. Blondel, Olivier Bournez, Pascal Koiran, and John Tsitsiklis. The stability of saturated linear dynamical systems is undecidable. Journal of Computer and System Sciences, 62(3):442–462, May 2001.

[9] G. Bonfante, A. Cichon, J.-Y. Marion, and H. Touzet. Algorithms with polynomial interpretation termination proof. Journal of Functional Programming, 11(1):33–53, 2001.

[10] Guillaume Bonfante, Matthieu Kaczmarek, and Jean-Yves Marion. On abstract computer virology: from a recursion-theoretic perspective. Journal of Computer Virology, 1(3-4), 2006.

[11] Guillaume Bonfante, Jean-Yves Marion, and Jean-Yves Moyen. Quasi-interpretations, a way to control resources. Theoretical Computer Science, (revision).

[12] O. Bournez, F. Cucker, P.J. de Naurois, and J.-Y. Marion. Implicit complexity over an arbitrary structure: Quantifier alternations. Information and Computation, 204(2):210–230, Feb 2006.

[13] Olivier Bournez. How much can analog and hybrid systems be proved (super-)Turing. Applied Mathematics and Computation, 2005. To appear.

[14] Olivier Bournez and Michael Branicky. The mortality problem for matrices of low dimensions. Theory of Computing Systems, 35(4):433–448, Jul-Aug 2002.

[15] Olivier Bournez, Felipe Cucker, Paulin Jacobé de Naurois, and Jean-Yves Marion. Implicit complexity over an arbitrary structure: Sequential and parallel polynomial time. Journal of Logic and Computation, 15:41–58, 2005.

[16] Olivier Bournez and Emmanuel Hainry. Elementarily computable functions over the real numbers and R-sub-recursive functions. Theoretical Computer Science, 348(2–3):130–147, 2005.

[17] Olivier Bournez and Emmanuel Hainry. Recursive analysis characterized as a class of real recursive functions. Fundamenta Informaticae, 2006. To appear.

[18] Thierry Chich, Johanne Cohen, and Pierre Fraigniaud. Unslotted deflection routing: a practical and efficient protocol for multi-hop optical networks. IEEE/ACM Transactions on Networking, 9(1):47–58, 2001.

[19] Johanne Cohen, Pierre Fraigniaud, and Cyril Gavoille. Recognizing Knödel graphs. Discrete Mathematics, 250:41–62, Mar 2002.

[20] Johanne Cohen, Pierre Fraigniaud, and Margarita Mitjana. Polynomial time algorithms for minimum-time broadcast in trees. Theory of Computing Systems, 35(6):641–665, 2002.

[21] Johanne Cohen, Emmanuel Jeannot, Nicolas Padoy, and Frédéric Wagner. Message scheduling for parallel data redistribution between clusters. IEEE Transactions on Parallel and Distributed Systems, 2006.

[22] Serge Grigorieff and Jean-Yves Marion. Kolmogorov complexity and non-determinism. Theoretical Computer Science, 271(1-2):151–180, 2002.

[23] D. Leivant and J.-Y. Marion. A characterization of alternating log time by ramified recurrence. Theoretical Computer Science, 236(1-2):192–208, Apr 2000.

[24] Jean-Yves Marion. Analysing the implicit complexity of programs. Information and Computation, 183(1):2–18, 2003.

Publications in international conferences and workshops From 2000

[1] Eugene Asarin, Olivier Bournez, Thao Dang, and Oded Maler. Approximate reachability analysis of piecewise-linear dynamical systems. In Hybrid Systems: Computation and Control (HSCC'00), Pittsburgh (USA), volume 1790 of Lecture Notes in Computer Science, pages 20–31. Springer-Verlag, March 2000.

[2] Eugene Asarin, Olivier Bournez, Thao Dang, Oded Maler, and Amir Pnueli. Effective synthesis of switching controllers for linear systems. Proceedings of the IEEE, Special Issue on "Hybrid Systems", 88(7):1011–1025, July 2000.

[3] Dominique Barth, Johanne Cohen, Alain Denise, and Romain Rivière. Shuffling biological sequences with motif constraints. In Algorithms and Computational Methods for Biochemical and Evolutionary Networks (Compbionet 04). KCL publications, 2004.

[4] Dominique Barth, Johanne Cohen, Paraskevi Fragopoulou, and Gérard Hébuterne. Wavelengths assignment on a ring all-optical metropolitan area network. In 3rd Workshop on Approximation and Randomization Algorithms in Communication Networks - ARACNE'2002, Rome, Italy, September 2002.

[5] Dominique Barth, Johanne Cohen, Lynda Gastal, Thierry Mautor, and Stéphane Rousseau. Comparison of fixed size and variable size packet models in an optical ring network: Algorithms and performances. In Photonics in Switching (PS'2003), pages 89–91, 2003.

[6] Dominique Barth, Johanne Cohen, Lynda Gastal, Thierry Mautor, and Stephane Rousseau. Fixed size and variable size packet models in an optical ring network: Complexity and simulations. In Cevdet Aykanat, Tugrul Dayar, and Ibrahim Korpeoglu, editors, 19th International Symposium on Computer and Information Sciences (ISCIS), volume 3280 of Lecture Notes in Computer Science, pages 238–246. Springer, Oct 2004.

[7] Emmanuel Beffara, Olivier Bournez, Hassen Kacem, and Claude Kirchner. Verification of timed automata using rewrite rules and strategies. In Sixth Annual Workshop of the ERCIM Working Group on Constraints, Prague, June 18–20, 2001.

[8] Emmanuel Beffara, Olivier Bournez, Hassen Kacem, and Claude Kirchner. Verification of timed automata using rewrite rules and strategies. In Nachum Dershowitz and Ariel Frank, editors, Proceedings BISFAI 2001, Seventh Biennial Bar-Ilan International Symposium on the Foundations of Artificial Intelligence, Ramat-Gan, Israel, June 25–27, 2001.

[9] Jérôme Besombes and Jean-Yves Marion. Identification of reversible dependency tree languages. In 3rd Learning Language in Logic (LLL) Workshop. L. Popelinsky, Matthieu Nepil, Sep 2001.

[10] Jérôme Besombes and Jean-Yves Marion. Learning dependency languages from a teacher. In 9th Conference on Formal Grammar, Aug 2004.

[11] Jérôme Besombes and Jean-Yves Marion. Learning regular trees with queries. In Mieczyslaw A. Klopotek, Slawomir T. Wierzchon, and Krzysztof Trojanowski, editors, Intelligent Information Processing and Web Mining, Proceedings of the International IIS:IIPWM'04, Advances in Soft Computing, pages 181–190. Springer, May 2004.

[12] Jérôme Besombes and Jean-Yves Marion. Learning reversible categorial grammars from structures. In Categorial Grammars, Jun 2004.

[13] Jérôme Besombes and Jean-Yves Marion. Learning tree languages from positive examples and membership queries. In Shai Ben-David, John Case, and Akira Maruoka, editors, 15th International Conference on Algorithmic Learning Theory - ALT'2004, volume 3244 of Lecture Notes in Artificial Intelligence, pages 440–453. Springer, Oct 2004.

[14] Vincent D. Blondel, Olivier Bournez, Pascal Koiran, and John N. Tsitsiklis. The stability of saturated linear dynamical systems is undecidable. In Horst Reichel and Sophie Tison, editors, Symposium on Theoretical Aspects of Computer Science (STACS), Lille, France, volume 1770 of Lecture Notes in Computer Science, pages 479–490. LIFL, Springer-Verlag, February 2000.

[15] Guillaume Bonfante, Jean-Yves Marion, and Romain Péchoux. A characterization of alternating log time by first order functional programs. In LPAR 06, volume 4246 of Lecture Notes in Computer Science, pages 90–104. Springer, 2006.

[16] G. Bonfante, R. Kahle, J.-Y. Marion, and I. Oitavem. Towards an implicit characterization of NCk. In CSL, volume 4207 of Lecture Notes in Computer Science, pages 212–224. Springer, 2006.

[17] Guillaume Bonfante. Some programming languages for logspace and ptime. In AMAST '06, Lecture Notes in Computer Science, July 2006.

[18] Guillaume Bonfante, Matthieu Kaczmarek, and Jean-Yves Marion. Toward an abstract computer virology. In ICTAC'05, Lecture Notes in Computer Science. Springer, Oct 2005.

[19] Guillaume Bonfante, Jean-Yves Marion, and Jean-Yves Moyen. On lexicographic termination ordering with space bound certifications. In D. Bjørner, M. Broy, and A. Zamulin, editors, International Andrei Ershov Memorial Conference - PSI'01, volume 2244 of Lecture Notes in Computer Science, pages 482–493. Springer, 2001.

[20] Guillaume Bonfante, Jean-Yves Marion, and Jean-Yves Moyen. On complexity analysis by quasi-interpretation. In 2nd Appsem II workshop - APPSEM'04, pages 85–95, Apr 2004.

[21] Guillaume Bonfante, Jean-Yves Marion, and Jean-Yves Moyen. Quasi-interpretations and small space bounds. In Jürgen Giesl, editor, Term Rewriting and Applications, 16th International Conference, RTA 2005, Nara, Japan, April 19-21, 2005, Proceedings, volume 3467 of Lecture Notes in Computer Science, pages 150–164. Springer, 2005.

[22] Guillaume Bonfante, Jean-Yves Marion, Jean-Yves Moyen, and R. Péchoux. Synthesis of quasi-interpretations. In LCC, 2005.

[23] Olivier Bournez. A generalization of equational proof theory? In Holger Hermanns and Roberto Segala, editors, Process Algebra and Probabilistic Methods : Performance Modeling and Verification, volume 2399 of Lecture Notes in Computer Science, pages 208–209. Springer-Verlag, July 25–26 2002.

[24] Olivier Bournez, Felipe Cucker, Paulin Jacobé de Naurois, and Jean-Yves Marion. Computability over an arbitrary structure: Sequential and parallel polynomial time. In Andrew D. Gordon, editor, Foundations of Software Science and Computational Structures, 6th International Conference (FOSSACS'2003), volume 2620 of Lecture Notes in Computer Science, pages 185–199, Warsaw, 2003. Springer.

[25] Olivier Bournez, Felipe Cucker, Paulin Jacobé de Naurois, and Jean-Yves Marion. Safe recursion over an arbitrary structure: PAR, PH and DPH. In Anuj Dawar, editor, Fifth International Workshop on Implicit Computational Complexity - ICC'2003, volume 90 of Electronic Notes in Theoretical Computer Science, Ottawa, Canada, 2003. Elsevier.

[26] Olivier Bournez, Felipe Cucker, Paulin Jacobé de Naurois, and Jean-Yves Marion. Tailoring recursion to characterize non-deterministic complexity classes over arbitrary structures. In 2nd APPSEM II Workshop (APPSEM'04), April 2004.

[27] Olivier Bournez, Felipe Cucker, Paulin Jacobé de Naurois, and Jean-Yves Marion. Tailoring recursion to characterize non-deterministic complexity classes over arbitrary structures. In 3rd IFIP International Conference on Theoretical Computer Science - TCS'2004, Toulouse, France, August 2004. Kluwer Academic Press.

[28] Olivier Bournez, Felipe Cucker, Paulin Jacobé de Naurois, and Jean-Yves Marion. Logical characterizations of PK and NPK over an arbitrary structure K. 2005.

[29] Olivier Bournez, Guy-Marie Côme, Valérie Conraud, Hélène Kirchner, and Liliana Ibanescu. Automated generation of kinetic chemical mechanisms using rewriting. In P.M.A. Sloot, D. Abramson, A.V. Bogdanov, J.J. Dongarra, A.Y. Zomaya, and Y.E. Gorbachev, editors, International Conference on Computational Science - ICCS 2003, Melbourne, June 2-4, 2003, Proceedings, Part III, volume 2659 of Lecture Notes in Computer Science, pages 367–376. Springer, 2003.

[30] Olivier Bournez, Guy-Marie Côme, Valérie Conraud, Hélène Kirchner, and Liliana Ibanescu. A rule-based approach for automated generation of kinetic chemical mechanisms. In Robert Nieuwenhuis, editor, Rewriting Techniques and Applications, 14th International Conference, RTA 2003, Valencia, Spain, June 9-11, 2003, Proceedings, volume 2706 of Lecture Notes in Computer Science, pages 30–45. Springer, June 2003.

[31] Olivier Bournez, Paulin de Naurois, and Jean-Yves Marion. Safe recursion and calculus over an arbitrary structure. In Implicit Computational Complexity - ICC'02, Copenhagen, Denmark, July 2002.

[32] Olivier Bournez and Florent Garnier. Proving positive almost sure termination. In Jürgen Giesl, editor, 16th International Conference on Rewriting Techniques and Applications (RTA'2005), volume 3467 of Lecture Notes in Computer Science, page 323, Nara, Japan, 2005. Springer.

[33] Olivier Bournez, Mohamed El Habib, Claude Kirchner, Hélène Kirchner, Jean-Yves Marion, and Stephan Merz. The QSL platform at LORIA. In First QPQ Workshop on Deductive Software Components, pages 9–12, Miami, Florida, July 28 2003. CADE-19 Workshop, ftp://ftp.csl.sri.com/pub/users/shankar/QPQ03.pdf.

[34] Olivier Bournez and Emmanuel Hainry. An analog characterization of elementarily computable functions over the real numbers. Volume 3142 of Lecture Notes in Computer Science, pages 269–280, Turku, Finland, 2004. Springer.

[35] Olivier Bournez and Emmanuel Hainry. An analog characterization of elementarily computable functions over the real numbers. In 2nd APPSEM II Workshop (APPSEM'04), Tallinn, Estonia, April 2004.

[36] Olivier Bournez and Emmanuel Hainry. Real recursive functions and real extensions of recursive functions. In Maurice Margenstern, editor, Machines, Computations and Universality (MCU'2004), volume 3354 of Lecture Notes in Computer Science, Saint-Petersburg, Russia, September 2004.

[37] Olivier Bournez and Mathieu Hoyrup. Rewriting logic and probabilities. In Robert Nieuwenhuis, editor, Rewriting Techniques and Applications, 14th International Conference, RTA 2003, Valencia, Spain, June 9-11, 2003, Proceedings, volume 2706 of Lecture Notes in Computer Science, pages 61–75. Springer, June 2003.

[38] Olivier Bournez, Liliana Ibanescu, and Hélène Kirchner. From chemical rules to term rewriting. In 6th International Workshop on Rule-Based Programming, Nara, Japan, April 2005.

[39] Olivier Bournez and Claude Kirchner. Probabilistic rewrite strategies: Applications to ELAN. In Sophie Tison, editor, Rewriting Techniques and Applications, volume 2378 of Lecture Notes in Computer Science, pages 252–266. Springer-Verlag, July 22-24 2002.

[40] Olivier Bournez and Oded Maler. On the representation of timed polyhedra. In International Colloquium on Automata, Languages and Programming (ICALP'00), volume 1853 of Lecture Notes in Computer Science, pages 793–807, Geneva, Switzerland, 9–15 July 2000. Springer.

[41] Johanne Cohen, Fedor Fomin, Pinar Heggernes, Dieter Kratsch, and Gregory Kucherov. Optimal linear arrangement of interval graphs. In 31st International Symposium on Mathematical Foundations of Computer Science - MFCS 2006, Lecture Notes in Computer Science, pages 267–279. Springer-Verlag, 2006.

[42] O. Fissore, I. Gnaedig, and H. Kirchner. Termination of rewriting with local strategies. In M. P. Bonacina and B. Gramlich, editors, Selected papers of the 4th International Workshop on Strategies in Automated Deduction, volume 58 of Electronic Notes in Theoretical Computer Science. Elsevier Science Publishers B. V. (North-Holland), 2001.

[43] O. Fissore, I. Gnaedig, and H. Kirchner. CARIBOO: An induction based proof tool for termination with strategies. In Proceedings of the Fourth International Conference on Principles and Practice of Declarative Programming, pages 62–73, Pittsburgh (USA), October 2002. ACM Press.

[44] O. Fissore, I. Gnaedig, and H. Kirchner. Outermost ground termination. In Proceedings of the Fourth International Workshop on Rewriting Logic and Its Applications, volume 71 of Electronic Notes in Theoretical Computer Science, Pisa, Italy, September 2002. Elsevier Science Publishers B. V. (North-Holland).

[45] O. Fissore, I. Gnaedig, and H. Kirchner. Simplification and termination of strategies in rule-based languages. In Proceedings of the Fifth International Conference on Principles and Practice of Declarative Programming, pages 124–135, Uppsala (Sweden), August 2003. ACM Press.

[46] O. Fissore, I. Gnaedig, and H. Kirchner. A proof of weak termination providing the right way to terminate. In First International Colloquium on Theoretical Aspects of Computing, volume 3407 of Lecture Notes in Computer Science, pages 356–371, Guiyang, China, September 2004. Springer-Verlag.

[47] Olivier Fissore, Isabelle Gnaedig, and Hélène Kirchner. CARIBOO: A multi-strategy termination proof tool based on induction. In Albert Rubio, editor, Proceedings of the 6th International Workshop on Termination 2003, pages 77–79, Valencia (Spain), June 2003.

[48] Pascal Fontaine, Jean-Yves Marion, Stephan Merz, Leonor Prensa Nieto, and Alwen Tiu. Expressiveness + automation + soundness: Towards combining SMT solvers and interactive proof assistants. In TACAS, Lecture Notes in Computer Science. Springer, 2006.

[49] I. Gnaedig and H. Kirchner. Termination of rewriting strategies: a generic approach. In M. Hofmann and H.-W. Loidl, editors, APPSEM II Workshop 2005, Chiemsee, Germany, September 2005. APPSEM II & Ludwig-Maximilians-Universität München.

[50] I. Gnaedig and H. Kirchner. Computing constructor forms with non terminating rewrite programs. In Proceedings of the Eighth ACM-SIGPLAN International Symposium on Principles and Practice of Declarative Programming, pages 121–132, Venice, Italy, July 2006. ACM.

[51] H. Kirchner and I. Gnaedig. Termination and normalisation under strategy - Proofs in ELAN. In Proceedings of the Third International Workshop on Rewriting Logic and its Applications, volume 36 of Electronic Notes in Theoretical Computer Science, Kanazawa, Japan, September 2000. Elsevier Science Publishers B. V. (North-Holland).

[52] J.-Y. Marion and J.-Y. Moyen. Efficient first order functional program interpreter with time bound certifications. In Michel Parigot and Andrei Voronkov, editors, Logic for Programming and Automated Reasoning, 7th International Conference, LPAR 2000, Reunion Island, France, volume 1955 of Lecture Notes in Computer Science, pages 25–42. Springer, Nov 2000.

[53] J.-Y. Marion and R. Péchoux. Resource analysis by sup-interpretation. In FLOPS, volume 3945 of Lecture Notes in Computer Science, pages 163–176. Springer, 2006.

[54] Jean-Yves Marion. Actual arithmetic and feasibility. In L. Fribourg, editor, International Workshop on Computer Science Logic - CSL'2001, volume 2142 of Lecture Notes in Computer Science, pages 115–129. Springer, Sep 2001.

Articles in national journals From 2000

[1] Anne Bonfante and Jean-Yves Marion. Les paradoxes de la défense virale : le cas Bradley. MISC, volume 28, pages 4–7, 2006.

[2] Jérôme Besombes and Jean-Yves Marion. Apprentissage des langages réguliers d'arbres et applications. Traitement Automatique des Langues, 44(1):121–153, Jul 2003.

[3] Guillaume Bonfante, Bruno Guillaume, and Guy Perrier. Analyse syntaxique électrostatique. Traitement Automatique des Langues, 44(3), Évolutions en analyse syntaxique, 2003.

Publications in national conferences and workshops From 2000

[1] Henry Amet, Johanne Cohen, Freddy Deppner, Marie-Claude Portmann, and Stéphane Rousseau. Un problème d'ordonnancement de messages : Partie 1, modélisations. In 6ème congrès de la Société Française de Recherche Opérationnelle et d'Aide à la Décision (ROADEF), pages 54–55, 2005.

[2] Henry Amet, Johanne Cohen, Freddy Deppner, Marie-Claude Portmann, and Stéphane Rousseau. Un problème d'ordonnancement de messages : Partie 2, approches de résolution. In 6ème congrès de la Société Française de Recherche Opérationnelle et d'Aide à la Décision (ROADEF), pages 56–57, 2005.

[3] Dominique Barth, Johanne Cohen, and Corentin Durbach. Algorithmes de répartition de charge pour des simulations distribuées. In 5ième congrès de la Société Française de Recherche Opérationnelle et d'Aide à la Décision (ROADEF 2003), pages 55–56, 2003.

[4] Jérôme Besombes and Jean-Yves Marion. Apprentissage des langages réguliers d'arbres et applications. In Conférence d'Apprentissage - CAP'2002, pages 55–70, Jul 2002.

[5] Jérôme Besombes and Jean-Yves Marion. Apprentissage des langages réguliers d'arbres à l'aide d'un expert. In Conférence d'Apprentissage - CAP'2003, Jul 2003.

[6] Jérôme Besombes and Jean-Yves Marion. Apprentissage des grammaires catégorielles à partir de structures. In Conférence Francophone d'Apprentissage - CAp'2004, pages 315–330. Presses universitaires de Grenoble, Jun 2004.

Patents From 2000

[1] O. Fissore, I. Gnaedig, and H. Kirchner. Cariboo, a termination proof tool for rewriting-based programming languages with strategies, Version 1.0. Free GPL Licence, APP registration IDDN.FR.001.170013.000.R.P.2005.000.10600, August 2004.

[2] O. Fissore, I. Gnaedig, H. Kirchner, and L. Moussa. Cariboo, a termination proof tool for rewriting-based programming languages with strategies, Version 1.1. Free GPL Licence, APP registration IDDN.FR.001.170013.000.S.P.2005.000.10600, December 2005. Available at http://protheo.loria.fr/softwares/cariboo/.

Technical reports From 2000

[1] Dominique Barth, Johanne Cohen, and Touafik Faik. Complexity of determining the b-continuity property of graphs. Technical Report RR-2003/37, PRiSM, Université de Versailles, 2003.

[2] Dominique Barth, Johanne Cohen, and Touafik Faik. Non approximability and non-continuity of the fall coloring problem. Technical report, LRI, 2005.

The permanent members, in brief

• Guillaume Bonfante has been Maître de Conférences at the École Nationale Supérieure des Mines de Nancy - INPL since 2001. He defended his PhD in 2000 with Adam Cichon. He is working in the Calligramme research group, and is currently seconded to INRIA. His research interests are essentially complexity and resource analysis. He is now tackling the theory of viruses.

• Olivier Bournez is Chargé de Recherche at INRIA in the Protheo research group. After the École Normale Supérieure de Lyon, where he defended his PhD with Michel Cosnard in 1999 on the algorithmic complexity of continuous and dynamical systems, he spent one year in the Verimag laboratory in Grenoble. His research interests include the computability and complexity theory of continuous models, the verification of continuous and hybrid systems, the theory and algorithmics of systems with probabilistic aspects, distributed algorithmics, and incentive algorithmics.

• Johanne Cohen is Chargée de Recherche at CNRS. After the École Normale Supérieure de Lyon, where she defended her PhD with Pierre Fraigniaud in 1998 on group communications in the line model, she was an ATER at Paris XI University. She then obtained a Maître de Conférences position in Nancy in 2000, and then a Chargée de Recherche position at CNRS in 2001. Her research interests include distributed algorithmics, incentive algorithmics, and in particular algorithmics for telecommunication networks.

• Isabelle Gnaedig is Chargée de Recherche at INRIA in the Protheo research group. After studies in mathematics and computer science, she defended her PhD in 1986, with Pierre Lescanne, on the termination of rewriting with associative-commutative axioms. She obtained her Chargée de Recherche position at INRIA at the end of 1986. Between 1987 and 1997, she was in charge of promotion and communication for the Centre de Recherche en Informatique de Nancy and the INRIA-Lorraine research unit. Doing research full time since 1998, she is interested in automated deduction, rule-based programming and termination.

• Jean-Yves Marion has been Professeur at the École Nationale Supérieure des Mines de Nancy - INPL since 2002. He defended his PhD in 1992 with Serge Grigorieff, at Paris 7 University (LITP), on the complexity of Gödel's system T interpreted on finite structures. He spent two years at Indiana University working with Daniel Leivant. His first research subject is computer security, based on three issues: (i) controlling resources, (ii) computer virology, (iii) the cooperation of deduction tools. The second is learning (grammatical inference) and information theory (Kolmogorov complexity).
