Project Proposal

Theoretical Adverse Computations, and Safety
ThÉorie des Calculs AdveRses, et sécurité

CARTE

Scientific leader: Jean-Yves Marion

INRIA scientific and technological challenges¹: 3 and 1

Keywords: Algorithmics, Complexity, Computability, Complex Systems, Algorithms, Viruses, Malware, Networks, Computational Models, Resource Analysis, Program Properties, Termination, Formal Systems, Logic

¹ The seven INRIA scientific and technological challenges are:
1. Designing and mastering the future network and communication services infrastructures
2. Developing multimedia data and information processing
3. Guaranteeing the reliability and security of software-prevalent systems
4. Coupling models and data to simulate and control complex systems
5. Combining simulation, visualization and interaction
6. Modeling living beings
7. Fully integrating ICST into medical technology

1 Project-team composition

1.1 Permanent positions

• Full-time researchers
  – Olivier Bournez, Chargé de recherche INRIA
  – Johanne Cohen, Chargée de recherche, CNRS
  – Isabelle Gnaedig, Chargée de recherche INRIA

• Professors and assistant professors
  – Guillaume Bonfante, Maître de conférences, École des Mines de Nancy, INPL
  – Jean-Yves Marion, Professeur, École des Mines de Nancy, INPL

1.2 PhD students and post-docs

• Emmanuel Hainry, ATER, UHP
• Romain Péchoux, PhD, MENRT
• Matthieu Kaczmarek, BDI CNRS-Région
• Marco Gaboardi, joint PhD with Turin University
• Octave Boussaton, PhD, Henri Poincaré University

2 Main objectives

Algorithmic and programming theory implicitly define the semantics of an algorithm or a program. Thus, when an algorithm asks for a value, the request has a meaning, like the weight of an arc, but this meaning is external to the algorithm. This characteristic is not shared by other sciences, whose semantics are not, in general, defined externally by the scientist. Since the computing environment is made of autonomous systems that have their own objectives, and since we try to take real-world constraints into account, an algorithm cannot be based on "a priori" knowledge of its environment. A network routing computation has to consider the economic importance of each independent agent. An antivirus has to determine the intentions of a (potentially hostile) program in order to do its task correctly. A program is asked to perform well even if the good classical properties are not ensured.

One of the main objectives of the CARTE project is to take adversity into account as a full-fledged component, linked to the actors of a computation whose behavior is unknown or unclear. We call this notion adversary computation.

One of the main objectives is then to predict the behavior of adversary computations, and to construct robust, i.e. fault-tolerant, algorithms and systems that take divergent interests into account, resist viral attacks, and perform well even if certain correctness properties are missing. One of the original aspects of the project is to combine the two following approaches. The first one is the analysis of the behavior of a wide-scale system, using tools coming from both continuous computation theory and game theory. Simply put, this is a macroscopic approach. In the second approach, the tools we envisage using to build defenses come rather from logic, rewriting and, more generally, from programming theory.
We will complement these two approaches with the control of computations in antagonistic environments, which requires studying and verifying fundamental properties of programs and systems, such as termination, stability or access to resources.

3 Scientific foundations

We present the fundamental notions, formalisms and computational models we will use for our purpose. This brief presentation is not exhaustive; it looks at these areas from a particular angle, emphasizing the aspects that seem to us most relevant to the goals set in our research directions.

3.1 Continuous Computation Theory

Today's classical computability and complexity theory deals with discrete time and space models of computation. But discrete time models of machines working on a continuous space can also be considered: consider for example the Blum-Shub-Smale machines of [6]. Recursive analysis, introduced by Turing [81], Grzegorczyk [50], and Lacombe [56], can also be considered as a continuous space, discrete time computation theory. Models of machines working with continuous time and space can be considered as well: see for example the General Purpose Analog Computer (GPAC) of Claude Shannon [76], defined as a model of the Differential Analyzer [20].

At its beginning, continuous time computation theory was mainly concerned with analog machines. Determining which systems can actually be considered as computational models is a very intriguing question. This relates to the philosophical discussion about what a programmable machine is, which is beyond the scope of this discussion. Nonetheless, there are some early examples of built analog devices that are generally accepted as programmable machines. They include Bush's landmark 1931 Differential Analyzer [20], as well as Bill Phillips' Financephalograph, Hermann's 1814 Planimeter, Pascal's 1642 Pascaline, or even the 87 B.C. Antikythera mechanism: see [25]. Continuous time computational models also include neural networks and systems that can be built using electronic analog devices. Since continuous time systems are conducive to modeling huge populations, one might speculate that they will have a prominent role in analyzing massively parallel systems such as the Internet [72].

The first true model of a universal continuous time machine was proposed by Shannon [76], who introduced it as a model of the Differential Analyzer. During the 1950s and 60s an extensive body of literature was published about the programming of such machines². There were also a number of significant publications on how to use analog devices to solve discrete or continuous problems: see e.g. [82] and the references therein. However, most of this early literature is now only marginally relevant, given the ways in which our current understanding of computability and complexity theory has developed.

The research on artificial neural networks, despite the fact that it mainly focused on discrete time analog models, has motivated a change of perspective, due to the many concepts and goals it shares with today's standard computability and complexity theory [70], [69]. Another line of development of continuous time computation theory has been motivated by hybrid systems, particularly by questions related to the hardness of their verification and control: see for example [18] and [4].

² See for example Doug Coward's very instructive Analog Computer Museum web site [25] and its bibliography.
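To make the notion of GPAC computation mentioned above more concrete, here is a standard illustration of ours, not taken from the proposal. In one common characterization, a function is generated by a GPAC exactly when it is a component of the solution of a polynomial initial value problem, each equation corresponding to a wiring of integrators, adders and multipliers. For instance, two integrators connected in a feedback loop generate the sine and cosine functions:

\[
y_1' = y_2, \qquad y_2' = -y_1, \qquad y_1(0) = 0, \quad y_2(0) = 1,
\]

so that $y_1(t) = \sin t$ and $y_2(t) = \cos t$.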
In recent years there has also been a surge of interest in alternatives to classical digital models other than continuous time systems. These alternatives include discrete-time analog-space models like artificial neural networks [70], optical models [87], signal machines [29] and the Blum-Shub-Smale model [7]. More generally, there have also been many recent developments in non-classical and more-or-less realistic or futuristic models, such as exotic cellular automata models [48], molecular or natural computations [51], [2], [57], [73], black hole computations [53], or quantum computations [28], [49], [77], [54].

The computational power of discrete time models is fairly well known and understood, thanks in large part to the Church-Turing thesis, which states that all reasonable and sufficiently powerful models are equivalent. For continuous time computation, the situation is far from being so clear, and there has not been a significant effort toward unifying concepts. Nonetheless, some recent results establish the equivalence between apparently distinct models [47], [46], [45], [16], which gives us hope that a unified theory of continuous time computation may not be too far in the future.

A survey on continuous-time computation theory, co-authored with Manuel Campagnolo, can be found in [15]. This survey includes open problems and research directions; extended discussions can also be found in [14]. This presentation is extracted from [15]. Monographs on complexity theory for discrete-time computation over the reals and over arbitrary structures in the sense of [6] can be found in [7] and [74], respectively.

3.2 Rewriting

The rewriting paradigm, initially developed to mechanize the word problem, i.e. to decide whether an equality between terms is true modulo a set of axioms, has been extensively studied in the context of automated deduction, especially since the seventies. Rewriting a term with a set of rules, which are oriented axioms, consists in deciding through matching whether a part of the term to be rewritten is an instance of the left-hand side of a rule, and then in replacing this instance in the term by the corresponding instance of the right-hand side.

Properties of this deduction mechanism, like confluence, sufficient completeness, consistency, or various notions of termination, have been described. For these generally undecidable properties, many proof methods have been developed for basic rewriting and, to a lesser extent, for extensions such as equational extensions, consisting of rewriting modulo a set of axioms, conditional extensions, where rules are applied only under certain conditions, or typed or constrained extensions. Completeness test algorithms, a great number of methods to prove termination, most
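As an illustration of the basic rewriting mechanism described above, the following OCaml fragment is a minimal sketch of ours, not part of the proposal; the type and function names (matching, apply, rewrite_step, normalize) and the Peano-addition rules are chosen purely for exposition. It implements matching, substitution and one rewrite step for first-order terms.

(* A minimal first-order term rewriting sketch, for illustration only.
   Terms are variables or function symbols applied to argument lists; a
   rule (l, r) is applied by matching l against a subterm and replacing
   that subterm by the corresponding instance of r. *)

type term =
  | Var of string
  | App of string * term list

type rule = term * term            (* oriented axiom: left-hand side, right-hand side *)
type subst = (string * term) list  (* variable bindings *)

(* Matching: find a substitution s with s(pattern) = t, if one exists. *)
let rec matching (s : subst) (pattern : term) (t : term) : subst option =
  match pattern, t with
  | Var x, _ ->
      (match List.assoc_opt x s with
       | None -> Some ((x, t) :: s)
       | Some u -> if u = t then Some s else None)
  | App (f, ps), App (g, ts) when f = g && List.length ps = List.length ts ->
      List.fold_left2
        (fun acc p u ->
           match acc with None -> None | Some s' -> matching s' p u)
        (Some s) ps ts
  | _, _ -> None

(* Apply a substitution to a term. *)
let rec apply (s : subst) (t : term) : term =
  match t with
  | Var x -> (match List.assoc_opt x s with Some u -> u | None -> t)
  | App (f, ts) -> App (f, List.map (apply s) ts)

(* One rewrite step: try every rule at the root, then recursively in the
   arguments, from left to right. *)
let rec rewrite_step (rules : rule list) (t : term) : term option =
  let at_root =
    List.find_map
      (fun (l, r) ->
         match matching [] l t with
         | Some s -> Some (apply s r)
         | None -> None)
      rules
  in
  match at_root, t with
  | Some t', _ -> Some t'
  | None, Var _ -> None
  | None, App (f, ts) ->
      let rec in_args acc = function
        | [] -> None
        | u :: rest ->
            (match rewrite_step rules u with
             | Some u' -> Some (App (f, List.rev_append acc (u' :: rest)))
             | None -> in_args (u :: acc) rest)
      in
      in_args [] ts

(* The two oriented axioms for Peano addition:
   add(0, y) -> y   and   add(s(x), y) -> s(add(x, y)). *)
let peano_rules : rule list =
  [ (App ("add", [App ("0", []); Var "y"]), Var "y");
    (App ("add", [App ("s", [Var "x"]); Var "y"]),
     App ("s", [App ("add", [Var "x"; Var "y"])])) ]

(* Rewrite to a normal form (terminates only if the rewrite system does). *)
let rec normalize rules t =
  match rewrite_step rules t with
  | Some t' -> normalize rules t'
  | None -> t

(* Example: add(s(0), s(0)) rewrites to s(add(0, s(0))) and then to s(s(0)). *)
let _ =
  normalize peano_rules
    (App ("add", [App ("s", [App ("0", [])]); App ("s", [App ("0", [])])]))

Repeatedly applying rewrite_step, as in normalize, yields a normal form only when the rewrite system terminates; establishing such termination properties is precisely one of the concerns discussed above.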