Sixth Framework Programme Information Society Technology

RE-TRUST

Remote EnTrusting by RUn-time Software authentication

Project Number: FP6 - 021186

Deliverable: D2.2

Methods to dynamically replace the secure software module and to securely interlock applications with secure SW module


TABLE OF CONTENTS

1. Summary
   1.1 Abstract

2. Introduction
   2.1 Document Control
   2.2 Introduction
   2.3 Related work

3. The proposal for RE-TRUST
   3.1 Virtual Machine Features
       3.1.1 The Java Security Model
       3.1.2 Safety features of the JVM
       3.1.3 The problem of native methods
   3.2 JVMTI overview
       3.2.1 Enforcing Trust with the JVMTI-based prototype
   3.3 AOP overview
       3.3.1 Aspect Oriented Programming
       3.3.2 Enforcing Trust with Dynamic AOP

4. Types of Attacks
   4.1 Attack on the Operating System
       4.1.1 Methodology of the Attack to the remote host
   4.2 Attack to the application code
       4.2.1 Information leak and data modification
       4.2.2 Remote execution flow modification
       4.2.3 Denial of Service
       4.2.4 Interaction between remote computer and other parties
   4.3 Attack to the module code
       4.3.1 Masquerading
       4.3.2 Denial of service
       4.3.3 Unauthorized access

5. Possible prevention
   5.1.1 The cryptographic protocol
   5.1.2 Attack detection

6. Open Security Issues

7. Conclusion
   7.1 Future Work

8. References


1. Summary

Project Number: FP6-021186
Project Title: RE-TRUST: Remote EnTrusting by RUn-time Software auThentication
Deliverable Type: RP
Deliverable Number: D2.2
Contractual Date of Delivery: August 2007
Actual Date of Delivery: August 2007
Title of Deliverable: Methods to dynamically replace the secure software module and to securely interlock applications with secure SW module
Workpackage Contributing to the Deliverable: WP2
Nature of the Deliverable: Report, Public
AUTHOR(S): Paolo Falcarin, Antonio A. Durante (Politecnico di Torino)
REVIEWER(S): Igor Kotenko (SPIIRAS)

Abstract: A possible solution to the remote trust problem is described in this deliverable. The solution consists of the use of a module interlocked with the remote application running on the untrusted computer. The module is built by a trusted server, sent securely to the untrusted computer, and interlocked with the client application to be entrusted. Two possible implementations of the proposed solution for managed code (i.e., code running in a virtual machine) are described for Java. The mobile module acts as a controller of the behavior of the application to be entrusted, in order to establish its originality, and it is periodically replaced by the trusted server. A malicious user who has control of the remote computer can perform several attacks: modifying the application bytecode prior to or during execution, exploiting security flaws of the operating system, or analyzing and tampering with the mobile module. A network protocol is used to continuously download and interlock protection modules with the application to be entrusted.
Keywords: Trust, mobile code, JVM
Classification: N/A


Name of Client: European Commission
Distribution List: European Commission, Project Partners
Authorised by: Yoram Ofek, University of Trento
Issue: 1.0
Reference: RE-TRUST Deliverable
Total Number of Pages: ----
Contact Details: Prof. Yoram OFEK, University of Trento, Dept. of Information and Communication Technology (DIT), Trento, Italy
Phone: +39 0461 88 3954 (office)
Fax: +39 0461 88 2093 (DIT)
E-mail: ofek -at- dit.unitn.it

1.1 Abstract

A possible solution to the remote trust problem is described in this deliverable. The solution consists of the use of a module interlocked with the remote application running on the untrusted computer. The module is built by a trusted server, sent securely to the untrusted computer, and embedded in the client application to be entrusted. Two possible implementations of the proposed solution for managed code (i.e., code running in a virtual machine) are described for Java (see Deliverable D1.2). The mobile module acts as a controller of the behavior of the application to be entrusted, in order to establish its originality, and it is periodically replaced by the trusted server. A malicious user who has control of the remote computer can perform several attacks: modifying the application bytecode prior to or during execution, exploiting security flaws of the operating system, or analyzing and tampering with the mobile module. The document continues with a discussion of how the proposed approach fares under such attacks, and a description of the network protocol used to continuously download and interlock modules with the application to be entrusted.
The deliverable is organized as follows: section 2 describes the problem and related work, section 3 describes a proposal for the remote trust problem and two possible implementations using Java, and section 4 describes the types of possible attacks on the application running on the remote computer; section 5 then describes a cryptographic protocol countering one type of attack, followed by conclusions and future work.


2. Introduction

2.1 Document Control

Issue number   Issue Date       Reason for Change
0.1            29 July 2007     First draft
0.2            07 August 2007   Reviewed version

2.2 Introduction

Security has always been a primary concern for industry, and recently the interest in client-side software protection has grown. In general, network-enabled software suffers from some inherent security problems, like unauthorized modification by either malicious individuals (the host machine cannot be trusted because of the user threat) or digital entities, like viruses and Trojan horses (the host machine cannot be trusted because of the network threat). Many circumstances exist in which it is necessary to protect software from malicious modifications once it is delivered to the public, e.g., e-voting and e-commerce systems.

In particular, the main research question is: "How can a client application be entrusted even when running on an un-trusted host?". With the term "un-trusted host" we mean a networked computing base (e.g., networked computers and mobile devices) in which a possibly malicious user has complete (conceivably physical) access to system resources (e.g., memory and disks) and tools (e.g., debuggers) in order to reverse-engineer the application code. Under these assumptions, an application is deemed trusted whenever its executed code has not been altered prior to or during execution. Satisfying such an integrity requirement in a hostile environment is a challenging issue. Different software-only solutions have been proposed: a detailed discussion is given in the related work section (2.3).

In our approach, clients generate a continuous flow of tags for the server. The tag flow is generated by a software module that is securely combined with the original application. As long as the application code remains genuine, the module will produce valid tags, which serve as continuous evidence to the remote server that the client code is authentic. Note that disabling the module is not an option for the attacker: in that case, tags would no longer be produced and the server would notice the attack attempt immediately.

In this document two prototype implementations are presented; both are based on the deployment and continuous replacement of integrity-checking modules implemented as dynamic code. The two prototypes differ in the kind of interlocking they realize with the application to be protected; in particular, one is based on the JVM Tool Interface available from Java 5, while the other is based on a standard JVM extended to expose dynamic aspect-oriented programming features. As further discussed in the related-work section, traditional integrity self-checking techniques offer no guarantee that the self-check has actually been performed. Our approach provides remote verification of tags in order to verify that the check has been duly performed.

Our solution can be deployed in several usage scenarios. In particular, it can be used to restrict access to a public server, to let in genuine clients only, possibly distributed by the server itself. Examples of existing applications are Yahoo services (Yahoo advanced services are available only to users deploying Yahoo's client), gambling servers, and on-line submission of final exams. Along these lines, a messaging system, composed of a server


acting as the entrusting entity and a client acting as the entrusted entity, has been developed as a proof of concept. We assume that the client and the server are continuously connected through a network.

The main goal of the RE-TRUST approach is to assure that a software component running on a remote machine (the client) is always authenticated. This goal can be accomplished by the presence of a remote entity that is completely trusted, called the Trusted Host (the server), providing the service to the remote machine. Figure 1 depicts the model of trust. The application we want to secure is an untrusted application which runs in an untrusted environment (i.e., the remote untrusted host). The mobile module is installed in the untrusted host and is interlocked with the application to be protected. This module checks the integrity of the application. If the application is not corrupted, the module generates secure tags, which are sent to the entrusting host. The entrusting host verifies these secure tags with the core of trust. As long as the flow contains valid secure tags, the application running on the untrusted host is considered safe and authentic. When the module no longer sends secure tags, the entrusting host stops the interaction with the untrusted host.

The security of the application depends on the correct execution of the module and on the secrecy of the cryptographic keys it uses to generate the secure tags. It is important to prevent reverse engineering of the module code. It is also important to note that the untrusted application can interact with a maliciously modified operating system, which means that the attacker can manipulate the execution of internal processes, and thus of the applications being executed.

Figure 1: The remote trust scenario. (1) The untrusted machine generates Secure Tags from the code/software during execution; (2) the Entrusting machine entrusts the Untrusted machine by verifying the Secure Tags through its Core of Trust.

2.3 Related work

The problem of executing software in a trusted computing environment has recently gained considerable attention, and different solutions have been proposed in the literature to protect software from the above-mentioned rogue behaviour, either by using hardware-based solutions or by software-only techniques. Relevant hardware-based approaches are: the Trusted Computing Group [8], Microsoft Next Generation Secure Computing Base [13], and the TrustZone technology developed by ARM [18]. A relevant area of related work is represented by techniques for the protection of mobile agents [9][12]. For instance, previous work proposed a scheme to protect mobile code using a ring-homomorphic encryption scheme based on CEF (computation with encrypted functions)


with a non-interactive protocol [16][17]. However, the existence of such a homomorphic encryption function (also known as a privacy homomorphism) is still an open problem. Furthermore, some approaches mix obfuscation and mobility. For instance, in [15] agents are periodically re-obfuscated to ensure that the receiving host cannot access the agent's state.

Some other related solutions were proposed in the context of the Trusted Computing initiative [14]. Such solutions rely both on a trusted hardware component on the motherboard (co-processor) and on a common architecture that enables a trusted server-side management application to attest the integrity of a machine and to establish its level of trust. This non-runtime approach was applied to assess the integrity of a remote machine enhanced with a trusted coprocessor and a modified Linux kernel [7]. In that work, a bottom-up chain of trust was created: first the BIOS and the coprocessor assessed the integrity of the operating system at start-up, then the operating system assessed the integrity of applications, and so on. Other non-runtime approaches rely on additional hardware to allow a remote authority to verify the software and hardware originality of a system [10]. Besides Trusted Computing, another interesting approach is presented in [11].

Different software-only techniques have been proposed in the literature to protect software running on a malicious host: integrity self-checking is the main representative. Most commercial applications rely only on static self-checking, in which the program checks its integrity only once, during start-up (e.g. to check a license code), while current research is focusing on dynamic self-checking, in which the program regularly verifies its integrity at run-time. The key issue is to prevent the self-checking function itself from being removed or disabled without detection. To this aim, networks of integrity-checkers (called guards) were proposed to detect changes to binary code [2]. Protection against code modification is enforced by including a large number of guards, each protecting a fraction of the application. Clearly, the task of finding and disabling all guards is significantly more difficult. A similar approach provides a mechanism to redundantly test for changes [3]. Chen et al. [4] compute a fingerprint of application code on the basis of its actual execution. This method makes it possible to thwart attacks using automatic program analysis tools and other static methods.

Obfuscation aims at increasing the attack complexity by making it hard for the attacker to comprehend the behavior of a decompiled program [5]. Obfuscation techniques are based on the addition of complexity to the source code structure (without changing its behavior) through different kinds of code transformations. Code obfuscation transformations are also employed to hide tamper-resistance code embedded in the software so that it cannot be easily detected and removed. However, in most cases, breaking the obfuscation is just a matter of time and of the attacker's skills [6], while the overhead of adding obfuscation can be significant both in terms of code size increase and execution time.

Customization creates many different copies from an initial version of a program [7]. Each copy of the protected program is different at the binary level, but is functionally equivalent to the other copies. Thus, published exploits that attack one version might not work with other customized versions. This kind of protection discourages the diffusion of "cracks" but it is not aimed at detecting and reacting to tampering.

Pioneer [19] is a system based on a verification function running on the client as an operating system primitive and a dispatcher acting as a trusted remote server. The dispatcher code and the verification function are implemented inside the respective network card interrupt handlers, making it hard for the un-trusted platform to forge the checksum and providing run-time attestation of application integrity, which is a stronger guarantee than TCG-based load-time attestation, since software can be compromised by dynamic attacks after loading. Pioneer's verification function can be used to build a rootkit detector for identifying malicious modifications of the Linux kernel and to calculate applications' checksums.


3. The proposal for RE-TRUST

The domain we selected in order to prove the concept is represented by client-server applications in which communication among clients is mediated by a server. In this respect, the instant messaging (IM) server receives messages from clients and relays them to their destination. This scenario is used hereafter to illustrate the remote entrusting method in practical and more comprehensible terms. There are some fundamental principles we bear in mind while designing the proposed method.

Figure 2: Prototype architecture/concept

Root of trust at remote location. The basic working assumption when dealing with trustworthiness is that some elements in the system are implicitly entrusted and the whole infrastructure relies on them. In this work, the root of trust is placed at a remote site across the network.

Continuous replacement. The root of trust deploys a software monitor (or mobile agent) on the untrusted host that is responsible for preserving the integrity of the to-be-entrusted application. The monitor can be replaced at any time by the root of trust.

Trustworthiness during run-time. This work introduces a protocol providing application-oriented trustworthiness that is refreshed during run-time. This introduces an on-line and proactive method to avoid trustworthiness violations and application damage.

Trustworthiness establishment protocol. Proofs are generated by the trusted replaceable mobile agent on the client side. These proofs are composed of tags attached to the data exchanged from client to server. Each tag contains the outcome of a cryptographic algorithm applied using some secret information, which is hidden in the monitor/agent itself. For instance, the secret information could be a symmetric encryption key, shared between the monitor/agent and the root of trust. As long as the application under surveillance is in proper shape, the monitor/agent keeps producing tags correctly. Upon reception of a tagged message, the root of trust validates the tags and hence certifies the


trustworthiness of the application. Note that the methodology that is presented here can be extended to other application domains.
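As an illustration, the following Java sketch shows how a monitor/agent could compute such a tag as a keyed hash (HMAC) over an outgoing message, using a secret key embedded in the current module. The class and method names are hypothetical; this is only a minimal sketch under these assumptions, not the actual code of the prototypes.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    /** Minimal sketch of tag generation inside the mobile monitor/agent (hypothetical names). */
    public class TagGenerator {

        // Secret key shipped inside the current module; replaced together with the module.
        private final SecretKeySpec moduleKey;

        public TagGenerator(byte[] rawKey) {
            this.moduleKey = new SecretKeySpec(rawKey, "HmacSHA1");
        }

        /** Computes a tag binding the outgoing message to the observed application state. */
        public byte[] generateTag(byte[] message, byte[] integrityChecksum) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(moduleKey);
            mac.update(message);
            mac.update(integrityChecksum); // e.g. a checksum computed by the integrity checker
            return mac.doFinal();
        }
    }

The root of trust, holding the same key, recomputes the HMAC on the received message and the expected checksum, and accepts the client only if the values match.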

Replacement. Protecting the trustworthiness of the replaceable agent, both in terms of the functionality it performs and the secrecy of the information contained within it, is important. In this respect, obfuscation has been proposed in the past to hinder adversaries, but motivated and skilled individuals could eventually succeed in tampering with the application code. Our objective is to make it intractable, within a well-defined period of time, for a malicious agent to modify selected components of the application without being detected by the root of trust. To this aim the client application is supplemented with a software monitor/agent that both controls the application integrity and generates tags. The monitor/agent can be replaced by the root of trust at any time during the application run-time, and the time interval for replacement can vary from time to time, so that attackers cannot recognize the replacement rate. The replacement monitor/agents can contain new secret information (e.g., a fresh key), new integrity-checking strategies, new algorithms for tag generation, and so on. The interval between two subsequent replacements is the time window that is left to the adversary to break the integrity of the software monitor/agent and, hence, of the application itself.

Replacement offers an additional degree of freedom to play with in protecting the application. The complexity of reverse-engineering the combined unit formed by the client application and the replaceable monitor/agent can be directly translated into the time an adversary needs in order to break it (trustworthiness window). As long as the monitor/agent is replaced at a rate that is faster than the inverse of the trustworthiness window, the combined unit can be safely considered not tampered with. Clearly, understanding the precise reverse-engineering complexity of tamper-resistant software, e.g. software employing obfuscation [1] and white-box cryptography [3], is not a trivial job and the research field is still open to further study. Nonetheless, it is also possible to trade off between this complexity and the replacement rate. Hence, in case of a sufficiently high replacement rate, the adoption of the above-mentioned techniques could be taken down to the very minimum, if not avoided at all. This is a good result if we take into consideration the performance degradation due to anti-tampering techniques.
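As a minimal sketch of such a replacement policy, the following Java fragment schedules module re-deployment at randomized intervals on the server side; the deployModule callback and the interval bounds are assumptions for the example, not part of the prototypes described in this document.

    import java.security.SecureRandom;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /** Sketch: the root of trust re-deploys a fresh monitor/agent at unpredictable intervals. */
    public class ReplacementScheduler {

        // Interval bounds (in seconds), to be kept below the estimated trustworthiness window.
        private static final int MIN_DELAY = 30;
        private static final int MAX_DELAY = 120;

        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final SecureRandom random = new SecureRandom();

        public void start(Runnable deployModule) {
            scheduleNext(deployModule);
        }

        private void scheduleNext(final Runnable deployModule) {
            int delay = MIN_DELAY + random.nextInt(MAX_DELAY - MIN_DELAY);
            scheduler.schedule(new Runnable() {
                public void run() {
                    deployModule.run();         // build and push a fresh module/aspect to the client
                    scheduleNext(deployModule); // pick a new, unpredictable delay for the next round
                }
            }, delay, TimeUnit.SECONDS);
        }
    }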

Interlocking. This concerns how the mobile module interacts with the application: whether the application to be protected is native code (like "exe" files in Windows-like operating systems or "bin" files in Unix-like operating systems) or managed code (running in a VM, like Java or C# applications), which interface is defined between them, which kind of information the agent can obtain from the application through that interface, and how the agent can also modify this information (like the application code segment and data segment in memory).

In the next subsections we discuss how to apply interlocking and code mobility to managed code; in particular, JVM features are illustrated and two prototypes are described, analyzing how they implement interlocking and code mobility.

3.1 Virtual Machine Features

The Java Virtual Machine (JVM) is the runtime environment on which applications developed in the Java object-oriented language can run. A program developed in Java is generally compiled in order to generate bytecode, a binary format which can be interpreted by a JVM.


As the JVM has been ported to most contemporary machines, a bytecode-compiled Java program is portable across these machines.

Figure 3: JVM Architecture

The architecture of the Java environment is illustrated in Figure 3: the JVM is presented as an abstraction of a homogeneous machine with a defined set of instructions, an execution engine (the equivalent of a hardware processor) and runtime data areas used for memory and process management. In the following, we detail the bytecode properties we are interested in, the operating principles of the execution engine and the runtime data areas of the JVM.

The Java bytecode provides an instruction set that is very similar to that of a hardware processor. Each instruction specifies the operation to be performed, the number of operands and the types of the operands manipulated by the instruction. The execution of bytecode in a Java Virtual Machine is based on a stack. The main difference between bytecode and machine code is that bytecode keeps all variables on a stack and does not specify any register reference, unlike, for example, the assembly language of Intel 80x86 processors. This independence from the register layout of a CPU guarantees its portability. In order to actually execute the bytecode, the JVM must map the stack-based operations onto native register-based operations.

The first generation of JVMs was based on an interpreted scheme in which the interpreter translates each bytecode instruction into the execution of native code. In order to improve performance, the second generation of JVMs integrated Just-In-Time (JIT) compilation, which compiles each method into native code at its first call. Any subsequent invocation uses the compiled native version of the method, and therefore performs much faster (close to the performance of native code). However, if a method is rarely invoked, it is not worth compiling it at the first call. This led to the introduction of adaptive JIT compilation, where only frequently invoked methods are dynamically compiled.


3.1.1 The Java Security Model

Java's security model is one of the language's key architectural features that makes it an appropriate technology for networked environments, in particular when software is downloaded across the network and executed locally, e.g. Java applets. Because the class files for an applet are automatically downloaded when a user goes to the containing Web page in a browser, it is likely that a user will encounter applets from un-trusted sources. Thus, Java's security mechanisms establish a needed trust in the safety of network-mobile code. Java's security model is focused on protecting users from hostile programs downloaded from un-trusted sources across a network. To accomplish this goal, Java provides a customizable "sandbox" in which Java programs run. A Java program can do anything within the boundaries of its sandbox, but it can't take any action outside those boundaries. The sandbox for un-trusted Java applets, for example, prohibits many activities, including:
• Reading or writing to the local disk
• Making a network connection to any host, except the host from which the applet came
• Creating a new process
• Loading a new dynamic library and directly calling a native method
By making it impossible for downloaded code to perform certain actions, Java's security model protects the user from the threat of hostile code.
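For instance, the following minimal Java sketch (not part of the prototypes) shows how an installed security manager rejects a file access that the policy does not grant:

    import java.io.FileInputStream;

    /** Sketch: the default sandbox rejects a disallowed action with a SecurityException. */
    public class SandboxDemo {
        public static void main(String[] args) throws Exception {
            System.setSecurityManager(new SecurityManager()); // enable the default sandbox
            try {
                new FileInputStream("/etc/passwd"); // file read not granted by the default policy
            } catch (SecurityException e) {
                System.out.println("Blocked by the sandbox: " + e.getMessage());
            }
        }
    }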

3.1.2 Safety features of the JVM

Because of the safety features built into the Java virtual machine, running programs can access memory only in safe, controlled ways. Uncontrolled memory access is a security risk because an attacker could use the memory to weaken the security system. If, for example, the attacker could learn where in memory a class loader is stored, he/she could assign a pointer to that memory and manipulate the class loader's data. By enforcing structured access to memory, the Java virtual machine yields programs that are robust, but it also discourages attackers who try to control the internal memory of the Java virtual machine in order to tamper with the application code and implement an attack.

The unspecified memory layout is another safety feature built into the Java virtual machine, i.e. the unspecified manner in which the runtime data areas are laid out inside the Java virtual machine. The runtime data areas are the memory areas in which the JVM stores the data it needs to run a Java application. These data areas are: the Java stacks (one for each thread); a method area, where bytecodes are stored; and a garbage-collected heap, where the objects created by the running program are stored. If the attacker examines a class file, he/she will not find any memory addresses. When the Java virtual machine loads a class file, it decides where in its internal memory to store the bytecodes and other data it parses from the class file. When the Java virtual machine starts a thread, it decides where to put the Java stack it creates for the thread. When it creates a new object, it decides where in memory to put the object. Therefore, an attacker cannot predict, by looking at a class file, where in memory the data representing that class (or objects instantiated from that class) will be stored. Furthermore, the attacker cannot figure out the memory layout by reading the Java virtual machine specification, because the way in which a JVM lays out its internal data (e.g. the data structures used to represent the runtime data areas) depends on the designers of the JVM implementation. As a result, even if the attacker somehow were able to break through the Java virtual machine's memory access restrictions, he/she would next be faced with the difficult task of looking around to find something to harm. Although the bytecode instruction set does not allow an unsafe, unstructured way to access memory, this is possible through native methods.


3.1.3 The problem of native methods

Basically, when calling a native method, the assumptions behind Java's security sandbox are not valid anymore. First of all, the guarantees of robustness do not hold for native methods. Although an attacker cannot corrupt memory from a Java method, he/she can do it from a native method. Most importantly, native methods do not go through the Java API, so the security manager is not consulted before a native method attempts to do something that could be potentially damaging. Of course, this is often how the Java API itself gets anything done. Many Java API methods may be implemented as native methods, but the native methods used by the Java API are "trusted". Thus, once a thread gets into a native method, the security policy established inside the Java virtual machine no longer applies to that thread, as long as that thread continues to run the native method. This is why the security manager includes a method that establishes whether or not a program can load dynamic libraries, which are necessary for invoking native methods. If un-trusted code were allowed to load a dynamic library, that code could maliciously invoke native methods that implement attacks. If a piece of un-trusted code is prevented by the security manager from loading a dynamic library, it will not be able to invoke an un-trusted native method. Applets, for example, are not allowed to load a new dynamic library and therefore cannot install their own native methods. They can, however, call methods in the Java API, methods that may be native but that are always trusted.

3.2 JVMTI overview

Starting from version 5, standard JVMs expose the JVM Tool Interface (JVMTI), which was designed for implementing remote debugging and monitoring in Java-based distributed systems. Using JVMTI, an additional agent written in native code can register for the JVM events it is interested in; for example, if it is interested in monitoring the class loading phase, the event JVMTI_EVENT_CLASS_FILE_LOAD_HOOK is registered by the agent during JVM start-up. When the JVM loads the classes, it calls the registered callback functions implemented by the agent, which can then execute additional native code: in the prototype (described in the next subsection) the agent downloads and periodically replaces additional code, fetched from the trusted server, that performs integrity checking.

The agent in JVMTI is implemented as a dynamic library, i.e. a dll (dynamic link library) file on Windows, or a so (shared object) file on UNIX-like systems. An agent is written in native code, and it uses JVMTI functions to extract information from a running Java application. The agent must include the jvmti.h file in order to use Agent_OnLoad and Agent_OnUnload, the interfaces that the JVM uses to communicate with the agent: when the library is loaded, Agent_OnLoad is invoked, and when the library is unloaded, Agent_OnUnload is invoked. In our case, the agent registers the other JVM events it wants to monitor in the Agent_OnLoad method and, after that, the JVM will notify the agent when the events occur. In order to use an agent, the application must tell the JVM which agent is to be used when the JVM is starting, using the command: java -agentlib:<agentLibName>


The agentLibName is the library (dll or so) implementing the agent loader. Agent_OnUnload is called by the VM when the agent is about to be unloaded. The function is used to clean up resources allocated during Agent_OnLoad. Once the agent has registered all the JVM events, it is notified, for example, whenever a method is about to start its execution and when it is returning; at this point of the application execution the agent can use the JVMTI interface to get the bytecode of methods from memory. In order to do this the agent must enable the can_get_bytecodes capability, which can only be done during the OnLoad or live phases of the JVM. During VM initialization, a JVMTI event of type JVMTI_EVENT_VM_INIT is generated and sent to the VMInit callback routine in our agent code. Once the VM initialization event is received, the agent can complete its initialization, and it is free to call any Java Native Interface (JNI) or JVMTI function, for example:

error = (*jvmti)->SetEventNotificationMode(jvmti, JVMTI_ENABLE, JVMTI_EVENT_VM_INIT, (jthread)NULL);

In the VMInit callback routine, the agent uses the JVMTI GetBytecodes interface to get the bytecodes of a method, which are returned via bytecodePointer, while the number of bytecodes is returned via byteCodeCount, as we can see in the following:

err = (*jvmti)->GetBytecodes(jvmti, method, &byteCodeCount, &bytecodePointer);

Moreover, the agent can enable the Exception events in the VMInit callback routine in this way:

error = (*jvmti)->SetEventNotificationMode(jvmti, JVMTI_ENABLE, JVMTI_EVENT_EXCEPTION, (jthread)NULL);

Thus, in the callback routine of EXCEPTION events, the agent can call the JVMTI GetThreadInfo, GetThreadGroupInfo, GetAllThreads, and GetStackTrace interfaces to display the current thread and group details, like this:

err = (*jvmti)->GetThreadInfo(jvmti, thr, &info);

3.2.1 Enforcing Trust with the JVMTI-based prototype

The JVMTI prototype monitors the run-time execution of Java applications. In particular, we selected a chat application as a toy example. The chat system is implemented by two Java classes in a package: ChatServer acts as a relay server for client text messages and is a console application; ChatClient is the graphical client. The Java 5 Virtual Machine can be plugged with a so-called agent to monitor the application execution. The agent must be implemented in native code and packaged within a shared library (a DLL, in Windows terminology). The agent can interact with the VM and the application by means of the Java Virtual Machine Tool Interface (JVMTI).


Figure 4: JVMTI-based Prototype Architecture

Once the VM is started, the trustedflow agent is loaded with, as its parameters, the logical name of the machine running the TrustedFlow server and the TCP port on which it will be listening. Upon initialization, the TFAgent registers with the VM, subscribes to the TrustedFlow server, downloads a module, and starts a listening thread attached to the above-mentioned TCP port. The TrustedFlow server is responsible for dynamically deploying new modules and for validating the client-generated tags. The server identifies the client agents by means of their host machine's IP address and the agent's TCP port; multiple agents can run on the same host. During the subscription phase, the agent transmits its ID, which includes its IP address and port, to the server. After the ID has been transmitted, the agent fetches the initial module from the TrustedFlow server. Each module contains the code to:
- check the application integrity;
- generate the tags.
Accordingly, each module is organized into two shared libraries: the client-side library (c-module) performs both the integrity check and the tag generation functions and is deployed to the client machine; the server-side library (s-module) performs the tag validation and is installed in the TrustedFlow server. After the agent has downloaded the initial c-module, its TFAgent object opens a TCP server socket and starts a listening thread. Through this TCP port, the agent can receive new c-modules sent by the TrustedFlow server at run-time. A newly received c-module replaces the old one. This replacement happens throughout the lifetime of the chat session; the rate of replacement was discussed earlier.
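The following Java sketch illustrates, at a purely conceptual level, the listening thread that receives replacement modules; in the actual prototype this logic is implemented inside the native TFAgent, and the file name and message framing used here are assumptions for the example only.

    import java.io.DataInputStream;
    import java.io.FileOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    /** Conceptual sketch of the listening thread receiving replacement c-modules (hypothetical framing). */
    public class ModuleReplacementListener implements Runnable {

        private final ServerSocket serverSocket;

        public ModuleReplacementListener(int port) throws Exception {
            this.serverSocket = new ServerSocket(port); // port announced to the TrustedFlow server
        }

        public void run() {
            while (true) {
                try {
                    Socket conn = serverSocket.accept();       // the server pushes a new module
                    DataInputStream in = new DataInputStream(conn.getInputStream());
                    int length = in.readInt();                 // assumed framing: length, then raw bytes
                    byte[] module = new byte[length];
                    in.readFully(module);
                    FileOutputStream out = new FileOutputStream("current_module.dll"); // .so on UNIX
                    out.write(module);                         // the agent then reloads the fresh library
                    out.close();
                    conn.close();
                } catch (Exception e) {
                    return; // in the real prototype a failure here would be reported to the server
                }
            }
        }
    }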


3.3 AOP overview

To counter reverse engineering, current software-based tamper-resistance techniques rely on code checkers whose position is hidden in the application and whose behavior is obfuscated. However, we observe that any technique involving a checker that is permanently embedded within the application is not robust enough. Indeed, the checker can eventually be identified and inhibited by an attacker with enough knowledge, time, and reverse engineering tools. Under such conditions, there are no guarantees that a remote client is actually undergoing all the checks it is supposed to. We try to overcome this limitation by supporting dynamic replacement of the checker module, which is implemented as a dynamic aspect. The checker is bundled as an independent aspect, which is sent to the client at start-up and can be updated dynamically at any time. The update rate can be tuned according to the security requirements of the target application domain (from minutes to days). Nonetheless, other techniques, such as obfuscation, are still a valuable addition to make it even harder for a rogue user to hack the checker code. A checker that is both mobile and obfuscated gives an additional degree of freedom to customize the security level: the stronger the obfuscation, the lower the update rate can be, and vice versa. Under these assumptions we developed another prototype implementation, changing the technology used for interlocking the mobile module with the application to be protected: we used a JVM with aspect-oriented programming (AOP) features, implementing the module as a dynamic aspect that is interlocked with the application through the mechanisms offered by a standard JVM extended with dynamic AOP features.

3.3.1 Aspect Oriented Programming

Aspect-Oriented Programming [20] is a new programming paradigm easing the modularization of crosscutting concerns in object-oriented software development. In particular, developers can remove scattered code related to crosscutting concerns from classes and place it into elements called aspects. This methodology is implemented by different AOP platforms; all these tools rely on their own join-point model, which defines the points along the execution of a program that can possibly be addressed by an aspect. Thus, traditional AOP tools [22,23] perform a compile-time process, called weaving, for the actual insertion of aspect code into pre-existing application source code or bytecode. When using a dynamic AOP platform, weaving can also occur at run-time. Dynamic AOP platforms can be further divided into two categories: platforms with fixed pointcut definition and platforms with dynamic pointcut definition. Most of the available AOP platforms rely on a fixed pointcut definition [25,26,27,28]. This means that application code is instrumented once, at first deployment, in order to identify candidate join-points; then the related advice code can be added at load-time and updated at runtime. Other tools [21,29,30] use a dynamic pointcut definition. In the context of this work, we rely on dynamic AOP platforms with dynamic pointcut definition. We decided to use the latter kind of tools, in particular PROSE [21], in order to exploit the variability of pointcut definitions for our purposes. Indeed, dynamic AOP platforms with dynamic pointcut definition do not rely on the presence of "hooks" in the application code, because these hooks are determined at runtime by the platform, depending on the pointcut definitions included in the dynamically downloaded aspect. This is an advantage, because if the attacker knew where the hooks were placed, he/she could use them as a possible starting point for an attack. Implementing integrity-checking techniques with AOP is intuitively useful because aspects can be seen as additional and renewable code having a privileged view on the


application code and data. Moreover, AOP weavers help to "hook" integrity checkers into the application code in a more general and flexible way, instead of using ad-hoc pre-compilers like most of the current approaches. In particular, the power of pointcut composition rules in AOP is suitable for a flexible management and distribution of checking code over a large code base.

The PROSE platform is implemented as an extension of a standard JVM; for a PROSE JVM, an aspect is a normal Java class containing a set of Crosscut objects. A crosscut contains a method called an advice and a "pointcutter" object (implementing the "pointcut" concept used in many other AOP platforms) identifying at which points in the dynamic execution of the program the advice code should be executed. For example, a pointcutter describes sets of join points by specifying the objects and methods to be considered, or a specific method execution. PROSE offers a rich set of crosscuts: among these, the 'MethodCall' crosscut intercepts method calls. Moreover, PROSE uses a wildcard-based syntax to construct the pointcutters in order to capture join-points that share common characteristics, and it provides logical operators to form complex matching rules by combining simple pointcutters.

3.3.2 Enforcing Trust with Dynamic AOP

We started from an existing prototype of a Java chat system. Thanks to the transparent use of dynamic AOP, the client program needs no changes. Therefore, once the chat client program is released and distributed to users, attackers cannot get clues about the integrity strategy the server will adopt. In contrast with a normal chat application, the client-side program must be executed within the PROSE runtime environment. Concerning the server program, it was extended to integrate with the entrusting server module. As shown in the right-hand side of Figure 5, the main components of the prototype are the Aspect Manager, the Aspect Factory and the Tag Checker. The Aspect Manager provides the Chat Server with the interface to access the entrusting functionalities, namely the registration of a new client and the verification of tagged messages. The Aspect Manager is assisted by the Tag Checker, which validates the tags carried by user messages, and by the Aspect Factory, which dynamically generates the code of the to-be-deployed aspect.

Figure 5: Trust Enforcement based on Dynamic AOP

As shown in the left-hand side of the figure, when a new client comes in, a new aspect is crafted by the Aspect Factory and the Aspect Manager loads it in the execution environment


of the upcoming client. The aspect implements both the code integrity checker (which behaves as a watchdog for the client program and looks for integrity breaches) and the trusted tag generator (which seamlessly appends a tag to each user message). Finally, note that the aspect can be replaced by the Aspect Manager at any moment during runtime. The dynamic aspect is composed of a tag generator crosscut and two checker crosscuts, i.e. the sandbox checker and the bytecode checker, which use the BCEL library [22] to easily access the underlying application bytecode. The replacement of aspects is handled by the PROSE framework, using the underlying Java RMI (Remote Method Invocation) features for downloading new versions of the aspects, which are implemented as Java classes in PROSE.
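As an illustration of what the bytecode checker does, the following sketch computes a SHA-1 checksum of a class's bytecode using BCEL and compares it against a reference value. The actual prototype checker works on the bytecode observed at run-time, whereas this sketch reads the class file from the classpath; class and method names are assumptions for the example.

    import java.security.MessageDigest;
    import org.apache.bcel.Repository;
    import org.apache.bcel.classfile.JavaClass;

    /** Sketch: compute and compare a SHA-1 checksum of a class's bytecode, as a checker crosscut might do. */
    public class BytecodeChecker {

        public static byte[] checksumOf(String className) throws Exception {
            JavaClass clazz = Repository.lookupClass(className); // BCEL view of the class
            byte[] bytecode = clazz.getBytes();                  // raw class-file bytes
            return MessageDigest.getInstance("SHA-1").digest(bytecode);
        }

        public static boolean isGenuine(String className, byte[] expectedChecksum) throws Exception {
            return MessageDigest.isEqual(checksumOf(className), expectedChecksum);
        }
    }

In the prototype, the resulting checksum is not compared locally only: it is also folded into the tags sent to the trusted server, which holds the pre-calculated correct values.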


4. Types of Attacks

The system, i.e. the remote host and the application code, can be subject to three types of attacks by an attacker having the privileges to use the remote application. The attacker's goal is to manipulate the application code before or during application execution without being detected by the trusted server. The first type of attack exploits security flaws of the operating system. The second type of attack exploits security flaws of the remote code. A third type of attack can come from the exchanged module code.

4.1 Attack on the Operating System

An application that executes on an OS is exposed to different attacks. Those attacks can be performed by local users or remote users. As a local user we can consider the administrator, who can operate directly on the computer and can thus use any type of tool to reach his goals. Remote users can also gain administrator privileges by exploiting security flaws of the OS or of the applications. For example, some exploits, such as buffer overflows or input strings of unexpected format, allow the malicious user to open the door for the installation of backdoors and tools that permit him to control the system. In this case the malicious user, local or remote, will be able to corrupt the target applications. In particular, the malicious user can directly observe the target application's execution and will have access to all the hardware and software data structures. The program functionality can be tampered with by the malicious user by working either at the application privilege level or at the kernel level. First, a malicious user observes the program execution, then he analyzes the data obtained from the program execution, and finally he chooses the best way to perform a specific attack.

Some of the possible attacks are the following: DoS attacks, information leakage, regular access to protected files, disinformation, and execution of arbitrary code to gain privileges. A malicious user performing a DoS attack can stop the availability of some OS resource or reduce it illicitly, so that authorized users cannot access it. This type of attack can be performed locally or over the network. Other consequences of this attack are the degradation of process and memory capacity, file disruption, and the deactivation of the operating system. Information leakage enables the malicious user to collect data for a subsequent and more disruptive attack. Regular access to a protected file is performed by a malicious user to obtain sensitive information, such as user names and passwords, and the possibility of changing permissions or file properties. With disinformation, the malicious users give false data to hide their activity, thus reducing the promptness in the protection of the software of the damaged system. The standard procedures used by a malicious user are: the modification of log files, the use of rootkits, and the installation of kernel modules. The first one is a basic technique to cover the intrusion; rootkits are more advanced tools that are used to substitute system programs; finally, the installation of a kernel module is a more advanced technique that allows the malicious user to compromise the integrity of the system at the kernel level. Finally, the execution of arbitrary code allows the malicious user to execute code, by using automated tools, in the memory space of the target application.


4.1.1 Methodology of the Attack to the remote host

This section describes some methodologies used to exploit system vulnerabilities, through which a malicious user can gain access to the remote host. Most of the time, the methodology for finding system vulnerabilities consists in probing the weaknesses of the system. The choice of attack depends on the availability of the code and on the time and tools available. The first method is code analysis, which implies decompiling the application and searching for relevant functions which can be exploited to perform an attack. In this way it is possible to find functions vulnerable to buffer overflow, such as strcpy, sprintf, strcat and gets of the C language. Another method of vulnerability search is difference comparison. This method consists in running the diff utility on different versions of the same program; in this way the malicious user obtains information about the modifications performed. The method of searching in binary code can be performed by means of tools such as: tracers (for tracing the application's execution paths), debuggers (for executing the application step by step and then hijacking the execution to malicious code when needed), and packet sniffers (used to copy all the network packet traffic involving the host where the application is running). While the former methods consider the application as a black box, other methods are based on reverse engineering the application code back to source code by means of tools like decompilers or disassemblers. With a disassembler it is possible to transform the binary code into assembly language, and a decompiler can produce a C language version of the binary application. After reverse engineering, the attacker has to understand the generated source code, modify it for his/her purposes, recompile it and run it in place of the original application. Some possible attacks performed by malicious users are described in the following paragraphs.

4.1.1.1 Buffer Overflow

Many cases of buffer overflow happen as a result of security vulnerabilities present in the application code or in the system libraries it uses. For example, in the C language the function strcpy copies a source string into a destination buffer. If the source string is longer than the destination buffer, the exceeding data are written into the surrounding memory. The bug is exploited by overwriting critical values with malicious code.

4.1.1.2 Symbolic Links

Symbolic link attacks can be used by the malicious user in different ways: to change file permissions, to damage files by adding data, or to overwrite their content. These types of attacks are often launched from the temporary directory of the system and usually depend on programming errors.

4.1.1.3 Manipulation of core files

Core files (core dumps) are images of memory that the system creates when errors occur during process execution. Memory can contain sensitive information, such as password hashes. A malicious user can write a program to recover this kind of information by core-dump analysis.

4.1.1.4 Shared Library

Shared libraries allow executable programs to use code available in a common library. The library code is linked to the application when it is compiled. If a


malicious user is able to modify a library, he can obtain control over the applications that use it.

4.1.1.5 Kernel rootkit and LKM

A rootkit is a tool for disinformation: it is used by a malicious user to hide his actions from the authorized user. A traditional rootkit substitutes the original programs of the system with tampered ones. These tampered programs give false information to the authorized user and hence hide the actions of the malicious user. Another variant of the rootkit is kernel-based: it modifies the kernel being executed, and can thus cause deviations from the normal execution of a program without modifying the program's code. A kernel rootkit can be loaded into the system as a kernel module; to load such a module the operating system uses a functionality called LKM (Loadable Kernel Module), without recompiling the kernel. A kernel module operates in the same privilege space as the kernel, and can intercept system calls, thus filtering any sensitive data. Once a rootkit is installed on the system, it is very difficult to detect.

4.2 Attack to the application code

An application running on the untrusted host totally exposes its code, data and state to the untrusted host that executes it. A malicious user can try to attack it in different ways.

4.2.1 Information leak and data modification

The application code has to be readable by the untrusted host. A malicious user could read and remember the instructions that are going to be executed and might infer the rest of the program based on that knowledge. Thus the malicious user can get to know the strategy and the purpose of the application code.

4.2.2 Remote execution flow modification

If a malicious user knows the application source code, its data and the physical location of its program counter, he/she can infer which instruction will be executed next. Moreover, he/she can deduce the state of the application program execution, and might then change the execution flow at will to achieve his/her goal.

4.2.3 Denial of Service

A malicious untrusted computer can simply not execute the module migrated to it, or put the agent into a waiting list, thus causing delays to the application program.

4.2.4 Interaction between remote computer and other parties

A malicious user can eavesdrop on the communications between the remote application and the trusted server, for example using a packet sniffer. Both developed prototypes encrypt the communications between the mobile module and the trusted server in order to prevent such an attack.

4.3 Attack to the module code

A set of possible attacks can be launched from the delivered module. A malicious user can use an old module to perform an attack on the application code. Another way for a malicious user to attack the application code is to substitute the module delivered by the trusted server with malicious module code. Thus the malicious user may exploit the security


weaknesses of the platform and launch attacks against the remote host. Once the malicious user gains access to the remote machine, he/she can launch an attack on the application code.

4.3.1 Masquerading

A malicious user can change the application/module code while it is being sent to the untrusted host. When the untrusted machine executes this code, the malicious user can gain, for example, unauthorized access to the untrusted computer's resources.

4.3.2 Denial of service

A corrupted module that deliberately exploits system vulnerabilities, or a module that unintentionally contains programming errors, can consume an excessive amount of resources on the untrusted host.

4.3.3 Unauthorized access

A malicious user can insert security bugs into the module code of another user to gain unauthorized access to the computer's resources.


5. Possible prevention

In this section we illustrate a possible prevention for the attacks that come from the module: the masquerading attack, the DoS attack, and the unauthorized access attack. These attacks can be performed by a malicious user who changes the module delivered by the trusted server, i.e. replaces the module code with a corrupted copy of it. This attack can be prevented by using a cryptographic protocol between the untrusted host and the trusted host: the remote machine can detect whether the module code has been substituted by a malicious user.

5.1.1 The cryptographic protocol

Let us assume that the untrusted host and the trusted host share a key. To establish a secure module exchange between the untrusted host and the trusted server, the following phases are executed: authentication phase, key exchange phase, and module exchange phase.

Figure 6: The phases of secure module exchange between client and server: authentication phase, key exchange, and module exchange.

5.1.1.1 Authentication phase

The authentication phase is the initial phase of the protocol. In this phase the untrusted host and the trusted host use a mutual authentication mechanism. The authentication process works as follows: the untrusted host sends to the trusted host a sequence number and its id; the sequence number is used to identify the run and the untrusted host id corresponds to the untrusted host's identity. The trusted host answers the untrusted host by sending a random number. The untrusted host sends back to the trusted host the random number encrypted using the secret key of the client.

Figure 7 summarizes the exchange:
client -> server: Id, RC, seq_num1
server -> client: [Id, RC', RS]_shared_key, ack(seq_num1), seq_num2
client -> server: ack(seq_num2), [RC, RS']_shared_key, seq_num3

Figure 7: The client-server mutual authentication protocol

The untrusted host has a unique id, so the trusted host is able to distinguish the different untrusted hosts. The random number sent from the trusted host to the untrusted host is different in every run, in order to avoid replay attacks. When the untrusted host receives the random number from the server, it encrypts it with the shared key and sends it back to the server; in this way the identity of the untrusted host is authenticated.
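As a rough illustration of the server-side check in this challenge-response exchange, the sketch below verifies that the client returned the challenge encrypted under the shared key. The class and method names are hypothetical, and a single-block ECB encryption is used only because the challenge fits in one AES block; the actual prototypes may encode the messages differently.

// Illustrative sketch of the challenge-response check described above.
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class MutualAuth {

    // Server side: create a fresh 16-byte (one AES block) challenge for a client.
    public static byte[] newChallenge() {
        byte[] challenge = new byte[16];
        new SecureRandom().nextBytes(challenge);
        return challenge;
    }

    // Server side: the client is authenticated if its response equals the
    // challenge encrypted under the key shared with that client id.
    public static boolean verifyClient(byte[] challenge, byte[] response,
                                       SecretKeySpec sharedKey) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, sharedKey);
        byte[] expected = cipher.doFinal(challenge);
        return Arrays.equals(expected, response);
    }
}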

5.1.1.2 Key Exchange After the authentication process the trusted host starts the key exchange phase, which consists of the following steps: the untrusted host asks the trusted server to exchange a new key, and the trusted host sends the untrusted host a new key encrypted with the shared key. The new key is stored securely on the remote host together with a timestamp that defines the key's lifetime. When the key expires, the hosts start a new key exchange session.

Figure 8 summarizes the key exchange messages, all protected with the shared key:
client -> server: [ask_new_key, seq_num]_key_shared
server -> client: [new_key, seq_num]_key_shared
client -> server: [ack_key_exc, seq_num]_key_shared

Figure 8: The client-server key exchange protocol
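The protocol description leaves the storage of the new key and its timestamp to the implementation. A minimal sketch of that bookkeeping on the untrusted host is given below; the class name SessionKeyStore and the lifetime parameter are assumptions, not part of the prototypes' documented interfaces.

// Hypothetical sketch: the new session key is stored with a timestamp, and a
// new key exchange is triggered once the key has expired.
public class SessionKeyStore {
    private byte[] sessionKey;
    private long expiresAtMillis;

    public synchronized void install(byte[] newKey, long lifetimeMillis) {
        this.sessionKey = newKey.clone();
        this.expiresAtMillis = System.currentTimeMillis() + lifetimeMillis;
    }

    public synchronized boolean isExpired() {
        return sessionKey == null || System.currentTimeMillis() >= expiresAtMillis;
    }

    public synchronized byte[] currentKey() {
        if (isExpired()) {
            throw new IllegalStateException("session key expired: start a new key exchange");
        }
        return sessionKey.clone();
    }
}

When isExpired() returns true, the client would simply run the exchange of Figure 8 again to obtain a fresh key.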

5.1.1.3 Using the protocol The proposed protocol can be used to exchange the module securely thus avoiding the masquerading attack.

Figure 9 shows two consecutive module deliveries, each encrypted under the most recent session key:
server -> client: [module_1]_new_key
(key exchange establishing new_key')
server -> client: [module_2]_new_key'

Figure 9: The module exchange protocol
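To make the module exchange step concrete, the sketch below shows how the untrusted host might decrypt a received module with the current session key and define it as a Java class. The ModuleLoader name and the IV-prefixed AES/CBC format (mirroring the earlier ChannelCrypto sketch) are assumptions for this example; the deliverable only states that each module is sent encrypted under the current session key.

// Illustrative sketch: decrypt the delivered module and load it as a fresh class.
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class ModuleLoader extends ClassLoader {

    public Class<?> loadModule(String className, byte[] encryptedModule,
                               byte[] sessionKey) throws Exception {
        byte[] bytecode = decrypt(encryptedModule, sessionKey);
        return defineClass(className, bytecode, 0, bytecode.length);
    }

    // Mirrors the ChannelCrypto sketch above: the first 16 bytes carry the IV,
    // the remainder is the AES/CBC ciphertext of the module's bytecode.
    private byte[] decrypt(byte[] data, byte[] key) throws Exception {
        byte[] iv = Arrays.copyOfRange(data, 0, 16);
        byte[] ciphertext = Arrays.copyOfRange(data, 16, data.length);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return cipher.doFinal(ciphertext);
    }
}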

5.1.2 Attacks detection The described attacks on the remote host that originate from the exchanged module can be detected through the mutual authentication protocol: once the server and the client are authenticated, they can exchange the module securely.

6. Open Security Issue The goal of remote trust is to assure that the remote code runs unmodified on the untrusted host. A host is considered untrusted if a local or remote user has successfully performed an attack, or if its administrator is a malicious user. In the first case a possible defence is the use of an intrusion detection system (IDS), which prevents unauthorized access to the application code; in the second case the RE-TRUST solution proposed here is suitable, and it can also be applied in the first case as a second line of defence. A possible prevention against attacks launched by the machine on the executing code is a cryptographic protocol for exchanging code invariants and other critical variables, which can be implemented to detect tampering with the remote code.

7. Conclusion The deliverable presents: (1) a prototype to solve the remote trust problem; (2) a software implementation of the prototype; (3) a list of possible threats to the software implementation; (4) a possible prevention against the attacks that come from the module to the remote host. This document describes an all-in-software methodology for the remote verification of the correct execution of client-side application code. The proposed solutions extend state-of-the-art integrity-checking techniques by providing automated and periodic replacement of the checking code at run-time. Furthermore, our approach supports continuous attestation of integrity by a remote trusted server. In particular, we analyzed how to interlock the mobile module with the application to be entrusted by means of two prototype implementations: one based on the JVMTI interface of Java 5 virtual machines, and one based on an extended JVM offering dynamic AOP features. They proved to be a powerful and effective technique for seamlessly interlocking the integrity checker with the application program and for handling dynamic replacement of the integrity-checker module at run-time. As a positive outcome, the protection strategy adopted by the module is not visible through static code analysis, and the attacker's task is made even harder by the continuous replacement of the checking aspect. In our prototypes, integrity checking is based on secure checksums of the executed bytecode, which are continuously sent encrypted to the trusted server in order to be compared with pre-calculated correct values.

7.1 Future Work As future work we plan to improve the protocol for exchanging attestations of valid remote application execution, and to address attacks based on the operating system.
