Software Component Services for Windows CE


Master Thesis D-level

Authors: Piero Belviso, Federico Capuani

Supervisor : Frank Lüders

Examiner : Ivica Crnkovic

Department of Computer Science and Electronics, Mälardalen University, Västerås, Sweden
March 14, 2006

Abstract

In recent years, the use of software component models has become popular, in particular in the development of desktop and server-side software. Because of the special requirements embedded real-time systems have to meet, in particular with respect to timing predictability and the limited use of resources such as memory and CPU time, such models have not been as popular in that domain. Much research has therefore been directed towards defining new component models for real-time and embedded systems, typically focusing on relatively small and statically configured systems.

In this project we use software component models for embedded real-time systems; in particular Microsoft COM (the Component Object Model), with Windows CE as the target platform, though the software will be developed in a Windows desktop environment. We pay particular attention to making our solution as memory- and CPU-efficient as possible.

This project’s goal was to create a proxy which can be used for testing different aspects of real-time systems, for instance timing specifications and resource usage. Our solution builds on an existing tool, F.A.G Unit, a code generator written in C# which generates code in the C++ programming language. We have ported this tool to Windows CE, so that the proxy code the tool creates is fully compatible with an embedded real-time system, Windows CE.

Acknowledgement

This thesis was written during the period between October and March 2006, as part of the Erasmus programme, so we would like to thank both Mälardalen University and the University of L’Aquila for giving us this opportunity.

We wish to thank our Italian teacher Paola Inverardi, and our examiner Ivica Crnkovic for his support.

We would particularly like to thank Frank Lüders for his help and support during our work, and especially for his infinite availability.

Finally, we wish to thank all our family and friends, especially the new friends we met in Sweden, who supported us throughout this unforgettable period.

Table of Contents

Abstract 2
Acknowledgement 3
1 – Introduction 6
1.1 – Overview CBSE 7
1.1.1 – CBSE and Reuse 10
1.1.2 – CBSE and Related Development Paradigms 10
1.2 – Real-Time Systems 11
1.3 – Embedded Real-Time Systems 13
1.3.1 – Embedded Systems 13
1.3.2 – Task Management and Scheduling 14
1.3.3 – Interrupt Services 15
1.3.4 – Communication and Synchronization 16
1.3.5 – Memory Management 16
1.3.6 – Embedded Systems 16
1.3.7 – Trade-offs 17
1.3.8 – Timing Constraints 19
1.3.9 – Embedded OS 20
2 – Component Model Technology 24
2.1 – COM/DCOM 24
2.1.1 – COM 25
2.1.2 – COM: The Programming Model 25
2.1.3 – COM Objects 26
2.1.4 – Interface 26
  Interface Navigation 28
  Lifetime Management 28
2.1.5 – COM Server 29
2.1.6 – DCOM 31
2.2 – Overview of the .NET Framework 32
2.2.1 – Common Language Runtime 33
2.2.2 – .NET Class Library 34
2.2.3 – .NET Features 35
2.2.4 – Application Development in .NET 37
2.3 – Overview of Windows CE 38
2.4 – Component Models for ERTS 41
3 – Practical Part 43
3.1 – The Existing Tool 43
3.1.1 – Logging Service 45
3.1.2 – Execution Time Measurement 46
3.2 – Porting to WinCE and Prototype Tool 47
3.3 – Added Services 50
3.3.1 – Synchronization 50
3.3.2 – Timeout 51
4 – Conclusion 52
5 – References 54

1 - Introduction

A sensible approach to reducing the complexity of modern software systems is the application of a technique: building systems out of components. Component-Based Software Engineering (CBSE) advocates this sensible approach. Unlike other programming abstractions, such as objects, software components are expected to accommodate certain non-functional requirements, such as commercial viability, and must be built 'defensively' with certain minimum assumptions about their deployment environments. They are expected to be deployed and used only in binary form in a language-independent (and sometimes even platform-independent) manner.

1.1 - Overview CBSE

The complexity of software systems sometimes lingers on the border of human comprehension and often hinders their timely development, effective deployment, proper maintenance, and incremental evolution. On the other hand, because complex software systems involve heavy investments of human and financial resources, they are likely to be critical to the missions of the institutions where they are deployed for very long periods of time. The longevity and significance of such software as mission-critical systems makes their change through maintenance and evolution their inescapable fate, and this intensifies the problem of coping with their complexity throughout their long lives. A sensible approach to reducing this complexity to manageable levels is the application of a technique that has already been successfully deployed in other, older production environments: building systems out of components. Component-Based Software Engineering (CBSE) advocates this sensible approach. Unlike other programming abstractions, such as objects, software components are expected to accommodate certain non-functional requirements, such as commercial viability, and must be built 'defensively' with certain minimum assumptions about their deployment environments. They are expected to be deployed and used only in binary form in a language-independent (and sometimes even platform-independent) manner.

The current popular view of components appears to be summed up by Szyperski: “Software components are binary units of independent production, acquisition and deployment that interact to form a functioning system.” [1]

A component is a software artifact consisting of three parts: a service interface, a client interface and an implementation. Roughly speaking, the service interface consists of the services that the component exports to the rest of the world; the client interface consists of the services this component uses that are exported by other components; and the implementation is the code necessary for the component to execute its intended functionality. To enforce the idea that a component must interact with other software, we might also want to include the property that the service interface or the client interface may be empty, but not both.
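As an illustration, the three parts can be sketched in C++ (all names here are invented for the example, not taken from the thesis): an exported service interface, a required client interface, and the implementation that binds them together.

```cpp
#include <string>

// Hypothetical service interface: what the component exports to the world.
struct ITemperatureSensor {
    virtual double readCelsius() = 0;
    virtual ~ITemperatureSensor() = default;
};

// Hypothetical client interface: what the component requires from others.
struct ILogger {
    virtual void log(const std::string& msg) = 0;
    virtual ~ILogger() = default;
};

// The implementation ties the two together: it exports ITemperatureSensor
// and imports an ILogger supplied by its environment.
class SensorComponent : public ITemperatureSensor {
public:
    explicit SensorComponent(ILogger* logger) : logger_(logger) {}
    double readCelsius() override {
        if (logger_) logger_->log("readCelsius called");
        return 21.5;  // stub reading for illustration
    }
private:
    ILogger* logger_;  // required interface, bound at composition time
};
```

At composition time a concrete `ILogger` implemented by some other component is passed in; wiring such dependencies together is exactly the role of the glue code discussed below.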

Component-based software architectures rationally postulate two types of code: the code that comprises individual software components, i.e., component code; and the code that composes a set of individual components into a coherent system, i.e., the glue code. Variants of the so-called scripting languages have been proposed and used with some success as glue-code languages. Regardless of what (programming, scripting, etc.) language is used to write the glue code, the success of the component-based methodology requires the semantics of the glue code to be independent of the internal semantics of the components it composes into a system. This semantic independence requires an interface layer where the externally observable semantics of a component can be precisely described independently of its irrelevant internal semantics.

Software components are building blocks of software. In today's world, a software component is any piece of pre-written code, with defined interfaces, that can be called to provide the functionality that the component encapsulates. These are typically packaged in "industry standard" ways so that they can be called from multiple languages, or from multiple environments. Typically components are created as Microsoft® .NET or Microsoft Component Object Model™ (COM) components, Java™ 2 Platform Enterprise Edition (J2EE) or JavaBeans™ components, Borland Delphi™ VCLs, or a number of other lesser-known architectures. The new paradigm of assembling components and writing code to make these components work together has a name, and of course an acronym: Component-Based Development (CBD). The whole discipline, including component identification, development, adoption and integration into larger software systems, is called Component-Based Software Engineering (CBSE).

Component-based software development is being proposed as a means of reducing costs while accelerating software development. The drive to use components to construct software systems stems from a ‘parts’ philosophy derived from traditional engineering disciplines that promises instant productivity gains, accelerated time to market and lower development costs.

This idea is simple: to build software systems using modules (components), as a builder builds a house from independent modules. Each module has a specification and an implementation, and the modules are then composed to build the final software. For this purpose, the interfaces which a component provides and requires are used like the connectors in a "Lego piece". [2]

Component-Based Development is gaining recognition as the key technology for the construction of high-quality, evolvable, large software systems in a timely and affordable manner. Constructing an application under this new setting involves the assembly/composition of prefabricated, reusable and independent pieces of software called components. A component should be able to be developed, acquired, incorporated into the system and composed with other components independently in time and space. The ultimate goal, once again, is to reduce development costs and efforts, while improving the flexibility, reliability, and reusability of the final application due to the (re)use of software components already tested and validated. Component Oriented Programming aims at producing software components for a component market and for later composition (composers are third parties). This requires standards to allow independently created components to interoperate, and specifications that put the composer in a position to decide what can be composed under which conditions.

1.1.1 - CBSE and Reuse

In the early 90's a new type of software component came into existence: commercially available components that could be purchased. ComponentSource coined a new term to describe these types of components: Open Market Components. Open Market Components are reusable software components that are available for purchase off the shelf. Many of these components are based on a standard component architecture such as COM or Java, can be purchased without also having to buy support, integration, or other types of services, and are sold as genuinely plug-and-play components. There is a plethora of open market components available, over 1,000 at this writing, to make a programmer's job easier and, at the same time, allow them to concentrate on their core competency tasks, implementing their corporation's defined business processes or functionality, instead of having to write all sorts of components or routines to do common things like data display, charting, calculations, and algorithms, many of which are available on the open market. Reuse has become a reality because programmers are able to reuse open market components, which is really just code that someone else has written, tested, and documented. In addition to the term Open Market Component, there are many other terms with very similar meanings that basically refer to the same thing with a slight twist. One of the most commonly used is commercial off-the-shelf (COTS) software. Other terms such as off-the-shelf (OTS) and Software of Unknown Pedigree (SOUP) are also commonly used. [2]

1.1.2 - CBSE and related development paradigms

The majority of research and development in CBSE has focused on the development and use of components within two development paradigms:

 A RAD (rapid application development) paradigm, where visual tools such as interface builders and form designers are used to create the user interface to an application, and components are associated with elements identified in the interface. Microsoft, through Visual Basic and Visual C++, has been the principal proponent of this approach, and its success has been largely due to the extensive libraries of COM components that are available. This approach uses iterative development and is particularly suitable for the development of small to medium-sized business systems.

 A ‘design-driven’ paradigm where the software is developed using a ‘conventional’ software life cycle. A software design is developed from a specification and this design is programmed in an object-oriented programming language (normally Java). Design notations such as the UML may be used. This approach may involve variable amounts of iteration and is most suited to the development of medium to large systems with demanding performance or dependability requirements.

1.2 – Real-Time Systems

A real-time system is a system whose correctness depends not only on correct functionality, but also on the timing of the delivered functionality. Consequently, the correct results should be delivered neither too early nor too late. Real-time systems are often embedded; hence, resources such as computational bandwidth and memory are scarce.

Temporal requirements may take many different shapes. In general, it is the physical environment that imposes the timing constraints on the system. Existing methods for analyzing a real-time system, in order to determine whether or not the temporal requirements are fulfilled, require information about the software architecture, e.g. the scheduling policy, and about the temporal behavior of the different services in the system, e.g. period times, scheduling priorities, and execution times.

Timeliness is the single most important aspect of a real-time system. These systems respond to a series of external inputs, which arrive in an unpredictable fashion. A real-time system processes these inputs, makes appropriate decisions, and also generates the output necessary to control the peripherals connected to it. As defined by Donald Gillies: “A real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time in which the result is produced. If the timing constraints are not met, system failure is said to have occurred.” [3]

It is essential that the timing constraints of the system are guaranteed to be met. Guaranteeing timing behaviour requires that the system be predictable. The design of a real-time system must specify the timing requirements of the system and ensure that the system performance is both correct and timely.

There are three types of time constraints:

 Hard: A late response is incorrect and implies a system failure. An example of such a system is of medical equipment monitoring vital functions of a human body, where a late response would be considered as a failure.

 Soft: Timeliness requirements are defined by using an average response time. If a single computation is late, it is not usually significant, although repeated late computations can result in system failures. An example of such a system is an airline reservation system.

 Firm: This is a combination of both hard and soft timeliness requirements. The computation has a shorter soft requirement and a longer hard requirement. For example, a patient ventilator must mechanically ventilate the patient a certain amount in a given time period. A few seconds’ delay in the initiation of breath is allowed, but not more than that.
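A firm constraint, for instance, can be sketched as a check of a task's completion time against its two deadlines. The function name and the millisecond values below are purely illustrative, not from the thesis:

```cpp
// Outcome of a computation under a firm timing constraint.
enum class Outcome { Ok, Degraded, Failure };

// Firm constraint: a shorter soft deadline and a longer hard deadline.
// Finishing before the soft deadline is ideal; between the two deadlines
// is late but tolerable; past the hard deadline is a system failure.
Outcome classify(double completionMs, double softMs, double hardMs) {
    if (completionMs <= softMs) return Outcome::Ok;
    if (completionMs <= hardMs) return Outcome::Degraded;
    return Outcome::Failure;
}
```

A hard constraint is the special case where the two deadlines coincide; a soft constraint would instead aggregate lateness over many computations.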

One needs to distinguish between on-line systems, such as an airline reservation system, which operate in real time but with much less severe timeliness constraints than, say, a missile control system or a telephone switch. An interactive system with good response time is not necessarily a real-time system. These types of systems are often referred to as soft real-time systems. In a soft real-time system (such as the airline reservation system) late data is still good data. However, for hard real-time systems, late data is bad data. Most real-time systems interface with and control hardware directly. The software for such systems is mostly custom-developed. Real-time applications can be either embedded applications or non-embedded (desktop) applications. Real-time systems often do not have the standard peripherals associated with a desktop computer, namely the keyboard, mouse, or conventional display monitor. In most instances, real-time systems have a customized version of these devices.

1.3 – Embedded Real-Time Systems

Real-time and embedded operating systems are in most respects similar to general purpose operating systems: they provide the interface between application programs and the system hardware, and they rely on basically the same set of programming primitives and concepts. But they apply these primitives differently from general purpose operating systems, because they have different goals.

1.3.1 - Embedded Systems

Embedded systems do not provide standard computing services and normally exist as part of a bigger system. A computerized washing machine is an example of an embedded system where the main system provides a non-computing feature (washing clothes) with the help of an embedded computer. Embedded systems are usually constructed with the least powerful computers that can meet the functional and performance requirements. This is essential to lower the manufacturing cost of the equipment. Other components of the embedded system are similarly chosen, so as to lower the manufacturing cost. In conventional operating systems, a programmer needing to store a large data structure can allocate big chunks of memory without having to think of the consequences. These systems have enough main memory and a large pool of virtual memory (in the form of disk space) to support such allocations. The embedded systems’ developers do not enjoy such luxuries and have to rely on complex algorithms to manage resources in the most optimized manner. In most real-life applications, real-time systems work in an embedded scenario, and most embedded systems have real-time processing needs. Such software is called real-time embedded software. Typical examples of embedded applications include microwave ovens, washing machines, telecommunication equipment, etc.

1.3.2 - Task Management and Scheduling

Task (or “process”, or “thread”) management is a primary job of the operating system: tasks must be created and deleted while the system is running; tasks can change their priority levels, their timing constraints, their memory needs; etcetera. Task management for an RTOS (Real-Time Operating System) is a bit more dangerous than for a general purpose OS: if a real-time task is created, it has to get the memory it needs without delay, and that memory has to be locked in main memory in order to avoid access latencies due to swapping; changing run-time priorities influences the run-time behaviour of the whole system and hence also the predictability which is so important for an RTOS. So, dynamic process management is a potential headache for an RTOS.

In general, multiple tasks will be active at the same time, and the OS is responsible for sharing the available resources (CPU time, memory, etc.) over the tasks. The CPU is one of the important resources, and deciding how to share the CPU over the tasks is called “scheduling”. The general trade-off made in scheduling algorithms is between, on the one hand, the simplicity (and hence efficiency) of the algorithm, and, on the other hand, its optimality. Algorithms that want to be globally optimal are usually quite complex, and/or require knowledge about a large number of task parameters, that are often not straightforward to find on line (e.g., the duration of the next run of a specific task; the time instants when sleeping tasks will become ready to run; etc.).

Real-time and embedded operating systems favour simple scheduling algorithms, because these take a small and deterministic amount of computing time and require a small memory footprint for their code. General purpose and real-time operating systems differ considerably in their scheduling algorithms. They use the same basic principles, but apply them differently because they have to satisfy different performance criteria: a general purpose OS aims at maximum average throughput, a real-time OS aims at deterministic behaviour, and an embedded OS wants to keep memory footprint and power consumption low.
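As a sketch of such a simple, deterministic policy (not the algorithm of any particular RTOS), a fixed-priority dispatcher can be a single scan over a small, statically sized task table: no allocation, and a running time that depends only on the table size.

```cpp
#include <cstddef>
#include <vector>

// A minimal task descriptor; in a real kernel this would also carry
// timing constraints, a saved context, memory needs, and so on.
struct Task {
    int  id;
    int  priority;  // larger value = more urgent
    bool ready;
};

// Fixed-priority dispatch: return the id of the highest-priority ready
// task, or -1 if nothing is ready (the idle case). One bounded scan,
// so the dispatch time is deterministic.
int pickNext(const std::vector<Task>& tasks) {
    int bestIdx = -1;
    for (std::size_t i = 0; i < tasks.size(); ++i) {
        if (!tasks[i].ready) continue;
        if (bestIdx < 0 || tasks[i].priority > tasks[bestIdx].priority)
            bestIdx = static_cast<int>(i);
    }
    return bestIdx < 0 ? -1 : tasks[bestIdx].id;
}
```

A globally optimal scheduler would need per-task execution times and release instants, exactly the knowledge the paragraph above notes is hard to obtain on-line.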

1.3.3 - Interrupt Services

An operating system must not only be able to schedule tasks according to a deterministic algorithm, but it also has to service peripheral hardware, such as timers, motors, sensors, communication devices, disks, etc. All of those can request the attention of the OS asynchronously, i.e., at the time that they want to use the OS services, the OS has to make sure it is ready to service the requests. Such a request for attention is often signaled by means of an interrupt.

There are two kinds of interrupts:

 Hardware interrupt. The peripheral device can put a bit on a particular hardware channel that triggers the processor(s) on which the OS runs, to signal that the device needs servicing. The result of this trigger is that the processor saves its current state, and jumps to an address in its memory space, that has been connected to the hardware interrupt at initialisation time.

 Software interrupt. Many processors have built-in software instructions with which the effect of a hardware interrupt can be generated in software. The result of a software interrupt is also a triggering of the processor, so that it jumps to a pre-specified address.

The operating system is, in principle, not involved in the execution of the code triggered by the hardware interrupt: this is taken care of by the CPU without software interference. The OS, however, does have influence on (i) the connection of a memory address to every interrupt line, and (ii) what has to be done immediately after the interrupt has been serviced, i.e., how “deferred interrupt servicing” is to be handled. [4] Obviously, real-time operating systems have a specific approach towards working with interrupts, because interrupts are a primary means to guarantee that tasks get serviced deterministically.
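Conceptually, step (i) above amounts to filling a table of handler addresses at initialisation time. The following sketch simulates this in plain C++ (the handler, the counter, and the line numbers are invented for illustration; real hardware also saves the processor state, which is elided here):

```cpp
#include <array>
#include <cstddef>

// Simulated interrupt vector table: each interrupt line is bound to a
// handler address at initialisation time.
using Handler = void (*)();
constexpr std::size_t kNumLines = 8;
std::array<Handler, kNumLines> vectorTable{};

int timerTicks = 0;
void onTimer() { ++timerTicks; }  // hypothetical timer interrupt handler

void installHandler(std::size_t line, Handler h) {
    if (line < kNumLines) vectorTable[line] = h;
}

// What the hardware does conceptually when a device raises an interrupt:
// jump to the address registered for that line (state saving elided).
void raiseInterrupt(std::size_t line) {
    if (line < kNumLines && vectorTable[line]) vectorTable[line]();
}
```

What the handler defers to later ("deferred interrupt servicing") is then an OS policy decision, not a hardware one.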

1.3.4 - Communication and Synchronization

Another responsibility of an OS is commonly known under the name of Inter-Process Communication (IPC). (“Process” is, in this context, just another name for “task”.) The general name IPC collects a large set of programming primitives that the operating system makes available to tasks that need to exchange information with other tasks, or synchronize their actions. Again, an RTOS has to make sure that this communication and synchronization take place in a deterministic way. Besides communication and synchronization with other tasks that run on the same computer, some tasks also need to talk to other computers, or to peripheral hardware (such as analog input or output cards). This involves some peripheral hardware, such as a serial line or a network, and special purpose device drivers.
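A minimal sketch of such synchronization, using the C++ standard library's primitives in place of OS-specific IPC calls, is a one-value handoff between two tasks: the consumer blocks until the producer has published its value and signalled the condition.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Synchronized handoff of one value between two tasks. The mutex protects
// the shared state; the condition variable lets the consumer sleep until
// the producer signals that the value is ready.
int handoff(int value) {
    std::mutex m;
    std::condition_variable cv;
    bool ready = false;
    int shared = 0;

    std::thread producer([&] {
        std::lock_guard<std::mutex> lk(m);
        shared = value;
        ready = true;
        cv.notify_one();  // wake the waiting consumer
    });

    int result = 0;
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return ready; });  // block until signalled
        result = shared;
    }
    producer.join();
    return result;
}
```

An RTOS offers the same pattern but must bound how long the waiting task can be blocked, which is where determinism enters.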

1.3.5 - Memory Management

A responsibility of the OS is memory management: the different tasks in the system all require part of the available memory, often to be placed at specified hardware addresses (for memory-mapped IO). The job of the OS then is (i) to give each task the memory it needs (memory allocation), (ii) to map the real memory onto the address ranges used in the different tasks (memory mapping), and (iii) to take the appropriate action when a task uses memory that it has not allocated. (Common causes are: dangling pointers and array indexing beyond the bounds of the array.) This is the so-called memory protection feature of the OS. Of course, what exactly the “appropriate action” should be depends on the application; often it boils down to the simplest solution: killing the task and notifying the user.

1.3.6 - Embedded Systems

Embedded systems do not provide standard computing services and normally exist as part of a bigger system. A computerized washing machine is an example of an embedded system where the main system provides a non-computing feature (washing clothes) with the help of an embedded computer.

Embedded systems are usually constructed with the least powerful computers that can meet the functional and performance requirements. This is essential to lower the manufacturing cost of the equipment. Other components of the embedded system are similarly chosen, so as to lower the manufacturing cost. In conventional operating systems, a programmer needing to store a large data structure can allocate big chunks of memory without having to think of the consequences. These systems have enough main memory and a large pool of virtual memory (in the form of disk space) to support such allocations. The embedded systems’ developers do not enjoy such luxuries and have to rely on complex algorithms to manage resources in the most optimized manner. In most real-life applications, real-time systems work in an embedded scenario, and most embedded systems have real-time processing needs. Such software is called real-time embedded software. Typical examples of embedded applications include microwave ovens, washing machines, telecommunication equipment, etc.

1.3.7 - Trade-offs

This section discusses some of the trade-offs that operating system designers, both for general purpose and for real-time and embedded systems, commonly make.

Kernel space versus user space versus real-time space.

Most modern processors allow programs to run at two different hardware protection levels. Linux calls these two levels kernel space and user space. The latter has more protection against erroneous accesses to physical memory or I/O devices, but accesses most of the hardware with larger latencies than kernel-space tasks. The real-time Linux variants add a third layer, the real-time space. This is in fact nothing but a part of kernel space used in a particular way.

Monolithic kernel versus micro-kernel.

A monolithic kernel has all OS services (including device drivers, network stacks, file systems, etc.) running within the privileged mode of the processor. (This doesn’t mean that the whole kernel is one single C file!) A micro-kernel, on the other hand, uses the privileged mode only for really core services (task management and scheduling, interprocess communication, interrupt handling, and memory management), and has most of the device drivers and OS services running as “normal” tasks. The trade-off between the two is as follows: a monolithic kernel is easier to make efficient (because OS services can run completely without switches from privileged to non-privileged mode), but a micro-kernel is more difficult to crash (an error in a device driver that doesn’t run in privileged mode is less likely to cause a system halt than an error occurring in privileged mode). However, more and more embedded systems have footprints of more than a megabyte, because they also require network stacks and various communication functionalities.

Memory management versus shared memory.

Virtual memory and dynamic allocation and de-allocation of memory pages are amongst the most commonly used memory management services of a general purpose operating system. However, this memory management induces overhead, and some simpler processors have no support for this memory management. On these processors (which power an enormous number of embedded systems!), all tasks share the same memory space, such that developers must take care of the proper use of that memory. Also some real-time kernels (such as RTLinux) have all their tasks share the same address space (even if the processor supports memory management), because this allows more efficient code.
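On such processors, a common discipline is to reserve all memory statically up front and hand out fixed-size blocks from a pool, avoiding any runtime heap growth. The following is an illustrative sketch only; the block size and count are arbitrary:

```cpp
#include <cstddef>

// Fixed-block pool: all storage is reserved statically, so there is no
// dynamic heap and allocation cost is bounded by the (small) block count —
// a common pattern on processors without memory-management hardware.
constexpr std::size_t kBlockSize = 32;
constexpr std::size_t kNumBlocks = 4;

class BlockPool {
public:
    // Return a free block, or nullptr when the pool is exhausted.
    // There is no fallback heap: the caller must handle exhaustion.
    void* allocate() {
        for (std::size_t i = 0; i < kNumBlocks; ++i) {
            if (!used_[i]) { used_[i] = true; return blocks_[i]; }
        }
        return nullptr;
    }
    // Return a block to the pool; a pointer not from this pool is ignored.
    void release(void* p) {
        for (std::size_t i = 0; i < kNumBlocks; ++i)
            if (static_cast<void*>(blocks_[i]) == p) used_[i] = false;
    }
private:
    unsigned char blocks_[kNumBlocks][kBlockSize]{};
    bool used_[kNumBlocks]{};
};
```

Because every task shares the same address space, the discipline of drawing only from such pools is enforced by convention, not by hardware.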

Dedicated versus general.

For many applications, it is worthwhile not to use a commercially or freely available operating system, but to write one that is optimised for the task at hand. Examples are the operating systems for mobile phones or personal digital assistants. Standard operating systems would be too big, and they don’t have the specific signal processing support (speech and handwriting recognition) that is typical for these applications. Some applications don’t even need an operating system at all (for example, a simple vending machine). The trade-offs here are: the cost of development and decreased portability, against the cost of smaller and cheaper embedded systems.

Operating system versus language runtime.

Application programs make use of “lower-level” primitives to build their functionality. This functionality can be offered by the operating system (via system calls), or by a programming language (via language primitives and libraries). Languages such as C++, Ada and Java offer lots of functionality this way: memory management, threading, task synchronization, exception handling, etc. This functionality is collected in a so-called runtime. The advantages of using a runtime are: its interface is portable over different operating systems, and it offers ready-to-use and/or safe solutions to common problems. The disadvantages are that a runtime is in general “heavy”, not deterministic in execution time, and not very configurable. These disadvantages are important in real-time and embedded contexts.

1.3.8 - Timing Constraints

Different applications have different timing constraints, which, ideally, the RTOS should be able to satisfy. However, there still do not exist general, guaranteed scheduling algorithms that are able to satisfy all of the following classes of time constraints:

 Deadline: a task has to be completed before a given instant in time, but when exactly the task is performed during the time interval between now and the deadline is not important for the quality of the final result. For example: the processor must fill the buffer of a sound card before that buffer empties; the voltage on an output port must reach a given level before another peripheral device comes and reads that value.

 Zero execution time: the task must be performed in a time period that is zero in the ideal case. For example: digital control theory assumes that taking a measurement, calculating the control action, and sending it out to a peripheral device all take place instantaneously.

 Quality of Service (QoS): the task must get a fixed amount of “service” per time unit. (“Service” often means “CPU time”, but could also be “memory pages”, “network bandwidth” or “disk access bandwidth”.) This is important for applications such as multimedia (in order to read or write streaming audio or video data to the multimedia devices), or network servers (both to guarantee a minimum service and to avoid “denial of service” attacks).

The QoS is often specified by means of a small number of parameters: “s” seconds of service in each time frame of “t” seconds. A specification of 5 microseconds per 20 microseconds is a much more real-time QoS than a specification of 5 seconds per 20 seconds, although, on average, both result in the same amount of time allotted to the task. The major problem is that the scheduler needs complete knowledge about how long each task is going to take in the near future, and when it will become ready to run. This information is practically impossible to get, and even when it is available, calculation of the optimal scheduling plan is a search problem with high complexity, and hence high cost in time. Different tasks compete for the same resources: processors, network, memory, disks, etc. Much more than in the general purpose OS case, programmers of real-time systems have to take worst-case scenarios into account: if various tasks could need a service, then sooner or later they will want it at the same time.
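The budget comparison above can be made concrete (the struct name is ours, not standard terminology): both specifications yield the same utilisation, and only the frame length distinguishes how fine-grained the scheduler's guarantee must be.

```cpp
// QoS budget "s seconds of service per frame of t seconds".
struct QosBudget {
    double service;  // s, in seconds
    double frame;    // t, in seconds
};

// Fraction of each frame allotted to the task. Two budgets can have equal
// utilisation while differing enormously in frame granularity — the
// microsecond-scale frame is the far more demanding, "more real-time" one.
double utilisation(QosBudget q) { return q.service / q.frame; }
```

So a scheduler comparing only utilisation would see 5 µs/20 µs and 5 s/20 s as identical loads; the real-time character lies in the frame length, not the ratio.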

1.3.9 - Embedded OS

Embedded operating systems have some features that distinguish them from real-time and general-purpose operating systems. The definition of an “embedded operating system” is probably even more ambiguous than that of an RTOS, and embedded systems come in a zillion different forms. You will recognize one when you see one, although the boundary between general-purpose operating systems and embedded operating systems is not sharp, and is becoming more blurred all the time.

Embedded systems are being installed in tremendous quantities (an order of magnitude more than desktop PCs!): they control many functions in modern cars; they show up in household appliances and toys; they control vital medical instrumentation; they make remote controls and GPS (Global Positioning System) receivers work; they make portable phones work; etc.

The simplest classification between different kinds of embedded operating systems is as follows:

• High-end embedded systems. These systems are often down-sized derivatives of an existing general-purpose OS, but with much of the “ballast” removed. Linux has given rise to a large set of such derivatives, because of its highly modular structure and the availability of source code. Examples are: routers, switches, personal digital assistants, set-top boxes.

• Deeply embedded OS. These OSs must be very small, and need only a handful of basic functions. Therefore, they are mostly designed from the ground up for a particular application. Two typical functions deeply embedded systems (used to) lack are high-performance graphical user interfaces and network communication. Examples are: automotive controls, digital cameras, portable phones. However, even these systems are acquiring more graphics and networking capabilities.

The most important features that make an OS into an embedded OS are:

• Small footprint. Designers are continuously trying to put more computing power in smaller housings, using cheaper CPUs, with on-board digital and/or analog IO; and they want to integrate these CPUs in all kinds of small objects. A small embedded OS also often uses only a couple of kilobytes of RAM and ROM memory.

• The embedded system should run for years without manual intervention. This means that the hardware and the software should never fail. Hence, the system should preferably have no mechanical parts, such as floppy drives or hard disks. Not only are mechanical parts more sensitive to failure, they also take up more space, need more energy, take longer to communicate with, and have more complex drivers (e.g., due to motion control of the mechanical parts).

• Many embedded systems have to control devices that can be dangerous if they do not work exactly as designed. Therefore, the status of these devices has to be checked regularly. The embedded computer system itself, however, is one of these critical devices, and has to be checked too! Hence, one often sees hardware watchdogs included in embedded systems. These watchdogs are usually retriggerable monostable timers attached to the processor’s reset input. The operating system checks at specified intervals whether everything is working as desired, for example by examining the contents of status registers, and then resets the watchdog. So, if the OS does not succeed in resetting the timer, that means that the system is not functioning properly, and the timer goes off, forcing the processor to reset.

If something goes wrong while the OS is still working (e.g., a memory protection error in one of the tasks), the OS can activate a software watchdog, which is nothing else but an interrupt that schedules a service routine to handle the error. One important job of the software watchdog could be to generate a core dump, to be used for analysis of what situations led to the crash.
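The retriggerable-watchdog idea can be sketched without any hardware. The class below is our own tick-driven simulation, not a real RTOS API: it models a timer that forces a reset unless the OS keeps retriggering (“kicking”) it in time.

```cpp
#include <cassert>

// Simulated retriggerable watchdog timer. In real hardware the
// timer's output would be wired to the processor's reset input.
class Watchdog {
    int timeoutTicks_;   // how many ticks before the timer goes off
    int remaining_;      // ticks left until expiry
    bool fired_ = false; // set once the timer has gone off
public:
    explicit Watchdog(int timeoutTicks)
        : timeoutTicks_(timeoutTicks), remaining_(timeoutTicks) {}

    // The OS calls this after its periodic self-check succeeds,
    // retriggering the monostable timer.
    void Kick() { remaining_ = timeoutTicks_; }

    // One unit of time passes; returns true if the timer has gone
    // off, i.e. the OS failed to retrigger it in time.
    bool Tick() {
        if (--remaining_ <= 0) fired_ = true;
        return fired_;
    }

    bool Fired() const { return fired_; }
};
```

As long as the OS kicks the watchdog more often than the timeout, the reset never happens; once the OS hangs and the kicks stop, the timer expires and the processor would be reset.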

• A long autonomy also implies using as little power as possible: embedded systems often have to live a long time on batteries (e.g., mobile phones), or are part of a larger system with very limited power resources (e.g., satellites).

• If the system does fail despite its designed robustness (e.g., caused by a memory protection fault), there is usually no user around to take the appropriate actions. Hence, the system itself should reboot autonomously, in a “safe” state, and “instantly” if it is supposed to control other critical devices. Compare this to the booting of your desktop computer, which needs a minute or more before it can be used, and always comes up in the same default state.

• It should be as cheap as possible. Embedded systems are often produced in quantities of several thousands or even millions. Decreasing the unit price even a little bit boils down to enormous savings.

• Some embedded systems are not physically reachable anymore after they have been started (e.g., launched satellites), so software updates cannot be applied by hand. However, more and more of them can still be accessed remotely. Therefore, they should support dynamic linking: object code that did not exist at the time of start is uploaded to the system and linked into the running OS without stopping it.

Some applications require all features of embedded and real-time operating systems. The best known examples are mobile phones and (speech-operated) handheld computers (“PDA”s): they must be small, consume little power, and yet be able to execute advanced signal processing algorithms, while taking up as little space as possible.

The above-mentioned arguments led embedded OS developers to design systems with the absolute minimum of software and hardware. Roughly speaking, developers of general purpose and real-time operating systems approach their clients with a “Hey, look how much we can do!” marketing strategy; while EOS developers say “Hey, look how little we need to do what you want!”. Hence, embedded systems often come without a memory management unit (MMU), multi-tasking, a networking “stack”, or file systems. The extreme is one single monolithic program on the bare processor, thus completely eliminating the need for any operating system at all.

Taking out more and more features of a general purpose operating system makes its footprint smaller and its predictability higher. On the other hand, adding more features to an EOS makes it look like a general purpose OS. Most current RTOS and EOS operating systems are expanding their ranges of application, and cover more of the full “feature spectrum.”

2 – Component Model Technology

2.1 - COM/DCOM

COM/DCOM provides a means to address problems of application complexity and evolution of functionality over time. It is a widely available, powerful mechanism for customers to adopt and adapt to a new style of multi-vendor distributed computing, while minimizing new software investment. COM/DCOM is an open standard, fully and completely publicly documented from the lowest levels of its protocols to the highest. As a robust, efficient, and workable component architecture, it has been proven in the marketplace as the foundation of diverse application areas, including compound documents, programming widgets, 3D engineering graphics, stock market data transfer, high-performance transaction processing, and so on.

2.1.1 – COM, Component Object Model

Although Microsoft’s Component Object Model (COM) has been referred to as many things, it is essentially two: a programming model and a set of related system services.

2.1.2 - COM: The Programming Model

The COM programming model is a client/server, object-based programming model designed to promote software interoperability. The primary goal of COM is to provide a means for client objects to make use of server objects, despite the fact that the two may have been developed by different companies, using different programming languages, at different times. In order to achieve this level of interoperability, COM defines a binary standard, which specifies how an object is laid out in memory at run time [5]. By defining how an object is laid out in memory, COM allows any language that is capable of reproducing the required memory layout to create a COM object. While COM’s primary objective is to provide basic interoperability between object clients and servers at a binary level, COM also has several other objectives:

• Providing a solution to versioning and evolution problems
• Providing a system view of objects
• Providing a singular programming model
• Providing support for distributed capabilities

In the COM programming model, COM clients connect to one or more COM objects, which are themselves contained in COM servers. Here, a client is any piece of software that makes use of the services provided by a COM object. Each COM object exposes its services through one or more interfaces, which are essentially groupings of semantically related functions. The compiled implementation of each COM object is contained within a binary module (EXE or DLL) called a COM server. A single COM server is capable of containing the compiled implementations of several different COM objects. The COM programming model defines what a COM server must do to expose COM objects, what a COM object must do to expose its services, and what a COM client must do to use a COM object’s services.
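COM’s binary standard corresponds closely to the virtual function table (vtable) layout that C++ compilers generate for abstract classes, which is why C++ is a natural fit for COM. The sketch below is our own minimal illustration of this idea; real COM interfaces additionally derive from IUnknown, use the stdcall calling convention, and return HRESULTs, and the names here (IGreeter, Greeter) are made up for the example.

```cpp
#include <cassert>
#include <string>

// A COM-style interface in C++: an abstract class with only pure
// virtual methods. The compiler lays instances out as a pointer to
// a table of function pointers (the vtable), which is essentially
// the in-memory layout that COM's binary standard specifies.
struct IGreeter {                 // hypothetical interface
    virtual std::string Greet() = 0;
    virtual ~IGreeter() {}
};

// The "COM object": a concrete class implementing the interface.
class Greeter : public IGreeter {
public:
    std::string Greet() override { return "hello from a COM-style object"; }
};

// A client only ever manipulates the object through an interface
// pointer; it never sees the implementing class.
std::string CallThroughInterface(IGreeter* itf) {
    return itf->Greet();
}
```

Because the client depends only on the vtable layout, the implementation could come from any language able to reproduce that layout, which is exactly the interoperability COM is after.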

2.1.3 - COM Objects

A COM object, like any other object, is a run-time instantiation of a particular defining class. However, unlike most other objects, which are identified by a human-readable name, COM objects are identified by a unique Class Identifier (CLSID). CLSIDs are part of a special group of identifiers called Globally Unique Identifiers, or GUIDs. GUIDs are 128-bit values that are statistically guaranteed to be unique across time and space.

To understand why it is imperative that COM use CLSIDs to uniquely identify object classes, consider the following scenario. Imagine that you have just developed a COM object and identified it using a traditional human-readable name, such as “MyObject.” You then ship your object in binary form to thousands of eager developers who quickly install your component. If one of these developers already has an object named “MyObject” installed on his or her system, there is no way to resolve the naming conflict, because both objects are in binary form. Therefore, to prevent this type of naming conflict, COM uses CLSIDs to uniquely identify each individual object class.

Instead of having a central authority that is responsible for issuing GUIDs, COM provides the CoCreateGuid API, which is used by various GUID generation tools, such as GUIDGEN.EXE and UUIDGEN.EXE, both of which ship as part of Microsoft Visual C++. Internally, CoCreateGuid calls the RPC function UuidCreate to generate a 128-bit, globally unique identifier, which can be used as a CLSID.
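The 128-bit structure of a GUID can be sketched in plain C++ as follows. This mirrors the way the Windows GUID structure splits its 128 bits; the sample values used below are made up for illustration and were not produced by CoCreateGuid.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// The 128 bits of a GUID, split the same way as the Windows GUID
// structure: one 32-bit field, two 16-bit fields, eight bytes.
struct Guid {
    uint32_t data1;
    uint16_t data2;
    uint16_t data3;
    uint8_t  data4[8];
};

// COM compares CLSIDs and IIDs by value, not by name: two classes
// (or interfaces) are "the same" only if all 128 bits match.
bool IsEqualGuid(const Guid& a, const Guid& b) {
    return std::memcmp(&a, &b, sizeof(Guid)) == 0;
}
```

Comparing all 128 bits is what makes name clashes like the “MyObject” scenario above impossible: two independently generated GUIDs will, statistically, never be equal.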

2.1.4 - Interface

A COM object is defined in terms of the individual interfaces that it supports. Conceptually, an interface is simply a group of semantically related functions. Figure 1 shows an example object, the UserInfo COM object, with three interfaces: ICopyInfo, IReverseInfo, and ISwapInfo (the “I” stands for interface, of course). Each interface contains four functions: ICopyInfo contains CopyName, CopyAge, CopySex, and CopyAll; IReverseInfo contains ReverseName, ReverseAge, ReverseSex, and ReverseAll; ISwapInfo contains SwapName, SwapAge, SwapSex, and SwapAll.

Fig. 1 – UserInfo COM Object, exposing three interfaces: ICopyInfo (CopyName, CopyAge, CopySex, CopyAll), IReverseInfo (ReverseName, ReverseAge, ReverseSex, ReverseAll), and ISwapInfo (SwapName, SwapAge, SwapSex, SwapAll).

Each interface is identified by a unique identifier called an interface identifier (IID), similar to the way in which each COM object is identified by a unique CLSID. Like CLSIDs, IIDs are also GUIDs, which means that they are created like any other GUID, using the COM API CoCreateGuid or some GUID generation tool such as GUIDGEN.EXE or UUIDGEN.EXE.

Interfaces are essential to COM programming because they are the only way to interact with a COM object. Instead of obtaining a pointer to an entire COM object, a COM client must obtain a pointer to a particular interface, which is then used to access the functions defined as part of that particular interface. The only way to access the functions of a particular interface is through a pointer to that interface. So if you have a pointer to the ICopyInfo interface, you will only be able to access the CopyName, CopyAge, CopySex, and CopyAll member functions. In order to access SwapName, SwapAge, SwapSex, or SwapAll, you must first obtain a pointer to the ISwapInfo interface.

The fact that interfaces are the only way to interact with COM objects should help explain why each interface must be uniquely identifiable. The process of moving from one interface to another is known as interface navigation.

Interface Navigation

To support interface navigation, every interface must implement a special function named QueryInterface. QueryInterface takes two parameters: one to specify the desired interface’s IID, and the other to receive the actual interface pointer. If the COM object implements the interface identified by the IID, the QueryInterface call succeeds and returns a pointer to the interface in the second parameter; otherwise, the call fails, and a NULL value is returned in the second parameter. Because every interface must implement QueryInterface, you are guaranteed the ability to navigate from one interface on an object to any other interface on that same object. However, if the object does not implement a particular interface, you will never be able to obtain a pointer to it, and QueryInterface will always fail when asked to retrieve that particular interface.

An interface is defined by its IID and by the number, order, and signatures of its functions. Changing any of these things effectively changes the interface, and because interfaces are the only way to manipulate a COM object, once an interface is exposed for client usage, it must never change. In other words, interfaces are immutable. The logic behind this is simple. Suppose that, as a client of the UserInfo COM object, my program relies on the CopyName and CopyAge functions of the ICopyInfo interface. If the definition of the ICopyInfo interface or any of its functions is altered or removed, my application will cease working properly. Therefore, to preserve client compatibility, COM stipulates that an interface must never change.

Interface navigation is not the only critical function that must be supported by every interface. Every COM interface must also support the AddRef and Release functions (see “Lifetime Management” below). Together, QueryInterface, AddRef, and Release define COM’s most fundamental interface, IUnknown. Because each interface must support these three fundamental functions, every interface must inherit from IUnknown.
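The navigation mechanism can be sketched in plain C++. This is an illustration only: the use of a small enum instead of real 128-bit IIDs is our simplification, the real QueryInterface returns an HRESULT rather than a bool, and the interfaces shown are a cut-down version of the Figure 1 example.

```cpp
#include <cassert>

// Simplified stand-ins for 128-bit interface identifiers (IIDs).
enum class Iid { ICopyInfo, ISwapInfo, IReverseInfo };

struct ICopyInfo { virtual void CopyAll() = 0; virtual ~ICopyInfo() {} };
struct ISwapInfo { virtual void SwapAll() = 0; virtual ~ISwapInfo() {} };

// One object implementing two interfaces. A client holding either
// interface pointer can navigate to the other via QueryInterface.
class UserInfo : public ICopyInfo, public ISwapInfo {
public:
    bool QueryInterface(Iid iid, void** out) {
        if (iid == Iid::ICopyInfo) { *out = static_cast<ICopyInfo*>(this); return true; }
        if (iid == Iid::ISwapInfo) { *out = static_cast<ISwapInfo*>(this); return true; }
        *out = nullptr;   // unsupported interface: the call fails
        return false;
    }
    void CopyAll() override {}
    void SwapAll() override {}
};
```

Note that this object does not implement IReverseInfo, so asking for it always fails, exactly as described above: an interface the object does not implement can never be obtained.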

Lifetime Management

We have already seen how QueryInterface is used for interface navigation. Typically, the client of an object is responsible for managing the lifetime of that object: the client creates the object whenever it needs to, uses the object, and destroys it once it is done. However, COM objects may have multiple clients that are each unaware of the others. To prevent one client from destroying a COM object and leaving the others with invalid interface pointer references, both the client and the COM object share the responsibility of lifetime management.

A COM object’s lifetime is managed through a process called reference counting. When a COM object is first created, its internal counter variable is set to zero. Whenever the COM object issues an interface pointer (as a result of a QueryInterface call, for example), it is the COM object’s responsibility to call AddRef on that interface. AddRef increments the value of the internal counter variable by one. Whenever a client is finished using an interface, it is the client’s responsibility to call Release on that interface. The Release method decrements the object’s internal counter variable by one. When the internal counter variable reaches zero, it is the responsibility of the COM object to destroy itself.
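The reference-counting scheme can be sketched in a few lines of C++. This is our own minimal illustration: real COM puts AddRef and Release on IUnknown with ULONG return values, and production implementations use atomic increments for thread safety.

```cpp
#include <cassert>

// Minimal reference-counting sketch in the style of COM lifetime
// management.
class RefCounted {
    int refs_ = 0;   // internal counter, zero when the object is created
public:
    // Called whenever an interface pointer is handed out.
    int AddRef() { return ++refs_; }

    // Called by a client when it is finished with an interface.
    int Release() {
        int n = --refs_;
        if (n == 0) delete this;   // the object destroys itself at zero
        return n;
    }

    virtual ~RefCounted() {}
};
```

Because destruction happens only when the last Release brings the counter to zero, no client can pull the object out from under another client that still holds an interface pointer.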

2.1.5 - COM Server

The class for each COM object is implemented in a binary code module (DLL or EXE) called a COM server.

COM servers implemented as DLLs are loaded directly into the client process’s address space, and are commonly referred to as in-process servers. The nature of a Win32 DLL is such that a copy of it is mapped directly into each client application’s own private address space. This means that each client application owns any resources allocated by the in-process server. Since in-process servers do not own their resources, they cannot maintain global resources that are accessible by multiple clients. While it may at times seem disadvantageous for in-process servers not to own their resources, in-process servers do have a major advantage: speed. Because the in-process server is mapped into the client’s address space, there is no need for the operating system to perform a context switch in order to access the code contained in the DLL. As a result, there is very little overhead associated with invoking the interface functions of a COM object implemented in an in-process server.

COM servers can also be created as stand-alone EXEs, in which case they maintain an address space apart from that of the client. COM servers created as EXEs are commonly referred to as out-of-process servers. Since EXEs maintain their own address space, out-of-process servers are also capable of owning their resources, which may be shared among their clients. An out-of-process server running on the same machine as its client(s) is referred to as a local server and is said to serve the client(s) local objects. Any COM server, in-process or out-of-process, that is running on a machine other than its client(s) is referred to as a remote server and is said to serve the client(s) remote objects. In the case where a remote server is an in-process server, COM automatically creates a separate surrogate process and loads the in-process server into its address space.

However, the benefit of resource ownership is not without its drawbacks, one of which, as you may have guessed, is reduced speed. Whenever a client accesses code or resources located within the out-of-process server, the operating system is forced to perform a context switch, and you must pay a performance penalty. On the other hand, accessing code or resources located within an in-process server is extremely fast, but in-process servers are incapable of owning their own resources. Clearly, there are benefits and drawbacks to both in-process and out-of-process servers. Ultimately, the type of COM server you create will depend on the overall architecture of your application.

2.1.6 – DCOM, Distributed Component Object Model

DCOM (Microsoft Distributed COM) can be defined simply as the extension of the COM programming model beyond the boundaries of one physical machine. DCOM allows COM clients to manipulate COM objects located on physically separate machines through what amounts to a remote procedure call [5]. In fact, DCOM is based on MS-RPC, Microsoft’s implementation of the Open Software Foundation’s (OSF) Distributed Computing Environment (DCE) Remote Procedure Call (RPC) system. By building on top of RPC, DCOM is shielded from the underlying network protocol, and is thus capable of running over a variety of connection-oriented and connectionless protocols, such as TCP, UDP, SPX, IPX, NetBIOS, and NetBEUI, to name a few. DCOM supports all combinations of connectivity between in-process and out-of-process clients and remote servers. It is able to support remote in-process servers by loading them into a special surrogate process designed specifically for this purpose. The ability to instantiate or connect to a remote COM object on a different machine raises some very interesting issues with regard to security.

2.2 - Overview .NET Framework

Microsoft .NET provides prefabricated infrastructure for solving the common problems of writing Internet software. Microsoft’s .NET Framework is a new computing platform built with the Internet in mind, but without sacrificing the traditional desktop application platform [6].

The Internet has been around for a number of years now, and Microsoft has been busy developing technologies and tools that are totally focused on it. These earlier technologies, however, were built on Windows DNA (Distributed interNet Applications Architecture), which was based on COM (Component Object Model). Microsoft’s COM was in development many years before the Internet became the force that we know today. Consequently, the COM model has been built upon and added to in order to adapt it to the changes brought about by the Internet. With the .NET Framework, Microsoft built everything from the ground up with Internet integration as the goal. Building a platform from the ground up also allowed the .NET Framework developers to look at the problems and limitations that inhibited application development in the past, and to provide the solutions needed to quickly move past these barriers.

.NET is a collection of tools, technologies, and languages that all work together in a framework to provide the solutions needed to easily build and deploy truly robust enterprise applications. These .NET applications are also able to easily communicate with one another and provide information and application logic, regardless of platform and language.

[Figure 2 shows the layered structure of the .NET Framework: ASP.NET, Windows Forms, and XML Web Services on top; the Base Class Libraries beneath them; the Common Language Runtime below that; and the operating system at the bottom.]

Fig. 2 – Overview of the structure of the .NET Framework

The .NET Framework includes two main components:

1. Common Language Runtime
2. Base Class Libraries

2.2.1 - Common Language Runtime

The Common Language Runtime (CLR) is the environment in which all programs in .NET run. It provides various services, such as memory management and thread management. Programs that run in the CLR need not manage memory themselves, as this is completely taken care of by the CLR. For example, when a program needs a block of memory, the CLR provides the block and releases it when the program is done with it.

All programs targeted at .NET are converted to MSIL (Microsoft Intermediate Language). MSIL is the output of the language compilers in .NET (see figure 3). MSIL is then converted to native code by the JIT (just-in-time) compiler of the CLR, and the native code is run by the CLR.

As every program is ultimately converted to MSIL in .NET, the choice of language is purely personal. A program written in VB.NET and a program written in C# are both converted to MSIL, which is then converted to native code and run. So, whether you write your program in C# or VB.NET, in the end it is MSIL that you get.

Fig. 3 – MSIL and CLR in .NET Framework

2.2.2 - .NET Class Library

.NET comes with thousands of classes to perform all important and not-so-important operations. Its library is completely object-oriented, providing around 5000 classes to perform just about everything.

The following are the main areas covered by the class library:

1. Data structures
2. IO management
3. Windows and Web controls
4. Database access
5. Multithreading
6. Remoting
7. Reflection

The above list is by no means exhaustive; it is only meant to give you an instant idea of how comprehensive the library is.

The most fascinating part of .NET is the class library; it is common to all languages of .NET. That means the way you access files in VB.NET will be exactly the same in C#, and in fact in all other languages of .NET. You learn the library only once, but use it from every language.

The library is also common to all types of applications. The following are the different types of applications that can make use of the .NET class library:

1. Console applications
2. Windows GUI applications
3. ASP.NET applications (web applications)
4. XML Web services
5. Windows services

2.2.3 - Features of .NET

The following are the major features of .NET. We will use these features throughout our journey. Here is just a brief introduction to all the key features of .NET.

• Assemblies

An assembly is either a .DLL or an .EXE that forms part of an application. It contains MSIL code that is executed by the CLR. The following are other important points related to assemblies:

1. It is the unit on which permissions are granted.
2. Every assembly contains a version.
3. Assemblies contain interfaces and classes. They may also contain other resources such as bitmaps, files, etc.
4. Every assembly contains assembly metadata, which holds information about the assembly. The CLR uses this information when executing the assembly.
5. Assemblies may be either private, used only by the application to which they belong, or global, usable by any application in the system.
6. Two assemblies with the same name but different versions can run side by side, allowing applications that depend on a specific version to use the assembly of that version.

The four parts of an assembly are:

o Assembly Manifest

Contains name, version, culture, and information about referenced assemblies.

o Type metadata

Contains information about types defined in the assembly.

o MSIL

MSIL code.

o Resources

Files such as BMP or JPG files, or any other files required by the application.

• Common Type System

Common Type System (CTS) specifies the rules related to data types that languages must follow. As programs written in all languages are ultimately converted to MSIL, data types in all languages must be convertible to certain standard data types.

CTS is a part of cross-language integration, which allows classes written in one language to be used and extended by another language.

• Cross-language Interoperability

.NET provides support for language interoperability. However, this does not mean that every program written in one language can be used from another language. To enable a program to be used from other languages, it must be created by following a set of rules called the Common Language Specification (CLS).

Cross-language inheritance is the ability to derive a class in C# from a class created in VB.NET.

When an exception is raised by a program written in C#, the exception can be handled by VB.NET. This kind of exception handling is called cross-language exception handling.

2.2.4 - Application Development in .NET

.NET has brought a set of new features that must be understood by every programmer developing applications for Windows. No Windows programmer can afford to ignore .NET without becoming outdated, and Microsoft will provide .NET as part of its operating systems in future releases. .NET is not a new OS from Microsoft, nor a new language; it is the platform and environment for which you develop applications, and it is rich in features. The following are the different types of applications that can be developed in .NET:

Windows applications – Typical Client/Server applications.

Web applications – Web sites and Intranet applications.

Web services – Programs that are accessible from anywhere using universal protocols such as HTTP and SOAP.

Console applications – Simple console-based applications without any GUI, run from the command prompt. Best suited for learning the fundamentals, and for applications such as server sockets.

Mobile applications – Contain web pages that run on mobile devices such as PDAs (Personal Digital Assistants) and cell phones.

2.3 - Overview Windows CE

When comparing devices for a mobile computing solution, customers are faced with a myriad of choices concerning the tangible assets of the device: form factor, display size, input technology, scanner, and radio technology. In addition, customers must also decide which operating system best meets their application requirements. This can be more complicated than it appears at first glance, given the intangible nature of an operating system and the distinct variations in functionality between systems. In many cases, the decision is to pursue Windows-based technology for embedded devices. Microsoft offers one version of its embedded Windows operating system: Windows CE .NET. Microsoft Windows CE .NET is a highly customizable OS that is ideal for task-specific applications, as it delivers broad configuration and application options across a wide variety of devices [7].

Windows CE Background

Windows CE 1.0 was released in 1996, supporting a wide range of processors and architectures to target the embedded operating system world. It was a major departure from PC-focused operating systems and addressed the unique requirements of embedded software devices. Because the target devices are typically battery powered, Windows CE was designed with a view towards power conservation. Further, the memory capacity of embedded devices is significantly smaller than that of desktop systems, so the operating system was designed to minimize memory requirements while providing a level of programming compatibility with the other versions of Windows. Microsoft’s strategy was to provide compatibility with the Win32 API in an effort to shorten the learning curve for programmers moving from the other versions of Windows to programming for Windows CE.

Windows CE has always been very flexible because it is targeted at a wide range of embedded applications and devices. Examples of applications and devices include Handheld PCs, kiosk devices, thin-client devices, AutoPC for use in automobiles, Pocket PC and set-top boxes.

The latest version of Windows CE is Windows CE .NET, which, according to Microsoft, “…is a componentized operating system available to developers and device manufacturers to create customized embedded devices.”

The key to CE .NET is the flexibility to customize and support a wide range of device types. This is important for task-specific enterprise devices that can take on a variety of form factors and incorporate a wide range of technology options, depending on the application. These options could include scanners, beepers, multiple keyboard options, etc.

.NET and The Compact Framework

A major shift in thinking about the World Wide Web is underway. Traditionally, the focus of the Web was on making content available to users. The shift, described as Web Services, broadens this perspective and enables programs to share information, without predefined application-specific protocols, using a Web paradigm. Microsoft is investing heavily in Web Services under the “.NET” strategy: the .NET architecture enables a high level of connectivity between people, programs, systems, and devices. Microsoft has integrated the .NET strategy across operating systems, server and device software, and developer tools, in an effort to allow connected solutions to be built and deployed quickly and reliably. The Compact Framework is a component of .NET that targets “smart devices” such as those running Windows CE, providing a standard subset of the .NET services for smart devices [7]. The Compact Framework ensures that programmers for these devices enjoy the benefits of .NET without overburdening the platform’s memory capacities. The high level of integration of these features into Visual Studio makes programming for smart devices less challenging than in the past.

Windows CE .NET

Windows CE .NET is the latest release of Windows CE; it continues to target a broad range of embedded devices and includes support for the Compact Framework described above. Platform Builder is the integrated development environment for building, debugging, and deploying a customized embedded OS based on Windows CE .NET. Each manufacturer can use Platform Builder to define the features included in the operating system, allowing individual devices to have customized configurations.

2.4 - Component Model for ERTS

Though diversified in many aspects, companies all share the common goal of maximizing profit, yet different domains, and different companies within a domain, target different business goals. For example, a developer using a component technology has different needs from those supplying the infrastructure and those supplying components. Furthermore, a company with few customers per product will benefit more from an efficient development process, since the development cost must be shared by few customers; an example is the business segment of heavy vehicles. On the other hand, companies with large volumes are more willing to place extra effort in the development phase in order to reduce the cost of each product, the car industry being an example. Technically, the systems in heavy vehicles and cars are closely related, but due to their different business needs, different technologies are used.

Software reuse, through component-based software engineering, is often argued to reduce system development and/or maintenance cost. However, it is important that the range of commonality is sufficiently large: a component model must be applicable to a wide range of developers for sub-suppliers to see a business need to develop and provide components and component frameworks. At the same time, the component model cannot be so general that it fails to meet specific needs (such as scarce resources). This is a difficult trade-off, where the challenge lies in finding a sufficiently general component model that still meets the specific requirements placed on applications of a domain.

Embedded real-time systems (ERTS) have traditionally been viewed as monolithic, platform-dependent systems that, once developed, are not constructed for evolution [8]. In practice, however, the typical life cycle of such systems depicts quite the opposite reality. ERTS tend to have a very long lifetime, decades in some cases. The effort invested in them cannot easily be discarded, so such systems tend to become legacy systems that are hard to incorporate into functionality and/or technology shifts. Component technology offers an opportunity to increase productivity by providing natural units of reuse (components and architectures), by raising the level of abstraction for system construction, and by separating services from their configuration to facilitate evolution. Embedded real-time systems are often characterized by scarce resources such as memory, processing power and communication bandwidth; yet many of these systems need to satisfy requirements on dependability.

Developers of ERTS face the challenge of making safe and easy-to-maintain applications running on limited resources, without overrunning project budgets. Historically, the development of ERTS has been done using low-level programming languages to guarantee full control over system behaviour. During recent years, however, a new software engineering discipline, component-based software engineering (CBSE), has received attention in the embedded development domain. It has been seen as a promising approach to handling the complexities involved in the development of ERTS, which arise, among other things, from the constant demand to add new functionality in order to keep up market share for the developed products. Besides helping developers cope with complexity, CBSE is concerned with rapid assembly through the reuse of components in different applications.

3 – Practical Part

3.1 – The existing tool (F.A.G. Unit)

We based our work on an existing tool, the F.A.G. Unit (Fully Automated codeGenerator), which was developed as part of the Software Engineering course at Mälardalen University. The objective of the existing tool is to make the COM model usable in a real-time environment; its goal is to create a proxy that can be used for testing different areas of real-time systems, for instance timing specifications and resource usage. The proxies are realized in the C++ programming language.

The F.A.G. Unit reads the type library of the target COM object and its corresponding XML application descriptor file, and from these generates the .cpp and .h files for a proxy object. This proxy serves as an intermediary between the calling and called COM objects and performs certain tasks specified in the XML file, such as time measurement and logging.

The tool works as follows:

The user first uses the browse button to select the type library to create proxy code for, and then loads it. When a type library is loaded, the GUI is redrawn with the CoClasses, interfaces and methods of the type library. The user can optionally specify an XML descriptor to automatically load pre-defined functionality. The functionality can then be altered by clicking on the check boxes. When the user is satisfied with the settings, pressing the "Generate proxy for…" button opens a "Select project directory" dialog where the user must specify where all the generated code shall be saved. The XML file with the new settings is saved in that folder.

The tool's components are designed in an object-oriented manner. The proxy object implements the same interface as the server object. The F.A.G. Unit uses the factory pattern to generate the code, and generates code in C++ only. The code compiles in Microsoft Visual Studio 6.

This process is shown in the following figure:

Fig. 4 Generating a proxy object for a component service

3.1.1 – Logging Service

A logging service allows the sequence of interactions between components to be traced.

Fig. 5 – A Logging Service Proxy

In Figure 5, the object C2 implements an interface IC2 for which we wish to apply a logging service. A proxy object that also implements IC2 is placed between C2 and a client that uses the operations exposed through IC2. The operations implemented by the proxy forward all invocations to the corresponding operations in C2, in addition to writing information about parameter values, return codes, and invocation and return times to some logging medium [9].
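A minimal C++ sketch of such a forwarding proxy follows. The interface name IC2 is taken from Fig. 5, but the operation and its signature are assumptions made for illustration, and stdout stands in for the logging medium:

```cpp
#include <cstdio>

// IC2: the interface from Fig. 5; the operation and its signature are
// assumed here for illustration.
struct IC2 {
    virtual int DoWork(int input) = 0;
    virtual ~IC2() {}
};

// C2: the real component behind the proxy.
struct C2 : IC2 {
    int DoWork(int input) override { return input * 2; }
};

// The logging proxy implements the same interface as C2, forwards every
// invocation, and writes parameter values and return codes to the log.
struct LoggingProxy : IC2 {
    explicit LoggingProxy(IC2* target) : target_(target) {}
    int DoWork(int input) override {
        std::printf("-> IC2::DoWork(input=%d)\n", input);
        int result = target_->DoWork(input);    // forward the invocation
        std::printf("<- IC2::DoWork returned %d\n", result);
        return result;
    }
private:
    IC2* target_;
};
```

Because the client holds only an IC2 pointer, it cannot tell whether it is talking to C2 or to the proxy, which is what lets the generated proxy be inserted without changing client code.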

3.1.2 – Execution Time Measurement

This service allows operation invocations to be monitored and information about execution times accumulated. Different measurements, such as worst-case, best-case, and average execution time may be collected. A possible use of the information is to dynamically adapt an on-line scheduling strategy. The suggested solution is to use a forwarding proxy that measures the time elapsed from each operation call till it returns and collects the desired timing information [9].
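The measurement logic can be sketched as follows. The IWorker interface and its operation are assumptions for illustration, and std::chrono stands in for the timing APIs the generated Windows CE code would use:

```cpp
#include <algorithm>
#include <chrono>

// IWorker: a hypothetical server interface, assumed for illustration.
struct IWorker {
    virtual int Compute(int n) = 0;
    virtual ~IWorker() {}
};

struct Worker : IWorker {
    int Compute(int n) override { return n * n; }
};

// The timing proxy forwards each call and accumulates best-case,
// worst-case, and average execution times for the operation.
class TimingProxy : public IWorker {
public:
    explicit TimingProxy(IWorker* target) : target_(target) {}
    int Compute(int n) override {
        auto start = std::chrono::steady_clock::now();
        int result = target_->Compute(n);        // forward the invocation
        double us = std::chrono::duration<double, std::micro>(
            std::chrono::steady_clock::now() - start).count();
        best_  = (calls_ == 0) ? us : std::min(best_, us);
        worst_ = std::max(worst_, us);
        total_ += us;
        ++calls_;
        return result;
    }
    double BestUs() const    { return best_; }
    double WorstUs() const   { return worst_; }
    double AverageUs() const { return calls_ ? total_ / calls_ : 0.0; }
    int Calls() const        { return calls_; }
private:
    IWorker* target_;
    double best_ = 0.0, worst_ = 0.0, total_ = 0.0;
    int calls_ = 0;
};
```

An on-line scheduler could poll the accumulated statistics between invocations to adapt its strategy, as suggested above.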

3.2 – Porting to Windows CE and Prototype Tool

SCS Proxy Generator for WinCE is a prototype tool that we are developing and that adds services to COM components on Microsoft Windows CE. The tool generates source code for proxy objects implementing services by intercepting method calls to the COM objects [9].

In F.A.G. the proxy was designed to compile in Visual C++ 6.0; we therefore changed several files so that they can be compiled by Microsoft eMbedded Visual C++. In particular, we focused on the factory pattern and the templates used to generate the code; moreover, we added a substantial amount of code to implement new functionality such as synchronization and timeout.

Fig. 6 shows the graphical user interface of the tool. After a TLB or IDL file has been loaded all COM classes defined in the file are listed.

Fig. 6 The graphical user interface of the prototype tool

Checking the box to the left of a COM class causes a proxy for that class to be generated when the button at the bottom of the tool is pressed. Under each COM class, the interfaces implemented by the class are listed and, under each interface, the operations implemented by the interface. In addition, the available services are listed with their names set in brackets. Checking the box to the left of a service causes code to be generated that provides the service for the element under which the service is listed.

Checking the logging service results in a proxy that logs each invocation of the affected operation. The timing service causes the proxy to measure the execution time of the operation and write it to the log at each invocation (if timing is checked but not logging, execution times are measured but not saved). The synchronization service means that each invocation of the operation is synchronized with all invocations of all other operations on the proxy object for which the synchronization service is checked. The only synchronization policy currently supported is mutual exclusion. The timeout service has a numeric parameter: when this service is selected, as in Fig. 6, an input field marked Milliseconds is visible near the bottom of the tool. Checking the service results in a proxy where invocations of the operation always terminate within the specified number of milliseconds. If the object behind the proxy does not complete the execution of the operation within this time, the proxy forcefully terminates the execution and returns an error code.

3.3 – Added Services

We added two services to the existing tool: Synchronization and Timeout.

3.3.1 – Synchronization

With multiple threads running in the system, it is important to coordinate their activities. Fortunately, Windows CE supports almost the entire extensive set of standard Win32 synchronization objects. The concept of synchronization objects is fairly simple: a thread waits on a synchronization object, and when the object is signaled, the waiting thread is unblocked and is scheduled (according to the rules governing the thread's priority) to run.

A synchronization service allows components that are not inherently thread-safe to be used in multi-threaded applications. The solution is to use forwarding proxies that use the basic mechanisms of the underlying operating system to implement the desired synchronization policies. A synchronization policy may be applied to a single operation or to a group of operations, e.g. all operations of an interface or a component. The synchronization policy implemented in the tool is mutual exclusion, which blocks all operation calls except one. After the non-blocked call completes, the waiting calls are dispatched one by one according to the priority policy.

A mutex is a synchronization object that's signaled when it's not owned by a thread and nonsignaled when it is owned. Mutexes are extremely useful for coordinating exclusive access to a resource such as a block of memory across multiple threads. A thread gains ownership by waiting on that mutex with one of the wait functions. When no other threads own the mutex, the thread waiting on the mutex is unblocked, and implicitly gains ownership of the mutex. After the thread has completed the work that requires ownership of the mutex, the thread must explicitly release the mutex.
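A portable sketch of the mutual-exclusion policy follows. The generated Windows CE code would use the Win32 mutex functions (CreateMutex, WaitForSingleObject, ReleaseMutex); std::mutex is used here so the idea can be shown without the Windows headers. The ICounter interface is an assumption for illustration:

```cpp
#include <mutex>

// ICounter: a hypothetical component interface that is not inherently
// thread-safe, assumed for illustration.
struct ICounter {
    virtual void Increment() = 0;
    virtual long Value() = 0;
    virtual ~ICounter() {}
};

struct Counter : ICounter {
    void Increment() override { ++value_; }   // unsafe if called concurrently
    long Value() override { return value_; }
private:
    long value_ = 0;
};

// The synchronization proxy guards every checked operation with one
// mutex, so all invocations on the proxy are mutually exclusive.
class SyncProxy : public ICounter {
public:
    explicit SyncProxy(ICounter* target) : target_(target) {}
    void Increment() override {
        std::lock_guard<std::mutex> guard(lock_);  // wait for ownership
        target_->Increment();                      // forward the invocation
    }                                              // ownership released here
    long Value() override {
        std::lock_guard<std::mutex> guard(lock_);
        return target_->Value();
    }
private:
    ICounter* target_;
    std::mutex lock_;
};
```

The lock_guard mirrors the mutex discipline described above: the calling thread waits on the mutex, implicitly gains ownership when no other thread owns it, and releases ownership when the guarded call returns.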

3.3.2 – Timeout

This service can be used to ensure that a call to a component's operation always terminates within a specified deadline, possibly signaling a failure if the operation could not be completed within that time. The solution is to use a proxy that uses a separate thread to forward each operation call and then waits until either that thread terminates or the deadline expires. In the latter case, the proxy signals the failure by returning an error code. It is also possible to specify different options for what should be done with the thread of the forwarded call if the deadline expires. The simplest option is to forcefully terminate the thread, but this may not always be safe, since it may leave the component in an undefined and possibly inconsistent state. Another option is to let the operation call run to completion and disregard its output. Obviously, using this service requires that the client is able to handle timeouts. This service is not yet complete.
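The core of the mechanism can be sketched with standard C++ threads (the generated proxy would use the corresponding Win32 thread and wait functions on Windows CE). The result codes are placeholders for the HRESULT values a real COM proxy would return, and the sketch takes the "run to completion and disregard the output" option, since it avoids forcefully terminating the thread:

```cpp
#include <chrono>
#include <future>
#include <memory>
#include <thread>

// Placeholder result codes; a COM proxy would return HRESULT values.
enum { SCS_OK = 0, SCS_E_TIMEOUT = -1 };

// Forward an operation on a separate thread and wait until either that
// thread finishes or the deadline expires. On expiry the call returns an
// error code; the detached worker thread runs to completion and its
// output is disregarded.
template <typename Fn>
int CallWithTimeout(Fn operation, int deadlineMs, int* result) {
    auto task = std::make_shared<std::packaged_task<int()>>(operation);
    std::future<int> done = task->get_future();
    std::thread([task] { (*task)(); }).detach();   // the forwarding thread
    if (done.wait_for(std::chrono::milliseconds(deadlineMs)) ==
        std::future_status::timeout) {
        return SCS_E_TIMEOUT;   // deadline expired: signal the failure
    }
    *result = done.get();
    return SCS_OK;
}
```

A generated proxy method would wrap the forwarded call in such a helper, returning the error code to the client when the deadline expires.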

4 – Conclusion

A sensible approach to reducing the complexity of modern software systems is to build systems out of components. Component-Based Software Engineering (CBSE) advocates this approach: components are expected to be deployed and used only in binary form, in a language-independent (and sometimes even platform-independent) manner. A component is a software artifact consisting of three parts: a service interface, a client interface and an implementation. The idea is simple: to build software systems out of independent modules (components), much as a builder builds a house. Each module has a specification and an implementation, and the modules are then composed to build the final software.

We focused our attention on the domain of embedded real-time systems. A real-time system is a system whose correctness depends not only on correct functionality, but also on the timing of the delivered functionality; consequently, the correct results should be delivered neither too early nor too late. Real-time systems are often embedded, so resources such as computational bandwidth and memory are scarce. Real-time and embedded operating systems are in most respects similar to general-purpose operating systems: they provide the interface between application programs and the system hardware, and they rely on basically the same set of programming primitives and concepts. In most real-life applications, real-time systems work in an embedded scenario, and most embedded systems have real-time processing needs; such software is called real-time embedded software.

We used the Microsoft COM/DCOM technology. COM/DCOM provides a means to address problems of application complexity and of the evolution of functionality over time. It is a widely available, powerful mechanism for customers to adopt and adapt to a new style of multi-vendor distributed computing, while minimizing new software investment. A COM object is defined in terms of the individual interfaces that it supports. Conceptually, an interface is simply a group of semantically related functions. Interfaces are essential to COM programming because they are the only way to interact with a COM object: instead of obtaining a pointer to an entire COM object, a COM client must obtain a pointer to a particular interface, which is then used to access the functions defined as part of that particular interface.
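The interface-pointer idea can be illustrated with a simplified C++ sketch. Real COM hands out interface pointers through IUnknown::QueryInterface using GUIDs from the Windows SDK; here plain strings stand in for the interface IDs, and both interface names are invented for illustration:

```cpp
#include <cstring>

// Two invented interfaces, each a group of semantically related functions.
struct ICalc {
    virtual int Add(int a, int b) = 0;
    virtual ~ICalc() {}
};

struct ILog {
    virtual const char* Name() = 0;
    virtual ~ILog() {}
};

// One object, several interfaces: the client asks for the interface it
// needs rather than for the object itself.
class CalcObject : public ICalc, public ILog {
public:
    int Add(int a, int b) override { return a + b; }
    const char* Name() override { return "CalcObject"; }
    // Simplified stand-in for QueryInterface: hand out a pointer to the
    // requested interface, or null if the interface is not supported.
    void* Query(const char* iid) {
        if (std::strcmp(iid, "ICalc") == 0) return static_cast<ICalc*>(this);
        if (std::strcmp(iid, "ILog") == 0) return static_cast<ILog*>(this);
        return nullptr;  // E_NOINTERFACE in real COM
    }
};
```

The client then works exclusively through the interface pointer it obtained, which is exactly the discipline that makes the proxy objects of Section 3 transparent to their clients.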

For the embedded real-time system we used Microsoft Windows CE .NET, a highly customizable OS that is ideal for task-specific applications, as it delivers broad configuration and application options across a wide variety of devices. Windows CE has always been very flexible because it is targeted at a wide range of embedded applications and devices.

We based our work on an existing tool, the F.A.G. Unit (Fully Automated codeGenerator). Its objective is to make the COM model usable in a real-time environment; its goal is to create a proxy that can be used for testing different areas of real-time systems, for instance timing specifications and resource usage. The F.A.G. Unit reads the type library of the target COM object and its corresponding XML application descriptor file, and from these generates the .cpp and .h files for a proxy object. This proxy serves as an intermediary between the calling and called COM objects and performs certain tasks specified in the XML file, such as time measurement and logging.

In F.A.G. the proxy was designed to compile in Visual C++ 6.0; we therefore changed several files so that they can be compiled by Microsoft eMbedded Visual C++. In particular, we focused on the factory pattern and the templates used to generate the code. We added two services to the existing tool: Synchronization and Timeout.

To develop this work we studied component-based theory and its implementation through the Microsoft COM/DCOM technology. We developed many examples with this technology, first for desktop applications and then for embedded applications, in particular for the Microsoft Windows CE operating system.

This is our first work in English, and it was part of our goal in the Erasmus project that we carried out in Västerås from October to March.

5 – References

1. Szyperski C., Component Software – Beyond Object-Oriented Programming (2nd edition), ISBN 0-201-74572-0, Addison-Wesley, 2002;

2. Pedro J. Clemente, Juan Hernández, Aspect Component Based Software Engineering, University of Extremadura, Spain;

3. S. Agrawal & P. Bhatt, Real-time Embedded Software Systems: An Introduction, Tata Consultancy Services;

4. Herman Bruyninckx, Real-Time and Embedded Guide, K.U.Leuven, Mechanical Engineering;

5. Frank E. Redmond III, DCOM: Microsoft Distributed Component Object Model, ISBN 0-764-58044-2, IDG Books Worldwide Inc., 1997;

6. Niranjan Babu Kalla, What is the .NET Framework?, http://www.aspfree.com;

7. PSION Teklogix Inc., Windows CE, Pocket PC and Software Development Considerations, November 2003;

8. Kaj Hänninen, Jukka Mäki-Turja, Component Technology in Resource Constrained Embedded Real-Time Systems, Department of Computer Science and Engineering, Mälardalen University, Västerås, Sweden;

9. Frank Lüders, Daniel Flemström, Anders Wall, and Ivica Crnkovic, A Prototype Tool for Software Component Services in Embedded Real-Time Systems, Department of Computer Science and Engineering, Mälardalen University, Västerås, Sweden;
