MSIT 4B Embedded Systems

Chapter 1

Embedded Systems

The basic objective of this chapter is to provide the reader with a clear overview of embedded systems. The chapter establishes the distinction between embedded systems and other computing devices. Some of the basic components of embedded systems are also discussed.

1.0 INTRODUCTION

An embedded system is a special-purpose computer system, usually built into a small device. An embedded system must meet very different requirements than a general-purpose personal computer. It has a processor, specific supporting devices, and a compact operating system. These components are just adequate for the specific application for which the system is designed. In general, an embedded system is a device meant for a specific application: a small, compact microcomputer that performs a specific job.

1.1 DIFFERENCES BETWEEN DESKTOP/LAPTOP COMPUTER AND EMBEDDED SYSTEM

A desktop or a laptop computer is a general-purpose computing device. We can use it for a variety of applications such as computing, playing games, word processing, software development, and so on. The general-purpose computer permits the addition of new compatible software tools for application development. However, an embedded system differs from a laptop/desktop computer in the following ways:

• Embedded systems do a particular job. They cannot be programmed to do different jobs.


• Embedded systems have very limited resources, especially memory. They have only semiconductor memory; secondary memories such as hard disks or compact discs are not available.

• Most embedded systems work in real time, i.e., their tasks must meet timing constraints.

• Embedded systems normally operate on battery power, so power consumption must be highly optimized; the hardware is therefore designed with power consumption in mind.

• Embedded systems must be highly reliable. There is no compromise on this.

Embedded systems are being used in almost all fields, such as:

• Consumer electronics

• Office Automation

• Industrial Automation

• Medical Electronics

• Computer Networking

• Telecommunications

• Wireless technologies

• Instrumentation

• Security

• Finance

Specific examples of embedded systems include:

• Automatic teller machines (ATMs)

• Cellular telephones and telephone switches

• Computer network equipment, including routers, switches, and firewalls

• Computer printers

• Disk drives (floppy disk drives and hard disk drives)

• Engine controllers and antilock brake controllers for automobiles

• Home automation products, like thermostats, air conditioners, sprinklers, and security monitoring systems

• Handheld calculators

• Household appliances, including microwave ovens, washing machines, television sets, and DVD players/recorders

• Inertial guidance systems, flight control hardware/software, and other integrated systems in aircraft and missiles

• Medical equipment, such as blood pressure monitors and blood sugar monitors

• Measurement equipment such as digital storage oscilloscopes, logic analyzers, and spectrum analyzers

• Multifunction wrist watches

• Personal Digital Assistants (PDAs), i.e., small handheld computers

• Programmable logic controllers (PLCs) for industrial automation and monitoring

• Stationary videogame consoles and handheld game consoles

1.2 CHARACTERISTICS

Two major areas of difference between embedded systems and general-purpose computers are cost and power consumption. Since many embedded systems are produced in the tens of thousands to millions of units, reducing cost is a major concern. Embedded systems often use a (relatively) slow processor and a small memory size to minimize costs. The slowness is not just clock speed: the whole architecture of the computer is often intentionally simplified to lower costs. For example, embedded systems often use peripherals controlled by synchronous serial interfaces, which are ten to hundreds of times slower than comparable peripherals used in PCs. Programs on an embedded system often must run with real-time constraints and limited hardware resources; often there is no disk drive, operating system, keyboard, or screen. A flash drive may replace rotating media, and a small keypad and LCD screen may be used instead of a PC's keyboard and screen.

Firmware is the name for software that is embedded in hardware devices, e.g., in one or more ROM/flash memory IC chips. Embedded systems are routinely expected to maintain 100% reliability while running continuously for long periods. Firmware is therefore usually developed and tested to much stricter requirements than general-purpose software, which can usually be restarted easily if a problem occurs.

1.3 OVERVIEW OF EMBEDDED SYSTEM ARCHITECTURE

Every embedded system consists of custom-built hardware around a central processing unit (CPU). The hardware also contains memory chips onto which the software is loaded. The software residing on the memory chip is also called firmware. An embedded architecture can be represented as a layered architecture, as shown in Figure 1.1: the operating system runs above the hardware, and the application software runs above the operating system. The same layered architecture holds good for general-purpose computers also. However, embedded systems sometimes need not possess an operating system, for example, toys, remote control equipment, etc. Other applications do need an operating system, and all applications then run under it. Most embedded systems in computing, mobile, and network applications work under a real-time operating system.

[Figure 1.1 Layered architecture of an embedded system: application software above the operating system, which sits above the hardware.]

The hardware block diagram of an embedded system is shown in Figure 1.2. The building block comprises the following:

• Central processing unit

• Memory

• Input devices

• Output devices

• Communication interfaces

• Application-specific circuitry

1.3.1 Central Processing Unit

The central processing unit can be a microcontroller, a microprocessor, or a DSP chip. The microcontroller is a low-cost device that has all the necessary components, such as memory, serial interfaces, and analog/digital converters, embedded in one single chip. Microcontrollers can be used for applications that work within their specifications. A microprocessor is more powerful than a microcontroller, but it needs external devices to be interfaced to it to build an embedded system. The DSP chip is mostly used for signal processing applications, for example, in the design of embedded systems that require audio and video coding. There are many different CPU architectures used in embedded designs. One common configuration for embedded systems is the system on a chip, an application-specific integrated circuit, for which the CPU core is purchased as intellectual property to add to the IC's design.

[Figure 1.2 Architecture of an embedded system: the CPU connects to read-only memory, random-access memory, input and output devices, a communication interface, and application-specific circuitry.]

1.3.2 Memory

The memory used in embedded system design is semiconductor memory. Usually two types of memory are used. RAM is used for storing temporary information during processing. The most important memory used in an embedded system, on the other hand, is a read-only memory called flash.

Normally the operating system is stored in FLASH. Embedded Systems do not have secondary memory devices interfaced to them.

1.3.3 Input devices

Embedded systems have minimal input devices; they typically do not have a mouse, keyboard, and so on. Usually a small keypad with a minimal number of buttons is provided to issue specific commands.

1.3.4 Output devices

Most embedded systems have either LEDs or a small LCD screen interfaced to them to display information.

1.3.5 Communication Interfaces

Embedded systems need to interact with other devices. Based on the requirement, the embedded systems may have one or more of the following interfaces connected to them.

• Serial communication interface

• Parallel interface

• USB

• PCMCIA

• JTAG

• Ethernet

• Bluetooth

• Infrared

1.3.6 Application specific circuitry

Sensors, transducers, and control circuitry may need to be interfaced with the embedded system for specific applications.

1.4 SOFTWARE TOOLS USED IN THE EMBEDDED SYSTEMS

Like a typical computer programmer, embedded system designers use compilers, assemblers and debuggers to develop an embedded system. These software tools can come from several sources:

• Software companies that specialize in the embedded market

• Ports of the GNU software development tools

• Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative of a common PC processor.

Embedded system designers also use a few software tools rarely used by typical computer programmers.

• Some designers keep a utility program to turn data files into code, so that they can include any kind of data in a program.

• Most designers also have utility programs to add a checksum or CRC to a program, so it can check its program data before executing it.
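The checksum idea can be sketched in C as below. This is a minimal sketch assuming the build utility appends a simple 32-bit additive checksum and that the linker script exports the placeholder symbols shown; a real project would more likely use a CRC generated by its own tools.

    #include <stdint.h>

    /* Placeholder symbols that a linker script would have to provide. */
    extern const uint8_t  __program_start[];
    extern const uint8_t  __program_end[];      /* just past the image            */
    extern const uint32_t __program_checksum;   /* value stored by the build tool */

    /* Recompute the additive checksum over the program image and compare it
     * with the stored value before starting the application. */
    int program_image_ok(void)
    {
        uint32_t sum = 0;
        for (const uint8_t *p = __program_start; p < __program_end; ++p)
            sum += *p;
        return sum == __program_checksum;
    }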

1.4.1 Operating system

Embedded systems either have no operating system or a specialized embedded operating system (often a real-time operating system, or RTOS); sometimes the programmer is assigned to port one of these to the new system. The operating system is a bundle of software that manages the hardware resources besides supporting the user applications. Real-time operating systems are discussed in detail in Chapter 2.

1.4.2 Real Time Operating System

A Real-Time Operating System, or RTOS, is an operating system that has been developed for real-time applications, typically embedded applications. RTOSs usually have the following characteristics:

• Small footprint (doesn't use much memory)

• Pre-emptable (any hardware event can cause a task to run)

• Multi-architecture (code ports to another type of CPU)

• Predictable response times to electronic events

Many real-time operating systems have scheduler and hardware driver designs that minimize the periods for which interrupts are disabled, a number sometimes called the interrupt latency. Many also include special forms of memory management that limit the possibility of memory fragmentation and assure a minimal upper bound on memory allocation and de-allocation times.

1.5 SUMMARY

In this chapter we have covered the definition of an embedded system and its composition in terms of hardware and software. We have also reviewed the different applications of embedded systems. Embedded systems need to communicate with the outside world, so the various possible communication interfaces were also discussed.

1.6 QUESTIONS

1. What is an embedded System?

2. How is a desktop PC different from an embedded system?

3. List some of the important applications of embedded systems.

4. Discuss the hardware architecture of a typical embedded system using a block diagram.

5. What are the communication interfaces an embedded system can have?

6. What is an RTOS?

Chapter 2

Computing Platforms

This chapter enables the reader to understand the buses, memory devices, and I/O devices used in the design of embedded systems.

2.1 INTRODUCTION

This chapter describes computing platforms created using microprocessors, I/O devices, and memory components. The microprocessor is an important element of the embedded computing system, but it cannot do its job without memories and I/O devices. Thus one needs to understand how to interconnect microprocessors and devices using the CPU bus. There are many similarities between the platforms required for different applications, so we can extract some generally useful principles by examining a few basic concepts.

This chapter discusses the following hardware components of embedded systems:

• CPU bus

• Memories

• Types of I/O devices

• Techniques for interfacing memories and I/O devices to the CPU bus

• Structure of the complete platform

• Development and debugging


• Basic concepts in manufacturing testing

• Alarm clock design

2.2 THE CPU BUS

A computer system encompasses much more than the CPU; it also includes memory and I/O devices. The bus is the mechanism by which the CPU communicates with memory and devices. A bus is, at a minimum, a collection of wires, but the bus also defines a protocol by which the CPU, memory, and devices communicate. One of the major roles of the bus is to provide an interface to memory and to I/O devices.

Bus Protocols

The basic building block of most bus protocols is the four-cycle handshake, illustrated in Figure 2.1. The handshake ensures that when two devices want to communicate, one is ready to transmit and the other is ready to receive. The handshake uses a pair of wires dedicated to the handshake: enq (meaning enquiry) and ack (meaning acknowledge). Extra wires are used for the data transmitted during the handshake. The four cycles are described below.

1. Device 1 raises its output to signal an enquiry, which tells device 2 that it should get ready to listen for data.

2. When device 2 is ready to receive, it raises its output to signal an acknowledgment. At this point, devices 1 and 2 can transmit or receive.

3. Once the data transfer is complete, device 2 lowers its output, signaling that it has received the data.

4. After seeing that ack has been released, device 1 lowers its output.

At the end of the handshake, both handshaking signals are low, just as they were at the start of the handshake. The system has thus returned to its original state in readiness for another handshake-enabled data transfer.

[Figure 2.1 The four-cycle handshake: devices 1 and 2 exchange the enq and ack signals over four time steps.]
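To make the protocol concrete, here is device 1's side of the four-cycle handshake written as C pseudocode. The pin-level helpers (set_enq, read_ack, put_data) are hypothetical; real designs implement the handshake directly in hardware.

    #include <stdint.h>

    extern void set_enq(int level);      /* drive the enq wire   */
    extern int  read_ack(void);          /* sample the ack wire  */
    extern void put_data(uint8_t value); /* drive the data wires */

    void handshake_send(uint8_t value)
    {
        put_data(value);        /* data must be valid before enq rises         */
        set_enq(1);             /* 1. enquiry: tell device 2 to get ready      */
        while (!read_ack())     /* 2. wait for device 2 to acknowledge         */
            ;
        /* data is transferred while both enq and ack are high */
        while (read_ack())      /* 3. device 2 lowers ack when it has the data */
            ;
        set_enq(0);             /* 4. release enq; both wires are low again    */
    }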

Microprocessor buses build on the handshake for communication between the CPU and other system components. The term bus is used in two ways. The most basic use is as a set of related wires, such as address wires. However, the term may also mean a protocol for communicating between components. To avoid confusion, we will use the term bundle to refer to a set of related signals. The fundamental bus operations are reading and writing. Figure 2.2 shows the structure of a typical bus that supports reads and writes. The major components are:

• Clock provides synchronization to the bus components;

• R/W is true when the bus is reading and false when the bus is writing;

• Address is an a-bit bundle of signals that transmits the address for an access;

• Data is an n-bit bundle of signals that can carry data to or from the CPU; and

• Data ready' signals when the values on the data bundle are valid.

All transfers on this basic bus are controlled by the CPU. The CPU can read or write a device or memory, but devices or memory cannot initiate a transfer. This is reflected by the fact that R/W and address are unidirectional signals, since only the CPU can determine the address and direction of the transfer.

The behavior of a bus is most often specified as a timing diagram. A timing diagram shows how the signals on a bus vary over time, but since values like the address and data can take on many values, some standard notation is used to describe signals, as shown in Figure 2.3.

[Figure 2.2 A typical microprocessor bus: the CPU, memory, and devices share the clock, R/W, a-bit address, data ready, and n-bit data lines.]

[Figure 2.3 Timing diagram notation: signal A shows known high/low values with rising and falling edges; signals B and C alternate between stable and changing states, with a timing constraint drawn between A and B.]

A’s value is known at all times, so it is shown as a standard waveform that changes between zero and one. B and C alternate between changing and stable states. A stable signal has, as the name implies, a stable value that could be measured by an oscilloscope, but the exact value of that signal does not matter for purposes of the timing diagram. For example, an address bus may be shown as stable when the address is present, but the bus’s timing requirements are independent of the exact address on the bus. A signal can go between a known 0/1 state and a stable/changing state. A changing signal does not have a stable value.

Changing signals should not be used for computation. To be sure that signals go to their proper values at the proper times, timing diagrams sometimes show timing constraints. We draw timing constraints in two different ways, depending on whether we are concerned with the amount of time between events or only the order of events. The timing constraint from A to B, for example, shows that A must go high before B becomes stable. The constraint from A to B also has a time value of 10 ns, indicating that A goes high at least 10 ns before B goes stable.

The state machine view of the bus transaction is also helpful and a useful complement to the timing diagram. Figure 2.4 shows the CPU and device state machines for the read operation. As with a timing diagram, we do not show all the possible values of the address and data lines but instead concentrate on the transitions of control signals. When the CPU decides to perform a read transaction, it moves to a new state, sending bus signals that cause the device to behave appropriately. The device's state transition graph captures its side of the protocol.

Some buses have data bundles that are smaller than the natural word size of the CPU. Using fewer data lines reduces the cost of the chip. Such buses are easiest to design when the CPU is natively byte-addressable. A more complicated protocol hides the smaller data sizes from the instruction execution unit in the CPU. Byte addresses are sequentially sent over the bus, receiving one byte at a time; the bytes are assembled inside the CPU's bus logic before being presented to the CPU proper.

Some buses use multiplexed address and data. As shown in Figure 2.5, additional control lines are provided to tell whether the value on the address/data lines is an address or data. Typically, the address comes first on the combined address/data lines, followed by the data. The address can be held in a register until the data arrive so that both can be presented to the device (such as a RAM) at the same time.

[Figure 2.4 State diagrams for the bus read transaction (CPU and device state machines).]

[Figure 2.5 Bus signals for multiplexing address and data: separate address enable and data enable lines indicate what is currently on the shared lines between the CPU and the device.]

2.2.1 DMA

Standard bus transactions require the CPU to be in the middle of every read and write transaction.

However, there are certain types of data transfers in which the CPU does not need to be involved. For example, a high-speed I/O device may want to transfer a block of data into memory. While it is possible to write a program that alternately reads the device and writes to memory, it would be faster to eliminate the CPU’s involvement and let the device and memory communicate directly. This capability requires that some unit other than the CPU be able to control operations on the bus.

Direct memory access (DMA) is a bus operation that allows reads and writes not controlled by the CPU. A DMA transfer is controlled by a DMA controller, which requests control of the bus from the CPU. After gaining control, the DMA controller performs read and write operations directly between device and memory. Figure 2.6 shows the configuration of a bus with a DMA controller. DMA requires the CPU to provide two additional bus signals:

• The bus request is an input to the CPU through which DMA controllers ask for ownership of the bus.

• The bus grant signals that the bus has been granted to the DMA controller.

A device that can initiate its own bus transfer is known as a bus master. Devices that do not have the capability to be bus masters do not need to connect to a bus request and bus grant.

[Figure 2.6 A bus with a DMA controller: the DMA controller, CPU, memory, and device share the bus signals; bus request and bus grant connect the DMA controller to the CPU.]

The DMA controller uses these two signals to gain control of the bus using a classic four-cycle handshake. The bus request is asserted by the DMA controller when it wants to control the bus, and the bus grant is asserted by the CPU when the bus is ready. The CPU will finish all pending bus transactions before granting control of the bus to the DMA controller. When it does grant control, it stops driving the other bus signals: R/W, address, and so on. Upon becoming bus master, the DMA controller has control of all bus signals (except, of course, for bus request and bus grant).

Once the DMA controller is bus master, it can perform reads and writes using the same bus protocol as any CPU-driven bus transaction. Memory and devices do not know whether a read or write is performed by the CPU or by a DMA controller. After the transaction is finished, the DMA controller returns the bus to the CPU by de-asserting the bus request, causing the CPU to de-assert the bus grant.

The CPU controls the DMA operation through registers in the DMA controller. A typical DMA controller includes the following three registers:

• A starting address register specifies where the transfer is to begin.

• A length register specifies the number of words to be transferred.

• A status register allows the DMA controller to be operated by the CPU.

What is the CPU doing during a DMA transfer? It cannot use the bus. If the CPU has enough instructions and data in the cache and registers, it may be able to continue doing useful work for quite some time and may not notice the DMA transfer. But once the CPU needs the bus, it stalls until the DMA controller returns bus mastership to the CPU.

To prevent the CPU from idling for too long, most DMA controllers implement modes that occupy the bus for only a few cycles at a time. For example, the transfer may be made 4, 8 or 16 words at a time. As illustrated in Figure 2.7, after each block, the DMA controller returns control of the bus to the CPU and goes to sleep for a preset period, after which it requests the bus again for the next block transfer.
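As a concrete illustration of the register interface described above, the sketch below drives a generic DMA controller through its three registers and then polls for completion. The register layout, bit assignments, and base address are placeholders invented for the example, not those of any real controller.

    #include <stdint.h>

    typedef struct {
        volatile uint32_t start_addr; /* starting address register             */
        volatile uint32_t length;     /* number of words to transfer           */
        volatile uint32_t status;     /* bit 0 = start, bit 1 = done (assumed) */
    } dma_regs_t;

    #define DMA ((dma_regs_t *)0x40001000u)   /* placeholder base address */

    void dma_start_and_wait(uint32_t start, uint32_t nwords)
    {
        DMA->start_addr = start;    /* where the transfer is to begin         */
        DMA->length     = nwords;   /* how many words to move                 */
        DMA->status     = 1u;       /* controller requests the bus and runs   */

        while ((DMA->status & 2u) == 0u)
            ;                       /* CPU could keep working from cache here */
    }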

2.2.2 System Bus Configurations

A microprocessor system often has more than one bus. As shown in Figure 2.8, high-speed devices may be connected to a high-performance bus, while lower-speed devices are connected to a different bus. A small block of logic known as a bridge allows the buses to connect to each other. Three reasons to do this are summarized below.

• Higher-speed buses may provide wider data connections.

• A high-speed bus usually requires more expensive circuits and connectors. The cost of low-speed devices can be held down by using a lower-speed, lower-cost bus.

• The bridge may allow the buses to operate independently, thereby providing some parallelism in I/O operations.

Let's consider the operation of a bus bridge between what we will call a fast bus and a slow bus, as illustrated in Figure 2.9. The bridge is a slave on the fast bus and the master of the slow bus. The bridge takes commands from the fast bus on which it is a slave and issues those commands on the slow bus. It also returns the results from the slow bus to the fast bus; for example, it returns the results of a read on the slow bus to the fast bus.

[Figure 2.7 Cyclic scheduling of a DMA request: the DMA controller transfers four-word blocks, returning the bus to the CPU between blocks.]

[Figure 2.8 A multibus system: the CPU and memory sit on a high-speed bus, low-speed devices sit on a low-speed bus, and a bridge connects the two.]

[Figure 2.9 UML state diagram of bus bridge operation: from an idle state, a fast-bus read or write causes the bridge (a slave on the fast bus, master of the slow bus) to issue the corresponding address and data on the slow bus, wait for the slow acknowledgment, and then return the data or acknowledgment on the fast bus.]

The upper sequence of states handles a write from the fast bus to the slow bus. These states must read the data from the fast bus and set up the handshake for the slow bus. Operation on the fast and slow sides of the bus bridge should be overlapped as much as possible to reduce the latency of bus-to-bus transfers. Similarly, the bottom sequence of states reads from the slow bus and writes the data to the fast bus.

The bridge serves as a protocol translator between the two buses as well. If the buses are very close in protocol operation and speed, a simple state machine may be enough. If there are larger differences in the protocol and timing between the two buses, the bridge may need to use registers to hold some data values temporarily.

2.2.3 ARM Bus

Since the ARM CPU is manufactured by many different vendors, the bus provided off-chip can vary from chip to chip. ARM has created a separate bus specification for single-chip systems. The AMBA bus [ARM99A] supports CPUs, memories, and peripherals integrated in a system-on-silicon. As shown in Figure 2.10, the AMBA specification includes two buses. The AMBA high-performance bus (AHB) is optimized for high-speed transfers and is directly connected to the CPU. It supports several high-performance features: pipelining, burst transfers, split transactions, and multiple bus masters.

[Figure 2.10 Elements of the ARM AMBA bus system: the ARM CPU, SRAM, external DRAM controller, and high-speed I/O devices sit on the AMBA high-performance bus (AHB); a bridge connects it to the on-chip AMBA peripherals bus (APB), which carries the low-speed I/O devices.]

A bridge can be used to connect the AHB to an AMBA peripherals bus (APB). This bus is designed to be simple and easy to implement; it also consumes relatively little power. The APB assumes that all peripherals act as slaves, simplifying the logic required in both the peripherals and the bus controller. It also does not perform pipelined operations, which simplifies the bus logic.

2.2.4 SHARC Bus

The SHARC uses a different configuration since it contains both program and data memory on-chip. There are two external interfaces of interest: the external memory interface and the host interface. The SHARC’s DMA system can be used to transfer data between internal memory and external memory or devices.

The external memory interface allows the SHARC to address up to four gigawords of external memory. External memory can hold either instructions or data. The external data bus can vary in width from 16 to 48 bits, depending on the type of memory access (floating-point instruction, etc.) and whether DMA will be used to mediate the access. Different units in the processor have different amounts of access to the external address space; the PM address bus (which is only 24 bits wide) can access only 12 megawords. The external memory interface is fairly straightforward. Figure 2.11 summarizes the signals in the interface. External memory is divided into four banks of equal size. Bank 0 starts at 0x00400000, followed by banks 1, 2, and 3. The bank size is controlled by the internal register MSIZE(3:0); the size can range from 8 Kwords to 256 Mwords. The MS'(3:0) outputs indicate which bank is being accessed and can be used as chip select signals. Each bank has its own wait state generator. The WAIT register can be used to set up wait states for external memory banks. The memory above the banks is known as unbanked external memory.

The host interface is used to connect the SHARC to standard microprocessor buses. Since that CPU will typically be used to set up DSP operations to be performed by the SHARC, it is referred to as the host system. The host interface signals are summarized in Figure 2.12. Once the host processor has control of the bus using HBR', HBG', and REDY, it can read and write the SHARC's internal memory and the IOP registers.

Figure 2.11 SHARC external memory interface signals:
  ADDR(31:0)   External address bus
  DATA(47:0)   External data bus
  MS'(3:0)     Memory bank select
  RD'          Read strobe
  WR'          Write strobe
  PAGE         DRAM page boundary
  SW'          Synchronous write select
  ACK          Acknowledge

Figure 2.12 SHARC host interface signals:
  HBR'    Host bus request
  HBG'    Host bus grant
  CS'     Chip select
  REDY    Host bus acknowledge
  SBTS'   Suspend bus tristate

The SHARC includes an on-board DMA controller as part of the I/O processor. The DMA controller can perform external port block data transfers and data transfers on the link and serial ports. The channels are assigned to various uses, including external memory, link ports, or serial ports. The controller has ten channels that can be directed to any of the external targets. The external and link port DMA channels can be used for bidirectional transfers, while the serial port DMA channels are unidirectional.

A DMA channel operates in a manner similar to the data address generators. An index register (IIx), which holds the base address, and a modify register (IMx), which holds the address increment, configure a transfer buffer in internal memory. The index register address is actually an offset; that value is offset by 0x00020000 (the address of the first internal RAM location). After each transfer, the DMA controller updates the current address by the IMx value. Each channel also has a count register (Cx) that holds the number of words to be transferred. DMA transfer priorities are mostly fixed: serial ports have the highest priority, then link ports, and then the external port. However, external port priorities can be put in a rotating priority mode. A DMA transfer is started by writing parameters to the IIx, IMx, and Cx registers while the DMA enable bit (DEN) is low and then setting DEN to 1 to enable DMA.

Each DMA channel has its own interrupt. When a channel's Cx register goes to zero, an interrupt is generated to signal the end of that transfer. The interrupt priorities of the channels are fixed, but DMA interrupts can be masked and disabled. The DMA controller supports chained DMA transfers. When used in chain mode, the controller automatically sets up the next DMA operation. A chain pointer register (CP) points to the next set of DMA parameters stored in internal memory. Upon completing one transfer, the DMA controller automatically reads the channel parameters from the location pointed to by the CP and uses them to update the registers for the next transfer. The transfer automatically begins without having to set DEN to 0 and back to 1.
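The start-up sequence just described looks like this in C. Only the IIx/IMx/Cx/DEN ordering comes from the text; the register addresses below are placeholders, so the real memory map must come from the device data sheet.

    #include <stdint.h>

    #define DMA_II   (*(volatile uint32_t *)0x0002001Cu) /* index (base) register - placeholder */
    #define DMA_IM   (*(volatile uint32_t *)0x00020020u) /* modify (increment) - placeholder    */
    #define DMA_C    (*(volatile uint32_t *)0x00020024u) /* word count - placeholder            */
    #define DMA_CTRL (*(volatile uint32_t *)0x00020028u) /* control, bit 0 = DEN - placeholder  */

    void start_dma(uint32_t index, uint32_t modify, uint32_t count)
    {
        DMA_CTRL &= ~1u;   /* keep DEN low while the parameters are written  */
        DMA_II = index;    /* internal-memory offset of the buffer           */
        DMA_IM = modify;   /* increment applied after each word              */
        DMA_C  = count;    /* words to transfer; interrupt fires at zero     */
        DMA_CTRL |= 1u;    /* DEN = 1 enables the channel and starts the DMA */
    }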

2.3 MEMORY DEVICES

In this section, we introduce the basic types of memory components that are commonly used in embedded systems. Now that we understand the operation of the bus, we are able to understand the pinouts of these memories and how values are read and written. We also need to understand the varieties of memory cells that are used to build memories. There are several varieties of both read-only and read/write memories, each with its own advantages. After discussing some basic characteristics of memories, we describe RAMs and ROMs.

2.3.1 Memory Device Organization

The most basic way to characterize a memory is by its capacity, such as 4 Mbits. However, manufacturers usually make several versions of a memory of a given size, each with a different data width. For example, a 4-Mbit memory may be available in the following two versions:

• As a 1 M x 4-bit array, a single memory access obtains a 4-bit data item, with a maximum of 2^20 different addresses.

• As a 4 M x 1-bit array, a single memory access obtains a 1-bit data item, with a maximum of 2^22 different addresses.

The height/width ratio of a memory is known as its aspect ratio. The best aspect ratio depends on the amount of memory required. Internally, the data are stored in a two-dimensional array of memory cells as shown in Figure 2.13. Because the array is stored in two dimensions, the n-bit address received by the chip is split into a row and a column address (with n = r + c). The row and column select a particular memory cell. If the memory’s external width is 1 bit, the column address selects a single bit; for wider data widths, the column address can be used to select a subset of the columns. Most memories include an enable signal that controls the tri-stating of data onto the memory’s pins. A read/write signal (R/W in the figure) on read/write memories controls the direction of data transfer; memory chips do not typically have separate read and write data pins.
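A small worked example of the n = r + c split: for a hypothetical 1 M x 4-bit part organized as 1024 rows by 1024 columns (r = c = 10), the top ten address bits select the row and the bottom ten select the column. The split happens inside the memory; the code only shows the arithmetic.

    #include <stdint.h>

    #define COL_BITS 10u   /* c = 10 column-address bits for this hypothetical part */

    static inline uint32_t row_of(uint32_t addr) { return addr >> COL_BITS; }
    static inline uint32_t col_of(uint32_t addr) { return addr & ((1u << COL_BITS) - 1u); }

    /* Example: address 0x5432A splits into row 0x150 and column 0x32A. */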

[Figure 2.13 Memory cell organization: the n-bit address is split into an r-bit row and a c-bit column address into the memory array; R/W' and enable control the bidirectional data pins.]

2.3.2 Random-Access Memories

Random-access memories can be both read and written. They are called random access because, unlike magnetic disks, addresses can be read in any order. There are two major categories of random-access memory (RAM):

• Static RAM (SRAM)

• Dynamic RAM (DRAM)

These two types of memory have substantially different characteristics, as summarized below.

• SRAM is faster than DRAM.

• SRAM consumes more power than DRAM.

• More DRAM can be put on a single chip.

• DRAM values must be periodically refreshed.

A static RAM and its operation are shown in Figure 2.14. The static RAM has four inputs:

CE' is the chip enable input. It is active low: when CE' = 1, the SRAM's data pins are disabled, and when CE' = 0, the data pins are enabled.

R/W controls whether the current operation is a read (R/W =1) or a write (R/W = 0). Read and write are normally specified relative to the CPU, so read means reading from RAM and write means writing to RAM.

Adrs specifies the address for the read or write.

Data is a bidirectional bundle of signals for data transfer. When R/W = 1, the pins are outputs, and when R/W = 0, the data pins are inputs. Notice that there is no clock. SRAMs do have timing constraints on when signals can change, but those constraints are not referenced to a clock.

[Figure 2.14 Static RAM operation: the SRAM's CE', R/W, address, and data pins, with timing waveforms for a read and a write.]

A read operation on the SRAM occurs as follows:

1. CE' is set to 0, enabling the chip, with R/W = 1.

2. An address is presented on the address lines.

3. After some delay, data appear on the data lines.

A write operation is similar:

1. CE’ is set to 0.

2. R/W is set to 0 for writing.

3. An address is set on the address lines and data is set on the data lines.
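The two sequences can be written out against hypothetical pin-level helpers, purely to make the ordering concrete; in a real system the SRAM is wired to the bus and these steps are carried out by the bus logic.

    #include <stdint.h>

    extern void    set_ce_n(int level);     /* CE' pin                   */
    extern void    set_rw(int level);       /* R/W pin                   */
    extern void    drive_address(uint32_t adrs);
    extern void    drive_data(uint8_t value);
    extern uint8_t sample_data(void);
    extern void    wait_access_time(void);  /* wait out the access delay */

    uint8_t sram_read(uint32_t adrs)
    {
        set_rw(1);                     /* 1. R/W = 1 selects a read            */
        set_ce_n(0);                   /*    CE' = 0 enables the chip          */
        drive_address(adrs);           /* 2. present the address               */
        wait_access_time();            /* 3. after some delay ...              */
        uint8_t value = sample_data(); /*    ... data appear on the data lines */
        set_ce_n(1);
        return value;
    }

    void sram_write(uint32_t adrs, uint8_t value)
    {
        set_ce_n(0);                   /* 1. CE' = 0 enables the chip          */
        set_rw(0);                     /* 2. R/W = 0 selects a write           */
        drive_address(adrs);           /* 3. present address and data together */
        drive_data(value);
        wait_access_time();
        set_ce_n(1);
    }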

The interface to a dynamic RAM is more complex because DRAMs are designed to minimize the number of required pins. The interface to a basic dynamic RAM is illustrated in Figure 2.15. In addition to the signals found in an SRAM, the DRAM has row address select (RAS') and column address select (CAS') inputs. These signals are needed because address lines are provided for only half the address. The timing diagram for a read shows that the address is presented in the following two steps:

[Figure 2.15 Basic dynamic RAM: the DRAM's CE', R/W, RAS', CAS', address, and data pins, with the read timing diagram showing the row address followed by the column address.]

• First, RAS' is set to 0 and the row part of the address (the top bits of the address) is set on the address lines.

• Next, CAS' is set to 0 and the column part of the address (the bottom bits of the address) is put on the address lines.

DRAMs must be refreshed because of the internal circuitry used to store values. Unlike SRAMs, DRAMs store values on capacitors. Because of parasitic resistances within the chip, the charge stored on the capacitors can leak away. The typical lifetime of data in a dynamic RAM is about a millisecond. The data can be refreshed by performing an internal read operation in which the data value is thrown away. A single refresh request can refresh an entire row of the DRAM. DRAMs provide a special quick refresh mode known as CAS-before-RAS refresh. As the name implies, this mode is initiated by asserting CAS' before RAS', which causes the RAM to refresh an internally selected row and update its internal refresh counter. Logic external to the DRAM, known as a memory controller, periodically performs a CAS-before-RAS refresh at a rate that allows the entire memory to be refreshed within the required refresh interval. The interface between the DRAM and the CPU must take refreshes into account: a read or write request cannot be satisfied until a refresh is complete. The memory controller stands between the bus and the DRAM and inserts wait states during the intervals in which it is refreshing the memory.
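A rough budget makes the memory controller's job concrete: using the ~1 ms retention figure quoted above and a hypothetical 1024-row array, every row must be visited within the retention time, so a refresh must be issued roughly every retention / rows.

    #define RETENTION_US 1000u   /* ~1 ms data lifetime (figure from the text) */
    #define NUM_ROWS     1024u   /* hypothetical number of rows                */

    /* One CAS-before-RAS refresh roughly every 1000 us / 1024 rows ~ 0.98 us. */
    static const unsigned refresh_period_ns = (RETENTION_US * 1000u) / NUM_ROWS; /* 976 ns */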

Since programs will often access several locations in the same region of memory, an early feature developed to improve DRAM performance is page mode. As shown in Figure 2.16, a page mode access supplies the row address only once but supplies many column addresses. RAS’ is held down while CAS’ is strobed to signal the arrival of successive column addresses. Page mode is typically supported for both reads and writes.

An improved version of page mode is known as EDO for extended data out. A timing diagram for an EDO read is shown in Figure 2.17. An EDO access is similar to page mode in that it allows several column addresses after a single row address. The term EDO comes from the fact that the data are held valid until the falling edge of CAS’, rather than its rising edge as in page mode.

Another method for improving DRAM performance is to introduce a clock. Synchronous DRAMs require events to be referenced to a clock edge: the DRAM's internal circuitry can be made faster because it does not have to derive timing information from asynchronous inputs. As shown in Figure 2.18, a synchronous DRAM has the normal DRAM inputs plus a clock. Changes to the inputs (RAS', CAS', etc.) occur on clock edges, as do DRAM outputs. As a result, the behavior of the part can be described as a finite state machine.

[Figure 2.16 Page mode read accesses in DRAMs: RAS' is held low while CAS' is strobed for successive column addresses.]

Figure 2.18 also shows the basic state transition graph. The default mode for the synchronous DRAM is to be ready to accept a row address; the DRAM can then go into read or write actions, depending on the command it receives from the inputs.

Other types of RAMs with more sophisticated interfaces have been developed for specialized applications:

• A video RAM is designed to speed up video operations. It includes a standard parallel interface, as well as a serial interface fed by a shift register. The shift register can provide bit-by-bit access to the data in parallel with other operations performed on the parallel interface. The serial interface is typically connected to a video display while the parallel interface connects to the microprocessor.

• Rambus is designed to be a high-performance but relatively low-cost RAM system. It includes multiple memory banks that can be addressed in parallel. A variety of features, such as separate control and data buses, contribute to a sustained data transfer rate of well over one gigabyte per second.

[Figure 2.17 Extended-data-out (EDO) access in DRAMs: data remain valid until the falling edge of CAS' for each of the successive column addresses.]

[Figure 2.18 Synchronous DRAM: the interface signals (clock, CE', R/W', RAS', CAS', address) and the state diagram, with a default row-active state leading to read and write states.]

2.3.3 Read-Only Memories

Read-only memories (ROM) are preprogrammed with fixed data. They are very useful in embedded systems since a great deal of the code, and perhaps some data, does not change over time. Read-only memories are also less sensitive to radiation-induced errors.

There are several varieties of ROM available. The first-level distinction to be made is between factory-programmed ROM (sometimes called mask-programmed ROM) and field-programmable ROM. Factory-programmed ROMs are ordered from the factory with particular programming. ROMs can typically be ordered in lots of a few thousand, but clearly factory programming is useful only when the ROMs are to be installed in some quantity. Field-programmable ROMs, on the other hand, can be programmed in the lab. Programming units are sometimes known as ROM burners. To program the ROM, the user generates a programming file in a standard format, plugs a ROM into the ROM burner, and sends the file to the burner for programming.

There are several different types of field-programmable ROM available, some of which can be programmed only once and others that can be reprogrammed. Antifuse-programmable ROM is programmable only once; the programming permanently modifies the chip. This type of ROM is the cheapest but is less flexible than reprogrammable ROM. UV-erasable PROM (UV-EPROM) can be erased using ultraviolet light and then reprogrammed. The chip's package includes a window to allow the UV light to penetrate to the chip; the window must be covered when the chip is in use since the UV content of sunlight is sufficient to erase the chip over a period of months. In addition to raw memory, many microcontrollers come with on-board UV-EPROM for program storage.

Flash PROM is the modern form of electrically erasable PROM. Early forms of electrically erasable memory required high voltages for erasure and programming, which meant that the chips had to be removed from the system for reprogramming. Flash memory uses standard system voltages for erasing and programming, allowing it to be reprogrammed inside a typical system. This enables applications such as automatic distribution of upgrades: the flash memory can be reprogrammed while downloading the new memory contents from a telephone line. Early flash memories had to be erased in their entirety; modern devices allow memory to be erased in blocks. Most flash memories today allow certain blocks to be protected. A common application is to keep the boot-up code in a protected block but allow updates to other memory blocks on the device. As a result, this form of flash is commonly known as boot-block flash.

2.4 I/O DEVICES

In this section we survey some input and output devices commonly used in embedded computing systems. Some of these devices are often found as on-chip devices in microcontrollers; others are generally implemented separately but are still commonly used. Looking at a few important devices now will help us understand both the requirements of device interfacing in this chapter and the uses of devices in programming in this and later chapters.

2.4.1 Timers and Counters

Timers and counters are distinguished from one another largely by their use, not their logic. Both are built from logic with registers to hold the current value, plus an increment input that adds one to the current register value. However, a timer has its count input connected to a periodic clock signal to measure time intervals, while a counter has its count input connected to an aperiodic signal in order to count the number of occurrences of some external event. Because the same logic can be used for either purpose, the device is often called a counter/timer.

Figure 2.19 shows the internals of a counter/timer to illustrate its operation. An n-bit counter/timer uses an n-bit register to store the current state of the count and an array of half subtractors to decrement the count when the count signal is asserted. Combinational logic checks when the count equals zero; the done output signals the zero count. It is often useful to be able to control the time-out, rather than require exactly 2^n events to occur. For this purpose, a reset register provides the value with which the count register is to be loaded. The counter/timer provides logic to load the reset register. Most counters provide both cyclic and acyclic modes of operation. In the cyclic mode, once the counter reaches the done state, it is automatically reloaded and the counting continues. In the acyclic mode, the counter/timer waits for an explicit signal from the microprocessor to resume counting.

[Figure 2.19 Internals of a counter/timer: a count register, a chain of half subtractors, zero-detect logic driving the done output, and a reset register with update logic.]

A watchdog timer is an I/O device that is used for the internal operation of a system. As shown in Figure 2.20, the watchdog timer is connected to the CPU bus and also to the CPU's reset line. The CPU's software is designed to periodically reset the watchdog timer before the timer ever reaches its time-out limit. If the watchdog timer ever does reach that limit, its time-out action is to reset the processor. In that case, the presumption is that either a software flaw or a hardware problem has caused the CPU to misbehave. Rather than diagnose the problem, the system is reset to get it operational as quickly as possible.

[Figure 2.20 A watchdog timer: the watchdog sits on the CPU bus and drives the CPU's reset line on time-out.]
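On the software side, servicing the watchdog is usually just a periodic write in the main loop, as in the sketch below. The kick register address is a placeholder; real watchdog peripherals differ in how they are serviced.

    #include <stdint.h>

    #define WATCHDOG_KICK (*(volatile uint32_t *)0x40002000u)  /* placeholder */

    extern void do_one_unit_of_work(void);   /* the application's regular work */

    void main_loop(void)
    {
        for (;;) {
            do_one_unit_of_work();   /* must finish well within the time-out */
            WATCHDOG_KICK = 1u;      /* restart the watchdog counter         */
        }
        /* If the loop ever hangs, the kicks stop, the watchdog times out,
         * and the CPU's reset line is asserted. */
    }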

2.4.2 A/D and D/A Converters

Analog/Digital (A/D) and Digital/Analog (D/A) converters (typically known as ADCs and DACs, respectively) are often used to interface nondigital devices to embedded systems. The design of A/D and D/A converters themselves is beyond the scope of this book; we concentrate instead on the interface to the microprocessor bus. Because A/D conversion requires more complex circuitry, it requires a somewhat more complex interface.

Analog/digital conversion requires sampling the analog input before converting it to digital form. A control signal causes the A/D converter to take a sample and digitize it. There are several different types of A/D converter circuits, some of which take a constant amount of time, while the conversion time of others depends on the sampled value. Variable-time converters provide a done signal so that the microprocessor knows when the value is ready.

A typical A/D interface has, in addition to its analog inputs, two major digital inputs. A data port allows A/D registers to be read and written, and a clock input tells when to start the next conversion. D/A conversion is relatively simple, so the D/A converter interface generally includes only the data value. The input value is continuously converted to analog form.
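A typical driver therefore starts a conversion and polls for the done indication, as in the sketch below; the register addresses and bit positions are assumptions made only so the example is self-contained.

    #include <stdint.h>

    #define ADC_CTRL  (*(volatile uint32_t *)0x40003000u)  /* placeholder */
    #define ADC_DATA  (*(volatile uint32_t *)0x40003004u)  /* placeholder */
    #define ADC_START 0x1u
    #define ADC_DONE  0x2u

    uint16_t adc_read_once(void)
    {
        ADC_CTRL = ADC_START;                 /* ask the converter for a sample */
        while ((ADC_CTRL & ADC_DONE) == 0u)   /* wait for the done indication   */
            ;
        return (uint16_t)ADC_DATA;            /* read back the digitized value  */
    }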

2.4.3 Keyboards

A keyboard is basically an array of switches, but it may include some internal logic to help simplify the interface to the microprocessor. In this section, we build our understanding from a single switch to a microprocessor-controlled keyboard. A switch uses a mechanical contact to make or break an electrical circuit. The major problem with mechanical switches is that they bounce, as shown in Figure 2.21.

[Figure 2.21 Switch bouncing: the switch voltage oscillates several times before settling after the switch is pressed.]

When the switch is depressed by pressing on the button attached to the switch's arm, the force of the depression causes the contacts to bounce several times until they settle down. If this is not corrected, it will appear that the switch has been pressed several times, giving false inputs. A hardware debouncing circuit can be built using a one-shot timer. Software can also be used to debounce switch inputs.

A raw keyboard can be assembled from several switches. Each switch in a raw keyboard has its own pair of terminals, making raw keyboards impractical when a large number of keys is required. More expensive keyboards, such as those used in PCs, actually contain a microprocessor to preprocess button inputs. PC keyboards typically use a 4-bit microprocessor to provide the interface between the keys and the computer. The microprocessor can provide debouncing, but it provides other functions as well.

An encoded keyboard uses some code to represent which switch is currently being depressed. At the heart of the encoded keyboard is the scanned array of switches shown in Figure 2.22. Unlike a raw keyboard, the scanned keyboard array reads only one row of switches at a time. The demultiplexer at the left side of the array selects the row to be read. When the scan input is 1, that value is transmitted to one terminal of each key in the row. If the switch is depressed, the 1 is sensed at that switch's column. Since only one switch in the column is activated, that value uniquely identifies a key. The row address and column output can be used for encoding, or circuitry can be used to give a different encoding.

A consequence of encoding the keyboard is that combinations of keys may not be represented. For example, on a PC keyboard, the encoding must be chosen so that combinations such as control-Q can be recognized and sent to the PC. Another consequence is that rollover may not be allowed. For example, if you press 'a', and then press 'b' before releasing 'a', in most applications you want the keyboard to send an 'a' followed by a 'b'. Rollover is very common in typing at even modest rates.

[Figure 2.22 A scanned key array: a demultiplexer drives the scan signal onto one row at a time and the columns are read to identify the depressed key.]

A naive implementation of the encoder circuitry will simply throw away any character depressed after the first one until all the keys are released. The keyboard microcontroller can be programmed to provide N-key rollover, so that rollover keys are sensed, put on a stack, and transmitted in sequence as keys are released.
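The scan-one-row-at-a-time idea, combined with a simple software debounce, can be sketched as follows. The row-select and column-read helpers are hypothetical stand-ins for whatever I/O ports drive the demultiplexer and read the column lines.

    #include <stdint.h>

    #define NUM_ROWS 4
    extern void    select_row(int row);    /* drive the scan signal onto one row */
    extern uint8_t read_columns(void);     /* one bit per column, 1 = pressed    */
    extern void    delay_ms(unsigned ms);

    /* Returns (row << 8) | column bitmap for a stable key press, or -1 if no
     * key is down. A key must read as pressed on two scans a few milliseconds
     * apart to filter out contact bounce. */
    int scan_keyboard(void)
    {
        for (int row = 0; row < NUM_ROWS; ++row) {
            select_row(row);
            uint8_t cols = read_columns();
            if (cols != 0u) {
                delay_ms(5);                    /* debounce interval        */
                if (read_columns() == cols)     /* still pressed: accept it */
                    return (row << 8) | cols;
            }
        }
        return -1;
    }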

2.4.4 LEDs

Light-emitting diodes (LEDs) are often used as simple displays by themselves, and arrays of LEDs may form the basis of more complex displays. Figure 2.23 shows how to connect an LED to a digital output. A resistor is connected between the output pin and the LED to absorb the voltage difference between the digital output voltage and the 0.7 V drop across the LED when it is on. When the digital output goes to 0, the LED voltage is in the device's off region and the LED does not light.
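A quick worked example of sizing that resistor, assuming a hypothetical 5 V digital output and a target LED current of about 10 mA (the 0.7 V drop is the figure quoted above):

    /* The resistor absorbs the difference between the output voltage and the
     * LED's on-state drop: R = (Vout - Vled) / I. */
    static double led_resistor_ohms(double v_out, double v_led, double i_led)
    {
        return (v_out - v_led) / i_led;
    }
    /* Assumed values: led_resistor_ohms(5.0, 0.7, 0.010) = 430 ohms. */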

2.4.5 Displays

A display device may be either directly driven or driven from a frame buffer. Typically, displays with a small number of elements are driven directly by logic, while large displays use a RAM frame buffer.

The n-digit array, shown in Figure 2.24, is a simple example of a display that is usually directly driven. A single-digit display typically consists of seven segments; each segment may be either an LED or a liquid crystal display (LCD) element. This display relies on the digits being visible for some time after the drive to the digit is removed, which is true for both LEDs and LCDs. The digit input is used to choose which digit is currently being updated, and the selected digit activates its display elements based on the current data value. The display's driver is responsible for repeatedly scanning through the digits and presenting the current value of each to the display.

[Figure 2.23 An LED connected to a digital output through a current-limiting resistor.]

[Figure 2.24 An n-digit display: a demultiplexer selects the digit to be updated from the data value.]

A cathode ray tube (CRT) can be used as either a directly driven or a frame-buffered device. A CRT has horizontal and vertical deflection inputs that position the electron beam over the screen, as well as an intensity input that controls the electron beam intensity and therefore the brightness of the pixel at that point. A calligraphic display is an example of a directly driven CRT display. For example, if you want to draw a diagonal line, you move the electron beam to the start of the line, turn on the beam, and vary both the horizontal and vertical deflection inputs until you reach the end of the line. CRTs are used in calligraphic mode in aircraft displays because they allow lines to be drawn more brightly.

As shown in Figure 2.25, a frame buffer is a random-access memory that is attached to the system bus. The microprocessor writes values into the frame buffer in whatever order is desired. When a CRT is connected to a frame buffer, it usually operates in raster order by reading pixels sequentially, displaying a row at a time. Because of tradition inherited from broadcast television, displays are scanned from top to bottom and left to right, so the display first draws the topmost line or raster, turns off the beam and moves to the beginning of the second row, and continues to the bottom of the display. At the bottom right corner of the image, the beam is turned off and the vertical and horizontal inputs are returned to the top left corner for the next scan.

Large flat panel displays are, at this writing, typically built with LCDs. Each pixel in the display is formed by a single liquid crystal. LCD displays present a very different interface to the system because the array of pixel LCDs can be randomly accessed. Early LCD panels were called passive matrix because they relied on a two-dimensional grid of wires to address the pixels. Modern LCD panels use an active matrix system that puts a transistor at each pixel to control access to the LCD. Active matrix displays provide higher contrast and a higher-quality display.

2.4.6 Touch screens

A touch screen is an input device overlaid on an output device. The touch screen registers the position of a touch to its surface. By overlaying this on a display, the user can react to information shown on the display.

The two most common types of touch screens are resistive and capacitive. A resistive touch screen uses a two-dimensional voltmeter to sense position. As shown in Figure 2.26, the touch screen consists of two conductive sheets separated by spacer balls. The top conductive sheet is flexible so that it can be pressed to touch the bottom sheet.

[Figure 2.25 A frame buffer display system: the frame buffer sits on the system bus; a D/A converter drives the CRT's intensity input while the vertical and horizontal deflections scan the screen.]

[Figure 2.26 Cross section of a resistive touch screen: two conductive sheets separated by spacer balls; an ADC measures the voltage at the contact point to determine the x position.]

A voltage is applied across the sheet; its resistance causes a voltage gradient to appear across the sheet. The top sheet samples the conductive sheet's applied voltage at the contact point. An analog/digital converter is used to measure the voltage and the resulting position. The touch screen alternates between x and y position sensing by alternately applying horizontal and vertical voltage gradients.
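Since the sampled voltage is a linear fraction of the voltage applied across the sheet, converting the ADC reading to a coordinate is a single scaling step, as sketched below; the converter resolution and screen width are assumed values.

    #include <stdint.h>

    #define ADC_MAX      4095u   /* 12-bit converter (assumed) */
    #define SCREEN_WIDTH  320u   /* pixels (assumed)           */

    uint32_t touch_x_from_adc(uint32_t adc_value)
    {
        /* x = (Vx / Vapplied) * width, and Vx / Vapplied = adc_value / ADC_MAX */
        return (adc_value * SCREEN_WIDTH) / ADC_MAX;
    }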

2.5 COMMUNICATION SOFTWARE

Most embedded systems need a communication interface to interact with the external world. To communicate with another device, the embedded system must have the corresponding hardware interface. In addition, communication software needs to be integrated with the firmware. For instance, to make an embedded system network-enabled, in addition to, say, an Ethernet interface, a TCP/IP protocol stack has to run on it. The advantage of these interfaces is that the embedded system can be accessed over a network such as a local area network, a corporate intranet, or even the public Internet.

As the TCP/IP protocol stack is now being ported onto many embedded systems, we will briefly review the TCP/IP protocol architecture. Note that the software that implements this architecture needs just a few kilobytes of memory, and hence integrating it into an embedded system is fairly easy.

2.5.1 TCP/IP Protocol Suite

The TCP/IP protocol suite was developed during the initial days of research on the Internet. This research led to the most important concept of packet switching. In packet switching, the data to be transmitted (say, a file) is divided into small packets and each packet is transmitted from the source to the destination. Each packet may take a different route, but at the destination, all the packets are put together in sequence and given to the application software. Note that during transmission some packets may be lost and packets may not be received in sequence. Using TCP/IP protocols, the problems encountered due to loss of packets and packets arriving out of sequence are taken care of. The user just gives a command for a file transfer and the person at the other end receives the file. The TCP/IP protocol suite is now an integral part of desktop operating systems such as Windows and Unix. Even tiny embedded systems are being provided with TCP/IP support to make them network-enabled. These systems include web cameras, web TVs, etc. The TCP/IP stack is also being embedded into systems running real-time operating systems and handheld operating systems.

[Fig. 2.27 TCP/IP protocol suite: Application Layer, Transport Layer (TCP/UDP), IP Layer, Datalink Layer, Physical Layer.]

The TCP/IP protocol suite is depicted in Fig. 2.27. It consists of five layers:

l Physical layer

l Data link layer (also referred to as the network interface layer)

l Internet Protocol (IP) layer

l Transport layer (TCP or UDP)

l Application layer

Physical layer: This layer defines the characteristics of the transmission, such as the data rate and the signal encoding scheme.

Data link layer: This layer defines the protocols to manage the link: establishing a link, transferring the data received from the upper layers, and disconnecting the link. In Local Area Networks, the data link layer is divided into two sub-layers: the Medium Access Control (MAC) sub-layer and the Logical Link Control (LLC) sub-layer.

The IEEE 802.3 Ethernet, IEEE 802.11 Wireless LAN standards specify the first two layers functionality. These two layers are implemented through a combination of hardware and firmware. Above this interface, the software that implements the higher layer protocols has to run.

Internet Protocol (IP) layer: The two important functions of this layer are addressing and routing. IP layer functionality is implemented in software. IP layer software runs on every end system and router connected to the Internet. The presently running IP layer software is called IP version 4. This version will be slowly replaced by IP version 6.

Each system connected to the Internet is given a unique address known as the IP address. In IP version 4, the IP address length is 32 bits and in IP version 6, it is 128 bits. Most embedded systems use 32-bit IP addresses. The IP layer also takes care of routing of packets from the source to the destination. Based on the IP address, the router forwards the packet towards the destination. IP provides an unreliable service, i.e., the packets may be lost, arrive out of order or arrive with variable delay.

Embedded operating systems presently support 32-bit IP addresses. This is not a major concern because it is likely to take many years before all the end systems and routers support 128-bit addressing.

Transport layer: This layer provides an end-to-end data transfer service between two systems connected to the Internet. Since the IP layer does not provide a reliable service, it is the responsibility of the transport layer to incorporate reliability through acknowledgements, retransmissions, etc. The transport layer software runs on every end system. For applications which require reliable data transfer, a connection-oriented transport protocol called the Transmission Control Protocol (TCP) is defined. For applications which require less protocol overhead, such as network management and voice/video over IP, the User Datagram Protocol (UDP) is used as the transport layer.

Transmission Control Protocol (TCP): It is the job of the TCP layer to ensure that the data is delivered to the application layer without any errors. The functions of TCP are:

l To check whether the packets are received in sequence or not. If they are not in sequence, they have to be arranged in sequence.

l To check whether each packet is received without errors using the checksum. If packets are received in error, TCP layer has to ask for retransmissions.

l To check whether all packets are received or whether some packets are lost. It may so happen that one of the routers drops a packet (discards it) because its buffer is full, or the router itself may fail. If packets are lost, the TCP layer has to inform the other end system to retransmit the packet. Dropping a packet is generally due to congestion on the network.

l Sometimes, one system may send packets very fast and the router or end system may not be able to receive the packets at that speed. This mismatch in speeds is taken care of by the flow control mechanism. In flow control, one system informs the other system not to send any more packets until it is ready to receive more.

User Datagram Protocol (UDP): TCP provides a reliable service by taking care of error control and flow control. However, the processing required for the TCP layer is very high. Hence, it is called a 'heavy-weight' protocol. In some applications such as real-time voice/video communication and network management, such high processing requirements create problems. So, another transport protocol is used for such applications: the User Datagram Protocol (UDP). UDP provides a connectionless service. It sends the packets to the destination one after the other, without bothering whether they are being received correctly or not. It is the job of the application layer to take care of the problems associated with the lack of acknowledgements and error control. The Simple Network Management Protocol (SNMP), which is used for network management, runs above UDP.

As the processing power required to process UDP datagrams is less, UDP is used for real-time transmission of voice and video over IP networks.
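As an illustration, the short sketch below sends one UDP datagram through a BSD-style sockets API; many embedded TCP/IP stacks (as well as desktop operating systems) expose this interface, although the exact headers and stack initialization differ from product to product. The destination address and port used here are arbitrary examples.

    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Send one UDP datagram to 192.168.1.10:5000 (example address and port). */
    int send_status_report(const char *msg)
    {
        struct sockaddr_in dest;
        int sock = socket(AF_INET, SOCK_DGRAM, 0);   /* connectionless socket */
        if (sock < 0)
            return -1;

        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(5000);
        dest.sin_addr.s_addr = inet_addr("192.168.1.10");

        /* No connection setup and no acknowledgement: UDP simply sends the packet. */
        if (sendto(sock, msg, strlen(msg), 0,
                   (struct sockaddr *)&dest, sizeof(dest)) < 0) {
            close(sock);
            return -1;
        }
        close(sock);
        return 0;
    }

If the datagram is lost in the network, nothing retransmits it; as noted above, any such recovery is left to the application layer.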

Application layer: This layer differs from application to application. Two processes on two end systems communicate using the application layer as the interface.

The application process (say for transferring a file) generates an application byte stream which is divided into TCP segments and sent to the IP layer. The TCP segment is encapsulated in the IP datagram and sent to the data link layer. The IP datagram is encapsulated in the data link layer frame. The data link layer frame is sent to the physical layer interface and then the bit stream is sent over the transmission

medium. At the destination, each layer strips off the header, does the necessary processing based on the information in the header and passes the remaining portion of the data to the higher layer.

Figure 2.28: Application Layer Protocols in the TCP/IP Protocol Stack (FTP, HTTP, SMTP and TELNET run over TCP; SNMP runs over UDP; both TCP and UDP run over IP)

The application layer protocols in the TCP/IP protocol stack are shown in Fig. 2.28. The various application layer protocols are:

l Simple Mail Transfer Protocol (SMTP), for electronic mail containing ASCII text.

l Multimedia Internet Mail Extension (MIME), for electronic mail with multi-media content

l File Transfer Protocol (FTP), for file transfer.

l TELNET for remote login.

l Hyper Text Transfer Protocol (HTTP) for World Wide Web service.

l Simple Network Management Protocol (SNMP) for network management. Note that SNMP runs above the UDP layer and not the TCP layer.

To make an embedded system network-enabled, the TCP/IP protocol stack is integrated with the operating system software and the application software. The entire software is then transferred to the memory of the embedded system.

Though we talk about TCP ‘connection’, it needs to be noted that there is no real connection between the end systems. It is a virtual connection. In other words, TCP connection is only an abstraction.

2.6 PROCESS OF GENERATING EXECUTABLE IMAGE

On a desktop computer, the procedure for creating and executing the application software is as follows:

l Create the source file

l Create the executable file

l Create the command for execution.

l The process is loaded into the RAM by the loader

l Loader transfers the control to the process and the process executes.

Application development on desktop computers is called native development as the development and execution are done on the same hardware platform. Embedded software cannot be developed directly on the embedded system. Initially, the development is done on a desktop computer and then the software is transferred to the embedded system. This is known as cross-platform development.

The procedure for creating and executing an application in an embedded system is different for the following reasons:

l On desktop computers, there is a distinction between the operating system and the application software, whereas in an embedded system everything is a single piece of code.

l In an embedded system, there is only one application that needs to run continuously. Multiple applications need not be loaded on to the embedded system.

l On a desktop computer, in which part of the memory the application is loaded is immaterial; but in embedded systems you need to decide where the code will reside so that the processor executes the instructions from that memory location.

l Desktop computers use multitasking and virtual memory. In a multi-tasking system, when a new process has to be executed, the presently running process is transferred to the virtual memory (which can be on the hard disk). In embedded systems, such secondary storage is not available.

The operating system, communication software and application software have to be converted into a single executable image and transferred to the memory of the embedded system. This process of creating an executable image is illustrated in Fig.2.29.

Figure 2.29: Process of Creating an Executable Image (C/C++ and assembly source files are converted by the compiler and assembler into relocatable object files, which the linker and locater combine, using the linker command file and library files, into the executable image)

The source files, written in C or C++, are converted into object files using the compiler. The source files written in the assembly language of the target processor are converted into object files using the assembler. Each object file contains the binary code (instructions) and program data. Each object file created in the above process will have the following information:

l Name of the source file

l Size of the file, size of the binary instructions and size of the data

l Processor-specific binary instructions and data

l Symbol table that contains details of variables, their data types and addresses

l Debugging information, if the compilation is done with the debug option

The format of this object file is called the Object File Format. This format has been standardized. The two standards are (i) Common Object File Format (COFF) and (ii) Executable and Linking Format (ELF). The format of the object files has been standardized to ensure that the object files generated by different compilers can be combined together. The most widely used format is the ELF format. In the ELF format, the object file is divided into various "Sections".

The linker combines the various object files, including the library files used in the program, and creates an executable image, a single relocatable object file, or a shared library file.

The linker command file contains a number of instructions called linker commands or linker directives. The commands tell the linker how to combine the object files and the exact locations in memory where the binary code and data have to be placed on the target embedded system. Linker commands also describe the "memory map" of the embedded system. The memory map describes the various types of memory (RAM, EPROM, Flash) on the target hardware, their starting addresses and the lengths of the memory regions. The linker commands indicate the addresses in the memory map where the executable image has to be stored. The output of the linker is an executable image that can be transferred to the memory chip of the target hardware. The executable image is also referred to as the bootable image, runtime image, or target image.

Embedded software development is done in two stages. Initially, the software is developed on a PC or a workstation. This is called the host system. Subsequently, the software is transferred to the actual embedded hardware, called the target system. The host system and the target system can be connected through a serial interface such as RS232 or through Ethernet. This is depicted in Fig. 2.30. The processors of the host system and the target system are generally different. Hence, this development is known as cross-platform development.

Figure 2.30: Cross-platform Development using Host System and Target System (the host and target are connected via a serial bus or Ethernet)

Figure 2.31: Process of Cross-Platform Development (develop software on the host system; compile and link; download to the target system; debug on the target system; if the system is not OK, repeat; if OK, transfer the software to ROM or Flash and run the software)

2.6.1 Cross-platform Development

The process of cross-platform development is shown as a flowchart in Fig. 2.31. The source code is written on the host system, compiled and linked using cross-platform development tools, and then downloaded onto the target and tested. If the software is not working as per requirements, it can be debugged on the target itself. After ensuring that everything is OK, the executable image is transferred to ROM or Flash memory. Then, the embedded system can run on its own.

As the processors on the host system and the target system will be different, a number of cross-platform development tools are required. These tools are:

l Cross-compiler

l Cross-assembler

l Cross-linker

l Cross-compiled libraries

l Operating system dependent libraries and headers for the target processor

The executable image can be transferred to the target hardware by one of the following mechanisms:

l Programming the EEPROM or Flash

l Downloading the image through a communication interface, which requires a file transfer utility and an embedded loader or an embedded monitor on the embedded system

l Downloading through the JTAG port

An embedded loader or an embedded monitor is used to do the hardware initialization and run the initial bootup code.

Embedded Loader: An embedded loader is a program that resides in the ROM. This program gets executed when power is switched on. This code initializes the hardware and executes the boot image. After the boot image is executed, the executable image is transferred to the RAM, and then the software can be transferred to the EEPROM or Flash.

Embedded Monitor: Manufacturers of processor evaluation boards supply the embedded monitor software. On power-on, this software is executed. It can be accessed from the host to download the image as well as to debug the software by setting breakpoints.

2.6.2 Boot Sequence

An embedded system can be booted in one of the following ways:

l Execute from ROM, using RAM for data

l Execute from RAM after loading the image from ROM

l Execute from RAM after downloading the image from the host

The process for executing from ROM using RAM for data is illustrated in Fig. 2.32.

Figure 2.32: Boot Sequence

1. On power-on, the CPU always executes the code contained at a specific address in ROM. This code is called Reset Vector.

2. The Reset Vector is a jump instruction to another portion of the memory where the code to boot the system resides. This is called the bootstrap code. This code initializes memory, including RAM.

3. The executable image contains the data sections. Data sections are both readable and writable, and hence they are copied to RAM. Stack space is reserved in RAM.

4. CPU's Stack Pointer is set to point to the beginning of the stack.
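The C fragment below sketches the kind of start-up code that runs between the reset vector and main(): it copies the initialized data section from ROM to RAM, clears the uninitialized data, and then hands control to the application. The section symbols (__data_load, __data_start and so on) are illustrative; the real names come from the linker command file of the toolchain in use.

    #include <stdint.h>

    /* Symbols assumed to be provided by the linker command file (illustrative names). */
    extern uint32_t __data_load;    /* start of the .data image in ROM */
    extern uint32_t __data_start;   /* start of .data in RAM           */
    extern uint32_t __data_end;     /* end of .data in RAM             */
    extern uint32_t __bss_start;    /* start of .bss in RAM            */
    extern uint32_t __bss_end;      /* end of .bss in RAM              */

    extern int main(void);

    /* Called from the reset vector after the stack pointer has been set (step 4). */
    void startup(void)
    {
        uint32_t *src = &__data_load;
        uint32_t *dst = &__data_start;

        /* Step 3: copy the writable data sections from ROM to RAM. */
        while (dst < &__data_end)
            *dst++ = *src++;

        /* Clear uninitialized data so that C globals start at zero. */
        for (dst = &__bss_start; dst < &__bss_end; ++dst)
            *dst = 0;

        (void)main();               /* hand control to the application */
        for (;;) { }                /* main() should never return      */
    }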

Once the boot sequence is completed, the target software starts running. This software does the following:

l Initialization of the hardware: the CPU, memory, bus interface and devices are initialized

l Initialization of the operating system: operating system kernel objects such as tasks, semaphores, memory management services are initialized and stacks for each task are created.

l Initialization of the application code

Up to this point, we have studied the architecture and the process of developing the software for embedded systems without reference to any specific processor. Now, you need to appreciate that the development tools required are different for each processor used in the embedded system. The cross-platform development tools are specific to the processor. The format of the linker command file depends on the types of memory devices used in the embedded system. Accordingly, the boot sequence also varies. The process of creating the executable image is dependent on the target hardware.

Development/Testing Tools

You need a number of special tools for developing and testing the hardware and software. These tools are listed in this section.

2.6.3 Hardware Development/Testing Tools

For hardware development and testing, some important tools are as follows:

l Digital Multimeter: This is the most important instrument, used to measure voltages and currents and to check the continuity of connections on the circuits.

l Logic Analyzer: Logic analyzer is used to check the timings of the signals.

l Oscilloscope: Oscilloscope is used to analyze the time domain waveforms. Storage oscilloscopes can store a portion of the waveform.

l Spectrum analyzer: Spectrum analyzer is used to analyze the signals in frequency domain.

2.6.4 Software Development/Testing Tools

For software development, the important development tools are listed below.

Operating System Development Suite: If you have to use an off-the-shelf operating system, you need to obtain the development suite from the operating system vendor. This development suite contains the API calls to access the OS services. The development suite may run either on a Windows system or a Unix/Linux system. You can use either an open source OS such as Embedded Linux or a commercial OS.

Cross-platform development tools: For the processor of your choice, you need to have cross-platform development tools such as the cross-compiler, cross-assembler, cross-debugger, etc., as discussed in the previous section. The cross-compiler generates the object code of the given processor for source code developed in a high level language such as C or C++. A number of GNU tools are available which can be downloaded from www.gnu.org for a number of processors. These cross-compilers provide an Integrated Development Environment (IDE) with editor, compiler, debugger, etc. all bundled together.

ROM Emulator: ROM emulator emulates ROM in the RAM. It can be used to debug the software by setting the breakpoints in the memory.

EPROM Programmer: If your hardware board does not support in-circuit programming or if you have only EPROM for program memory, you need an EPROM programmer. Along with it, you also need an EPROM eraser.

Instruction Set Simulator (ISS): ISS is a software utility that creates a virtual version of the processor on the PC.

Figure 2.33: In-Circuit Emulator (the ICE plugs into the CPU socket of the target system and is connected to the host system)

In-Circuit Emulator (ICE): An ICE is a device that emulates the CPU. You can plug this device into the place of the CPU on the target hardware board, as shown in Fig. 2.33. This device fits into the CPU socket of the target board on one side, and on the other side it is connected to the host through an RS232 or USB port. You can run a debugger on the host to debug the code while it is running on the target hardware. An ICE is processor-specific and costly, but an excellent tool for debugging.

2.7 SUMMARY

In this chapter we have covered detailed information on the different hardware components of an embedded system. Specifically, we have studied the timing diagrams for different bus and memory operations. Different types of memories used in the design of embedded systems are presented. Brief discussions on the input/output devices are also covered. A short discussion of the communication software and of different development and testing tools is also included.

2.8 QUESTIONS

1. Discuss the four-cycle handshake with the help of a diagram?

2. What are state diagrams? Discuss the state diagram for the bus read transaction?

3. What is DMA? Explain the DMA operation with appropriate diagram?

4. Write notes on ARM bus and SHARC bus?

5. With a diagram explain the memory device organization?

6. With timing diagram explain the operation of a static RAM?

7. With timing diagram explain the operation of a Dynamic RAM?

8. Write notes on different I/O devices used in embedded systems?

Chapter 3

Embedded/Real - Time Operating Systems

3.0 OBJECTIVES

l Understand the commonalities and differences in the operating systems

l Understanding of commercial and open source operating systems used in embedded/real-time systems as well as handheld/mobile devices.

Embedded software can be developed using either commercially available operating systems or open source operating systems. In this chapter, we will study the features that are common to all the operating systems and also the features that differ.

3.1 OFF-THE-SHELF OPERATING SYSTEMS

To reduce development time and effort, it is preferable to use an off-the-shelf operating system that suits application needs. Depending on the application, we can choose one of the following categories of operating systems:

n Non real time embedded operating systems: These operating systems are suitable for non real time applications. They use a preemptive kernel, but strict deadlines cannot be met.

n Real time operating systems: These operating systems provide the necessary functionality for achieving real time performance through very low interrupt latency. These are ‘deterministic’ operating systems, i.e. the worst case response time can be predicted.

n Handheld/mobile operating systems: These operating systems are meant for handheld computers and mobile devices such as smart phones. Operating systems such as Palm OS, Symbian OS and Windows CE have been developed to address this market. But in recent years, many other embedded/real-time operating systems are being ported onto handheld/mobile devices.

3.2 COMMONALITIES OF THE OPERATING SYSTEMS

Because of the immense competition in the embedded/real time operating system market, the features supported by operating systems of different vendors appear almost the same. Of course, every vendor claims that his operating system is the best. The features listed below are common to most of these operating systems:

n Integrated Development Environment (IDE): To facilitate easy and fast development, vendors supply an IDE that includes editor, compiler, debugger and also the necessary cross platform development tools.

n POSIX compatibility: The POSIX 1003.1-2001 standard specifies the Application Programming Interface to achieve portability of applications. Many off-the-shelf operating systems provide compliance to POSIX.

n TCP/IP support: The TCP/IP protocol stack, including application layer protocols such as FTP, SMTP and HTTP, is integrated along with the operating system software. Function calls will be provided to access the network-related services.

n Device drivers: A number of device drivers are provided for commonly used devices such as serial port, parallel port, USB etc. A Device Driver Kit (DDK) is also generally included that facilitates fast development of device drivers.

3.3 PORTABLE OPERATING SYSTEM INTERFACE (POSIX)

POSIX is a standard developed by the IEEE. Before POSIX was standardized, every OS vendor used to provide a proprietary Application Programming Interface (API) for application development. This interface is a set of function calls to access the operating system objects and services. If you developed an application using the API supplied by one vendor, it was not possible to port the application to another operating system. The POSIX standard addressed this problem and the API was standardized. The IEEE POSIX 1003.1c-2001 standard specifies the API for a portable operating system interface. IEEE POSIX 1003.13, "Standardized Application Environment Profile POSIX Real-time Application Support", addresses the API for real-time embedded systems. This standard gives the various C language function calls and library functions that need to be implemented by the Operating System (OS) vendors. The concept of threads became popular only because of this standard. In fact, threads are referred to as POSIX threads. POSIX standards are also available for other programming languages such as Ada and FORTRAN.
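As a small illustration of the standardized API, the C program below creates a POSIX thread with pthread_create() and waits for it with pthread_join(). The same code compiles on any POSIX-compliant operating system, which is exactly the portability the standard aims at.

    #include <pthread.h>
    #include <stdio.h>

    /* Thread body: in a real embedded application this could, for example, poll a sensor. */
    static void *worker(void *arg)
    {
        const char *name = (const char *)arg;
        printf("hello from %s\n", name);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        /* Create the thread and wait for it to finish. */
        if (pthread_create(&tid, NULL, worker, "a POSIX thread") != 0)
            return 1;
        pthread_join(tid, NULL);
        return 0;
    }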

3.4 DIFFERENCES IN OPERATING SYSTEMS

Off-the-shelf operating systems differ in the following aspects

n Support for processors: The operating system code consists of

(a) processor-independent code; and

(b) processor-dependent code.

Operations such as context switching, in which the contents of the CPU registers have to be stored in memory, have to be done through assembly language programming. Hence, a small portion of the operating system software is processor dependent. As a result, every operating system may not support all processors. Commercial operating systems support popular processors such as Intel Pentium, MIPS, PowerPC, Intel StrongARM, etc. For the specific processor of your choice, you need to check whether the operating system 'port' is available.

n Footprint: The footprint, or the memory occupied by the kernel, differs from OS to OS. Real-time operating system kernels require only a few kilobytes of memory. The operating system vendor specifies the minimum amount of RAM and ROM required for the kernel. Of course, you need to calculate the total memory requirement for your embedded system keeping in view the size of the application software and the communication software.

n Java environment: In tune with the recent trends in using Java for embedded systems, some vendors provide a Java Virtual Machine (JVM) support on their OS. If you want to develop applications in Java, you need to check on the availability of the JVM.

n Board Support Packages (BSPs): Some vendors supply hardware boards built around different processors with OS ported onto the hardware. These are called BSPs. BSPs speed up software development.

n Scheduling algorithms: Some operating systems provide a number of scheduling algorithms such as round-robin, first-in-first-out etc., whereas some operating systems support only priority- based preemptive scheduling. If you want flexibility in scheduling algorithms, your choice may be comparatively limited.

n Priority inheritance: Whether priority inheritance is a good feature or not is a debatable issue. Some operating systems do not support this feature, as their developers dislike it. Some operating systems do support it, as their developers feel that the designer should be given flexibility in application development.

n Maximum number of tasks: The maximum number of tasks supported differs from OS to OS. Some OSs support only 64 tasks, some OSs claim ‘unlimited’ number though there exists a certain limit.

n Assigning task priorities: In some OSs, each task should have a unique priority and the priority is fixed. In some other OSs, the priority can be changed dynamically during execution time.

3.5 EMBEDDED OPERATING SYSTEMS

Embedded operating systems use a preemptive priority-based kernel, but they do not meet strict deadlines. Stripped-down versions of the desktop operating systems can be used as embedded operating systems: remove all the unnecessary features from the operating system software so that the kernel occupies a small amount of memory, and you have an embedded operating system. This strategy is used in developing the following three embedded operating systems:

l Embedded NT

l Windows XP Embedded

l Embedded Linux

3.5.1 Embedded NT

Many embedded systems use Single Board Computer (SBC) hardware. This hardware is essentially the same as a desktop computer's hardware. The embedded system, however, does a focused job; hence the application and the operating system together need to be bundled and transferred to the target hardware. Typical applications include Internet kiosks, ATMs, etc. For such applications, Microsoft's Embedded NT is an excellent choice. Embedded NT is based on Windows NT 4.0. The requirement of Embedded NT for minimal operating system functionality without any network support is 9 MB of RAM and 8 MB of program memory such as Flash. The exact memory requirement of the target hardware depends on the application. If the application is a single Win32 application with networking capability, the minimum RAM requirement is 16 MB. With network components and device drivers, the requirement is about 16 MB of RAM and 16 MB of program memory. Application development using Embedded NT is very easy. You can develop the applications in the Visual Studio environment (Visual Basic or Visual C++) and the application can be ported onto the target system.

To develop applications using Embedded NT, the development system has to be installed on a system with the following configurations:

n Windows NT 4.0

n Service Pack 4.0 or later

n Internet Explorer 5.0 or later

n Visual Studio 6.0 development environment

n Windows NT 4.0 Service Resource Kit

Development tools called the Component Designer and the Target Designer are provided in Embedded NT. The Component Designer facilitates defining and adding components to the Target Designer. The Target Designer is used to create a bootable NT target system. It generates a target image, which consists of a number of directories. All these directories and files need to be copied onto the target machine’s file system. Target system can be created in any of the following ways:

n Create a headless system: Embedded systems that do not have a monitor, keyboard or mouse are called headless systems. To create a headless system, the following components need to be added:

l Null VGA

l Null Keyboard Driver

l Null Mouse

Debugging a headless system is a problem because of the lack of input/output devices. To overcome this problem, a "Console Administration Component" is provided. This component allows monitoring the embedded system through a serial interface.

3.5.2 Windows XP Embedded

Microsoft's Windows XP Embedded (abbreviated Embedded XP in this section) is the successor to Embedded NT. Like Windows NT, it is also a preemptive multitasking operating system and uses Win32 applications and drivers. Hence, the main attraction of this operating system is that applications developed on the desktop using Visual Studio can be ported onto the embedded system. However, as compared to many other embedded operating systems, it requires huge resources, particularly RAM and Flash memory.

Embedded XP is now being used in a number of embedded systems such as set top boxes, point of sale terminals, Internet Kiosks, etc. Embedded XP has many attractive features such as support for IP version 6, IrDA compliance, 802.11 LAN connectivity, Universal Plug and Play feature, etc. It also has support for Telephony APIs (TAPI) and NetMeeting to provide multimedia applications.

The applications software to be embedded can be created using embedded Visual Studio which provides the following utilities to create the target image:

n Target Designer, which is used to create bootable runtime image of the application you have developed. You can transfer the runtime image to the target system, and the system will boot with Embedded XP and the application will run from this image.

n Component Designer, which is used to create components required for the embedded application.

n Deployment tools, which are used to prepare the target device for the runtime image.

The bootable image is created using the Target Designer and transferred to a CDROM or a Flash (disk on a chip). The memory requirement of the embedded system depends on the application, but minimum of 9 MB RAM and 8 MB of ROM are required. The attractive feature of Windows-based embedded operating systems is that applications developed using Visual Studio can be easily ported onto the embedded system.

3.5.3 Embedded Linux

The open source software revolution started with the development of Linux. Like the Linux kernel, the Embedded Linux kernel is covered by the GNU General Public License (GPL) and hence the complete source code is available free of cost. The GPL also permits redistribution of the source code even if it is modified as per the requirements of your application. However, the modified source code should be made available, usually for a nominal cost which is not more than the cost of reproducing the software. You can get the details of open source and its implications at www.gnu.org/copyleft/gpl.html. It is expected that open source software will have a profound impact on the industry in the coming years and Embedded Linux is likely to be used extensively in embedded applications. It is now being used in a number of devices such as PDAs, set top boxes, cellular phones and Internet appliances, and also in major mission-critical equipment such as telecommunication switches and routers.

Embedded Linux (WWW.embedded-linux.org) is available freely and openly in source code form and is gaining popularity. The vendors who offer embedded Linux solutions and support for both real-time and non real-time applications include Coventive, FSM Labs, Lineo, Lynux Works, Mizi, MontaVista, PalmPalm, RedHat, ridgeRun, TimeSys, etc.

The main attractions of embedded Linux are:

n Open source software

n Availability of a large number of software resources in source code form (device drivers, application software, networking software, protocol software, speech/image processing software, etc.)

n Support for POSIX

n No royalty

n Availability of many people with expertise in Unix/Linux programming

As many non-standard variations of embedded Linux are available (mainly because the source code is available to everyone, so anyone can make changes), this may lead to non-portable applications.

3.5.4 Embedded Linux Consortium Platform Specifications

The Embedded Linux Consortium (ELC) released the Embedded Linux Consortium Platform Specifications (ELCPS) to bring in standardization for using Linux in embedded systems. ELCPS Version 1.0 was released in December 2002. It is compatible with POSIX 1003.13. The ELCPS defines three environments to cater to different embedded system requirements. These are

n Minimal system environment

n Intermediate system environment

n Full system environment

For each of these environments, the APIs to be supported are defined in the ELCPS. Operating system vendors can ensure conformance to these specifications so that the applications can be made portable. If you are modifying the Linux kernel to suit your embedded system requirements, you need to ensure ELCPS conformance.

Minimal system environment: This configuration is applicable for embedded systems with one processor and associated memory. These embedded systems work in isolated mode without any user interaction and have no secondary storage or file system. Only one process with one or more Linux tasks or POSIX threads will be running. Hence, the set of APIs required for this type of embedded software development will be minimal and the OS footprint will be very small.

Intermediate system environment: Embedded systems of this category will have one or more processors, but need not have any secondary storage. A file system can be built, say, in a Flash memory device. There will be multiple processes, asynchronous input/output operations and support for dynamic linking of objects (libraries). Support for secondary storage is also provided.

Full system environment: The embedded systems of this category will have one or more processors, secondary storage, network support, and user interfaces. This environment supports full multi-purpose Linux.

The vendor can obtain conformance for any of the three environments. For each environment, the various API calls to be supported, such as for task management, inter-process communication, file management, the thread-safe general ISO C library interface, mathematical function library calls, header file definitions for symbolic constants, etc., are specified.

The Embedded Linux Consortium Platform Specifications address three types of embedded systems: (a) the minimal system environment for systems with one processor and memory; (b) the intermediate system environment for systems with one or more processors and secondary storage; and (c) the full system environment, which requires one or more processors, secondary storage, a user interface and network support.

3.6 REAL-TIME OPERATING SYSTEMS

There are nearly 100 real-time operating systems in the commercial market. So, shopping for a real-time operating system is not an easy task. We will briefly discuss the following operating systems:

n QNX Neutrino

n VxWorks

n MicroC/OS-II

n RTLinux

3.6.1 QNX Neutrino

QNX Neutrino is a popular real-time operating system of QNX Software System Limited (www.qnx.com). It supports a number of processors such as ARM, MIPS, Power PC, SH-4, StrongARM, x86 and Pentium. Board Support Packages (BSP) and Device Driver Kit help in fast development of the prototype. It provides an excellent Integrated Development Environment (IDE). It has support for C, C++ and Java languages and TCP/IP protocol stack.

It has support for multiple scheduling algorithms such as round-robin, FIFO, etc., and the same application can use different scheduling algorithms for different tasks. Up to 65,535 tasks are supported and each task can have 65,535 threads. The minimum time resolution is one nanosecond. Even small embedded systems can use this OS, as it requires 64 KB of kernel ROM and 32 KB of kernel RAM.

3.6.2 VxWorks

Wind River's VxWorks (www.windriver.com) is one of the most popular real-time operating systems. This OS has been used in the Mars Pathfinder. It supports a number of processors including PowerPC, Intel StrongARM, ARM, Hitachi SuperH, Motorola ColdFire, etc.

It supports both preemptive and round-robin scheduling algorithms. 256 priority levels can be assigned to the tasks. It supports priority inheritance. Those who are against priority inheritance need not use this feature, which is an option provided to the developer.

3.6.3 RTLinux

FSM Labs (www.fsmlabs.com) has two editions of RTLinux: RTLinuxPro and RTLinuxFree. RTLinuxPro is a priced edition and RTLinuxFree is the open source release. RTLinux is a hard real-time operating system with support for many processors such as x86, Pentium, PowerPC, ARM, Fujitsu, MIPS and Alpha. A footprint of 4 MB is required for RTLinux. It does not support priority inheritance.

RTLinux runs underneath the Linux operating system. The Linux OS becomes an idle task for RTLinux. RTLinux tasks are given priority over Linux tasks. Interrupts from Linux are disabled to achieve real-time performance. This interrupt disabling is done using a layer of emulation software between the Linux kernel and the interrupt controller hardware.

The tasks which do not have any timing constraints will run in the Linux kernel only. Soft real-time capability is provided by the Linux system. Hard real-time tasks run in the real-time kernel. The worst-case time between an interrupt signal and the start of the real-time handler is 15 microseconds. MiniRTL, a tiny implementation of RTLinux, runs on 486 machines. This implementation is targeted towards PC/104 boards.

3.7 HANDHELD OPERATING SYSTEMS

Handheld computers are becoming very popular as their capabilities are increasing day by day. Handheld computers integrated with mobile phones are called smart phones. Smart phones support data, voice and video services. The important requirements for a mobile operating system are:

To keep the cost of the handheld computer low, a small footprint for the operating system is required. A footprint of 64 KB to 2 MB for the OS would be attractive. Generally, low-cost handheld computers will have 2 to 4 MB of ROM and 2 to 16 MB of RAM. The operating system should have support for soft real-time performance. For audio and video applications, timing constraints are imposed though the deadlines are not critical. A TCP/IP stack needs to be integrated along with the operating system. Communication protocol stacks for Infrared, Bluetooth and IEEE 802.11 interfaces need to be integrated.

There should be support for data synchronization. A special utility is required using which the data in, say, the desktop computer and the handheld computer can be synchronized. For instance, when you connect your handheld computer to your desktop computer and run the data synchronization software, the address book information in both the computers should be made to contain the same addresses. Each handheld operating system vendor presently provides proprietary software for data synchronization.

The popular handheld/mobile operating systems are

l Palm OS

l Symbian OS

l Windows CE

l Windows CE.NET

An overview of these operating systems is given below.

Data synchronization between two devices is presently done through proprietary protocols. SyncML, a markup language, has been standardized that uses standard protocols and a standard markup for data synchronization.

3.7.1 Palm OS

Palm OS (www.palmos.com) is perhaps the most popular handheld operating system. In addition to Palm Computing Inc.'s palmtops, many other vendors such as Sony and Handspring use this OS in their handheld computer hardware. The OS will run even on low-end processors of 16 MHz to 33 MHz clock speed with 512 KB ROM and 32 KB RAM. Bluetooth and IrDA protocol stacks are integrated into the OS to provide wireless interfaces to the handheld computer.

Using the Palm OS SDK, application software can be developed for these devices. The applications can be developed in C, C++ or Java. The data between a handheld and a desktop computer can be synchronized using a utility called HotSync. The Palm Conduit Development Kit (CDK) is used to develop conduits (plug-ins to HotSync) in VC++, VB or Java. Palm OS also provides a virtual file system API to develop a file system. Software development can be done on a desktop and tested on an emulator. The emulator can run on Windows, Unix/Linux, and Mac operating systems.

Though nearly 37% of the market share of handheld operating systems was held by Palm OS, there is now stiff competition from Symbian OS and Windows CE.

3.7.2 Symbian OS

Symbian OS (www.symbian.com) has a good market share of handheld operating systems. Licensees of Symbian OS include major mobile phone suppliers such as Ericsson, Kenwood, Motorola, Nokia, Panasonic, Psion, Sanyo, Siemens and Sony. Ericsson's R380 smart phone and Nokia's 9210 communicator are based on Symbian OS. The OS architecture supports both voice service and packet data service. Support for Bluetooth and IrDA is provided. The software development kit enables programmers to develop applications in C++ or Java. Data synchronization is achieved with Symbian Connect.

3.7.3 Windows CE

Microsoft's Pocket PC hardware runs the Windows CE operating system. This OS runs on Intel x86 family processors as well as Alpha, NEC, Toshiba, PowerPC and Hitachi Super H processors. The handheld computers of Casio, HP, NEC, Philips, Sharp, etc. run this operating system. It requires 2 MB of ROM.

Embedded Visual Tools are used to develop applications above Windows CE. The main attraction of these tools is that a subset of the Win32 API is used to develop applications. Application software developers well versed in VB or VC++ can develop applications for Windows CE without much effort or training, and this is the main attraction of using this operating system. "Pocket" versions of the Microsoft Office applications (Outlook, Excel, Word, etc.) are available. For end users, this is a blessing as they need not learn a new user interface for running a word processor or a spreadsheet. ActiveSync is used to synchronize the data between the desktop and the handheld computer.

3.7.4 Windows CE.NET

Microsoft's Windows CE.NET is the successor to Windows CE. It is a real-time embedded operating system and is now becoming a popular operating system for smart phones, PDAs, set top boxes, IP phones, CD players, digital cameras, DVD players, etc. In tune with the other operating systems of Microsoft, this OS facilitates development of applications either using Microsoft Visual Studio.NET or using embedded Visual C++. Hence, developing new applications or porting existing applications is very easy. With support for TCP/IP, IP version 6, Bluetooth, 802.11, Ethernet and Infrared interfaces as well as Pocket Internet Explorer, handheld computers and smart phones will look like desktop computers in your palm. The minimal kernel functionality requires just 200 KB. Average ISR latency is 2.8 microseconds on a 166 MHz Pentium processor.

Windows CE.NET 4.2 Emulation Edition and Evaluation Kit are freely downloadable for non-commercial purposes and evaluation. Without the target hardware, the application software can be tested using emulators. Applications can be developed and tested on a desktop computer running Windows XP.

Microsoft provides nearly 2 million lines of source code under the Shared Source Program that enables modification to the source code to suit your application requirements.

3.8 SUMMARY

In this chapter we have discussed some of the commercially available operating systems that are used in embedded systems. We have also discussed some of the open source operating systems such as Embedded Linux and RTLinux. These operating systems differ in a number of aspects, which include: processors supported, footprint, support for Java or lack of it, availability of Board Support Packages, support for different scheduling algorithms, number of tasks supported and assignment of priorities, support for priority inheritance, and the licensing fee. An overview of the salient features of selected operating systems is presented. Embedded NT, Windows XP Embedded and Embedded Linux are reviewed. An overview of some real-time operating systems, viz. QNX Neutrino, VxWorks and RTLinux, is also given. Palm OS, Symbian OS, Windows CE and Windows CE.NET handheld/mobile operating system features are also discussed briefly.

3.9 QUESTIONS

1. List the various commercially available embedded operating systems and explain their features.

2. List the various open source embedded operating systems and explain their features.

3. List the various mobile/handheld operating systems?

4. What is POSIX compatibility?

5. Discuss some of the main differences among the various off-the-shelf operating systems?

6. Discuss the features of Embedded NT operating system?

7. What are the issues involved in synchronization of data between the handheld computer and the desktop computer? Explore the standardization activities for data synchronization.

8. What type of operating system would you use for the following applications:

a) An Internet kiosk installed in a rural area.

b) A navigation system used in an aircraft

c) A navigation system used in a car

d) A DVD player

e) A desktop computer to be used as a process control system with hard real-time requirements

f) An electronic toy helicopter

Chapter 4

Device Drivers for Embedded Systems

4.0 INTRODUCTION

While application developers often have access to good software tools, the task of designing and implementing device drivers has continued to be time-consuming and prone to errors, largely due to a lack of adequate tools. Developing device drivers for a highly integrated microcontroller can be daunting, partly due to the sheer complexity of the device, but also due to some other difficulties. This chapter will give an overview of device driver design and traditional development techniques, and then discuss portability and the options available using modern tools.

4.1 CHIP INITIALIZATION

When a new electronic board is available, software must be written to handle system start-up. This is usually done by responding to a reset interrupt or jumping to a fixed start address. Basic initialization of the stack pointer, compiler environment and bus controller settings is done during this phase. The CPU will most often not be able to interface to external hardware unless some low-level configurations are made:

l Bus interface (address and data buses, chip-select signals).

l Memory configuration (DRAM refresh, wait-states, handshaking).

l Interrupt system (IRQs, interrupt priorities and masks).

Once low-level configuration has been performed, execution normally continues in the application program’s main() function. At this point, the application logic and device drivers can start execution.
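A minimal sketch of this kind of low-level configuration is shown below. The register names (BUS_CTRL, WAIT_STATE, and so on), their addresses and the values written are purely illustrative; the real special function registers and settings must be taken from the hardware manual of the particular microcontroller.

    #include <stdint.h>

    /* Hypothetical memory-mapped special function registers (illustrative only). */
    #define BUS_CTRL     (*(volatile uint16_t *)0xFFFF00u)  /* bus width, chip selects */
    #define WAIT_STATE   (*(volatile uint16_t *)0xFFFF02u)  /* external memory waits   */
    #define DRAM_REFRESH (*(volatile uint16_t *)0xFFFF04u)  /* DRAM refresh period     */
    #define IRQ_MASK     (*(volatile uint16_t *)0xFFFF06u)  /* interrupt enable bits   */

    extern int main(void);

    /* Reset handler: configure the bus, memory and interrupt system, then run main(). */
    void reset_handler(void)
    {
        BUS_CTRL     = 0x0003u;   /* enable chip-select 0 and 1, 16-bit external bus       */
        WAIT_STATE   = 0x0001u;   /* one wait state for external memory                    */
        DRAM_REFRESH = 0x0040u;   /* example refresh interval                              */
        IRQ_MASK     = 0x0000u;   /* keep interrupts masked until drivers install handlers */

        (void)main();             /* hand over to the application and its device drivers */
        for (;;) { }              /* trap if main() ever returns */
    }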


4.2 PERIPHERAL MODULE DEVICE DRIVERS

Device drivers provide a software interface for accessing hardware from software. The application developer now needs a driver library that can be used by the application program to access hardware services in peripheral modules (UARTs, Timers, A/D or D/A converters, CAN or DMA controllers etc). Peripheral module device drivers take care of:

l Initialization (e.g. setting up baud rate or timer periods)

l Run-time control (e.g. sending character strings or starting DMA transfers)

l Interrupt handling (responding to hardware events)

Driver logic is normally implemented by modifying or testing special function register (SFR) control and status bits in a suitable order. A modern high-end microcontroller can have several thousand SFR bits, each of which must be carefully initialized and manipulated at run-time in the proper sequence. Implementation of device drivers sometimes requires engineering tricks, such as using compiler-specific interrupt routines, low-level hardware access from assembly language, advanced use of the linker command file and so forth. Unfortunately, device driver development is less efficient and more time-consuming than other software development.

“Device drivers accommodate a very high information density and are dependent on many parts that can appear in a large number of combinations. This fact is the reason for the low productivity for device driver modeling, four times lower than ordinary software code.” Because of the close relationship to the hardware, traditional software tools have no or little support for device driver development. In fact, for the most part device drivers have been hand-crafted.
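To illustrate what such SFR manipulation looks like in practice, here is a small hand-written, polled UART transmit driver. The register addresses, bit masks and values are invented for the example; a real driver must follow the bit layout and ordering given in the device's hardware manual.

    #include <stdint.h>

    /* Invented SFR addresses and bit masks for an imaginary UART channel. */
    #define UART_MODE   (*(volatile uint8_t *)0x03A0u)  /* frame format           */
    #define UART_BAUD   (*(volatile uint8_t *)0x03A1u)  /* baud-rate divisor      */
    #define UART_CTRL   (*(volatile uint8_t *)0x03A4u)  /* enable bits            */
    #define UART_STATUS (*(volatile uint8_t *)0x03A5u)  /* status flags           */
    #define UART_TXBUF  (*(volatile uint8_t *)0x03A6u)  /* transmit data register */

    #define CTRL_TX_ENABLE  0x01u
    #define STATUS_TX_EMPTY 0x02u

    /* Initialization: each write must follow the order required by the manual. */
    void uart_init(void)
    {
        UART_MODE = 0x05u;           /* 8 data bits, 1 stop bit, no parity */
        UART_BAUD = 0x40u;           /* divisor for the chosen baud rate   */
        UART_CTRL = CTRL_TX_ENABLE;  /* finally, enable the transmitter    */
    }

    /* Run-time control: busy-wait until the transmit register is free. */
    void uart_putc(char c)
    {
        while ((UART_STATUS & STATUS_TX_EMPTY) == 0u)
            ;                        /* poll the status bit */
        UART_TXBUF = (uint8_t)c;
    }

Even this tiny driver touches half a dozen SFR bits; a full-featured driver with interrupts and error handling touches far more, which is where the hand-crafting effort goes.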

4.3 LACK OF PORTABILITY

Most devices have differences in their programming model, and a device driver for one device is not likely to work with another device. Typical device differences that affect device driver implementation are:

l Feature set

l SFR control/status bit configuration

l Chip pin-out

l Interrupt structure

l Memory configuration

As device drivers have a tight connection to the target device and the development environment, they are usually not portable. This reduces the possibilities for code reuse and increases the cost, time, and effort needed to migrate a software project from one microcontroller device to another.

4.4 TRADITIONAL DEVELOPMENT TECHNIQUES

Before writing a driver library, the hardware manuals must be studied in great detail, as both the chip internals and the electronic board design must be fully understood. This is time-consuming and usually considered to be a rather tedious task. When using a new microcontroller derivative that is similar to a familiar one, it might take just as long to find out the differences between the two as to understand a new architecture. Minor but important changes can be well hidden in the small print.

Design and implementation requires not only programming experience but also expertise in hardware design and development tools. Coding is error-prone, as it can be very difficult to initialize and manipulate all configuration and status bits properly in the correct sequence. Debugging can prove to be particularly tricky as the drivers may not work without external hardware, and may need to conform to timing constraints or other resource limitations. They may need to run at full speed and can sometimes not be single-stepped in a debugger.

To summarize, hand-crafted device driver development requires the following tasks:

l Reading the hardware manual and learning the chip internals

l Understanding the electronic board design

l Designing the device driver library

l Implementing the device driver library

l Debugging the device driver library

l Test and integration.

We are convinced that all of these items can be eliminated entirely or in part using new innovative development tools, thus decreasing time-to-market and development cost, while at the same time increasing the utilization of advanced chip features and portability.

A typical example of increased utilization of microcontroller features is copying a memory block from one address to another. Although an unused DMA channel may be available in the processor, most programmers still write a software loop to copy bytes rather than using the faster hardware mechanism. By simplifying the use of peripheral modules like the DMA controller, more programmers will make better use of the hardware and deliver more competitive software solutions faster.
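In code, the contrast looks roughly like this: the software loop is what most programmers write, while dma_memcpy() sketches how a generated DMA driver might be called instead so that the copy proceeds in hardware. dma_memcpy() and its parameters are hypothetical, standing in for whatever the driver library actually provides.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical driver call for an otherwise unused DMA channel. */
    void dma_memcpy(void *dst, const void *src, size_t len);

    /* The usual hand-written byte copy: simple, but it ties up the CPU. */
    void copy_with_loop(uint8_t *dst, const uint8_t *src, size_t len)
    {
        for (size_t i = 0; i < len; ++i)
            dst[i] = src[i];
    }

    /* Using the peripheral instead: the DMA controller moves the block
       while the CPU is free to do other work (or sleep). */
    void copy_with_dma(uint8_t *dst, const uint8_t *src, size_t len)
    {
        dma_memcpy(dst, src, len);
    }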

4.5 VISUAL DEVICE DRIVER DEVELOPMENT TOOLS

What we need is a Rapid Application Development (RAD) tool that helps the developer with as much of the chip-specific knowledge and programming efforts as possible, thus giving him more time to concentrate on the application.

Figure 4.1: A device driver project for a Mitsubishi M16C/62 device

We will present driver development using the Mitsubishi M16C/62 product as an example. The main benefits of automatic device driver development are:

l Visual configuration of the microcontroller device

l Automatic generation of device driver source code

l Faster migration to new devices.

A few moments of dialog box configurations and clicking the code generation button is all it takes; tailor-made device driver source code is generated instantly. Figure 4.1 shows the program window

containing a chip symbol with the peripheral modules and a graphic overview of the current pin usage, as well as a project explorer currently displaying the device driver functions to be generated.

4.6 LOW-LEVEL CONFIGURATION OF A DEVICE

Device driver development is highly target-dependent, and this chapter uses the 16-bit Mitsubishi M16C/62 device as an example. This is a general purpose microcontroller with a useful selection of peripheral modules.

Figure 4.2: System setup of the bus controller

It is usually best to configure low-level settings, such as the bus and interrupt controller, first, as these may affect global resources (in particular the number of available port pins). Figure 4.2 shows the system configuration of the bus controller. The external interfacing pins (chip select 2) have been enabled, as have some other features (for instance, memory expansion mode). Other option pages can be used to configure control pins, clocking, and the watchdog timer.

Figure 4.3 shows the dialog box for configuring the interrupt controller. INT1 and INT2 have been enabled with Both edges and Rising edge triggering, respectively. For this configuration, the code generator will create an interrupt controller initialization function, some support functions for priority manipulation, and interrupt handlers for the INT1 and INT2 interrupts. As the example illustrates, even very detailed target-dependent features can be configured very easily using dialog boxes. The graphic user interface prevents any attempt to make illegal settings or create resource conflicts, and gives a good overview of the features available in the selected chip and bus mode.

Figure 4.3: Configuration of the interrupt controller
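The generated interrupt handlers typically contain an empty body into which application code is inserted, roughly as sketched below for the external INT1 interrupt. The zero-argument signature and the skeleton layout are assumptions; the exact form (including any compiler-specific interrupt keyword) comes from the code generator and toolchain in use.

    /* Example application state updated from the interrupt handler. */
    volatile unsigned int int1_count;

    /* Skeleton of the generated handler for the external INT1 interrupt
       (Both edges triggering, as configured in the dialog box). */
    void MA_IntHandler_INT1_INT(void)
    {
        /* --- user code begins --- */
        int1_count++;               /* count edges seen on the INT1 pin */
        /* --- user code ends ----- */
    }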

4.7 PERIPHERAL MODULE CONFIGURATION

All peripheral modules (such as UARTs, DMA or CAN controllers, A/D or D/A converters, etc.) can be configured in the same manner. A typical serial communication system has both transmit and receive interrupts enabled, so the code generator will create interrupt handlers and implement interrupt-driven UART drivers using transmit and receive buffers. The baud rate and communications protocol have been configured to 8 data bits, 1 stop bit, no parity, and 19200 baud. Other peripheral modules can be configured in the same manner.

Code generation

Once the peripheral modules have been configured satisfactorily, all SFR initialization values are calculated and device driver source code is generated automatically. Depending on the project settings, the set of device driver functions or their internal implementation can vary. For the configurations made earlier in the simple example project above, the device driver functions listed in Table 4.1 will be generated by default. The drivers are generated as one *.c file and one *.h file for each peripheral module, and contain initialization, run-time control, and interrupt handler functions.

Bus controller driver functions and their purpose:

MA_Init_CPU()             Initialize the bus controller according to the dialog box.
MA_Wait_CPU()             Cause transition to wait mode.
MA_Stop_CPU()             Cause transition to stop mode.
MA_PowerControl_CPU()     Change clocking/power consumption.
MA_SoftwareReset_CPU()    Reset the chip.
MA_TrigWDT_CPU()          Trigger the watchdog timer.
MA_CRC_CPU()              Calculate checksum.
MA_IntHandler_WDT_CPU()   Interrupt handler for the watchdog interrupt.
MA_IntHandler_NMI_CPU()   Interrupt handler for the NMI interrupt.

Interrupt controller driver functions and their purpose:

MA_Init_INT()             Initialize the interrupt controller according to the dialog box.
MA_SetPriorityMask_INT()  Change the priority mask for interrupt filtering.
MA_IntHandler_INT1_INT()  Interrupt handler for the external INT1 interrupt.
MA_IntHandler_INT2_INT()  Interrupt handler for the external INT2 interrupt.

Serial communications driver functions and their purpose:

MA_InitCh1_SCI()          Initialize UART channel 1 according to the dialog box.
MA_PutCharCh1_SCI()       Send a character on channel 1.
MA_PutStringCh1_SCI()     Send a character string on channel 1.
MA_GetCharCh1_SCI()       Receive a character from channel 1.
MA_GetStringCh1_SCI()     Receive a character string from channel 1.
MA_IntHandler_TXI1_SCI()  Interrupt handler for the channel 1 transmit-empty interrupt.
MA_IntHandler_RXI1_SCI()  Interrupt handler for the channel 1 receive-full interrupt.

Table 4.1: Device driver functions generated from the example project

Figure 4.4 shows an example of how unnecessary driver functions can be removed from code generation; the functions for sending and receiving character strings are removed in this case.

Figure 4.4: Selection of driver functions during code generation

The automatically generated device driver functions can be used like any other source code library; i.e. the functions can be called in a suitable order from the application program.
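As a sketch of this, the fragment below calls the generated functions from Table 4.1 in a typical start-up order. The prototypes shown are assumptions about what the generated headers declare; in a real project the generated .h files would simply be included instead of re-declaring the functions.

/* Sketch only: prototypes assumed from Table 4.1; include the generated
 * .h files in a real project instead of re-declaring them here. */
void MA_Init_CPU(void);
void MA_Init_INT(void);
void MA_InitCh1_SCI(void);
void MA_PutCharCh1_SCI(char c);
void MA_PutStringCh1_SCI(const char *s);
char MA_GetCharCh1_SCI(void);

int main(void)
{
    MA_Init_CPU();                 /* bus controller: settings from Figure 4.2   */
    MA_Init_INT();                 /* interrupt controller: INT1/INT2 triggering */
    MA_InitCh1_SCI();              /* UART channel 1: 19200 baud, 8N1            */

    MA_PutStringCh1_SCI("Drivers initialised\r\n");

    for (;;) {                     /* simple echo loop on UART channel 1 */
        char c = MA_GetCharCh1_SCI();
        MA_PutCharCh1_SCI(c);
    }
}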

4.8 DEVICE DRIVER PORTABILITY

While the implementation of the device driver functions is highly target-dependent and non-portable, the names and parameter lists (the function prototypes) of the device driver functions provided to the application program can be fairly standardized, thus increasing portability.

It is therefore possible to make device drivers that appear to be portable to the application program although the implementation is highly non-portable. By using tools like IAR MakeApp, target-dependent device driver libraries can be created instantly and provide a fairly portable application programming interface (API) to control the device from the application program. Migration to a new device is a matter of regenerating the device driver library and integrating the new driver library with the application logic.
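The illustrative header below shows what such a fairly standardized driver API might look like. The names are hypothetical; the point is only that the application depends on stable prototypes, never on the target's SFRs.

/* uart_api.h - illustrative only: a fairly standardized driver interface.
 * Application code depends only on these prototypes. */
#ifndef UART_API_H
#define UART_API_H

void UART_Init(unsigned long baud);   /* implemented differently per target */
void UART_PutChar(char c);
char UART_GetChar(void);

#endif

/* uart_m16c.c would implement these prototypes using M16C/62 SFRs, while
 * uart_arm.c would implement the same prototypes for an ARM device.
 * Regenerating or swapping the .c file migrates the application to a new
 * device without changes to the application logic. */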

4.9 SUMMARY

Device driver development has traditionally been handled by specialists using their own expertise and standard programming tools. Due to advances in tool design, automatic device driver generation now offers:

l Shorter time-to-market
l Reduced development cost
l Increased quality
l Increased utilization of advanced chip features
l Faster migration to new devices.

This chapter has outlined some of the problems involved in device driver design, and presented one tool that can be used to meet these challenges.

4.10 QUESTIONS

1. What is a device driver? Explain the features of peripheral module device drivers.

2. Does device driver development depend on the CPU architecture? Explain.

3. Why are device drivers not portable? Explain.

4. Discuss the traditional device driver development techniques.

5. What are the visual development tools available for device driver development? Briefly explain their features.

Chapter 5

Communication Interfaces

This chapter will enable the reader to understand the various interfaces that are essential for communication between embedded devices and the outside world.

5.1 NEED FOR COMMUNICATION INTERFACES

The need for providing communication interfaces arises due to the following reasons:

l The embedded system needs to send data to a host (a PC or a workstation). The host will analyse the data and present it through a Graphical User Interface.

l The embedded system may need to communicate with another embedded system to transmit/ receive data. Providing a standard communication interface is preferable rather than providing a proprietary interface.

l A number of embedded systems may need to be networked to share data. Network interfaces need to be provided in such a case.

l An embedded system may need to be connected to the Internet so that anyone can access the embedded system. An example is a real-time weather monitoring system. The weather monitoring system can be Internet-enabled using TCP/IP protocol stack and HTTP server.

l Mobile devices such as cell phones and palmtops need to interact with other devices such as PCs and laptops for data synchronization. For instance, you need to ensure that the address book on your palmtop is the same as that on your laptop. When the palmtop comes near the laptop, automatically the two can form a network to exchange data.


l For some embedded systems, the software may need to be upgraded after it is installed in the field. The software can be upgraded through communication interfaces.

For these reasons, providing communication interfaces based on standard protocols is a must. Not surprisingly, many micro-controllers have on-chip communication interfaces, such as a serial interface, to meet these requirements. Now, we will discuss the following communication interfaces:

l RS 232/UART
l RS 422, RS 485
l Universal Serial Bus
l Infrared
l IEEE 1394 Firewire
l Ethernet
l IEEE 802.11 wireless interface
l Bluetooth

For each of these interfaces, we will discuss the hardware details and the protocol stack that needs to be implemented in software.

5.2 RS232/UART

RS232 is a standard developed by the Electronic Industries Association (EIA). This is one of the oldest and most widely used communication interfaces. The PC will have two RS232 ports designated as COM1 and COM2. Most micro-controllers have an on-chip serial interface. The evaluation boards of processors are also connected to the host system using RS232. RS232 is used to connect a DTE (Data Terminal Equipment) to a DCE (Data Circuit Terminating Equipment). A DTE can be a PC, serial printer or a plotter. A DCE can be a modem, mouse, digitizer or a scanner. The RS232 interface specifies the physical, mechanical, electrical, and procedural characteristics for serial communication. RS232 is a standard for serial communication, i.e. the bits are transmitted serially. The communication between two devices is full duplex, i.e. the data transfer can take place in both directions.

5.2.1 RS232 Communication Parameters

When two devices have to communicate through RS232, the sending device sends the data character by character. The bits corresponding to the character are called data bits. The data bits are prefixed with a bit called the start bit, and suffixed with one or two bits called stop bits. The receiving device decodes the data bits using the start bit and stop bits. This mode of communication is called asynchronous communication because no clock signal is transmitted. In addition to the start bit and stop bits, an additional bit called the parity bit is also sent. The parity bit is used for error detection at the receiving end.

For two devices to communicate with each other using RS232, the communication parameters have to be set on both systems. And, for meaningful communication, these parameters have to be the same. The various communication parameters are listed below:

Data rate: The rate at which data communication takes place. The PC supports various data rates such as 50, 300, 600, 1200, 2400, 4800, 9600, 19200, 38400, 57600 and 115200 bps. The oscillator in the RS232 circuitry operates at 1.8432 MHz and it is divided by 16 to obtain the 115200 data rate.
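As an illustration of this relationship, the small program below computes the 16x divisor a 16550-style UART would load for a few standard rates; the 1843200 Hz value is the reference oscillator mentioned above.

/* Minimal sketch: divisor = 1843200 / (16 * baud) for a 16550-style UART. */
#include <stdio.h>

static unsigned divisor_for(unsigned long baud)
{
    return (unsigned)(1843200UL / (16UL * baud));
}

int main(void)
{
    unsigned long rates[] = {1200, 9600, 19200, 115200};
    for (int i = 0; i < 4; i++)
        printf("%6lu bps -> divisor %u\n", rates[i], divisor_for(rates[i]));
    return 0;   /* 115200 bps gives divisor 1, i.e. the full 1.8432 MHz / 16 */
}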

Data bits: Number of bits transmitted for each character. The character can have 5 or 6 or 7 or 8 bits. If you send ASCII characters, the number of bits is 7.

Start bit: The bit that is prefixed to the data bits to identify the beginning of the character.

Stop bits: These bits are appended to the data bits to identify the end of the character. If the data bits are 7 or 8, one stop bit is appended. If the data bits are 5 or 6, two stop bits are appended.

Parity: The bit appended to the character for error checking. The parity can be even or odd. For even parity, the parity bit (1 or 0) is added in such a way that the total number of 1s is even. For odd parity, the parity bit makes the total number of 1s odd. If the parity is set to 'none', no parity bit is added. For example, if the data bits are 1010110, the parity bit is 0 if even parity is used, and 1 if odd parity is used. At the receiving end, the device will calculate the parity bit and, if the received parity bit matches the calculated parity bit, it can be assumed that the data is without errors. But this is not always true: if two bits are in error, the receiver cannot detect that there is an error!
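The following fragment shows how the parity bit of the example character 1010110 is computed; it is a plain illustration of the rule above rather than any particular UART's hardware.

/* Computing the parity bit for a 7-bit character such as 1010110 (0x56). */
#include <stdio.h>

static int even_parity_bit(unsigned char data)
{
    int ones = 0;
    for (int i = 0; i < 7; i++)          /* 7 data bits */
        ones += (data >> i) & 1;
    return ones & 1;                     /* bit needed to make the 1s count even */
}

int main(void)
{
    unsigned char c = 0x56;              /* binary 1010110: four 1s */
    printf("even parity bit = %d\n", even_parity_bit(c));      /* prints 0 */
    printf("odd parity bit  = %d\n", 1 - even_parity_bit(c));  /* prints 1 */
    return 0;
}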

Flow Control: If one of the devices sends data at a very fast rate and the other device cannot absorb the data at that rate, flow control is used. Flow control is a protocol to stop/resume data transmission; it is also known as handshaking. If you are sure that there will be no flow control problem, there is no need for handshaking. You can do hardware handshaking in RS232 using two signals: Request To Send (RTS) and Clear To Send (CTS). When a device has data to send, it asserts RTS and, when the receiving device is ready, it asserts CTS. You can also do software handshaking: a device can request suspension of data transmission by sending the character Control-S (0x13). The signal to resume data transmission is sent using the character Control-Q (0x11). This software handshaking is also known as XON/XOFF.
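A minimal sketch of XON/XOFF handshaking is shown below; UART_PutChar(), UART_GetChar() and UART_DataReady() are placeholder names for whatever serial driver the target actually provides.

/* Sketch of XON/XOFF software handshaking on the transmitting side. */
#define XON   0x11   /* Control-Q: resume transmission  */
#define XOFF  0x13   /* Control-S: suspend transmission */

extern void UART_PutChar(char c);   /* assumed low-level driver calls */
extern int  UART_DataReady(void);
extern char UART_GetChar(void);

static int tx_enabled = 1;

/* Check for any XON/XOFF characters sent back by the peer. */
static void poll_flow_control(void)
{
    while (UART_DataReady()) {
        char c = UART_GetChar();
        if (c == XOFF)     tx_enabled = 0;
        else if (c == XON) tx_enabled = 1;
    }
}

void send_with_flow_control(const char *s)
{
    while (*s) {
        do { poll_flow_control(); } while (!tx_enabled);
        UART_PutChar(*s++);
    }
}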

5.2.2 RS 232 Connector Configurations

The RS232 standard specifies two types of connectors: a 25-pin connector and a 9-pin connector. In the 25-pin configuration, only a few pins are used. The description of some of these pins is given in Tables 5.1 and 5.2.

Pin number   Function (abbreviation)
1            Chassis ground
2            Transmit Data (TXD)
3            Receive Data (RXD)
4            Request To Send (RTS)
5            Clear To Send (CTS)
6            Data Set Ready (DSR)
7            Signal Ground (GND)
8            Carrier Detect (CD)
20           Data Terminal Ready (DTR)
22           Ring Indicator (RI)

Table 5.1: 25-pin Connector Pin Details

Pin number   Function (abbreviation)
1            Carrier Detect (CD)
2            Receive Data (RXD)
3            Transmit Data (TXD)
4            Data Terminal Ready (DTR)
5            Signal Ground (GND)
6            Data Set Ready (DSR)
7            Request To Send (RTS)
8            Clear To Send (CTS)
9            Ring Indicator (RI)

Table 5.2: 9-pin Connector Pin Details

For transmission of 1’s and 0’s, the voltage levels are defined in the standard. The voltage levels are different for control signals and data signals. These voltage levels are given in Table 5.3. The voltage level is with reference to the local ground and hence RS232 uses unbalanced transmission.

Signal           Voltage Level
Data input       +3 volts and above for 0; -3 volts and below for 1
Data output      +5 volts and above for 0; -5 volts and below for 1
Control input    +3 volts and above for 1 (ON); -3 volts and below for 0 (OFF)
Control output   +5 volts and above for 1 (ON); -5 volts and below for 0 (OFF)

Table 5.3: Voltage Levels for Data and Control Signals

Note that the voltage levels used in RS232 are different from the voltage levels used in embedded systems (as most chips use 5 volts and below only). Another problem is that the processor gives out the data in parallel format, not in serial format. These problems are overcome through the use of UART (Universal Asynchronous Receiver Transmitter) chips.

5.2.3 UART

Processors handle data in parallel format, not in serial format. To bridge the processor and the RS232 port, a Universal Asynchronous Receiver Transmitter (UART) chip is used. A UART has two sections: a receive section and a transmit section. The receive section receives the data in serial format, converts it into parallel format and gives it to the processor.

The transmit section takes the data in parallel format from the processor and converts it into serial format. The UART chip also adds the start bit, stop bits and parity bit. Many micro-controllers have an on-chip UART. However, the necessary voltage level conversion has to be done to meet the voltage levels of RS232. This is achieved using a level shifter, as shown in Fig. 5.1.

The UART chip operates at 5 volts. The level conversion to the desired voltage is done by the level shifter, and then the signals are passed on to the RS232 connector.

[Figure: UART connected through a Level Shifter to the RS232 connector]

Fig. 5.1: Hardware for RS 232 Interface

The RS232 standard specifies a distance of 19.2 meters. However, you can achieve distances up to 100 meters using RS232 cables. The data rates supported depend on the UART chip and the clock used. Most processors, including Digital Signal Processors, have an on-chip UART. ICs such as Maxim's MAX 3222 and MAX 3241 can be used as level shifters.
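For boards whose UART is 16550-compatible (such as the 16C550 on the reference board in Chapter 6), a polled transmit routine can be as simple as the sketch below; the base address is an assumption that would come from the board's memory map, and the level shifter then takes care of the RS232 voltages.

/* Minimal polled-transmit sketch for a 16550-compatible UART. */
#include <stdint.h>

#define UART_BASE  0x40001000UL                              /* assumed base address */
#define UART_THR   (*(volatile uint8_t *)(UART_BASE + 0))    /* transmit holding reg */
#define UART_LSR   (*(volatile uint8_t *)(UART_BASE + 5))    /* line status register */
#define LSR_THRE   0x20                                      /* transmitter empty    */

void uart_putc(char c)
{
    while (!(UART_LSR & LSR_THRE))
        ;                        /* wait until the UART can accept a byte      */
    UART_THR = (uint8_t)c;       /* the level shifter converts to RS232 levels */
}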

5.3 USB

The Universal Serial Bus has gained immense popularity in recent years. Desktops, laptops, printers, display devices, video cameras, hard disk drives, CDROM drives, audio equipment, etc. are now available with a USB interface.

[Figure: USB host controller at the root of an inverted tree, with devices such as a keyboard, printer, digital camera and scanner attached either directly or through a USB hub]

Fig. 5.4: USB Device Connection hierarchy

Using USB, a number of devices can be networked using a Master/Slave architecture. A host, such as the PC, is designated as the master. As shown in Fig. 5.4, a number of devices, up to a maximum of 127, can be connected in the form of an inverted tree. On the host, such as a PC, there will be a host controller, a combination of hardware and software that controls all the USB devices. Devices can be connected to the host controller either directly or through a hub. A hub is also a USB device; it provides additional ports (typically 2 to 8) to connect other USB devices. A USB device can be self-powered, or powered by the bus. USB can supply up to 500 mA of current to a device.

5.3.1 USB Physical Interface

A shielded 4-wire twisted copper cable is used, with the pin connections shown in Table 5.4. Data is transmitted over a differential twisted pair of wires labeled D+ and D-.

Pin number   Function (abbreviation)
1            +5 V Power (VBUS)
2            Differential data line (D+)
3            Differential data line (D-)
4            Power and Signal ground (GND)

Table 5.4: Pin Connections for USB Interface

5.3.2 Features of USB

Data rates: The USB 1.1 standard supports a 12 Mbps data rate, and 1.5 Mbps for slower peripherals. USB 2.0 supports data rates up to 480 Mbps.

Special features: USB supports plug and play, i.e. you can connect USB devices to the hub or the host without any need for configuration settings. The host will detect and identify the device by exchanging a set of packets. This is known as "Bus Enumeration". The devices are hot-pluggable, i.e. there is no need to switch off the power to connect the device.

Communication protocol: The communication between the host and the devices is in the form of packets. When a device is connected, the host obtains the configuration and properties of the device and assigns a unique ID to identify the device in the network. When a device is removed, the hub informs the host. Short data packets are exchanged for handshaking, acknowledgements, and for informing the host of the capabilities of the devices. Packets of up to 1023 bytes are exchanged for data transfer. When a device is plugged in, the host automatically gets the complete information about the device, either directly or through the hub. An ID is assigned to the device and the communication can start.

Device classes: Each USB device has a unique ID (between 1 and 127) and a device descriptor that provides information about the device class and its properties. The device classes are display, communication, audio, mass storage and human interface (such as keyboards, front panel knobs, control panels in VCRs, data gloves, etc.). Providing a USB interface to an embedded system is just a matter of integrating a USB chip such as the USS-820D from Agere Systems. Maxim's MAX 3450E, 3451E and 3452E are some of the available USB transceivers.
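The device descriptor mentioned above has a fixed 18-byte layout defined by the USB specification; the structure below shows that layout, with made-up vendor and product IDs purely for illustration.

/* The standard 18-byte USB device descriptor read during bus enumeration. */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  bLength;            /* 18                                       */
    uint8_t  bDescriptorType;    /* 1 = device descriptor                    */
    uint16_t bcdUSB;             /* e.g. 0x0110 for USB 1.1                  */
    uint8_t  bDeviceClass;       /* device class (0 = defined per interface) */
    uint8_t  bDeviceSubClass;
    uint8_t  bDeviceProtocol;
    uint8_t  bMaxPacketSize0;    /* endpoint 0 packet size: 8/16/32/64       */
    uint16_t idVendor;           /* assigned vendor ID                       */
    uint16_t idProduct;
    uint16_t bcdDevice;          /* device release number                    */
    uint8_t  iManufacturer;      /* string descriptor indexes                */
    uint8_t  iProduct;
    uint8_t  iSerialNumber;
    uint8_t  bNumConfigurations;
} usb_device_descriptor_t;
#pragma pack(pop)

static const usb_device_descriptor_t dev_desc = {
    18, 1, 0x0110, 0, 0, 0, 8,
    0x1234, 0x5678,              /* illustrative vendor/product IDs */
    0x0100, 1, 2, 3, 1
};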

USB is a powerful, versatile and simple communication interface. So, not surprisingly, many peripherals are now provided with a USB interface. RS232 will be confined to the technology museums.

5.4 INFRARED

Infrared interfaces are used in the remote control units of TVs, VCRs, air-conditioners, etc. However, these interfaces are all based on proprietary protocols. The Infrared Data Association (IrDA), a non-profit industry association (www.irda.org) founded in 1993, released the specifications for low-cost infrared communication between devices. Infrared interfaces are now commonplace on a number of devices such as palmtops, cell phones, digital cameras, printers, keyboards, mice, LCD projectors, ATMs, smart cards, etc. The infrared interface provides low-cost, short-range, point-to-point communication between two devices. The only drawback with infrared is that it operates in line-of-sight communication mode and it cannot penetrate walls.

The block diagram of an IrDA module is shown in Fig. 5.5(a) and the protocol architecture is shown in Fig. 5.5(b). As shown in Fig. 5.5(a), the device will have an infrared transceiver. The transmitter is an LED and the receiver is a photodiode. Agilent's HSDL-1001 can be used as a transceiver. For low data rates, the processor of the embedded system itself can be used, whereas for high data rates a separate processor may be needed.

The data to be sent on the infrared link is packetized and encoded as per the IrDA protocols and sent over the air to the other device. The receiving device will detect the signal, decode and depacketize the data.

[Figure: Processor connected to a UART/Packetizer, an Encoder/Decoder and a Transceiver (LED and photodiode)]

Figure 5.5 (a): IrDA Module

[Figure: Higher Layers (Application, IrCOMM) over Link Management Protocol (IrLMP) over Link Access Protocol (IrLAP) over Physical Layer (IrPHY)]

Figure 5.5 (b): IrDA protocol architecture

As shown in Fig. 5.5(b), for communication through the infrared interface, the physical layer (IrPHY) and data link layer (IrLAP) are specified in the standards. Link management is done through IrLMP, above which the application layer protocols run.

Physical layer: IrPHY specifies the data rates and the mode of communication. IrDA has two specifications, viz. IrDA Data and IrDA Control. IrDA Data has a range of 1 meter with bi-directional communication. Serial IR (SIR) supports data rates up to 115 Kbps and Fast IR (FIR) supports data rates up to 4 Mbps. IrDA Control has a range of 5 meters with bi-directional communication at speeds up to 75 Kbps. A host such as a PC can communicate with 8 peripherals using IrDA protocols.

Data link layer: The data link layer is called IrLAP. IrLAP is based on the HDLC protocol. A Master/Slave protocol is used for communication between two devices. The device that starts the communication is the master. The master sends a command and the slave sends a response.

Link management layer: This layer enables a device to query the capabilities of other devices. It also provides the software capability to share IrLAP between multiple tasks.

Higher layers: The higher layer protocols are application specific. The IrCOMM protocol emulates the standard serial port. When two devices such as a palmtop and a mobile phone, both fitted with infrared interfaces, come face to face, they can exchange data (say, the address book) using the application layer protocols.

5.5 ETHERNET

The Ethernet interface is now ubiquitous. It is available on every desktop and laptop. With the availability of low-cost Ethernet chips and the associated protocol stack, providing an Ethernet interface is very easy and useful for an embedded system. Through the Ethernet interface, the embedded system can be connected to a LAN. So, a number of embedded systems in a manufacturing unit can be connected as a LAN; and another node on the LAN, a desktop computer, can monitor all these embedded systems. The data collected by an embedded system can be transferred to a database on the LAN.

The Ethernet interface provides the physical layer and data link layer functionality. Above the data link layer, the TCP/IP protocol stack and the application layer protocols will run. This protocol architecture is shown in Fig. 5.6.

[Figure: Application Layer (SMTP, FTP, HTTP) over TCP Layer over IP Layer over Data Link Layer (Logical Link Control and Medium Access Control) over Physical Layer]

Fig. 5.6: Ethernet LAN Protocol architecture

Physical layer: The Ethernet physical layer specifies an RJ 45 jack through which the device is connected to the Local Area Network. Unshielded twisted pair or coaxial cable can be used as the medium. Two pairs of wires are used for transmission, one for the transmit path and one for the receive path. Ethernet transmits balanced differential signals. In each pair, one wire carries a signal voltage between 0 and +2.5 volts and the second wire carries a signal voltage between -2.5 volts and 0 volts, and hence the signal difference is 5 volts. The pin connection details of the RJ 45 connector are given in Table 5.5. Speeds of 10 Mbps and 100 Mbps are supported.

Pin number   Function (abbreviation)
1            Transmit Data (TD+)
2            Transmit Data (TD-)
3            Receive Data (RD+)
4            No connection (NC)
5            No connection (NC)
6            Receive Data (RD-)
7            No connection (NC)
8            No connection (NC)

Table 5.5: Pin Connections for Ethernet Interface

Data link layer: The data link layer is divided into Medium Access Control (MAC) layer and Logical Link Control (LLC) layer. The MAC layer uses the Carrier Sense Multiple Access/Collision Detection (CSMA/CD) protocol to access the shared medium. The LLC layer specifies the protocol for logical connection establishment, flow control, error control and acknowledgements. Each Ethernet interface will have a unique Ethernet address of 48 bits.
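The 48-bit addresses appear at the start of every Ethernet frame; the structure below sketches that header layout (the payload that follows would carry, for example, the IP packet of the TCP/IP stack).

/* Layout of an Ethernet frame header carrying the 48-bit MAC addresses. */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  dest_mac[6];   /* destination Ethernet address (48 bits)          */
    uint8_t  src_mac[6];    /* source Ethernet address (48 bits)               */
    uint16_t ethertype;     /* payload type, e.g. 0x0800 for IPv4; transmitted */
                            /* in network (big-endian) byte order              */
} eth_header_t;
#pragma pack(pop)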

Due to the availability of low-cost Ethernet chips (such as the CS 8900 from Cirrus Logic), an embedded system can be provided with Ethernet connectivity at little additional cost. Even 8-bit micro-controller based embedded systems can be provided with an Ethernet interface. To make the embedded system network-enabled, as shown in Fig. 5.7, the upper layer protocols, viz. the TCP/IP stack, have to run above the Ethernet. The TCP/IP stack has to be embedded along with the operating system and application software in the firmware. If the embedded system has to work as a web server, the HTTP server software has to run on the system.

5.6 IEEE 802.11

The IEEE 802.11 family of standards is for Wireless Local Area Networks and Personal Area Networks. These standards cover the physical and MAC layers of Wireless LANs. The LLC layer is the same as for the Ethernet LAN. The architecture of the IEEE 802.11 standard for Wireless LAN is shown in Fig. 5.8.

Each wireless LAN node has a radio and an antenna. All the nodes running the same MAC protocol and competing to access the same medium form a Basic Service Set (BSS). This BSS can interface to a backbone LAN through an Access Point (AP). The backbone LAN can be a wired LAN such as an Ethernet LAN. Two or more BSSs can be interconnected through the backbone LAN. In trade magazines, the Access Points are referred to as "Hotspots".

[Figure: several Basic Service Sets, each with laptops attached to an Access Point (Hotspot), interconnected through a backbone LAN]

Fig. 5.8: IEEE 802.11 Wireless LAN

The physical medium specifications for 802.11 WLANs are:

l Diffused Infrared with an operating wavelength between 850 and 950 nm. The data rate supported using this medium is 1 Mbps; a 2 Mbps data rate is optional.

l Direct Sequence Spread Spectrum operating in the 2.4 GHz ISM band. Up to 7 channels, each with a data rate of 1 Mbps or 2 Mbps, can be used.

l Frequency Hopping Spread Spectrum operating in the 2.4 GHz ISM band with a 1 Mbps data rate; a 2 Mbps data rate is optional.

The ISM (Industrial, Scientific and Medical) band is a 'free' band and hence no government approvals are required to operate radio systems in this band. The ISM band frequency range is 2400 to 2483.5 MHz. Extensions of IEEE 802.11 have been developed to support higher data rates. The 802.11b standard supports data rates up to 11 Mbps at 2.4 GHz, with a range of 100 meters. Another extension, 802.11a, operates in the 5 GHz frequency band and can support data rates up to 54 Mbps, with a range of 100 meters. 802.11g supports 54 Mbps data rates in the 2.4 GHz band.

[Figure: timeline showing the medium busy, a DIFS idle period plus a random back-off before the frame is transmitted, and a SIFS gap before the acknowledgement]

Fig. 5.9: CSMA/CA protocol

The MAC protocol used in 802.11 is called CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). The CSMA/CA operation is shown in Fig. 5.9. Before transmitting, a node senses the radio medium and, if the channel is free for a period longer than a pre-defined value (known as Distributed Inter Frame Spacing or DIFS), the node transmits immediately. If the channel is busy, the node keeps sensing the channel and, if it is free for a period of DIFS, the node waits for some more time, called the random back-off interval, and then transmits its frame. When the destination receives the frame, it has to send an acknowledgement (ACK). To send the ACK, the destination senses the medium and, if it is free for a pre-defined short time (known as Short Inter Frame Space or SIFS), the ACK is sent. If the ACK does not reach the station, the frame has to be retransmitted using the above procedure. A maximum of 7 retransmissions is allowed, after which the frame is discarded. This procedure is known as CSMA/CA.
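The sketch below restates this procedure in C-like form; channel_idle_for(), transmit() and ack_received() are placeholder primitives, and the timing and back-off values are purely illustrative, since a real 802.11 MAC implements this in hardware/firmware.

/* Much simplified sketch of the CSMA/CA transmit procedure. */
#include <stdlib.h>

#define MAX_RETRIES 7

extern int  channel_idle_for(int microseconds);   /* placeholder primitives */
extern void transmit(const void *frame);
extern int  ack_received(void);

enum { DIFS_US = 50, SLOT_US = 20 };              /* illustrative timings only */

int csma_ca_send(const void *frame)
{
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
        if (!channel_idle_for(DIFS_US)) {
            /* medium busy: wait for a DIFS of idle time, then a random back-off */
            while (!channel_idle_for(DIFS_US))
                ;
            int backoff_slots = rand() % 16;      /* illustrative contention window */
            while (backoff_slots--)
                while (!channel_idle_for(SLOT_US))
                    ;
        }
        transmit(frame);
        if (ack_received())                       /* destination replies after SIFS */
            return 0;
    }
    return -1;                                    /* discarded after 7 retransmissions */
}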

[Figure: (a) laptops communicating through an Access Point; (b) two laptops communicating directly]

Fig. 5.10: Communication between nodes in a Wireless LAN

An important feature of an IEEE 802.11 wireless LAN is that two or more nodes can also communicate directly, without the need for centralized control. The two configurations in which the wireless LAN can operate are shown in Fig. 5.10. In Fig. 5.10(a), the configuration uses the Access Point as described earlier. In Fig. 5.10(b), direct communication between two devices is shown. When two or more devices form a network without the need for centralized control, it is called an ad hoc network. For instance, a mobile phone can form a network with a laptop and synchronize data automatically.

Embedded systems are now being provided with wireless LAN connectivity to exchange data. The main attraction of wireless connectivity is that it can be used in environments where running a cable is difficult, such as on the shop floors of manufacturing units. IEEE 802.11b is popularly known as WiFi (Wireless Fidelity). If WiFi Access Points are installed, say, all over a city, wireless Internet access can be provided to people on the move.

To provide wireless Internet access to mobile devices, a special protocol called Mobile IP has to run on the mobile device.

5.7 BLUETOOTH

A typical office cabin, a home or even a car is equipped with a number of electronic gadgets such as a desktop, laptop, printer, modem, mobile phone, etc. These devices are interconnected through wires for using a service (e.g. a print service) or for sharing information (e.g. transferring a file from a desktop to a laptop). These devices form a Personal Area Network (PAN). When we bring two devices, say a laptop and a mobile phone, close to each other, the two can automatically form a network and exchange data. For example, we can transfer the address book from the mobile phone to the laptop.

Networks formed spontaneously when two or more devices come close together are termed ad-hoc networks. In an ad-hoc network, the topology and the number of nodes are not fixed: the topology may change dynamically with time, and the number of nodes in the network may also change with time. All the headaches associated with administering such networks can be avoided if these devices are made to communicate through radio links and if one device can find out the presence of other devices and their capabilities (i.e. if one device can 'discover' other devices). The need for such PANs is everywhere: in office cabins, at homes and also in cars.

A number of technologies have been proposed for PANs. Notable among them are Bluetooth, IrDA and IEEE 802.11. Bluetooth holds a great promise because it can provide wireless connectivity to embedded systems at a very low cost.

The salient features of Bluetooth technology are:

l It is a low-cost technology; its cost will soon be as low as a cable connection. Since most Bluetooth-enabled devices have to operate on a battery, the power consumption is also very low.

l It is based on radio transmission in the ISM band. The ISM band is not controlled by any government authority and hence no special approval is required to use Bluetooth radio systems.

l It caters to short ranges. The range of a Bluetooth device is typically 10 meters, though with higher power, the range can be increased to 100 meters.

l It is based on open standards formulated by a consortium of industries and a large number of equipment vendors are committed to this technology.

The Bluetooth Special Interest Group (SIG), founded in February 1998 by Ericsson, Intel, IBM, Toshiba and Nokia, released version 1.0 of the Bluetooth specification in July 1999. Version 1.1 of the Bluetooth specification was released in February 2001. Most electronic devices can be Bluetooth-enabled. These include PCs, laptops, PDAs, digital cameras, mobile phones, pagers, MP3 players, headsets, printers, keyboards, mice, LCD projectors, and domestic appliances such as TVs, microwave ovens, music players, etc. To make a device Bluetooth-enabled, a module containing the Bluetooth hardware and firmware is attached to the device, and a piece of software is run on the device. A Bluetooth-enabled device can communicate with another Bluetooth-enabled device over the radio medium to exchange information or transfer data.

5.8 SUMMARY

In this chapter we have presented the various communication interfaces an embedded system can have in order to exchange data/information with the outside world. All the popular interfaces are covered in some detail.

5.9 QUESTIONS

1) Explain why communication interfaces are necessary for an embedded system.

2) Explain the serial communication interface in detail.

3) Explain the various components of the USB interface.

4) What is Ethernet? Explain how it works.

5) Write a short note on the IrDA interface.

6) Discuss the features of the IEEE 802.11 standard.

7) Explain the salient features of Bluetooth technology.

Chapter 6

Intel SA1110 based Embedded Development reference board

6.1 INTRODUCTION

This chapter presents an embedded reference board based on the Intel® StrongARM processor. StrongARM processors are used in many products such as PDAs, handheld devices, cell phones, etc. The main reasons for the wide usage of StrongARM processors include a high MIPS rate, low power consumption and the availability of library functions under the Linux operating system. Usually a reference board is a multi-function development platform. The peripherals provide wide-ranging options to design suitable hardware and interface it for realizing many applications, irrespective of the field of interest.

The material presented here serves two purposes, namely:

1) It provides the reader an insight into the actual composition of an embedded development system which supports a wide range of applications.

2) It provides a good amount of knowledge for the user to understand applications on embedded development platforms.

Embedded system reference boards are general-purpose platforms used to develop applications before porting them onto specific embedded hardware. One such reference board, based on the Intel® StrongARM microprocessor (SA-1110), is shown in figure 6.1. The SA-1110 is a highly integrated micro-controller that incorporates a 32-bit StrongARM processor core, a system control module, multiple communication channels, an LCD controller, general-purpose I/O ports, and a memory and PCMCIA control module. The board provides various interfaces such as LCD, PCI, IrDA, PCMCIA, USB and an AC'97 audio codec interface, along with other interfaces like PS/2, Serial, Parallel, FDD, HDD, CDROM and an optional Ethernet interface.


6.2 FEATURES

1. Highly integrated SA-1110 microprocessor featuring a 100 MHz memory bus and a flexible memory controller that provides high-bandwidth support for SDRAM, Flash, and variable-latency I/O devices.

2. Support for a maximum of 64 MB of SDRAM in a 54-pin TSOP footprint. System partitioning enables the system to support SDRAM speeds up to 103 MHz.

3. On-board flash memory support for a maximum of 16 MB of fast-page mode, 3 V, 56-pin TSOP footprint Intel StrataFlash memory.

4. Multiple probe points for access to key signals.

5. Extremely flexible circuit routing employing complex programmable logic devices (CPLDs). User-designed daughter cards can be interfaced to the CPU local bus, with the interface logic incorporated into the CPLD by the user.

Figure 6.1 Intel SA1110 Embedded reference board

Figure 6.2 shows the position of the various I/O facilities on the reference board.

[Figure: board layout showing the positions of the various jumpers and I/O connectors (JP1 through JP9 and others)]

Figure 6.2 I/O facilities on the reference board

[Figure: block diagram showing the Intel StrongARM processor connected to SDRAM, Flash, PCMCIA and an FPGA on the local bus; a companion chip providing audio, USB and serial interfaces; an LPC super I/O device providing serial, LPT1, FDD, IrDA, keyboard and mouse interfaces; and a PCI bus carrying an Ethernet controller (RJ45), a PCI/IDE controller for HDD/CD and the local bus interface, and two PCI slots]

Figure 6.3 Block diagram of Intel SA1110 embedded reference board

6.3 REFERENCE BOARD SUPPORT

1. USB host controller interface that supports both low-speed (1.5 Mbps) and full-speed (12 Mbps) USB devices

2. Audio interface that can be directly connected to an AC'97 CODEC for routing voice data to the speaker or from the microphone

3. Standard serial port (16C550 UART)

4. PCMCIA interface

5. IEEE 1284 Parallel Port

6. Floppy Disk Controller

7. IrDA port and transceiver capable of transfers up to 4 Mbps

8. PS/2 for Keyboard and mouse

9. RJ45 (for Ethernet interface)

10. Hard Disk Drive

11. CD ROM Drive

12. The two PCI slots provide option to include PCI compliant devices to the development platform.

The block diagram shown in figure 6.3 is the simplified architecture of the embedded reference board. LPC super I/O and companion chips are used to provide the necessary I/O facilities for the SA1110 processor. The SA1110 processor itself can directly connect to the Flash, SDRAM, FPGA and PCMCIA; besides, it provides many I/O lines. The reference board runs an embedded Linux operating system which is loaded into the Flash. This general-purpose reference board thus provides abundant opportunities for the user to develop applications which employ the Intel SA1110 processor. The board enables the user to develop software applications directly and produce executable code. As an example, consider the real-life project described below.

6.4 PROJECT

Project : Development of drivers for SA1110 based interface card which interfaces with Digital camera

The purpose is to write the boot code to initialize the USB and LCD peripherals, along with PCMCIA, in the hardware described below. The initialization routines will be written at the boot loader level to ensure that they work as intended with the LCD panel and the USB client.

Hardware Architecture

The SA1110 processor card can be plugged into the main board shown in fig. 6.4. The main board consists of the camera interface circuit and adequate memory for storing the images from the camera. The components residing on the main board are an LVDS interface, an FPGA, SRAM, buffers, etc. The main board is designed to accommodate any processor card (e.g. PXA250).

The SA1110 processor card has up to 16 MB of non-volatile Flash to store the boot code and applications. After power-up, the SA1110 boots and transfers all its application code from the Flash to SDRAM for faster access during normal operation.

Once the SA1110 card is interfaced with the main board, it communicates with the FPGA on the main board and sends the GRAB signal to the FPGA. In turn, the FPGA interacts with the camera to get the images and transfers them to the SDRAM on the main board. The SA1110 then processes the image with different algorithms; displaying the image on the LCD and sending the result to the PC are also taken care of by the SA1110.
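A hedged sketch of this grab-and-process sequence is given below; the register addresses, bit values and helper functions are hypothetical placeholders for whatever the FPGA design on the main board actually defines.

/* Sketch only: addresses and bit definitions are assumptions. */
#include <stdint.h>

#define FPGA_CTRL     (*(volatile uint32_t *)0x18000000)   /* assumed FPGA control reg */
#define FPGA_STATUS   (*(volatile uint32_t *)0x18000004)   /* assumed FPGA status reg  */
#define GRAB_BIT      0x1
#define FRAME_READY   0x2
#define IMAGE_BUFFER  ((volatile uint8_t *)0x1C000000)     /* assumed SDRAM frame area */

extern void process_image(const volatile uint8_t *frame);  /* image algorithms        */
extern void lcd_show(const volatile uint8_t *frame);       /* display on LCD panel    */
extern void send_result_to_pc(void);                       /* report over USB/serial  */

void capture_loop(void)
{
    for (;;) {
        FPGA_CTRL = GRAB_BIT;                 /* ask the FPGA to grab a frame         */
        while (!(FPGA_STATUS & FRAME_READY))
            ;                                 /* FPGA fills SDRAM from the camera     */
        process_image(IMAGE_BUFFER);
        lcd_show(IMAGE_BUFFER);
        send_result_to_pc();
    }
}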

Besides this, the card supports a touch screen interface with the LCD display and infrared communication up to 1.5 meters.

Options for development:

Phase 1: The first option is to write a BSP (boot loader) program which initializes the USB as a client, the LCD and the PCMCIA. The LCD will be initialized to handle the colour display from Hitachi. Testing and verifying the working of this phase is covered in detail below. The menu-driven program in this phase will use the touch screen or a serial port in terminal mode to an external host.
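A minimal sketch of what this Phase 1 boot code might look like is shown below; every function name is a hypothetical placeholder for the routines to be developed in this phase.

/* Sketch of the Phase 1 boot sequence; all names are placeholders. */
extern void clocks_and_sdram_init(void);   /* core clocks and memory first     */
extern void lcd_init(void);                /* Hitachi colour panel timings     */
extern void usb_client_init(void);         /* SA-1110 configured as USB client */
extern void pcmcia_init(void);
extern void touchscreen_or_serial_menu(void);

void boot_main(void)
{
    clocks_and_sdram_init();
    lcd_init();
    usb_client_init();
    pcmcia_init();
    touchscreen_or_serial_menu();          /* menu-driven tests for each interface */
}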

Phase 2: Create an OS environment such as Linux and port the code in this environment to the hardware board. Then develop the device drivers (porting and debugging them) for the above three interfaces. For the USB client, there needs to be a simple host-side interface program so it can function properly. The host for USB is assumed to be an IBM-PC compatible platform. This will be done along with the boot loader developed in Phase 1. The APIs can be developed easily for an OS such as Linux. These APIs can then be used in the application by STI.

[Figure: SA-1110 card functional block diagram showing the SA-1110 processor (206 MHz) with an LCD connector and buffer, IrDA, USB, JTAG, an SPI touch screen interface, 32 MB SDRAM, 4/8/16 MB Flash (boot + application + failed images), power supply, reset circuit, and data/address buses routed to the main board connector]

Figure 6.4 SA1110 card and camera interface through USB

The same board can also connect to another system board through the connectors indicated.

Phase 1: The steps involved would be to:

l Create a simple debug environment using the SPI to use either the touch screen or the serial port. Required modifications are outlined above (Aspire can do this).

l Develop the LCD initialization program at the boot loader level.

l Develop the USB client initialization program.

l Develop the PCMCIA initialization program.

l Test all of these initialization programs to ensure they work.

l Provide complete documentation, code, etc.

Phase 2: The steps involved would be to:

l Port the Linux operating system to the hardware.

l Develop/port the LCD, USB and PCMCIA code and APIs to the Linux environment.

l Provide a host-side interface program for the USB client (IBM-PC compatible host).

l Test them thoroughly with all possible combinations.

l Provide complete documentation, code, etc.

Testing

[Figure: test setup connecting the SA1110 serial/SSP signals to a UCB1200 codec: GPIO5 to UCB IRQ, TXD to UCB SDI, RXD from UCB SDO, SCLK to UCB SCLK, SFRM to UCB FS, plus a reset output and the touch-panel lines MX, MY, PX and PY]

As a part of the development of the driver (boot code), the following deliverables need to be provided with the source code:

l Documentation describing the low-level driver architecture specifications.

l Well-commented source code conforming to the coding standards.

l Support for any minor changes to ensure that the usage is not affected as per the spec.

l With Phase 2, the APIs and the documentation for their usage will also be provided.

l Any debug help needed for the proper functioning of the code, for a month after delivery of the code.

6.5 SUMMARY

This chapter provided insights into a typical embedded reference board based on the Intel SA1110 processor. Some functional block diagrams are provided to familiarize the reader with a typical reference board. A small project is also discussed to give the reader practical, real-life exposure.

6.6 QUESTIONS

1. List some of the important products where SA1110 processors are used.

2. Explain the functional organization of the SA1110 based reference board.

3. Discuss how the SA1110 card is interfaced with the camera and how the drivers are developed.


ABBREVIATIONS

ATM : Asynchronous Transfer Mode
DVD : Digital Versatile Disc
PDA : Personal Digital Assistant
PLC : Programmable Logic Controller
LCD : Liquid Crystal Display
CRC : Cyclic Redundancy Check
GNU : GNU's Not Unix (recursive acronym)
PCMCIA : Personal Computer Memory Card International Association
USB : Universal Serial Bus
JTAG : Joint Test Action Group
DSP : Digital Signal Processor
RTOS : Real Time Operating System
Enq : Enquiry
Ack : Acknowledgment
DMA : Direct Memory Access
ARM : Advanced RISC Machine
SRAM : Static Random Access Memory
DRAM : Dynamic Random Access Memory
EDO : Extended Data Output
IrDA : Infrared Data Association
CPLD : Complex Programmable Logic Device
FPGA : Field Programmable Gate Array
WiFi : Wireless Fidelity
CSMA/CA : Carrier Sense Multiple Access with Collision Avoidance
WLAN : Wireless LAN
UART : Universal Asynchronous Receiver and Transmitter