ACKNOWLEDGEMENT

We are greatly indebted to Dr G.M. Ajit, Director of DOEACC, Calicut, for permitting us to do our project at the Information Technology Lab and providing us with adequate facilities for its successful completion. We would like to express our deep gratitude to our guide, Mr. B.S. Ramanjaneyulu, Senior Design Engineer, DOEACC Calicut, for his suggestions regarding the project. We also express our heartfelt thanks to all the members of the DOEACC faculty for their kind cooperation. We are humbly obliged to Mr. Shaji Mohan for inspiring us to take up this topic, and we thank him for his timely suggestions, which helped us a lot. We extend our sincere gratitude to Mr. Sukumaran, Principal, L.B.S College of Engineering. We express our sincere thanks to Mr. Shaji Mohan, H.O.D., Department of Computer Science and Engineering. We also thank all the lecturers of the CSE Department for their moral support. We are indebted to all the lab staff for their kind cooperation. Last but not least, we thank all our friends for their wholehearted support and encouragement. Above all, we thank the Almighty for making this venture a success.

1 ABOUT THE ORGANISATION

The Centre for Electronics Design and Technology of India (CEDTI), Calicut is an autonomous agency under the Ministry of Communications and Information Technology, Govt. of India, with its headquarters in New Delhi. There are various units of DOEACC all over India. The organisation was established in 1988 with a view to promoting developments in the field of Electronics and Information Technology. CEDTI is situated near the Regional Engineering College, Calicut.

The major objectives are:

 To train manpower in electronics design, product design and information technology.

 To undertake product development, contract and consultancy.

 To develop entrepreneurs and designers in electronics and information technology.

CEDTI has since been renamed the DOEACC Centre, Calicut.

2 SYNOPSIS

Wireless sensor networks are a new class of ad hoc networks that are expected to see increasing deployment in coming years, as they enable reliable monitoring and analysis of unknown and untested environments. MANET research is gaining ground due to the ubiquity of small, inexpensive wireless communicating devices. Since not many MANETs are currently deployed, research in this area is mostly simulation based.

Due to the dynamic nature of the network topology and the resource constraints, routing in MANETs is a challenging task. A routing protocol should quickly adapt to the topology changes and efficiently search for new paths.

In our project, titled ‘Routing Protocol for Wireless Sensors’, we chose to monitor the performance of the DSDV routing protocol for wireless sensor networks, taking the distance from the sink node as the main constraint.

There have been several routing protocols proposed for wireless ad hoc networks. Destination Sequence Distance Vector (DSDV) was chosen due to its relative simplicity.

3 CONTENTS

1. PREAMBLE
   1.1 General Introduction
   1.2 Objective of the Study
   1.3 Scope of Study
   1.4 Limitations of Study

2. PROJECT OVERVIEW
   2.1 Introduction
       2.1.1 Mobile Ad Hoc Networks
       2.1.2 Routing Protocols in MANETs
       2.1.3 Motes
   2.2 Statement of Problem
   2.3 Methodology
       2.3.1 DSDV Routing Protocol
       2.3.2 TraceRouteTest Application

3. SOFTWARE SPECIFICATION
   3.1 TinyOS
       3.1.1 Introduction
       3.1.2 Components
       3.1.3 Concurrency Model
   3.2 nesC
       3.2.1 Introduction
       3.2.2 Interfaces
       3.2.3 Component Specification
       3.2.4 Wiring
       3.2.5 Concurrency in nesC
   3.3 Sample Application: Blink

4. DESIGN AND IMPLEMENTATION
   4.1 TOSSIM
       4.1.1 Introduction
       4.1.2 Compiling and Running a Simulation
       4.1.3 Adding Debugging Statements
   4.2 TinyViz
       4.2.1 TinyViz Plugins
       4.2.2 Lossy Builder
       4.2.3 Lossy Model Actuation
       4.2.4 Implementation of TraceRouteTest

5. TESTING
   5.1 Performance Evaluation
       5.1.1 Performance Measured under Different Source-Sink Distances

6. CONCLUSION

BIBLIOGRAPHY

1. PREAMBLE

1.1 Introduction

1.1.1 Evolution of wireless networks

Wireless communications have become very pervasive. The number of mobile phones and wireless Internet users has increased significantly in recent years. Traditionally, first-generation wireless networks were targeted primarily at voice and data communications occurring at low data rates. Recently, we have seen the evolution of second- and third-generation wireless systems that incorporate the features provided by broadband. In addition to supporting mobility, broadband also aims to support multimedia traffic, with quality of service (QoS) assurance. We have also seen the presence of different air interface technologies, and the need for interoperability has increasingly been recognized by the research community.

First-Generation Mobile Systems

The first generation of analog cellular systems included the Advanced Mobile Telephone System (AMPS), which was made available in 1983. It was first deployed in Chicago, with a service area of 2100 square miles. AMPS offered 832 channels, with a data rate of 10 kbps. In Europe, TACS (Total Access Communications System) was introduced with 1000 channels and a data rate of 8 kbps. AMPS and TACS use the frequency modulation (FM) technique for radio transmission. Traffic is multiplexed onto an FDMA (frequency division multiple access) system.

Second-Generation Mobile Systems

Compared to first-generation systems, second-generation (2G) systems use digital multiple access technology, such as TDMA (time division multiple access) and CDMA (code division multiple access). Global System for Mobile Communications, or GSM, uses TDMA technology to support multiple users. The protocols behind 2G networks support voice and some limited data communications, such as fax and short messaging service (SMS), and most 2G protocols offer different levels of encryption and security. While first-generation systems support primarily voice traffic, second-generation systems support voice, paging, data, and fax services. Examples of second-generation systems are GSM, Cordless Telephone (CT2), Personal Access Communications Systems (PACS), and Digital European Cordless Telephone (DECT). A new design was introduced into the mobile switching center of second-generation systems. In particular, the use of base station controllers (BSCs) lightens the load placed on the MSC (mobile switching center) found in first-generation systems. This design allows the interface between the MSC and BSC to be standardized.

2.5G Mobile Systems

The move into the 2.5G world will begin with General Packet Radio Service (GPRS). GPRS is a radio technology for GSM networks that adds packet-switching protocols, shorter setup time for ISP connections, and the possibility to charge by the amount of data sent rather than by connection time. The next generation of data, heading towards third generation and personal multimedia environments, builds on GPRS and is known as Enhanced Data rates for GSM Evolution (EDGE). GPRS will support flexible data transmission rates as well as continuous connection to the network. GPRS is the most significant step towards 3G.

Third-Generation Mobile Systems

Third-generation mobile systems are faced with several challenging technical issues, such as the provision of seamless services across both wired and wireless networks and universal mobility. In Europe, there are three evolving networks under investigation: (a) UMTS (Universal Mobile Telecommunications Systems), (b) MBS (Mobile Broadband Systems), and (c) WLAN (Wireless Local Area Networks).

1.1.2 Importance of wireless networks

The term wireless networking refers to technology that enables two or more computers to communicate using standard network protocols, but without network cabling. Strictly speaking, any technology that does this could be called wireless networking; the current buzzword, however, generally refers to wireless LANs. This technology, fuelled by the emergence of cross-vendor industry standards such as IEEE 802.11, has produced a number of affordable wireless solutions that are growing in popularity with businesses and schools, as well as in sophisticated applications where network wiring is impossible, such as warehousing or point-of-sale handheld equipment.

An ad-hoc, or peer-to-peer, wireless network consists of a number of computers, each equipped with a wireless networking interface card. Each computer can communicate directly with all of the other wireless-enabled computers. They can share files and printers this way, but may not be able to access wired LAN resources unless one of the computers acts as a bridge to the wired LAN using special software.

A wireless network can also use an access point, or base station. In this type of network the access point acts like a hub, providing connectivity for the wireless computers. It can connect (or “bridge”) the wireless LAN to a wired LAN, allowing wireless computer access to LAN resources, such as file servers or existing Internet Connectivity.

There are two types of access points:

 Hardware access points (HAP): Hardware access points offer comprehensive support of most wireless features, but check your requirements carefully.

 Software Access Points (SAP): They run on a computer equipped with a wireless network interface card as used in an ad-hoc or peer-to-peer wireless network. With appropriate networking software support, users on the wireless LAN can share files and printers located on the wired LAN and vice versa. The Vicomsoft InterGate suites are software routers that can be used as a basic Software Access Point.

Each access point has a finite range within which a wireless connection can be maintained between the client computer and the access point. The actual distance varies depending upon the environment; manufacturers typically state both indoor and outdoor ranges to give a reasonable indication of reliable performance. It should also be noted that, when operating at the limits of range, performance may drop as the quality of the connection deteriorates and the system compensates.

Typical indoor ranges are 150-300 feet, but can be shorter if the building construction interferes with radio transmissions. Longer ranges are possible, but performance will degrade with distance.

Outdoor ranges are quoted up to 1000 feet, but again this depends upon the environment. There are ways to extend the basic operating range of wireless communications, by using more than a single access point or by using a wireless relay/extension point.

1.1.3 Short Range Wireless Networks

Wireless Local Area Networks

A wireless LAN or WLAN is a local area network that uses radio waves as its carrier: the last link with the users is wireless, giving a network connection to all users in the surrounding area. Areas may range from a single room to an entire campus. The backbone network usually uses cables, with one or more wireless access points connecting the wireless users to the wired network.

Wireless Personal Area Networks

A personal area network (PAN) is a computer network used for communication among computer devices (including telephones and personal digital assistants) close to one person. The devices may or may not belong to the person in question.

The reach of a PAN is typically a few meters. PANs can be used for communication among the personal devices themselves (intrapersonal communication), or for connecting to a higher-level network and the Internet (an uplink). Personal area networks may be wired, using a computer bus such as USB or FireWire. A wireless personal area network (WPAN) can also be made possible with network technologies such as IrDA and Bluetooth.

Bluetooth

Bluetooth is the name given to a new technology standard using short-range radio links, intended to replace the cable(s) connecting portable and/or fixed electronic devices. The standard defines a uniform structure for a wide range of devices to communicate with each other, with minimal user effort. Its key features are robustness, low complexity, low power and low cost. The technology also offers wireless access to LANs, the PSTN, the mobile phone network and the Internet for a host of home appliances and portable handheld interfaces.

The immediate need for Bluetooth came from the desire to connect peripherals and devices without cables. The available technology, IrDA OBEX (IR Data Association Object Exchange Protocol), is based on IR links that are limited to line-of-sight connections. Bluetooth integration is further fuelled by the demand for mobile and wireless access to LANs, and to the Internet over mobile and other existing networks, where the backbone is wired but the interface is free to move. This not only makes the network easier to use but also extends its reach. The advantages and rapid proliferation of LANs suggest that setting up personal area networks, that is, connections among devices in the proximity of the user, will have many beneficial uses. Bluetooth could also be used in home networking applications. With increasing numbers of homes having multiple PCs, the need for networks that are simple to install and maintain is growing. There is also the commercial need to provide "information push" capabilities, which is important for handheld and other such mobile devices, and this has been partially incorporated in Bluetooth.

Bluetooth's main strength is its ability to simultaneously handle both data and voice transmissions, allowing such innovative solutions as a mobile hands-free headset for voice calls, print-to-fax capability, and automatic synchronization of PDA, laptop, and cell phone address book applications. These uses suggest that a technology like Bluetooth is extremely useful and will have a significant effect on the way information is accessed and used.

A Bluetooth PAN is also called a piconet, and is composed of up to 8 active devices in a master-slave relationship (up to 255 devices can be connected in “parked” mode). The first Bluetooth device in the piconet is the master, and all other devices are slaves that communicate with the master. A piconet typically has a range of 10 meters, although ranges of up to 100 meters can be reached under ideal circumstances. Recent innovations in Bluetooth antennas have allowed these devices to far exceed the range which they were originally designed for.

Bluetooth profiles

In order to use Bluetooth, a device must be able to interpret certain Bluetooth profiles. These define the possible applications. The following profiles are defined and adopted by the Bluetooth SIG:

 Advanced Audio Distribution Profile (A2DP): Referred to as the AV profile, it is designed to transfer a stereo audio stream, such as music from an MP3 player, to a headset or car radio.

 Audio/Video Remote Control Profile (AVRCP): This profile is designed to provide a standard interface to control TVs, hi-fi equipment, etc., allowing a single remote control (or other device) to control all of the A/V equipment that a user has access to. It may be used in concert with A2DP or VDP.

 Basic Imaging Profile (BIP): This profile is designed for sending images between devices and includes the ability to resize and convert images to make them suitable for the receiving device. The image may be broken down into smaller pieces.

 Dial-up Networking Profile (DUN): This profile provides a standard to access the Internet and other dial-up services over Bluetooth. The most common scenario is accessing the Internet from a laptop by dialling up on a mobile phone, wirelessly.

Future of Bluetooth

Bluetooth technology already plays a part in the rising Voice over IP (VOIP) scene, with Bluetooth headsets being used as wireless extensions to the PC audio system. As VOIP becomes more popular, and more suitable for general home or office users than wired phone lines, Bluetooth may be used in Cordless handsets, with a base station connected to the Internet link.

In May 2005, the Bluetooth Special Interest Group (SIG) announced its intent to work with UWB manufacturers to develop a next-generation Bluetooth technology using UWB technology and delivering UWB speeds. This will enable Bluetooth technology to be used to deliver high speed network data exchange rates required for wireless VOIP, music and video applications.

ZigBee

ZigBee is an established set of specifications for wireless personal area networking (WPAN), i.e., digital radio connections between computers and related devices. This kind of network eliminates the use of physical data buses like USB and Ethernet cables. The devices could include telephones, hand-held digital assistants, sensors and controls located within a few meters of each other. ZigBee is one of the global communication protocol standards formulated for embedded application software; it was ratified in late 2004 under the IEEE 802.15.4 wireless networking standard.

The fourth in the series, WPAN Low Rate/ZigBee is the newest and provides specifications for devices that have low data rates, consume very low power and are thus characterized by long battery life. Other standards like Bluetooth and IrDA address high data rate applications such as voice, video and LAN communications.

The ZigBee Alliance has been set up as an association of companies working together to enable reliable, cost-effective, low-power, wirelessly networked, monitoring and control products based on an open global standard. Once a manufacturer enrolls in this Alliance for a fee, he can have access to the standard and implement it in his products in the form of ZigBee chipsets that would be built into the end devices.

Philips, Motorola, Intel and HP are all members of the Alliance. The goal is to provide the consumer with ultimate flexibility, mobility, and ease of use by building wireless intelligence and capabilities into everyday devices.

Device types

There are three different types of ZigBee devices:

 ZigBee coordinator (ZC): The most capable device, the coordinator forms the root of the network tree and might bridge to other networks. It is able to store information about the network. There is exactly one ZigBee coordinator in each network. It also acts as the repository for security keys.

 ZigBee Router (ZR): A router can act as an intermediate node, passing on data from other devices. ZigBee routers typically have their receivers continuously active, requiring a more robust power supply; however, this enables heterogeneous networks, in which some devices receive continuously while others, for the most part, remain asleep, transmitting only when an external stimulus is detected.

 ZigBee End Device (ZED): Contains just enough functionality to talk to its coordinator; it cannot relay data from other devices. A ZED requires the least amount of memory, and therefore can be less expensive to manufacture. It also tends to be the focus of low battery consumption since when it does communicate with its coordinator, it tends to do so infrequently.

1.1.4 Wireless Sensor Networks

Wireless sensor networks are potentially one of the most important technologies of this century. Consequently, billions of dollars are being committed to the research and development of sensor networks in order to address the many technical challenges and the wide range of immediate applications. Advances in hardware development have made available the prospect of low-cost, low-power, miniature devices for use in remote sensing applications. The combination of these factors has improved the viability of utilizing a sensor network consisting of a large number of intelligent sensors, enabling the collection, processing, analysis and dissemination of valuable information gathered in a variety of environments.

A sensor network is an array (possibly very large) of sensors of diverse type interconnected by a communications network. Sensor data is shared between the sensors and used as input to a distributed estimation system which aims to extract as much relevant information as possible from the available sensor data. The fundamental objectives for sensor networks are reliability, accuracy, flexibility, cost effectiveness and ease of deployment.

A sensor network is made up of individual multifunctional sensor nodes. The sensor node itself may be composed of various elements such as multi-mode sensing hardware (acoustic, seismic, infrared, magnetic, chemical, imagers, micro radars), an embedded processor, memory, a power supply, a communications device (wireless or wired) and location determination capabilities (through local or global techniques).

Sensor networks involve three areas: sensing, communications, and computation (hardware, software, algorithms). Very useful technologies are wireless database technology, such as queries used in a wireless sensor network, and network technology to communicate with other sensors, especially multihop routing protocols. For example, ZigBee is a wireless protocol used by Motorola in home control systems.

Sensor networks are predominantly data-centric rather than address-centric. That is, queries are directed to a region containing a cluster of sensors rather than to specific sensor addresses. Given the similarity in the data obtained by sensors in a dense cluster, aggregation of the data is performed locally. That is, a summary or analysis of the local data is prepared by an aggregator node within the cluster, thus reducing the communication bandwidth requirements. Aggregation of data increases the level of accuracy and incorporates data redundancy to compensate for node failures. A network hierarchy and clustering of sensor nodes allows for network scalability, robustness, efficient resource utilization and lower power consumption.

Dissemination of sensor data in an efficient manner requires dedicated routing protocols to identify shortest paths. Redundancy must be accounted for to avoid congestion resulting from different nodes sending and receiving the same information. At the same time, redundancy must be exploited to ensure network reliability. Data dissemination may be either query driven or based on continuous updates. A sensor network can be described by its services, data and physical layers. Recognizing the significance of sensor networks and the associated network protocol requirements, the IEEE has defined a standard for personal area networks (the IEEE 802.15 standard), specifically for networks with a 5 to 10 m radius. Implicit throughout the operation of a sensor network is a variety of information processing techniques for the manipulation and analysis of sensor data, the extraction of significant features, and the efficient storage and transmission of the important information.

Benefits of WSN

 Sensing accuracy: The utilization of a larger number and variety of sensor nodes provides potential for greater accuracy in the information gathered as compared to that obtained from a single sensor. The ability to effectively increase sensing resolution without necessarily increasing network traffic will increase the reliability of the information for the end user application.

 Area coverage: A distributed wireless network incorporating sparse network properties will enable the sensor network to span a greater geographical area without adverse impact on the overall network cost.

 Fault tolerance: Device redundancy and consequently information redundancy can be utilized to ensure a level of fault tolerance in individual sensors.

 Connectivity: Multiple sensor networks may be connected through sink nodes, along with existing wired networks (e.g. the Internet). The clustering of networks enables each individual network to focus on specific areas or events and share only relevant information with other networks, enhancing the overall knowledge base through distributed sensing and information processing.

 Minimal human interaction: The potential for self-organizing and self-maintaining networks, along with a highly adaptive network topology, significantly reduces the need for further human interaction with a network other than the receipt of information.

 Operability in harsh environments: Robust sensor design, integrated with high levels of fault tolerance and network reliability, enables the deployment of sensor networks in dangerous and hostile environments, allowing access to information previously unattainable from such close proximity.

 Dynamic sensor scheduling: Dynamic reaction to network conditions and the optimization of network performance through sensor scheduling. This may be achieved by enabling the sensor nodes to modify communication requirements in response to network conditions and events detected by the network, so that essential information is given the highest priority.

 Changing network topology: The variability of network topologies, due to node failures, the introduction of additional nodes, variations in sensor location, and changes to cluster allocations in response to network demands, requires the adaptability of underlying network structures and operations. Advanced communication protocols are required to support high-level services and real-time operation, adapting rapidly to extreme changes in network conditions.

 Resource optimization: Optimised sensor scheduling for distributed networks, through accurate determination of the required density of sensor nodes in order to minimize cost, power and network traffic loads, while ensuring network reliability and adequate sensor resolution for data accuracy.

 Limitations: power, memory, processing power, life-time. These physical constraints may be minimized through further technological breakthroughs in materials and sensor hardware designs.

 Failure prone: individual sensors are unreliable, particularly in harsh and unpredictable environments. Addressing sensor reliability can reduce the level of redundancy required for a network to operate with the same level of reliability.

 Network congestion resulting from dense network deployment: The quantity of data gathered may exceed the requirements of the network, and so evaluation of the data and transmission of only relevant and adequate information needs to be performed.

Security is a critical factor in sensor networks, given some of the proposed applications. An effective compromise must be obtained between the low bandwidth requirements of sensor network applications and security demands (which traditionally place considerable strain on resources).

Current sensor network applications include military sensing, air traffic control, video surveillance, traffic surveillance, industrial and manufacturing automation, robotics, infrastructure monitoring and environment monitoring. Future applications and capabilities may include the following:

 Low cost, scalable surveillance solutions using unmanned aerial vehicles as an integrated sensor network for defense.

 Advanced surveillance networks incorporating automated anomaly detection and adaptive reasoning in conjunction with secure protocols for event reporting.

 Early disaster monitoring of sensitive environments (in the event of bushfires or flooding for example), employing a large geographically distributed sparse sensor network, utilizing inbuilt communication capabilities to potentially save lives as well as minimize associated economic impacts.

 Reconfigurable networks able to optimize performance and information collection and dissemination according to varying local conditions and sensor node failure or isolation.

 Habitat monitoring of environmentally sensitive areas using wireless distributed sensor networks to collect valuable information such as species diversity, ecosystem structure, and environmental change to determine the impact of factors such as global climate change and over-development.

 Irrigation control utilizing a network of intelligent sensors able to collect various information pertaining to local weather, water supply and soil conditions, so as to provide feedback for the efficient distribution of water for crop management, integrated with long- and short-term weather forecasts.

 Industrial sensing for equipment monitoring and maintenance, as well as efficiency enhancements in process flow.

1.2 Objective of Study

Today, wireless networks are becoming popular as they enable reliable monitoring and analysis of unknown and untested environments. Wireless ad hoc networks are mobile, distributed, multihop wireless networks without a predetermined topology (pre-existing fixed infrastructure) or central control.

Routing (and forwarding) is a core problem in networks for delivering data from one node to another.

The main objective of our project is to monitor the performance of the DSDV routing protocol for wireless sensor networks under varying source-sink distances.

1.3 Limitations of Study

DSDV is one of the simplest routing protocols. Many other, more sophisticated routing protocols, such as AODV and DSR, also exist.

Here, the performance evaluation was done on the basis of link error rate and path length. Further study could have considered the energy consumption of the sensors and node densities.

2. PROJECT OVERVIEW

2.1 Introduction

2.1.1 Mobile Ad Hoc Networks (MANET)

With recent performance advancements in computer and wireless communications technologies, advanced mobile wireless computing is expected to see increasingly widespread use and application, much of which will involve the use of the Internet Protocol (IP) suite. The vision of mobile ad hoc networking is to support robust and efficient operation in mobile wireless networks by incorporating routing functionality into mobile nodes. Such networks are envisioned to have dynamic, sometimes rapidly-changing, random, multihop topologies which are likely composed of relatively bandwidth-constrained wireless links.

Within the Internet community, routing support for mobile hosts is presently being formulated as “mobile IP” technology. This is a technology to support nomadic host “roaming”, where a roaming host may be connected through various means to the Internet other than its well known fixed-address domain space. The host may be directly physically connected to the fixed network on a foreign subnet, or be connected via a wireless link, dial-up line, etc.

Supporting this form of host mobility (or nomadicity) requires address management, protocol interoperability enhancements and the like, but core network functions such as hop-by-hop routing still presently rely upon pre-existing routing protocols operating within the fixed network. In contrast, the goal of mobile ad hoc networking is to extend mobility into the realm of autonomous, mobile, wireless domains, where a set of nodes–which may be combined routers and hosts–themselves form the network routing infrastructure in an ad hoc fashion.

Applications

The technology of Mobile Ad hoc Networking is somewhat synonymous with Mobile Packet Radio Networking (a term coined during early military research in the '70s and '80s), Mobile Mesh Networking (a term that appeared in an article in The Economist regarding the structure of future military networks) and Mobile, Multihop, Wireless Networking (perhaps the most accurate term, although a bit cumbersome).

There is current and future need for dynamic ad hoc networking technology. The emerging field of mobile and nomadic computing, with its current emphasis on mobile IP operation, should gradually broaden and require highly-adaptive mobile networking technology to effectively manage multihop, ad hoc network clusters

which can operate autonomously or, more than likely, be attached at some point(s) to the fixed Internet.

The set of applications for MANETs is diverse, ranging from small, static networks that are constrained by power sources, to large-scale, mobile, highly dynamic networks. The design of network protocols for these networks is a complex issue. Regardless of the application, MANETs need efficient distributed algorithms to determine network organization, link scheduling, and routing.

However, determining viable routing paths and delivering messages in a decentralized environment where network topology fluctuates is not a well-defined problem. While the shortest path (based on a given cost function) from a source to a destination in a static network is usually the optimal route, this idea is not easily extended to MANETs. Factors such as variable wireless link quality, propagation path loss, fading, multi-user interference, power expended, and topological changes, become relevant issues. The network should be able to adaptively alter the routing paths to alleviate any of these effects.

Moreover, in a military environment, preservation of security, latency, reliability, intentional jamming, and recovery from failure are significant concerns. Military networks are designed to maintain a low probability of intercept and/or a low probability of detection. Hence, nodes prefer to radiate as little power as necessary and transmit as infrequently as possible, thus decreasing the probability of detection or interception. A lapse in any of these requirements may degrade the performance and dependability of the network.

Some applications of MANET technology could include

 Industrial and commercial applications: involving cooperative mobile data exchange.

 Mesh-based mobile networks: can be operated as robust, inexpensive alternatives or enhancements to cell-based mobile network infrastructures.

 Military: There are also existing and future military networking requirements for robust, IP-compliant data services within mobile wireless communication networks, and many of these networks consist of highly dynamic, autonomous topology segments.

 Upcoming new technologies: Also, the developing technologies of “wearable” computing and communications may provide applications for MANET technology. When properly combined with satellite-based information delivery, MANET technology can provide an extremely flexible method for establishing communications for fire/safety/rescue operations or other scenarios requiring rapidly-deployable communications with survivable, efficient dynamic networking.

There are likely other applications for MANET technology which are not presently realized or envisioned by the authors. It is, simply put, improved IP-based networking technology for dynamic, autonomous wireless networks.

Characteristics of MANETs

A MANET consists of mobile platforms (e.g., a router with multiple hosts and wireless communications devices)–herein simply referred to as “nodes”–which are free to move about arbitrarily. The nodes may be located in or on airplanes, ships, trucks, cars, perhaps even on people or very small devices, and there may be multiple hosts per router. A MANET is an autonomous system of mobile nodes. The system may operate in isolation, or may have gateways to and interface with a fixed network. In the latter operational mode, it is typically envisioned to operate as a “stub” network connecting to a fixed internetwork. Stub networks carry traffic originating at and/or destined for internal nodes, but do not permit exogenous traffic to “transit” through the stub network.

MANET nodes are equipped with wireless transmitters and receivers using antennas which may be omni directional (broadcast), highly-directional (point-to-point), possibly steerable, or some combination thereof. At a given point in time, depending on the nodes’ positions and their transmitter and receiver coverage patterns, transmission power levels and co-channel interference levels, a wireless connectivity in the form of a random, multihop graph or “ad hoc” network exists between the nodes. This ad hoc topology may change with time as the nodes move or adjust their transmission and reception parameters.

MANETs have several salient characteristics:

 Dynamic topologies: Nodes are free to move arbitrarily; thus, the network topology–which is typically multihop–may change randomly and rapidly at unpredictable times, and may consist of both bidirectional and unidirectional links.

 Bandwidth-constrained, variable capacity links: Wireless links will continue to have significantly lower capacity than their hardwired counterparts. In addition, the realized throughput of wireless communications–after accounting for the effects of multiple access, fading, noise, and interference conditions, etc.–is often much less than a radio's maximum transmission rate. One effect of the relatively low to moderate link capacities is that congestion is typically the norm rather than the exception, i.e. aggregate application demand will likely approach or exceed network capacity frequently. As the mobile network is often simply an extension of the fixed network infrastructure, mobile ad hoc users will demand similar services. These demands will continue to increase as multimedia computing and collaborative networking applications rise.

 Energy-constrained operation: Some or all of the nodes in a MANET may rely on batteries or other exhaustible means for their energy. For these nodes, the most important system design criteria for optimization may be energy conservation.

 Limited physical security: Mobile wireless networks are generally more prone to physical security threats than are fixed-cable nets. The increased

possibility of eavesdropping, spoofing, and denial-of-service attacks should be carefully considered. Existing link security techniques are often applied within wireless networks to reduce security threats. As a benefit, the decentralized nature of network control in MANETs provides additional robustness against the single points of failure of more centralized approaches.

In addition, some envisioned networks (e.g. mobile military networks or highway networks) may be relatively large (e.g. tens or hundreds of nodes per routing area). The need for scalability is not unique to MANETs. However, in light of the preceding characteristics, the mechanisms required to achieve scalability likely are.

These characteristics create a set of underlying assumptions and performance concerns for protocol design which extend beyond those guiding the design of routing within the higher-speed, semi-static topology of the fixed Internet.

2.1.2 Routing Protocols in MANETs

Mobile ad-hoc network (MANET) routing protocols play a fundamental role in a possible future of ubiquitous devices. The challenge is for MANET routing protocols to provide a communication platform that is solid, adaptive and dynamic in the face of widely fluctuating wireless channel characteristics and node mobility.

Destination-Sequenced Distance Vector (DSDV)

In classical distance vector routing each node vi maintains a set of distances or cost d(i,j,D) for each possible destination D via hop vj which is a direct neighbor of vi. When a node has to select one of its direct neighbors vj to relay a data packet, it selects the entry with the minimum distance. To keep the set of distances up to date, each node periodically broadcasts its routing table to all neighbors. A node which receives an update message from one of its neighbors updates its own routing table and applies a shortest path algorithm, for instance Dijkstra’s algorithm.

Destination-Sequenced Distance Vector Routing (DSDV) applies this approach to mobile ad-hoc networks. In DSDV, routing information is exchanged when significant new information is available, for instance, when the neighborhood of a node changes. This initial approach generates considerable overhead in highly mobile ad-hoc networks. To reduce the overhead, two different kinds of update have been proposed: the full dump and the incremental dump. A full dump contains the whole routing table of a node. In contrast, an incremental dump contains only the changes since the last full dump.
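To make the two update styles concrete, the following Python sketch (purely illustrative; the table layout and method names are our own assumptions, not DSDV's wire format) shows a node keeping track of which entries have changed so that it can send either a full dump or an incremental dump.

from typing import Dict, List, Set, Tuple

class DsdvAdvertiser:
    """Track which routing-table entries changed since the last full dump.

    The entry layout (destination -> (metric, seq_no)) is an assumption
    made for illustration only.
    """

    def __init__(self) -> None:
        self.table: Dict[str, Tuple[int, int]] = {}
        self.changed: Set[str] = set()

    def update_entry(self, dest: str, metric: int, seq_no: int) -> None:
        """Record a new or modified route and remember it for the next dump."""
        self.table[dest] = (metric, seq_no)
        self.changed.add(dest)

    def full_dump(self) -> List[Tuple[str, int, int]]:
        """Advertise the whole routing table and reset the change log."""
        self.changed.clear()
        return [(dest, metric, seq) for dest, (metric, seq) in self.table.items()]

    def incremental_dump(self) -> List[Tuple[str, int, int]]:
        """Advertise only the entries changed since the last full dump."""
        return [(dest, *self.table[dest]) for dest in sorted(self.changed)]

adv = DsdvAdvertiser()
adv.update_entry("B", metric=1, seq_no=100)
adv.full_dump()
adv.update_entry("C", metric=2, seq_no=50)
print(adv.incremental_dump())   # only the new route to C is advertised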

DSDV avoids loops by associating a sequence number with each routing table entry. This allows nodes to distinguish between old and new routing information. A node selects the route table entry with the highest sequence number. In case of several routes for one destination with equal sequence numbers the lower cost route will be selected.

Ad Hoc On-Demand Distance-Vector Protocol (AODV)

The Ad Hoc On-Demand Distance-Vector Protocol (AODV) is another distance-vector routing protocol for mobile ad-hoc networks. AODV is an on-demand routing approach, i.e. there are no periodic exchanges of routing information.

The protocol consists of two phases:

 Route discovery

 Route maintenance

A node wishing to communicate with another node first looks for a route in its routing table. If it finds one, the communication starts immediately; otherwise the node initiates a route discovery phase. The route discovery process consists of a route-request message (RREQ) which is broadcast. If a node has a valid route to the destination, it replies to the route request with a route-reply (RREP) message. Additionally, the replying node creates a so-called reverse route entry in its routing table, which contains the address of the source node, the number of hops to the source, and the next hop's address, i.e. the address of the node from which the message was received. A lifetime is associated with each reverse route entry, i.e. if the route entry is not used within the lifetime it will be removed.
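A minimal sketch of this request handling is given below, assuming an idealized node model; the message tuples, field names and default lifetime are illustrative assumptions, not AODV's actual packet format.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ReverseRoute:
    source: str      # originator of the RREQ
    next_hop: str    # neighbour the RREQ arrived from
    hops: int        # hop count back to the source
    lifetime: float  # entry is discarded once this expires

class AodvNode:
    def __init__(self, addr: str, routes: Optional[Dict[str, str]] = None):
        self.addr = addr
        self.routes = routes or {}          # destination -> next hop (valid routes)
        self.reverse: Dict[str, ReverseRoute] = {}

    def handle_rreq(self, src: str, dst: str, hops: int,
                    prev_hop: str, lifetime: float = 3.0):
        """Process a route request: reply if a route is known, else re-flood."""
        # Remember how to reach the source so an eventual RREP can travel back.
        self.reverse[src] = ReverseRoute(src, prev_hop, hops + 1, lifetime)
        if dst == self.addr or dst in self.routes:
            return ("RREP", dst, src)        # reply travels back along the reverse route
        return ("RREQ", src, dst, hops + 1)  # rebroadcast with incremented hop count

node = AodvNode("B", routes={"D": "C"})
print(node.handle_rreq(src="A", dst="D", hops=0, prev_hop="A"))  # -> an RREP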

The second phase of the protocol is called route maintenance. It is performed by the source node and can be subdivided into:

 Source node moves: source node initiates a new route discovery process

 Destination or an intermediate node moves: a route error message (RERR) is sent to the source node. Intermediate nodes receiving a RERR update their routing table by setting the distance of the destination to infinity. If the source node receives a RERR it will initiate a new route discovery.

To prevent global broadcast messages, AODV introduces local connectivity management. This is done by periodic exchanges of so-called HELLO messages, which are small RREP packets containing a node's address and additional information.

Dynamic Source Routing (DSR)

The basic idea of source routing is that the source node includes the full routing information in each data packet, e.g. (vS, v1, v2, ..., vD) for a packet to vD which is routed via v1, v2, etc.

The Dynamic Source Routing (DSR) applies the method of source routing to mobile ad-hoc networks. The main question is how to obtain a source route for a certain destination.

DSR uses two phases:

 Route discovery

 Route maintenance

If a source node vS does not have a route for a certain destination vD, it initiates a route discovery process by broadcasting a route request RREQ to its neighbors. The RREQ is a small packet containing vS, vD, a unique id RREQ_ID, and LSD, which is the list of nodes that have forwarded the RREQ. An intermediate node receiving a RREQ for the first time appends its address to LSD and broadcasts it to its neighbors, but not back to the node from which the request came.

If the destination node vD receives the route request it extracts LSD, creates a route reply message RREP containing LSD and returns it to the source node.

The second phase of the approach is route maintenance. When a node vi forwards a data packet to node vj, it expects a confirmation from vj. If vi does not get any confirmation within a certain time interval, it will send a route error message RERR to the source node containing the link over which the forwarding has failed. Subsequently, the source node searches for an alternative route in its routing table or initiates a new route discovery process.
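The discovery phase can be sketched as a flood in which every forwarding node appends its address to the request. The helper below is an illustrative Python model of that behaviour under the assumption of a static, known topology; the topology dictionary and function name are our own, not part of the DSR specification.

from collections import deque
from typing import Dict, List, Optional

def dsr_route_discovery(neighbours: Dict[str, List[str]],
                        src: str, dst: str) -> Optional[List[str]]:
    """Flood a route request; each forwarding node appends its address.

    The first copy of the RREQ to reach the destination carries the source
    route that the RREP would return to the originator. A node forwards a
    given request only once, modelled here by the 'visited' set.
    """
    queue = deque([[src]])        # each queue entry is the LSD built so far
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path           # contents of the RREP: the complete source route
        for nxt in neighbours.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                   # no route found

topology = {"S": ["A", "B"], "A": ["D"], "B": ["A"], "D": []}
print(dsr_route_discovery(topology, "S", "D"))   # e.g. ['S', 'A', 'D']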

2.1.3 Introduction to motes

A mote has a sensor unit, a power unit, a transceiver unit, an ADC unit and a processor. We have two kinds of motes:

 MICA2

 MICA2DOT

These two contain a processor and a transceiver unit, and they have connectors for attaching sensor boards to them. A sensor board consists of a set of sensing units. We have the MTS300CA sensor board, which can be interfaced with the MICA2 mote. The sensing units in the MTS300CA are:

 Light

 Temperature

 Acoustic

 Sounder

The motes can be programmed by attaching them to the Mote Interface Board (MIB500CA). The MIB can be interfaced with the PC by connecting it to the parallel port. The mote has an on-board flash that can be programmed. The mote that is to be programmed is connected to the 51-pin male connector on the board.

Motes run a multithreaded operating system called TinyOS. TinyOS is based on a component model. Each component declares the commands it uses and the events it will signal. A simple FIFO scheduler will be part of each program uploaded onto the mote. The program will consist of the code for the components that will be used. The components communicate with each other by passing commands. Events are usually initiated by hardware devices. Based on the event, the component related to that event will generate one or more commands to other components.

The TinyOS system, libraries and applications are written in the nesC language. The machine to which the MIB is connected will contain the TinyOS code, i.e. the code for the components. We write our code using the component code that is already present in TinyOS. To compile the program, we use the ncc compiler; the output, by default, is called main.exe. Then avr-objcopy converts the exe file produced by ncc into a text format that can be used for programming the mote's flash. The sensor board collects data and sends it to the mote, which can either store the data or transmit it to the base station. A base station is nothing but a mote attached to a Mote Interface Board (MIB) that is interfaced to a PC via the parallel port.

2.2 Statement of Problem

Routing (and forwarding) is a core problem in networks for delivering data from one node to another. Today, wireless networks are becoming popular because of their “3 Anys”–Any person, Anywhere and Any time. However, wireless networks have special limitations and properties such as limited bandwidth, highly dynamic topology, link interference, limited range of links, and broadcast. Therefore, routing protocols for wired networks cannot be directly used in wireless networks; routing protocols for wireless networks need to be designed and implemented separately.

In a wireless ad hoc network, there is no predetermined topology (preexisting fixed infrastructure) and no central control. The nodes in ad hoc networks communicate without wired connections among themselves by creating a network instantaneously.

There are different criteria for designing and classifying routing protocols for wireless ad hoc networks.

Link state routing (LSR) vs. distance vector routing (DVR)

As with conventional wired networks, Link state routing (LSR) and Distance vector routing (DVR) are two underlying mechanisms for routing in wireless ad hoc networks. In LSR, routing information is exchanged in the form of link state packets (LSP). The LSP of a node includes link information about its neighbors. Any link change will cause LSPs to be flooded into the entire network immediately. Every node can construct and maintain a global network topology from the LSPs it receives, and compute, by itself, routes to all other nodes. The problem with LSR is that excessive routing overhead may be incurred because nodes in a wireless ad hoc network move quickly and the network topology changes fast.

In DVR, every node maintains a distance vector which includes, but is not limited to, the triad (destination ID, next hop, (shortest) distance) for every destination. Every node periodically exchanges distance vectors with its neighbors. When a node receives distance vectors from its neighbors, it computes new routes and updates its distance vector. The complete route from a source to a destination is formed, in a distributed manner, by combining the next hop of nodes on the path from the source to the destination. The problems with DVR are slow convergence and the tendency of creating routing loops.
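A small sketch of this update step is given below, assuming unit link costs and using an illustrative table layout (destination mapped to a next-hop/distance pair). It is a simplified relaxation that installs only strictly better routes, not a full DVR implementation.

from typing import Dict, Tuple

# Each node's table maps destination -> (next_hop, distance).
Table = Dict[str, Tuple[str, int]]

def merge_vector(my_table: Table, neighbour: str,
                 neighbour_vector: Dict[str, int], link_cost: int = 1) -> bool:
    """Update my distance vector from a neighbour's advertised distances.

    Returns True if any entry changed, in which case the node would
    re-advertise its own vector to its neighbours.
    """
    changed = False
    for dest, dist in neighbour_vector.items():
        candidate = dist + link_cost
        if dest not in my_table or candidate < my_table[dest][1]:
            my_table[dest] = (neighbour, candidate)
            changed = True
    return changed

table: Table = {"B": ("B", 1)}
merge_vector(table, "B", {"C": 1, "D": 2})
print(table)   # {'B': ('B', 1), 'C': ('B', 2), 'D': ('B', 3)}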

Precomputed routing vs. On-demand routing

Depending on when the route is computed, routing protocols can be divided into two categories:

 Precomputed routing

 On-demand routing

Precomputed routing is also called proactive routing or table-driven routing. In this method, the routes to all destinations are computed a priori. In order to compute routes in advance, nodes need to store the entire or partial information about link states and network topology. In order to keep this information up to date, nodes need to update it periodically or whenever the link state or network topology changes. The advantage of precomputed routing is that when a source needs to send packets to a destination, the route is already available, i.e., there is no latency. The disadvantage is that some routes may never be used. Another problem is that the dissemination of routing information consumes a lot of the scarce wireless network bandwidth when the link state and network topology change fast (this is especially true in a wireless ad hoc network). The conventional LSR and DVR are examples of proactive routing.

On-demand routing is also called reactive routing. In this method, the route to a destination may not exist in advance and it is computed only when the route is needed.

The idea is as follows: When a source needs to send packets to a destination, it first finds a route or several routes to the destination. This process is called route discovery. After the route(s) are discovered, the source transmits packets along the route(s). During the transmission of packets, the route may be broken because the node(s) on the route move away or go down. The broken route needs to be rebuilt. The process of detecting route breakage and rebuilding the route is called route maintenance.

The major advantage of on-demand routing is that the precious bandwidth of wireless ad hoc networks is greatly saved because it limits the amount of bandwidth consumed in the exchange of routing information by maintaining routes to only those destinations to which the routers need to forward data traffic. On-demand routing also obviates the need for disseminating routing information periodically, or flooding such information whenever a link state changes. The primary problem with on-demand routing is the large latency at the beginning of the transmission caused by route discovery.

Apart from proactive route computation and reactive route discovery, there is another routing mechanism, called flooding. In flooding, no route is computed or discovered. A packet is broadcast to all nodes in the network with the expectation that at least one copy of the packet will reach the destination. Scoping may be used to limit the overhead of flooding. Flooding is the easiest routing method because it requires no knowledge of the network topology.
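A toy model of flooding with duplicate suppression and a TTL-based scope is sketched below; the data structures are assumptions chosen for clarity and are not part of any particular protocol.

from typing import Dict, List, Set, Tuple

def flood(neighbours: Dict[str, List[str]], origin: str,
          packet_id: int, ttl: int) -> Set[str]:
    """Deliver a packet to every reachable node by blind rebroadcast.

    Each node forwards a given (origin, packet_id) at most once, and the
    TTL implements scoping by bounding the hop radius of the broadcast.
    """
    delivered: Set[str] = set()
    seen: Set[Tuple[str, int]] = set()
    frontier = [(origin, ttl)]
    while frontier:
        node, hops_left = frontier.pop()
        if (node, packet_id) in seen:
            continue                      # duplicate copy: drop silently
        seen.add((node, packet_id))
        delivered.add(node)
        if hops_left > 0:
            for nxt in neighbours.get(node, []):
                frontier.append((nxt, hops_left - 1))
    return delivered

topology = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(flood(topology, "A", packet_id=1, ttl=2))   # {'A', 'B', 'C', 'D'}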

Periodical update vs. event-driven update

Routing information needs to be disseminated to network nodes in order to ensure that the knowledge of link state and network topology remains up-to-date. Based on when the routing information will be disseminated, we can classify routing protocols as periodical update and event-driven update protocols.

Periodical update protocols disseminate routing information periodically. Periodical updates simplify protocols and maintain network stability and, most importantly, enable (new) nodes to learn about the topology and the state of the network. However, if the period between updates is large, the protocol may not keep the information up-to-date. On the other hand, if the period is small, too many routing packets will be disseminated, consuming the precious bandwidth of the wireless network.

In an event-driven update protocol, when events occur, (such as when a link fails or a new link appears), an update packet will be broadcast and the up-to-date status can be disseminated over the network soon. The problem might be that if the topology of networks changes rapidly, a lot of update packets will be generated and disseminated over the network which will use a lot of precious bandwidth, and furthermore, may cause too much fluctuation of routes. Periodical update and event-driven update mechanisms can be used together, forming what is called a hybrid update mechanism. For example, in DSDV, a node broadcasts its distance-vector periodically. Moreover, whenever a node finds that a link is broken, it distributes a message immediately.

Decentralized computation vs. distributed computation

Based on how (or where) a route is computed, there are two categories of routing protocols: decentralized computation and distributed computation. In a decentralized computation-based protocol, every node in the network maintains global and complete information about the network topology such that the node can compute the route to a destination itself when desired.

The route computation in LSR is a typical example of decentralized computation. In a distributed computation-based protocol, every node in the network only maintains partial and local information about the network topology. When a route needs to be computed, many nodes collaborate to compute the route. The route computation in DVR and the route discovery in on-demand routing belong to this category.

Source routing vs. hop-by-hop routing

Some routing protocols place the entire route (i.e., nodes in the route) in the headers of data packets so that the intermediate nodes only forward these packets according to the route in the header. Such a routing is called “source routing”. Source routing has the advantage that intermediate nodes do not need to maintain up-to-date routing information in order to route the packets they forward, since the packets themselves already contain all the routing decisions.

This fact, when coupled with on-demand route computation, eliminates the need for the periodic route advertisement and neighbor detection packets required in other kinds of protocols. The biggest problem with source routing is that when the network

is large and the route is long, placing the entire route in the header of every packet will waste a lot of scarce bandwidth.

In hop-by-hop routing, the route to a destination is distributed in the “next hop” of the nodes along the route. When a node receives a packet to a destination, it forwards the packet to the next hop corresponding to the destination. The problems are that all nodes need to maintain routing information and there may be a possibility of forming a routing loop.
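The difference between the two forwarding styles can be seen in a few lines; the sketch below is illustrative only, with hypothetical tables and routes.

from typing import Dict, List

def forward_source_routed(packet_route: List[str], current: str) -> str:
    """Source routing: the header already lists every hop, so the node
    simply hands the packet to the next address after itself."""
    idx = packet_route.index(current)
    return packet_route[idx + 1]

def forward_hop_by_hop(routing_table: Dict[str, str], destination: str) -> str:
    """Hop-by-hop routing: only the destination travels in the packet,
    and each node consults its own table for the next hop."""
    return routing_table[destination]

# The same forwarding decision made both ways at node 'B'.
print(forward_source_routed(["A", "B", "C", "D"], current="B"))   # 'C'
print(forward_hop_by_hop({"D": "C"}, destination="D"))            # 'C'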

2.3 Methodology

2.3.1 Destination-Sequenced Distance Vector (DSDV) Protocol

Our proposed routing method allows a collection of mobile computers, which may not be close to any base station and can exchange data along changing and arbitrary paths of interconnection, to afford all computers among their number a (possibly multi-hop) path along which data can be exchanged.

Packets are transmitted between the stations of the network by using routing tables which are stored at each station of the network. Each routing table, at each of the stations, lists all available destinations and the number of hops to each. Each route table entry is tagged with a sequence number which is originated by the destination station. To maintain the consistency of routing tables in a dynamically varying topology, each station periodically transmits updates, and transmits updates immediately when significant new information is available.

Since we do not assume that the mobile hosts are maintaining any sort of time synchronization, we also make no assumption about the phase relationship of the update periods between the mobile hosts. These packets indicate which stations are accessible from each station and the number of hops necessary to reach these accessible stations, as is often done in distance-vector routing algorithms.

Routing information is advertised by broadcasting or multicasting the packets which are transmitted periodically and incrementally as topological changes are detected – for instance, when stations move within the network. Data is also kept about the length of time between arrival of the first and the arrival of the best route for each particular destination. Based on this data, a decision may be made to delay advertising routes which are about to change soon, thus damping fluctuations of the route tables. The advertisement of routes which may not have stabilized yet is delayed in order to reduce the number of rebroadcasts of possible route entries that normally arrive with the same sequence number.
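One way this damping could be realized is sketched below in Python. The weighted-average factor and the rule of waiting roughly twice the average settling time are assumptions made for illustration; they are not values taken from the project or from the DSDV specification.

import time
from typing import Dict, Tuple

class SettlingTimer:
    """Per-destination estimate of the time between hearing the first route
    with a given sequence number and hearing the best route for it."""

    def __init__(self, alpha: float = 0.8):
        self.alpha = alpha                       # weighting of past samples (assumed)
        self.avg_settling: Dict[str, float] = {}
        self.first_heard: Dict[Tuple[str, int], float] = {}

    def route_heard(self, dest: str, seq_no: int, improves_metric: bool) -> None:
        now = time.monotonic()
        key = (dest, seq_no)
        if key not in self.first_heard:
            self.first_heard[key] = now          # first route for this sequence number
        elif improves_metric:
            sample = now - self.first_heard[key]
            prev = self.avg_settling.get(dest, sample)
            self.avg_settling[dest] = self.alpha * prev + (1 - self.alpha) * sample

    def advertise_delay(self, dest: str) -> float:
        """How long to hold back an advertisement for a route that may still settle."""
        return 2.0 * self.avg_settling.get(dest, 0.0)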

The DSDV protocol requires each mobile station to advertise, to each of its current neighbors, its own routing table (for instance, by broadcasting its entries). The entries in this list may change fairly dynamically over time, so the advertisement must be made often enough to ensure that every mobile computer can almost always locate every other mobile computer of the collection. In addition, each mobile computer agrees to relay data packets to other computers upon request. This agreement places a premium on the ability to determine the shortest number of hops for a route to a destination; we would like to avoid unnecessarily disturbing mobile hosts if they are in sleep mode.

In this way a mobile computer may exchange data with any other mobile computer in the group even if the target of the data is not within range for direct communication. If the notification of which other mobile computers are accessible from any particular computer in the collection is done at layer 2, then DSDV will work with whatever higher layer (e.g., Network Layer) protocol might be in use.

All the computers interoperating to create data paths between themselves broadcast the necessary data periodically, say once every few seconds. In a wireless medium, it is important to keep in mind that broadcasts are limited in range by the physical characteristics of the medium. This is different from the situation with wired media, which usually have a much more well-defined range of reception.

The data broadcast by each mobile computer will contain its new sequence number and the following information for each new route:

 The destination's address
 The number of hops required to reach the destination
 The sequence number of the information received regarding that destination, as originally stamped by the destination.
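
For illustration, the per-destination state that DSDV advertises and stores could be represented by a structure like the one below. This is a rough C sketch of our own, not code from the DSDV specification; it also includes the next-hop and install-time fields discussed later in this section.

#include <stdint.h>

/* One DSDV routing-table entry, using the fields described in the
 * text: destination address, hop count, the sequence number stamped
 * by the destination, plus the next hop and install time discussed
 * later.  Names and widths are our own choices. */
typedef struct {
    uint16_t destination;    /* address of the destination node        */
    uint16_t next_hop;       /* neighbor used to reach it              */
    uint16_t metric;         /* number of hops to the destination      */
    uint32_t seq_no;         /* even = issued by the destination,
                                odd  = generated to report a break     */
    uint32_t install_time;   /* when this entry was (re)installed      */
} dsdv_route_t;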

The transmitted routing tables will also contain the hardware address, and (if appropriate) the network address, of the mobile computer transmitting them, within the headers of the packet. The routing table will also include a sequence number created by the transmitter. Routes with more recent sequence numbers are always preferred as the basis for making forwarding decisions, but not necessarily advertised. Of the paths with the same sequence number, those with the smallest metric will be used. By the natural way in which the routing tables are propagated, the sequence number is sent to all mobile computers which may each decide to maintain a routing entry for that originating mobile computer.

Routes received in broadcasts are also advertised by the receiver when it subsequently broadcasts its routing information; the receiver adds an increment to the metric before advertising the route, since incoming packets will require one more hop to reach the destination (namely, the hop from the transmitter to the receiver). Again, we do not explicitly consider here the changes required to use metrics which do not use the hop count to the destination.

One of the most important parameters to be chosen is the time between broadcasting the routing information packets. However, when any new or substantially modified route information is received by a Mobile Host, the new information will be retransmitted soon (subject to constraints imposed for damping route fluctuations), effecting the most rapid possible dissemination of routing information among all the cooperating Mobile Hosts. This quick re-broadcast introduces a new requirement for our protocols to converge as soon as possible. It would be calamitous if the movement of a Mobile Host caused a storm of broadcasts, degrading the availability of the wireless medium.

Mobile Hosts cause broken links as they move from place to place. A broken link may be detected by the layer-2 protocol, or it may instead be inferred if no broadcasts have been received for a while from a former neighbor. A broken link is described by a metric of ∞ (i.e., any value greater than the maximum allowed metric). When a link to a next hop has broken, any route through that next hop is immediately assigned an ∞ metric and assigned an updated sequence number. Since this qualifies as a substantial route change, such modified routes are immediately disclosed in a broadcast routing information packet. Describing broken links is the only situation in which the sequence number is generated by a Mobile Host other than the destination Mobile Host.

Sequence numbers defined by the originating Mobile Hosts are defined to be even numbers, and sequence numbers generated to indicate ∞ metrics are odd numbers. In this way any "real" sequence numbers will supersede an ∞ metric. When a node receives an ∞ metric, and it has a later sequence number with a finite metric, it triggers a route update broadcast to disseminate the important news about that destination.
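
The broken-link rule above can be sketched as follows, reusing the illustrative dsdv_route_t structure from earlier. This is only a sketch under our own assumptions, not the actual protocol code.

#define DSDV_INF_METRIC 0xFFFFu   /* stands in for the "infinite" metric */

/* Mark every route whose next hop is the broken neighbor `n`:
 * infinite metric plus an odd (locally generated) sequence number,
 * then flag the change for immediate advertisement. */
void dsdv_link_broken(dsdv_route_t *table, int n_entries, uint16_t n)
{
    for (int i = 0; i < n_entries; i++) {
        if (table[i].next_hop == n && table[i].metric != DSDV_INF_METRIC) {
            table[i].metric = DSDV_INF_METRIC;
            table[i].seq_no += 1;   /* even -> odd marks the break */
            /* an immediate incremental update would be triggered here */
        }
    }
}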

In a very large population of Mobile Hosts, adjustments will likely be made in the time between broadcasts of the routing information packets. In order to reduce the amount of information carried in these packets, two types will be defined. One will carry all the available routing information, called a "full dump". The other type will carry only information changed since the last full dump, called an "incremental". By design, an incremental routing update should fit in one network protocol data unit (NPDU). The full dump will most likely require multiple NPDUs, even for relatively small populations of Mobile Hosts. Full dumps can be transmitted relatively infrequently when no movement of Mobile Hosts is occurring. When movement becomes frequent, and the size of an incremental approaches the size of an NPDU, then a full dump can be scheduled (so that the next incremental will be smaller).

It is expected that mobile nodes will implement some means for determining which route changes are significant enough to be sent out with each incremental advertisement. For instance, when a stabilized route shows a different metric for some destination, that would likely constitute a significant change that needed to be advertised after stabilization.

If a new sequence number for a route is received, but the metric stays the same, that would be unlikely to be considered a significant change. When a Mobile Host receives new routing information (usually in an incremental packet as just described), that information is compared to the information already available from previous routing information packets. Any route with a more recent sequence number is used. Routes with older sequence numbers are discarded.

A route with a sequence number equal to an existing route is chosen if it has a “better” metric, and the existing route discarded, or stored as less preferable. The metrics for routes chosen from the newly received broadcast information are each incremented by one hop. Newly recorded routes are scheduled for immediate advertisement to the current Mobile Host’s neighbors. Routes which show an improved metric are scheduled for advertisement at a time which depends on the average settling time for routes to the particular destination under consideration.
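
Putting the selection rules together, the handling of one advertised entry could look like the following sketch, again building on the illustrative structures above; this is not the code used in our simulation.

#include <stdbool.h>

/* Decide whether an advertised entry `adv`, heard from neighbor
 * `from`, should replace the current entry for the same destination:
 * a newer sequence number always wins; for equal sequence numbers the
 * smaller metric wins; the metric is incremented by one hop for the
 * link from the transmitter to us. */
bool dsdv_consider(dsdv_route_t *current, const dsdv_route_t *adv,
                   uint16_t from, uint32_t now)
{
    uint16_t new_metric = (adv->metric == DSDV_INF_METRIC)
                              ? DSDV_INF_METRIC
                              : (uint16_t)(adv->metric + 1);

    bool newer  = adv->seq_no > current->seq_no;
    bool better = (adv->seq_no == current->seq_no) &&
                  (new_metric < current->metric);

    if (newer || better) {
        current->next_hop     = from;
        current->metric       = new_metric;
        current->seq_no       = adv->seq_no;
        current->install_time = now;
        return true;    /* caller schedules this entry for advertisement */
    }
    return false;       /* older, or no better: discard */
}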

Timing skews between the various Mobile Hosts are expected. The broadcasts of routing information by the Mobile Hosts are to be regarded as somewhat asynchronous events, even though some regularity is expected. In such a population of independently transmitting agents, some fluctuation could develop using the above procedures for updating routes. It could turn out that a particular Mobile Host would receive new routing information in a pattern which causes it to consistently change routes from one next hop to another, even when the destination Mobile Host has not moved. This happens because there are two ways for new routes to be chosen; they might have a later sequence number, or they might have a better metric.

A Mobile Host could conceivably always receive two routes to the same destination, with a newer sequence number, one after another (via different neighbors), but always get the route with the worse metric first. Unless care is taken, this will lead to a continuing burst of new route transmittals upon every new sequence number from that destination. Each new metric is propagated to every Mobile Host in the neighborhood, which propagates to their neighbors and so on.

One solution is to delay the advertisement of such routes, when a Mobile Host can determine that a route with a better metric is likely to show up soon. The route with the later sequence number must be available for use, but it does not have to be advertised immediately unless it is a route to a destination which was previously unreachable.

Thus, there will be two routing tables kept at each Mobile Host; one for use with forwarding packets, and another to be advertised via incremental routing information packets. To determine the probability of imminent arrival of routing information showing a better metric, the Mobile Host has to keep a history of the weighted average time that routes to a particular destination fluctuate until the route with the best metric is received.
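
A simple way to realize the settling-time bookkeeping described here is an exponentially weighted average, as sketched below. The smoothing factor and the factor of two used for the advertisement delay are illustrative choices on our part, not values prescribed by the protocol description above.

#include <stdint.h>

#define ALPHA 0.875   /* weight given to the existing history (our choice) */

typedef struct {
    double   settling_time;   /* weighted average fluctuation time, ms */
    uint32_t first_arrival;   /* arrival time of the first route seen
                                 for the newest sequence number        */
} dsdv_history_t;

/* Called when the best-metric route for a sequence number has arrived. */
void dsdv_update_settling(dsdv_history_t *h, uint32_t now)
{
    double sample = (double)(now - h->first_arrival);
    h->settling_time = ALPHA * h->settling_time + (1.0 - ALPHA) * sample;
}

/* An improved (same-sequence-number) route is held back roughly this
 * long before being advertised. */
double dsdv_advert_delay(const dsdv_history_t *h)
{
    return 2.0 * h->settling_time;   /* factor of two is illustrative */
}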

Examples of DSDV in operation

Figure 1: Movement in an ad-hoc network

Consider MH4 in Figure 1. Table 1 shows a possible structure of the forwarding table which is maintained at MH4.

Suppose the address of each Mobile Host is represented as MHi. Suppose further that all sequence numbers are denoted SNNN_MHi, where MHi specifies the computer that created the sequence number and SNNN is a sequence number value. Also suppose that there are entries for all other Mobile Hosts, with sequence numbers SNNN_MHi, before MH1 moves away from MH2.

The install time field helps determine when to delete stale routes. With our protocol, the deletion of stale routes should rarely occur, since the detection of link breakages should propagate through the ad-hoc network immediately. Nevertheless, we expect to continue to monitor for the existence of stale routes and take appropriate action.

Table 1: Structure of the MH4 forwarding table

From Table 1, one could surmise, for instance, that all the computers became available to MH4 at about the same time, since the install time for most of them is about the same. One could also surmise that none of the links between the computers were broken, because all of the sequence number fields have even digits in the units place. Ptr1_MHi would all be pointers to null structures, because there are not any routes in Figure 1 which are likely to be superseded or compete with other possible routes to any particular destination.

Table 2 shows the structure of the advertised route table of MH4.

Table 2: The structure of the advertised route table of MH4.

Now suppose that MH1 moves into the general vicinity of MH5 and MH7, and away from the others (especially MH2). Only the entry for MH1 will show a new metric, but in the intervening time, many new sequence number entries have been received. The first entry thus must be advertised in subsequent incremental routing information updates until the next full dump occurs.

When MH1 moved into the vicinity of MH5 and MH7, it triggered an immediate incremental routing information update which was then broadcast to MH6. MH6, having determined that significant new routing information had been received, also triggered an immediate update which carried along the new routing information for MH1. MH4, upon receiving this information, would then broadcast it at every interval until the next full routing information dump. At MH4, the incremental advertised routing update would have the form shown in Table 3.

Table 3: MH4 advertised table (updated)

In this advertisement, the information for MH4 comes first, since it is doing the advertisement. The information for MH1 comes next, not because it has a lower address, but because MH1 is the only one which has any significant route changes affecting it. As a general rule, routes with changed metrics are first included in each incremental packet. The remaining space is used to include those routes whose sequence numbers have changed. In this example, one node has changed its routing information, since it is in a new location.

All nodes have transmitted new sequence numbers recently. If there were too many updated sequence numbers to fit in a single packet, only the ones which fit would be transmitted. These would be selected with a view to fairly transmitting them in their turn over several incremental update intervals.

There is no such required format for the transmission of full routing information packets. As many packets are used as are needed, and all available information is transmitted. The frequency of transmitting full updates would be reduced if the volume of data began to consume a significant fraction of the available capacity of the medium.

2.3.2 TraceRouteTest Application

Introduction

This TraceRouteTest application tests the basic functionality of the traceroute, DSDV routing, and setting modules.

Functionality

In the default configuration, each mote sends a traceroute packet back to the sink node every 5 seconds, along with 1 byte of piggyback information. As a packet multi-hops back to the sink node, each intermediate node along the path adds its node ID (one byte) into the traceroute payload. The sink node receives the complete traceroute message.
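
Conceptually, each intermediate node performs the following operation on the traceroute payload before forwarding it. This C sketch is only for illustration; the real application is a nesC module, and the buffer layout and names shown here are our own assumptions.

#include <stdint.h>

#define TRACE_MAX_HOPS 16

typedef struct {
    uint8_t piggyback;                /* 1 byte of application data    */
    uint8_t hop_count;                /* how many node IDs are stored  */
    uint8_t path[TRACE_MAX_HOPS];     /* node IDs along the route      */
} trace_payload_t;

/* Append this node's one-byte ID to the traceroute payload before
 * forwarding the packet toward the sink. */
int trace_append_hop(trace_payload_t *p, uint8_t my_node_id)
{
    if (p->hop_count >= TRACE_MAX_HOPS)
        return -1;                    /* no room; forward unchanged    */
    p->path[p->hop_count++] = my_node_id;
    return 0;
}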

3. SOFTWARE SPECIFICATION

3.1 TinyOS

3.1.1 Introduction

The TinyOS system, libraries, and applications are written in nesC, a new language for programming structured component-based applications. The nesC language is primarily intended for embedded systems such as sensor networks. nesC has a C-like syntax, but supports the TinyOS concurrency model, as well as mechanisms for structuring, naming, and linking together software components into robust network embedded systems. The principal goal is to allow application designers to build components that can be easily composed into complete, concurrent systems, and yet perform extensive checking at compile time. TinyOS defines a number of important concepts that are expressed in nesC. First, nesC applications are built out of components with well-defined, bidirectional interfaces. Second, nesC defines a concurrency model, based on tasks and hardware event handlers, and detects data races at compile time.

TinyOS Design

3.1.2 Components

Specification

A nesC application consists of one or more components linked together to form an executable. A component provides and uses interfaces. These interfaces are the only point of access to the component and are bi-directional. An interface declares a set of functions called commands that the interface provider must implement and another set of functions called events that the interface user must implement. For a component to call the commands in an interface, it must implement the events of that interface. A single component may use or provide multiple interfaces and multiple instances of the same interface.

Implementation

There are two types of components in nesC: modules and configurations. Modules provide application code, implementing one or more interfaces. Configurations are used to assemble other components together, connecting interfaces used by components to interfaces provided by others. This is called wiring. Every nesC application is described by a top-level configuration that wires together the components inside. nesC uses the filename extension ".nc" for all source files – interfaces, modules, and configurations.

3.1.3 Concurrency Model

TinyOS executes only one program consisting of selected system components and custom components needed for a single application. There are two threads of execution: tasks and hardware event handlers. Tasks are functions whose execution is deferred. Once scheduled, they run to completion and do not preempt one another. Hardware event handlers are executed in response to a hardware interrupt and also run to completion, but may preempt the execution of a task or another hardware event handler. Commands and events that are executed as part of a hardware event handler must be declared with the async keyword.

Because tasks and hardware event handlers may be preempted by other asynchronous code, nesC programs are susceptible to certain race conditions. Races are avoided either by accessing shared data exclusively within tasks, or by having all accesses within atomic statements. The nesC compiler reports potential data races to the programmer at compile time. It is possible the compiler may report a false positive. In this case a variable can be declared with the norace keyword. The norace keyword should be used with extreme caution.

3.2 nesC

3.2.1 Introduction

nesC is an extension to C designed to embody the structuring concepts and execution model of TinyOS. TinyOS is an event-driven operating system designed for sensor network nodes that have very limited resources (e.g., 8K bytes of program memory, 512 bytes of RAM). TinyOS has been re-implemented in nesC.

The basic concepts behind nesC are:

 Separation of construction and composition: programs are built out of components, which are assembled (“wired”) to form whole programs. Components define two scopes, one for their specification (containing the names of their interface instances) and one for their implementation. Components have internal concurrency in the form of tasks. Threads of control may pass into a component through its interfaces. These threads are rooted either in a task or a hardware interrupt.

 Specification of component behaviour in terms of a set of interfaces: interfaces may be provided or used by the component. The provided interfaces are intended to represent the functionality that the component provides to its user; the used interfaces represent the functionality the component needs to perform its job.

 Interfaces are bidirectional: they specify a set of functions to be implemented by the interface's provider (commands) and a set to be implemented by the interface's user (events). This allows a single interface to represent a complex interaction between components (e.g., registration of interest in some event, followed by a callback when that event happens). This is critical because all lengthy commands in TinyOS (e.g. send packet) are non-blocking; their completion is signaled through an event (send done). By specifying interfaces, a component cannot call the send command unless it provides an implementation of the sendDone event.

Typically commands call downwards, i.e., from application components to those closer to the hardware, while events call upwards. Certain primitive events are bound to hardware interrupts (the nature of this binding is system-dependent and is not described further here).

 Components are statically linked to each other via their interfaces. This increases runtime efficiency, encourages robust design, and allows for better static analysis of programs.

 nesC is designed under the expectation that code will be generated by whole- program compilers. This allows for better code generation and analysis. An example of this is nesC’s compile-time data race detector.

 The concurrency model of nesC is based on run-to-completion tasks, and interrupt handlers which may interrupt tasks and each other. The nesC compiler signals the potential data races caused by the interrupt handlers.

3.2.2 Interfaces

Interfaces in nesC are bidirectional: they specify a multi-function interaction channel between two components, the provider and the user. The interface specifies a set of named functions, called commands, to be implemented by the interface’s provider and a set of named functions, called events, to be implemented by the interface’s user.

Interface: interface identifier { declaration-list }

storage-class-specifier: command event async

This declares interface type identifier. This identifier has global scope and belongs to a separate namespace, the component and interface type namespace. So all interface types have names distinct from each other and from all components, but there can be no conflicts with regular C declarations. Each interface type has a separate scope for the declarations in declaration-list. This declaration-list must consist of function declarations with the command or event storage class (if not, a compile-time error occurs). The optional async keyword indicates that the command or event can be executed in an interrupt handler.

3.2.3 Component Specification

A nesC component is either a module or a configuration.

Module: module identifier specification module-implementation

Configuration: configuration identifier specification configuration-implementation

A component's name is specified by the identifier. The specification lists the specification elements (interface instances, commands or events) used or provided by this component.

Uses-provides: uses specification-element-list provides specification-element-list

Modules

Modules implement a component specification with C code:

module-implementation: implementation { translation-unit }

where translation-unit is a list of C declarations and definitions.

Implementing the Module’s Specification

The translation-unit must implement all provided commands (events) of the module (i.e., all directly provided commands and events, all commands in provided interfaces and all events in used interfaces). A module can call any of its commands and signal any of its events.

Calling Commands and Signaling Events

The following extensions to C syntax are used to call commands and signal events:

call-kind: one of call signal post

Tasks

Tasks are posted by prefixing a call to the task with post, e.g., post myTask(). Post returns immediately; its return value is 1 if the task was successfully posted for independent execution, 0 otherwise.

call-kind: post

Configurations

Configurations implement a component specification by connecting, or wiring, together a collection of other components:

configuration-implementation: implementation { component-list(opt) connection-list }

The component-list lists the components that are used to build this configuration; the connection-list specifies how these components are wired to each other and to the configuration's specification.

3.2.4 Wiring

Wiring is used to connect specification elements (interfaces, commands, events) together.

Wiring statements connect two endpoints.

Connection:

endpoint = endpoint
endpoint -> endpoint
endpoint <- endpoint

There are three wiring statements in nesC:

 endpoint1 = endpoint2 (equate wires): Any connection involving an external specification element. These effectively make two specification elements equivalent. Let S1 be the specification element of endpoint1 and S2 that of endpoint2. One of the following two conditions must hold or a compile-time error occurs:
1. S1 is internal and S2 is external (or vice versa), and S1 and S2 are both provided or both used.
2. S1 and S2 are both external, and one is provided and the other used.

 endpoint1 -> endpoint2 (link wires): A connection involving two internal specification elements. Link wires always connect a used specification element specified by endpoint1 to a provided one specified by endpoint2 . If these two conditions do not hold, a compile-time error occurs.

 endpoint1 <- endpoint2 is equivalent to endpoint2 -> endpoint1.

3.2.5 Concurrency in nesC

nesC assumes an execution model that consists of run-to-completion tasks (that typically represent the ongoing computation), and interrupt handlers that are signaled asynchronously by hardware. A scheduler for nesC can execute tasks in any order, but must obey the run-to-completion rule (the standard TinyOS scheduler follows a FIFO policy). Because tasks are not preempted and run to completion, they are atomic with respect to each other, but are not atomic with respect to interrupt handlers.

As this is a concurrent execution model, nesC programs are susceptible to race conditions, in particular data races on the program’s shared state, i.e., its global and module variables (nesC does not include dynamic memory allocation). Races are avoided either by accessing a shared state only in tasks, or only within atomic statements. The nesC compiler reports potential data races to the programmer at compile-time.

3.3 Sample Application: Blink

The simple test program "Blink" is found in apps/Blink in the TinyOS tree. This application simply causes the red LED on the mote to turn on and off at 1 Hz. The Blink application is composed of two components: a module, called "BlinkM.nc", and a configuration, called "Blink.nc". All applications require a top-level configuration file, which is typically named after the application itself. In this case Blink.nc is the configuration for the Blink application and the source file that the nesC compiler uses to generate an executable file. BlinkM.nc, on the other hand, actually provides the implementation of the Blink application. Blink.nc is used to wire the BlinkM.nc module to other components that the Blink application requires.

The reason for the distinction between modules and configurations is to allow a system designer to quickly "snap together" applications. For example, a designer could provide a configuration that simply wires together one or more modules, none of which she actually designed. Likewise, another developer can provide a new set of "library" modules that can be used in a range of applications. Sometimes (as is the case with Blink and BlinkM) an application will have a configuration and a module that go together. When this is the case, the convention used in the TinyOS source tree is that Foo.nc represents a configuration and FooM.nc represents the corresponding module. While you could name an application's implementation module and associated top-level configuration anything, to keep things simple we suggest that you adopt this convention in your own code.

Blink.nc Configuration

The nesC compiler, ncc, compiles a nesC application when given the file containing the top-level configuration. Typical TinyOS applications come with a standard Makefile that allows platform selection and invokes ncc with appropriate options on the application's top-level configuration.

Blink.nc:

configuration Blink {
}
implementation {
  components Main, BlinkM, SingleTimer, LedsC;

  Main.StdControl -> BlinkM.StdControl;
  Main.StdControl -> SingleTimer.StdControl;
  BlinkM.Timer -> SingleTimer.Timer;
  BlinkM.Leds -> LedsC;
}

The first thing to notice is the keyword configuration, which indicates that this is a configuration file. The first two lines, configuration Blink { }, simply state that this is a configuration called Blink. Within the empty braces here it is possible to specify uses and provides clauses, as with a module: a configuration can use and provide interfaces. The actual configuration is implemented within the pair of curly brackets following the keyword implementation. The components line specifies the set of components that this configuration references, in this case Main, BlinkM, SingleTimer, and LedsC.

The remainder of the implementation consists of connecting interfaces used by components to interfaces provided by others. Main is a component that is executed first in a TinyOS application. To be precise, the Main.StdControl.init() command is the first command executed in TinyOS, followed by Main.StdControl.start(). Therefore, a TinyOS application must have the Main component in its configuration. StdControl is a common interface used to initialize and start TinyOS components. tos/interfaces/StdControl.nc:

interface StdControl {
  command result_t init();
  command result_t start();
  command result_t stop();
}

StdControl defines three commands: init(), start(), and stop(). init() is called when a component is first initialized, and start() when it is started, that is, actually executed for the first time. stop() is called when the component is stopped, for example, in order to power off the device that it is controlling. init() can be called multiple times, but will never be called after either start() or stop() are called. Specifically, the valid call patterns of StdControl are init* (start | stop)*. All three of these commands have "deep" semantics; calling init() on a component must make it call init() on all of its subcomponents. The following two lines in the Blink configuration

Main.StdControl -> SingleTimer.StdControl;
Main.StdControl -> BlinkM.StdControl;

wire the StdControl interface in Main to the StdControl interface in both BlinkM and SingleTimer. SingleTimer.StdControl.init() and BlinkM.StdControl.init() will be called by Main.StdControl.init(). The same rule applies to the start() and stop() commands. Concerning used interfaces, the subcomponent initialization functions must be explicitly called by the using component. For example, the BlinkM module uses the interface Leds, so Leds.init() is called explicitly in BlinkM.init(). nesC uses arrows to determine relationships between interfaces. Think of the right arrow (->) as "binds to": the left side of the arrow binds an interface to an implementation on the right side. In other words, the component that uses an interface is on the left, and the component that provides the interface is on the right.

The line BlinkM.Timer -> SingleTimer.Timer; is used to wire the Timer interface used by BlinkM to the Timer interface provided by SingleTimer. BlinkM.Timer on the left side of the arrow refers to the interface called Timer (tos/interfaces/Timer.nc), while SingleTimer.Timer on the right side of the arrow refers to the implementation of Timer (tos/lib/SingleTimer.nc). The arrow always binds interfaces (on the left) to implementations (on the right). nesC supports multiple implementations of the same interface. The Timer interface is such an example: the SingleTimer component implements a single Timer interface, while another component, TimerC, implements multiple timers using a timer id as a parameter. Wirings can also be implicit. For example,

BlinkM.Leds -> LedsC;

is really shorthand for

BlinkM.Leds -> LedsC.Leds;

If no interface name is given on the right side of the arrow, the nesC compiler by default tries to bind to the same interface as on the left side of the arrow.

BlinkM.nc Module

BlinkM.nc:

module BlinkM {
  provides {
    interface StdControl;
  }
  uses {
    interface Timer;
    interface Leds;
  }
}
// Continued below...

The first part of the code states that this is a module called BlinkM and declares the interfaces it provides and uses. The BlinkM module provides the interface StdControl. This means that BlinkM implements the StdControl interface. As explained above, this is necessary to get the Blink component initialized and started. The BlinkM module also uses two interfaces: Leds and Timer. This means that BlinkM may call any command declared in the interfaces it uses and must also implement any events declared in those interfaces. The Leds interface defines several commands like redOn(), redOff(), and so forth, which turn the different LEDs (red, green, or yellow) on the mote on and off. Because BlinkM uses the Leds interface, it can invoke any of these commands. Keep in mind, however, that Leds is just an interface: the implementation is specified in the Blink.nc configuration file.

Timer.nc:

interface Timer {
  command result_t start(char type, uint32_t interval);
  command result_t stop();
  event result_t fired();
}

The Timer interface defines the start() and stop() commands, and the fired() event. The start() command is used to specify the type of the timer and the interval at which the timer will expire. The unit of the interval argument is milliseconds. The valid types are TIMER_REPEAT and TIMER_ONE_SHOT. A one-shot timer ends after the specified interval, while a repeat timer goes on and on until it is stopped by the stop() command. An application knows that its timer has expired when it receives an event. The Timer interface provides an event:

event result_t fired();

An event is a function that the implementation of an interface will signal when a certain event takes place. In this case, the fired() event is signaled when the specified interval has passed. This is an example of a bi-directional interface: an interface not only provides commands that can be called by users of the interface, but also signals events that call handlers in the user. Think of an event as a callback function that the implementation of an interface will invoke. A module that uses an interface must implement the events declared in that interface.

BlinkM.nc, continued:

implementation {

  command result_t StdControl.init() {
    call Leds.init();
    return SUCCESS;
  }

  command result_t StdControl.start() {
    return call Timer.start(TIMER_REPEAT, 1000);
  }

  command result_t StdControl.stop() {
    return call Timer.stop();
  }

  event result_t Timer.fired() {
    call Leds.redToggle();
    return SUCCESS;
  }
}

The BlinkM module implements the StdControl.init(), StdControl.start(), and StdControl.stop() commands, since it provides the StdControl interface. It also implements the Timer.fired() event, which is necessary since BlinkM must implement any event from an interface it uses. The init() command in the implemented StdControl interface simply initializes the Leds subcomponent with the call to Leds.init(). The start() command invokes Timer.start() to create a repeat timer that expires every 1000 ms. stop() terminates the timer. Each time the Timer.fired() event is triggered, Leds.redToggle() toggles the red LED. One can view a graphical representation of the component relationships within an application: TinyOS source files include metadata within comment blocks that ncc, the nesC compiler, uses to automatically generate html-formatted documentation. To generate the documentation, type make docs from the application directory.

Compiling the Blink Application

TinyOS supports multiple platforms. Each platform has its own directory in the tos/platform directory. This example uses the mica platform. Once in the TinyOS source tree, compile the Blink application for the Mica mote by typing make mica in the apps/Blink directory.

In this project, however, we are simulating, so we do not use any hardware platform. We simulate the sensor motes using TOSSIM (the TinyOS simulator). In this case we build the application by typing make pc in the application directory. We can view the output of the simulation either in the Cygwin command-line interface or in the Java-based GUI called TinyViz. To view the output in the Cygwin interface we use the following command in the application directory:

build/pc/main.exe [number of nodes]

To view the output in TinyViz we use the following command in the application directory:

tinyviz -run build/pc/main.exe [number of nodes]

In order to compile a nesC application, run ncc on the top-level configuration file for the application. ncc takes care of locating and compiling all of the different components required by your application, linking them together, and ensuring that all of the component wiring matches up.

4. DESIGN AND IMPLEMENTATION

4.1 TOSSIM

4.1.1 Introduction

TOSSIM is a discrete event simulator for TinyOS sensor networks. Instead of compiling a TinyOS application for a mote, users can compile it into the TOSSIM framework, which runs on a PC. This allows users to debug, test, and analyze algorithms in a controlled and repeatable environment. As TOSSIM runs on a PC, users can examine their TinyOS code using debuggers and other development tools. This section briefly describes the design philosophy of TOSSIM, its capabilities, and its structure. It also provides a brief tutorial on how to use TOSSIM for testing or analysis.

TOSSIM's primary goal is to provide a high fidelity simulation of TinyOS applications. For this reason, it focuses on simulating TinyOS and its execution, rather than simulating the real world. While TOSSIM can be used to understand the causes of behavior observed in the real world, it does not capture all of them, and should not be used for absolute evaluations. TOSSIM is not always the right simulation solution; like any simulation, it makes several assumptions, focusing on making some behaviors accurate while simplifying others.

4.1.2 Compiling and Running a Simulation

TOSSIM is automatically built when you compile an application. Applications are compiled by entering an application directory and typing make. Alternatively, when in an application directory, you can type make pc, which will only compile a simulation of the application.

Enter the apps/CntToLedsAndRfm directory. This application runs a 4Hz counter. It assumes a Mica mote which has 3 LEDs. On each counter tick, the application displays the least significant three bits of the counter on the three mote LEDs and sends the entire 16-bit value in a packet. Build and install the application on a Mica mote as in Lesson 4. You should see the LEDs blink.

Build a TOSSIM version of the application with make pc. The TOSSIM executable is build/pc/main.exe. Type build/pc/main.exe -help to see a brief summary of its command-line usage. TOSSIM has a single required parameter, the number of nodes to simulate. Type build/pc/main.exe 1 to run a simulation of a single node. You should see a long stream of output fly by, most of which refers to radio bit events. Hit control-C to stop the simulation.

By default, TOSSIM prints out all debugging information. As radio bit events are fired at 20 or 40 kHz, they are the most frequent events in the simulator and comprise most of the output in CntToLedsAndRfm. Given the application, we're more concerned with the packet output and mote LEDs than with individual radio bits. TOSSIM output can be configured by setting the DBG environment variable in a shell. Type export DBG=am,led in your shell; this enables only LED and AM (active message) packet output. Run the one-mote simulation again.

4.1.3 Adding Debugging Statements

TOSSIM provides configuration of debugging output at run-time. Much of the TinyOS source contains debugging statements. Each debugging statement is accompanied by one or more modal flags. When the simulator starts, it reads in the DBG environment variable to determine which modes should be enabled. Modes are stored and processed as entries in a bit-mask, so a single output can be enabled for multiple modes, and a user can specify multiple modes to be displayed. The set of DBG modes recognized by TOSSIM can be identified by using the -h option; all available modes are printed. Four DBG modes are reserved for application components and debugging use: usr1, usr2, usr3, and temp. In TinyOS code, debug message commands have this syntax:

dbg(mode, const char* format, ...);

The mode parameter specifies under which DBG modes this message will be printed. In Timer.fired(), add this line just before the return statement:

dbg(DBG_TEMP, "Counter: Value is %i\n", (int)state);

Set DBG to temp and run a single-mote simulation. You'll see the counter increment. In general, the DBG mode name in TinyOS code is the name used when you run the simulator, with DBG_ prepended. For example, am is DBG_AM, packet is DBG_PACKET, and boot is DBG_BOOT. Just as you can enable multiple modes when running the simulator, a single debug message can be activated on multiple modes. Each mode is a bit in a large bitmask; one can use all of the standard logical operators (e.g. |, ~). For example, change the debug message you just added to:

dbg(DBG_TEMP|DBG_USR1, "Counter: Value is %i\n", (int)state);

It will now be printed if either temp or usr1 is enabled.

4.2 TinyViz: The TOSSIM User Interface

TinyViz provides an extensible graphical user interface for debugging, visualizing, and interacting with TOSSIM simulations of TinyOS applications. Using TinyViz, you can easily trace the execution of TinyOS apps, set breakpoints when interesting events occur, visualize radio messages, and manipulate the virtual position and radio connectivity of motes. In addition, TinyViz supports a simple "plugin" API that allows you to write your own TinyViz modules to visualize data in an application-specific way, or interact with the running simulation.

The main TinyViz class is a jar file, tools/java/net/tinyos/sim/tinyviz.jar. TinyViz can be attached to a running simulation. Also, TOSSIM can be made to wait for TinyViz to connect before it starts up, with the -gui flag. This allows users to be sure that TinyViz captures all of the events in a given simulation. TinyViz is not actually a visualizer; instead, it is a framework in which plugins can provide desired functionality. By itself, TinyViz does little besides draw motes and their LEDs. However, it comes with a few example plugins, such as one that visualizes network traffic.

Figure 2 shows a screenshot of the TinyViz tool. The left window contains the simulation visualization, showing 16 motes communicating in an ad-hoc network. The right window is the plugin window; each plugin is a tab pane, with configuration controls and data. The second element on the top bar is the Plugin menu, for activating or de-activating individual plugins. Inactive plugins have their tab panes greyed out.

The third element is the layout menu, which allows you to arrange motes in specific topologies, as well as save or restore topologies. TinyViz can use physical topologies to generate network topologies by sending messages to TOSSIM that configure network connectivity and the loss rate of individual links.

The right side of the top bar has three buttons and a slider. TinyViz can slow a simulation by introducing delays when it handles events from TOSSIM. The slider configures how long delays are. The On/Off button turns selected motes on and off; this can be used to reboot a network, or dynamically change its members.

The button to the right of the slider starts and stops a simulation; unlike the delays, which are for short, fixed periods, this button can be used to pause a simulation for arbitrary periods. The final button, on the far right, enables and disables a grid in the visualization area. The small text bar on the bottom of the right panel displays whether the simulation is running or paused.

The TinyViz engine uses an event-driven model, which allows easy mapping between TinyOS's event-based execution and event-driven GUIs. By itself, the application does very little; drop-in plugins provide user functionality. TinyViz has an event bus, which reads events from a simulation and publishes them to all active plugins.

To get started, look at the apps/TestTinyViz application, which causes motes to periodically send a message to a random neighbor.

Start up TinyViz from the command line and run the TestTinyViz app as follows:

export DBG=usr1
tinyviz -run build/pc/main.exe 16

We will see a window looking something like the following:

Figure 2 : TinyViz connected to TOSSIM running an object tracking application. The right panel shows sent radio packets, the left panel exhibits radio connectivity for mote 15 and network traffic. The green arrows and corresponding labels represent link probabilities for mote 15, and the magenta arrows indicate packet transmission.

4.2.1 TinyViz Plugins

Users can write new plugins, which TinyViz can dynamically load. A simple event bus sits in the center of TinyViz; simulator messages sent to TinyViz appear as events, which any plugin can respond to. For example, when a mote transmits a packet in TOSSIM, the simulator sends a packet send message to TinyViz, which generates a packet send event and broadcasts it on the event bus. A networking plugin can listen for packet send events and update TinyViz node state and draw an animation of the communication.

Plugins can be dynamically registered and deregistered, which correspondingly connect and disconnect the plugin from the event bus. A plugin hears all events sent to the event bus, but individually decides whether to do anything in response to a specific event; this keeps the event bus simple, instead of having a content-specific subscription mechanism.

ADC Models

TOSSIM provides two ADC models: random and generic. The model chosen specifies how readings taken from the ADC are generated. Whenever any channel in the ADC is sampled in the random model, it returns a 10-bit random value. The generic model also provides random values by default, but has added functionality. Just as external applications can actuate the lossy network model, they can also actuate the generic ADC model using the TOSSIM control channel, setting the value for any ADC port on any mote. Currently, only TinyViz supports this, through the ADC plugin. Left-clicking on a mote selects it; using the ADC panel, you can set the 10-bit value read from any of the mote's ADC ports.

Radio Models

TOSSIM simulates the TinyOS network at the bit level, using TinyOS component implementations almost identical to the mica 40Kbit RFM-based stack. TOSSIM provides two radio models: simple and lossy.

In TOSSIM, a network signal is either a one or a zero. All signals are of equal strength, and collision is modeled as a logical or; there is no cancellation. This means that distance does not affect signal strength; if mote B is very close to mote A, it cannot cut through the signal from far-away mote C. This makes interference in TOSSIM generally worse than expected real-world behavior.

The “simple” radio model places all nodes in a single cell. Every bit transmitted is received without error. Although no bits are corrupted due to error, two motes can transmit at the same time; every mote in the cell will hear the overlap of the signals, which will almost certainly be a corrupted packet. However, because of perfect bit transmission in a single cell, the probability of two motes transmitting at the same time is very low, due to the TinyOS CSMA protocol.

The simple model is useful for testing single-hop algorithms and TinyOS components for correctness. Deterministic packet reception allows deterministic results.

The "lossy" radio model places the nodes in a directed graph. Each edge (a, b) in the graph means a's signal can be heard by b. Every edge has a value in the range [0, 1], representing the probability a bit sent by a will be corrupted (flipped) when b hears it. For example, a value of 0.01 means each bit transmitted has a 1% chance of being flipped, while 1.0 means every bit will be flipped, and 0.0 means bits will be transmitted without error. Each bit is considered independently.
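
The effect of a lossy edge can be pictured with a small sketch: each bit travelling over the link is flipped independently with the edge's error probability. This is illustrative C of our own, not TOSSIM code, and it uses rand() only for simplicity.

#include <stdint.h>
#include <stdlib.h>

/* Flip each bit of `b` independently with probability `ber`, the
 * edge's bit error rate. */
uint8_t corrupt_byte(uint8_t b, double ber)
{
    for (int i = 0; i < 8; i++) {
        if ((double)rand() / RAND_MAX < ber)
            b ^= (uint8_t)(1u << i);   /* flip this bit */
    }
    return b;
}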

The graph of the lossy model can be specified at TOSSIM boot with a file. TOSSIM looks for the file "lossy.nss" by default, but an alternate file can be specified with the -rf flag. The file has the following format:

source mote ID:destination mote ID:bit error rate
source mote ID:destination mote ID:bit error rate

For example,

0:1:0.012333
1:0:0.009112
1:2:0.013196

specifies that mote 1 hears mote 0 with a bit error rate of 1.2%, mote 0 hears mote 1 with a bit error rate of 0.9%, and mote 2 hears mote 1 with a bit error rate of 1.3%. By making the graph directed, TOSSIM can model asymmetric links, which initial empirical studies suggest are a common occurrence in TinyOS RFM-stack networks.
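
For illustration, the following small C program reads records in this format and prints them back; it is our own sketch, not part of TOSSIM or LossyBuilder.

#include <stdio.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "lossy.nss";
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    int src, dst;
    double ber;                        /* probability that a bit is flipped */
    while (fscanf(f, "%d:%d:%lf", &src, &dst, &ber) == 3) {
        printf("mote %d hears mote %d with bit error rate %.4f\n",
               dst, src, ber);
    }
    fclose(f);
    return 0;
}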

By specifying error at the bit level, TOSSIM can capture many causes of packet loss and noise in a TinyOS network, including missed start symbols, data corruption, and acknowledgement errors.

TOSSIM includes a Java tool, net.tinyos.sim.LossyBuilder, for generating loss rates from physical topologies.

4.2.2 LossyBuilder

LossyBuilder assumes each mote has a transmission radius of 50 feet. Combined with the bit error rate, this means each mote transmits its signal in a disc of radius 50 feet, with the bit error rate increasing with distance from the center.

LossyBuilder can read in or generate physical topologies ((x,y) coordinates), and generate loss topologies from physical topologies by sampling from the model of the empirical distribution.

Running LossyBuilder will, by default, generate a loss topology for a 10 by 10 grid with a grid spacing of five feet (45’ by 45’), and print it to standard out. It prints the loss topology in the form that TOSSIM expects.

Its usage is:

usage: java net.tinyos.sim.LossyBuilder [options]

options:

-t grid : Topology (grid only and default)
-d : Grid size (m by n) (default: 10 x 10)
-s : Spacing factor (default: 5.0)
-o : Output file
-i : Input file of positions
-p : Generate positions, not error rates

For example,

java net.tinyos.sim.LossyBuilder -d 20 20 -s 10 -o 20x20-10.nss

will output a loss topology of a 20 by 20 grid, with ten foot spacing, and write it to the output file "20x20-10.nss".

4.2.3 Lossy Model Actuation

Specifying a loss topology with a file defines a static topology over an entire simulation. There are simulation situations, however, in which changing topologies are needed. TOSSIM therefore allows users to modify the loss topology at run-time. Applications can connect to TOSSIM over a TCP socket and send control messages to add, delete, or modify network links. Currently, the only application that does so is TinyViz. The TinyViz network plugins use the same empirical model as LossyBuilder to generate link loss rates; moving motes causes TinyViz to send the appropriate commands to TOSSIM.

4.2.4 Implementation of TraceRouteTest Application

Using TOSSIM, we simulate the TraceRouteTest application to measure the performance of DSDV in sensor networks with a varying number of nodes. Simulations were run in two different environments, one with 5 nodes and one with 8 nodes.

The simulation testbed consists of a single sink node and several sensor nodes. All the sensor nodes send their data to the node designated as the sink node.

TinyViz shows the radio packets being sent between the sensor nodes. The node positions are captured and saved to a file and then supplied to LossyBuilder, which produces a corresponding output file with the respective link error rates. The same procedure is repeated for 8 nodes as well.

5. TESTING

5.1 Performance Evaluation

We evaluate the effectiveness and efficiency of the DSDV routing protocol in sensor networks through simulations. We implemented the DSDV routing protocol, using the TraceRouteTest application, in TOSSIM in two different environments consisting of:

 Five nodes: The simulation testbed consists of a single sink node and 4 other sensor nodes. All 4 sensor nodes send their data to the node taken as the sink node. By default, node 0 is the sink node.

 Eight nodes: The simulation testbed consists of a single sink node and 7 other sensor nodes. All 7 sensor nodes send their data to the node taken as the sink node. By default, node 0 is the sink node.

5.1.1 Performance measured under Different Source-Sink Distances

The constraint we use here, in evaluating the performance of DSDV in sensor networks, is the distance of each sensor node from the sink node. We try to find the error rate of the various links connecting the sink to the other nodes. The nodes are at varying distances from the sink. The link error rate increases with the source-sink distance, because the longer the route, the larger the chance of a broken path (recall that nodes and channels are subject to failure).

Figure 3 reports the link error rates for different source-sink distances.

The TraceRouteTest application tests the basic functionality of the traceroute, DSDV routing, and setting modules. In the default configuration, each mote sends a traceroute packet to the sink node.

First, we compile the application and export the necessary DBG values. Five DBG modes are used for this application's components and for debugging: usr1, usr2, usr3, route, and sim. Now run the simulation for 5 nodes in TinyViz from the Cygwin command line. Once the simulation begins, place the nodes at different source-sink distances (path lengths). Each simulation run lasts for more than 600 seconds. Each node generates one data packet per second. The source nodes select the link with the minimum cost to the sink node and perform the routing.

Capture those positions and save them into a lossy topology file. LossyBuilder can read in or generate physical topologies ((x,y) coordinates), and generate loss topologies from physical topologies by sampling from the empirical model.

The output file containing the error rates is generated using the command:

java net.tinyos.sim.LossyBuilder -i lossy.nss -o output.nss

LossyBuilder assumes each mote has a transmission radius of 50 feet. Combined with the bit error rate, this means each mote transmits its signal in a disc of radius 50 feet, with the bit error rate increasing with distance from the transmitting mote.

The same procedure is repeated for 8 nodes. The results are then plotted in a graph of Path-length vs. Error rate.

Figure 3a: Error Rates vs. Path Length (5 nodes)

Figure 3b: Error Rates vs. Path Length (8 nodes)

6. CONCLUSION

Wireless sensor networks are potentially one of the most important technologies of this century. Consequently, billions of dollars are being committed to the research and development of sensor networks in order to address the many technical challenges and wide range of immediate applications. Advances in hardware development have made available the prospect of low cost, low power, miniature devices for use in remote sensing applications. The combination of these factors has improved the viability of utilizing a sensor network consisting of a large number of intelligent sensors, enabling the collection, processing, analysis, and dissemination of valuable information gathered in a variety of environments. A sensor network is an array (possibly very large) of sensors of diverse type interconnected by a communications network. Disseminating sensor data efficiently requires dedicated routing protocols to identify shortest paths. There have been several routing protocols proposed for wireless ad hoc networks. Destination-Sequenced Distance Vector (DSDV) was chosen due to its relative simplicity.

A simulation study of the TraceRouteTest DSDV routing application in TOSSIM led to the conclusion that the link error rate increases with the source-sink distance: the longer the route, the larger the chance of a broken path, and thus the lower the performance.


BIBLIOGRAPHY

C. Perkins and P. Bhagwat. Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers. ACM Computer Communications Review, pages 234-244, Oct. 1994.

J. Broch, D. A. Maltz, D. B. Johnson, Y.-C. Hu, and J. Jetcheva, “A performance comparison of multi-hop wireless ad hoc network routing protocols,” in Proceedings of the Fourth Annual ACM/IEEE International Conference on Mobile Computing and Networking, ACM, October 1998.

S. R. Das, C. E. Perkins, and E. M. Royer, “Performance comparison of two on-demand routing protocols for ad hoc networks,” in INFOCOM, March 2000.

Charles Perkins. Mobile IP as seen by the IETF. ConneXions, pages 2-20, Mar. 1994.

Per Johansson, Tony Larsson, Nicklas Hedman, and Bartosz Mielczarek. “Routing protocols for mobile ad-hoc networks – a comparative performance analysis”. In proceedings of the 5th International Conference on Mobile Computing and Networking (ACM MOBICOM ’99), August 1999, pages 195-206.

Philip Levis and Nelson Lee. TOSSIM: A Simulator for TinyOS Networks. September 17, 2003.

P. Levis, N. Lee, M. Welsh, and D. Culler. TOSSIM: Accurate and Scalable Simulation of Entire TinyOS Applications. To appear in Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys 2003).

David Gay, Philip Levis, David Culler, and Eric Brewer. nesC 1.1 Language Reference Manual, May 2003.

Xukai Zou, Byrav Ramamurthy, and Spyros Magliveras. Routing Techniques in Wireless Ad Hoc Networks: Classification and Comparison. Dept. of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588, U.S.A., and Dept. of Mathematical Sciences, Florida Atlantic University, Boca Raton, FL 33431, U.S.A.

SCREENSHOTS

