

UNIVERSITY OF OSLO
Department of Informatics

Quality of Service Monitoring for Video Streaming in Mobile Ad-hoc Networks

Master’s thesis

Magnus Engh Halvorsen

1st February 2008

Abstract

This thesis work is done in the context of a recently launched research project on delay-tolerant streaming, where the vision is to provide a video streaming solution capable of performing in a dynamic, wireless, mobile environment without the presence of a fixed infrastructure.

There are situations where regular wireless communication networks are not accessible or the connectivity is limited. Examples of such situations are emergency and rescue operations, where communications infrastructure may be non-existent or destroyed. The use of mobile devices connected using mobile ad-hoc network (MANET) technology and operating independently of any existing infrastructure can help overcome these difficulties and provide connectivity despite large-scale disasters. Also, a lot of information cannot be communicated through conventional systems, such as phones or radio. The ability to transmit streaming multimedia data could potentially save lives in an emergency situation.

Multimedia streaming is a topic that has not received much attention in MANET research. Furthermore, most of the research and experiences with MANETs in general are based on work done in simulated environments. However, in contrast to most prior MANET research, this thesis uses a practical approach. We present a working solution for serving and consuming streaming video using the Nokia 770 Internet Tablet that can serve as a starting point for future work on video streaming in mobile ad-hoc networks. The solution is based entirely on open-source software, and can thus be freely used and modified for future research.

We also demonstrate how we were able to perform cross-layer monitoring of the video streaming solution through a strategy of combining hardware resource monitoring and different network measurement methods. In addition, we show how to set up a small-scale, real-world testbed for mobile ad-hoc networks to perform such monitoring. Using this testbed, we then evaluated the performance of video streaming on the Nokia 770 in a mobile ad-hoc network environment.

Our main finding is that to avoid degrading the normal operation of the devices or the network during streaming, conservation of CPU resources must be a priority when designing a video streaming solution for mobile ad-hoc networks. We also investigated the wireless link data reported by the Nokia 770 wireless interface drivers, and found that the reported values have serious limitations compared to the values reported by other wireless devices.


Acknowledgements

I want to thank everyone who has given their time, assistance and patience so generously throughout the past two years.

First and foremost, I would like to express my utmost gratitude to my supervisors, Thomas Plagemann and Matti Siekkinen, for their invaluable guidance and constructive comments throughout the course of my thesis work. Without your support and expertise, this work would not have been possible.

I would also like to thank the participants of the Delay-Tolerant Streaming Project and the people in the DMMS research group at the University of Oslo for their helpful feedback and assistance on various matters. In particular, I want to thank Sergio Cabrero for taking the time to explain their work on designing protocols for delay-tolerant streaming, and for assisting me with the initial configuration and software installation on the Nokia 770.

Finally, I would like to express my thanks to those of my friends who spent several hours of their valuable time proofreading this thesis. Your efforts and feedback helped much in improving its quality.


Contents

I Background and concepts

1 Introduction
1.1 Background
1.2 Motivation and problem description
1.3 Goals and methods
1.4 Contributions
1.5 Technical challenges
1.6 Related work
1.7 Outline

2 Mobile ad-hoc networks
2.1 Characteristics
2.2 MANET Applications
2.3 Routing in MANETs
2.3.1 Single-channel vs. multi-channel
2.3.2 Uniform vs. non-uniform
2.3.3 Topology based vs. destination based
2.3.4 Proactive vs. reactive protocols
2.4 Examples of MANET routing protocols
2.4.1 Optimized Link State Routing (OLSR)
2.4.2 Ad-hoc On-demand Distance Vector Routing (AODV)

3 Multimedia streaming
3.1 Introduction
3.2 Quality of Service (QoS) requirements
3.3 Transport protocols
3.3.1 Transmission Control Protocol (TCP)
3.3.2 User Datagram Protocol (UDP)
3.3.3 Real-Time Transport Protocol (RTP)
3.4 Control protocols
3.4.1 Hypertext Transfer Protocol (HTTP)
3.4.2 Session Initiation Protocol (SIP)
3.4.3 Real-Time Streaming Protocol (RTSP)
3.5 Multimedia streaming issues in MANETs
3.5.1 Wireless medium
3.5.2 Resource constraints
3.5.3 Heterogeneity

3.5.4 Lack of fixed infrastructure
3.5.5 Topology changes
3.5.6 Malicious nodes
3.5.7 Transport protocol design issues

4 Measuring Quality of Service
4.1 Measurements at the network level
4.2 Link quality
4.2.1 Received Signal Strength Indicator (RSSI)
4.2.2 Signal-to-Noise Ratio (SNR)
4.2.3 Expected Transmission Count (ETX)
4.2.4 Expected Transmission Time (ETT)
4.3 Route quality
4.3.1 Expected Transmission Count (ETX)
4.3.2 Weighted Cumulative ETT (WCETT)
4.3.3 Route stability
4.4 Measurements at the media level
4.4.1 Subjective vs. objective measurement methods
4.4.2 Mean Square Error (MSE)
4.4.3 Peak Signal-to-Noise Ratio (PSNR)
4.4.4 Relative Peak Signal-to-Noise Ratio (rPSNR)
4.4.5 Summary of measurement methods
4.5 QoS measurement in MANETs

II Experiments

5 The Nokia 770 Internet Tablet
5.1 About the device
5.2 Basic software
5.2.1 Operating system
5.2.2 xterm
5.2.3 SSH2 server
5.2.4 Wireless Tools
5.2.5 NTP client
5.2.6 OLSR daemon
5.3 Network measurements
5.3.1 The proc filesystem (procfs)
5.3.2 ifconfig
5.3.3 netstat
5.3.4 Linux Wireless Tools

5.3.5 The sys filesystem (sysfs)
5.3.6 Connection Manager
5.3.7 OLSR daemon
5.4 The Nokia 770 multimedia architecture
5.4.1 GStreamer
5.4.2 The digital signal processor (DSP)
5.4.3 Video streaming
5.5 Video streaming software
5.5.1 gst-launch (GStreamer)
5.5.2 Flumotion
5.5.3 Helix DNA Server
5.5.4 Icecast
5.5.5 FFmpeg
5.5.6 VideoLAN
5.5.7 Video Player
5.5.8 MPlayer
5.5.9 ...
5.6 Resource monitoring
5.6.1 The /proc filesystem
5.6.2 ps
5.6.3 top
5.6.4 sysstat
5.7 Other software
5.7.1 Media utils
5.7.2 Media Converter
5.7.3 VidConvert
5.7.4 RTP Tools
5.7.5 LIVE555 test programs

6 Wireless monitoring experiment
6.1 Motivation and purpose
6.2 Experiment setup
6.3 Measurements with laptop and Nokia 770
6.4 Measurements with two Nokia 770s
6.5 Lessons learned

7 Video streaming experiment
7.1 Network monitoring
7.1.1 Kismet
7.1.2 tcpdump
7.1.3 mmdump

7.1.4 Wireshark
7.2 Experiment setup
7.2.1 Choice of streaming server
7.2.2 Choice of video player
7.2.3 Choice of video
7.2.4 Choice of resource monitoring software
7.2.5 Wireless testbed
7.3 Performing the experiment
7.3.1 Scenario 1: Local playback
7.3.2 Scenario 2: Node-to-node
7.3.3 Scenario 3: MANET
7.4 Lessons learned
7.5 Playback on the Nokia N800
7.5.1 About the Nokia N800
7.5.2 Performing the measurements
7.5.3 Results
7.5.4 Lessons learned

8 Conclusions
8.1 Summary
8.2 Critical evaluation
8.3 Open problems and future work

A Configuration of software
A.1 Flashing the latest Nokia image
A.2 Gaining local root access on the Nokia 770
A.3 Setting up Scratchbox
A.4 Compiling and running FFserver
A.5 Software repositories

B Source code
B.1 wlantest.sh
B.2 generate_plotdata.py
B.3 generate_all_plots.sh
B.4 resourcelogger.sh
B.5 networklogger.sh
B.6 wlanmonitor.sh
B.7 start_ffserver.sh
B.8 start_olsrd.sh
B.9 playallstreams.sh
B.10 parse_rtcp_data.sh

B.11 generate_resource_plots.sh

List of Figures

1 OLSR: Example illustrating the use of MPRs
2 The Nokia 770 Internet Tablet
3 Main components of the Maemo platform
4 Sample output: /proc/net/dev
5 Sample output: ...
6 Sample output: /proc/net/wireless
7 Sample output: iwconfig on the Dell XPS m1210
8 Sample output: iwconfig on the Nokia 770
9 Sample output: iwlist's scan command
10 Sample output: iwspy
11 The Nokia 770 connection manager
12 Maemo connectivity licences
13 Sample output: OLSR daemon
14 Nokia 770 multimedia architecture
15 Output of DSP tasks from sysfs
16 The VideoLAN streaming solution
17 Laptop and Nokia 770: Reported bit rate at 16 meters
18 Laptop and Nokia 770: Reported bit rate at 32 meters
19 Laptop and Nokia 770: Reported bit rate at 64 meters
20 Laptop and Nokia 770: Link quality at 0 meters
21 Laptop and Nokia 770: Link quality at 2 meters
22 Laptop and Nokia 770: Link quality at 4 meters
23 Laptop and Nokia 770: Link quality at 64 meters
24 Laptop and Nokia 770: Noise level at 0 meters
25 Laptop and Nokia 770: Noise level at 64 meters
26 Laptop and Nokia 770: Signal level at 0 meters
27 Laptop and Nokia 770: Signal level at 64 meters
28 Laptop and Nokia 770: Tx errors at 64 meters
29 Laptop and Nokia 770: Tx excessive retries at 16 meters
30 Laptop and Nokia 770: Tx excessive retries at 64 meters
31 2 Nokia 770s: Signal level at 0 meters
32 2 Nokia 770s: Signal level at 32 meters
33 2 Nokia 770s: Link quality at 0 meters
34 2 Nokia 770s: Link quality at 32 meters
35 2 Nokia 770s: Tx excessive retries at 0 meters
36 2 Nokia 770s: Tx excessive retries at 16 meters

37 2 Nokia 770s: Tx excessive retries at 32 meters
38 2 Nokia 770s: Tx errors at 32 meters
39 The action video clip
40 The interview video clip
41 CPU utilization: Local playback of clip no. 1
42 CPU utilization: Local playback of clip no. 4
43 CPU utilization: Local playback of clip no. 3
44 CPU utilization: Local playback of clip no. 7
45 CPU utilization: Playing MPEG-4 video in MPlayer
46 CPU utilization: Playing MPEG-4 video in Video Player
47 DSP int./sec.: Playing MPEG-4 video in MPlayer
48 DSP int./sec.: Playing MPEG-4 video in Video Player
49 CPU utilization: Playing MPEG-1 video in MPlayer
50 CPU utilization: Playing MPEG-1 video in Video Player
51 DSP int./sec.: Playing MPEG-1 video in MPlayer
52 DSP int./sec.: Playing MPEG-1 video in Video Player
53 Node-to-node experiment setup
54 CPU utilization: Server node streaming clip no. 1
55 CPU utilization: Client node streaming clip no. 1
56 CPU utilization: Server node streaming clip no. 3
57 CPU utilization: Client node streaming clip no. 3
58 Block device activity: Server node streaming clip no. 1
59 Block device activity: Server node streaming clip no. 2
60 Block device activity: Server node streaming clip no. 3
61 Block device activity: Client node streaming clip no. 1
62 Network activity: Server node streaming clip no. 1
63 Network activity: Server node streaming clip no. 2
64 Network activity: Server node streaming clip no. 3
65 Link quality: Client node streaming clip no. 1
66 Link quality: Client node streaming clip no. 3
67 RTCP cumulative packet loss: Streaming clip no. 1 and 4
68 RTCP cumulative packet loss: Streaming clip no. 7 and 10
69 Excerpt of the decoded packet headers from Wireshark
70 MANET experiment setup
71 CPU utilization: Server streaming clip no. 1
72 CPU utilization: Client streaming clip no. 1
73 CPU utilization: Intermediate node streaming clip no. 1
74 CPU utilization: Intermediate node streaming clip no. 3
75 CPU utilization: Server node streaming clip no. 3
76 CPU utilization: Client node streaming clip no. 3
77 Network activity: Server node streaming clip no. 3

78 Link quality: Intermediate node streaming clip no. 3
79 Link quality: Server node streaming clip no. 3
80 Link quality: Client node streaming clip no. 3
81 The Nokia N800
82 CPU utilization: Playing MPEG-4 video in MPlayer
83 CPU utilization: Playing MPEG-4 video in MediaPlayer
84 CPU utilization: Playing MPEG-1 video in MPlayer
85 CPU utilization: Playing MPEG-1 video in MediaPlayer
86 CPU utilization: Local playback of clip no. 1 in MPlayer
87 CPU utilization: Local playback of clip no. 1 in MediaPlayer
88 CPU utilization: Local playback of clip no. 4 in MPlayer
89 CPU utilization: Local playback of clip no. 4 in MediaPlayer

List of Tables

1 Comparison of transport protocols
2 Comparison of control protocols
3 Quality of Service perspectives
4 Nokia 770 hardware specifications
5 File formats with built-in support on the Nokia 770
6 Audio formats with built-in support on the Nokia 770
7 Supported audio/video format combinations
8 Laptop and Nokia 770: File transfer results
9 2 Nokia 770s: File transfer results
10 Video clips used in the experiment
11 Information accessible from the different protocol layers
12 Summary of network measurement possibilities
13 Monitoring methods by layer
14 Nokia N800 hardware specifications


Part I Background and concepts

1 Introduction

1.1 Background

During the last years, mobile devices have become more and more commonplace, to the point that statistically, every person in Norway owns a mobile phone. In addition to this, laptops are rapidly replacing traditional desktop computers, and the number of PDAs is increasing.

Also, today's mobile devices usually support several wireless communication technologies, such as Bluetooth, 3G and 802.11 WLAN, with each technology having different advantages and disadvantages. Thus, mobile devices can choose the most suitable wireless interface to communicate in a particular scenario. This is known as the "wireless technologies convergence" (Kwon et al., 2002, p. 66).

Traditionally, data networks are designed, deployed and based around a fixed infrastructure. But a group of mobile devices can also form a network of their own in order to exchange information. Such a network, formed spontaneously by wireless, mobile nodes without any existing infrastructure, is called a mobile ad-hoc network (MANET).

However, the strengths of the MANET are also the source of its weaknesses. Mobile devices are small and have limited resources, such as processing power and storage capacity. Also, wireless networks are shared media, which suffer from contention and are vulnerable to interference from external sources. This, in combination with the lack of a fixed infrastructure, makes it hard to provide the necessary quality of service to support delay and bandwidth sensitive services, such as streaming of multimedia data. The traditional methods of providing quality of service guarantees, such as over-provisioning of bandwidth and routing capacity or strict admission control, cannot be applied, as they require ownership and policing of the network infrastructure.

This thesis work is done in the context of a recently launched research project on delay-tolerant streaming, where the vision is to provide a video streaming solution capable of performing in a dynamic, wireless, mobile environment without the presence of a fixed infrastructure.

Multimedia streaming is a topic that has not received much attention in MANET research. Furthermore, most of the research and experiences with MANETs in general are based on work done in simulated environments. Very little work has been done on performing practical experiments with real mobile devices in real-world MANETs. This is largely due to the difficulty of creating such networks in a laboratory setting, as it is hard to limit the communication range of the nodes to create indoor multi-hop testbeds, and to create reproducible results. In addition, software for delivering multimedia services is rarely designed to operate in mobile environments, and the mobile devices themselves are seldom designed to provide such services. This particularly applies to the server side.

1.2 Motivation and problem description

There are situations where the regular wireless communication networks are not accessible or the connectivity is limited. Examples of such situations are emergency and rescue operations, where communications infrastructure may be non-existent or destroyed. As an example, imagine a large-scale earthquake, where large parts of the existing infrastructure are damaged, rendering the traditional communication networks useless. Also, many other communication issues can arise under such circumstances with respect to the technical equipment, for instance due to extreme weather conditions. At the same time, the need for communication is even greater than usual, as it is vital for the command and control of such operations. Different departments and organizations have to cooperate and coordinate their efforts, and emergency teams must be dispatched to provide medical assistance to injured victims.

The use of mobile devices connected using mobile ad-hoc network technology and operating independently of any existing infrastructure can help overcome these difficulties and provide connectivity despite large-scale disasters. Also, while phones and radio are sufficient in many circumstances, some information, e.g. maps and pictures, cannot be communicated through these conventional systems (Sanderson et al., 2007). The ability to transmit streaming multimedia data using a mobile device carried by the members of a rescue team can potentially save lives. As an example, streaming video from a camera could be used to assess the severity of injuries sustained by people during an accident, or damages to important structures or machinery.

Applications for multimedia streaming over MANETs are of course not limited to emergency and rescue scenarios. It might also be used for entertainment purposes, or as a way to provide location-based, self-scalable multimedia services, such as guided tours at a museum.

In general, mobile ad-hoc networks can provide flexible, adaptive and low-cost networking solutions. But while MANETs open up new possibilities, they also introduce new challenges. Mobility causes problems when trying to find a suitable path that can sustain a multimedia streaming session, and mobile devices are usually small and limited in terms of processing power, storage capacity, and battery power. It thus follows that monitoring of available resources in the mobile ad-hoc network is essential to provide sufficient quality of service for a video streaming solution to operate.

Meeting the quality of service demands of an application requires that the quality of service demands are met at each layer in every component along the data path. Thus, measurements must be performed at every layer to ensure that these demands are met. This requires monitoring of system resources, as well as monitoring at each layer of the OSI model, including hop-by-hop and end-to-end network measurements.

1.3 Goals and methods

The goal of this thesis is to create an experimental platform for video streaming in mobile ad-hoc networks, including a set of measurement tools and a strategy to evaluate the performance of the solution. This platform can then serve as a starting point for further practical investigations to find factors that limit the performance of mobile ad-hoc network video streaming. The complete solution should be based entirely on open source software.

This work is also an effort to gain greater insight into the reliability of these measurement methods and the quality of the measurement data, as well as services and techniques that rely on monitoring services, such as route optimization.

In contrast to most prior research on mobile ad-hoc networks, this work takes a practical approach. Using real mobile devices, we seek to create a small-scale testbed for video streaming over mobile ad-hoc networks. To evaluate the performance of our video streaming platform, we then conduct a series of experiments while performing measurements at the hardware layer as well as all network layers, in order to see the interaction between the different layers and to provide a complete picture of the performance of the solution. The nodes in our experiments are Nokia 770 Internet Tablets: Linux-based tablets released in late 2005, with 802.11 WLAN and Bluetooth connectivity.

1.4 Contributions

This thesis demonstrates how we successfully created a platform for open source, mobile ad-hoc network video streaming with the Nokia 770 Internet Tablet. It also shows how we composed a set of tools, scripts and strategies to provide cross-layer monitoring of the video streaming solution. In addition, we show an example of how to set up a small-scale, real-world testbed for mobile ad-hoc networks to perform such monitoring.

Using these tools and strategies, we evaluate the performance of video streaming on the Nokia 770 in a mobile ad-hoc network environment. We also investigate the wireless link data reported by the Nokia 770 wireless interface drivers, and find that the values reported by the driver have serious limitations compared to the values reported by other wireless devices.

Additionally, this work serves as an introduction to what kind of network quality of service information is available in a Linux environment generally, and on a Nokia 770 specifically, as well as an example of how to develop the means to extract and log these measurements in an effective manner.

1.5 Technical challenges

The general impression of the Nokia 770 is that it is not a very stable device. During our work, our devices would frequently hang or reboot, particularly when performing CPU intensive tasks and opening several applications at once.

In terms of measurements, the first problems we faced were issues with the driver for the wireless interface in the Nokia 770. It turned out that some of the measurements that it reports cannot be interpreted in the way one would expect from experience with other wireless interfaces, or are clearly incorrect when compared to measurements made by other devices.

Availability of software packages was another issue. While there are several repositories containing large numbers of software packages for the Nokia 770, as well as for the newer generations of the Internet Tablet series, the selection of media playing software was not very large, and the availability of packaged software for serving multimedia streams was non-existent.

Normally, when a package is missing, open source software can easily be downloaded and compiled. Unfortunately, due to the Nokia 770's limited storage capacity, installing a compiler suite and other necessary prerequisites is not very feasible. In addition to this, the processor architecture of the device is different from the x86 architecture of most common computer systems, meaning that in order to compile software for the Nokia 770 on a regular desktop computer, we were required to set up a cross-compilation environment.

Finally, there was the issue of creating a real-world testbed for performing mobile ad-hoc network measurements on a limited area without access to customized equipment. We managed to find a solution that worked well for our very small MANET, but it will not scale to much bigger networks.

1.6 Related work

Providing delay and bandwidth sensitive services such as multimedia streaming in MANETs is a major challenge, and very little research has been done in this area.

However, one related project is NonStop, by Li and Wang (2003). NonStop is "a collection of middleware-based algorithms that collectively guarantee the continuous availability of multimedia streaming services from the point of view of any mobile users in the network" (Li and Wang, 2003, p. 2). The work focuses on mobility prediction, and as most other MANET research, it is based on experiments running in simulators.

Rojviboonchai et al. (2005) presented the Ad Hoc Multipath Streaming Protocol (AMTP), which is designed to work with a multipath routing protocol, optimizing QoS by using cross-layer information to stream multimedia data over disjoint paths. Ghannay et al. (2004) extended the OLSR protocol to monitor and collect QoS parameters from the nodes of a mobile ad-hoc network. Chen et al. (1999) presented the Ad Hoc Network Management Protocol (ANMP), which builds on SNMP to collect resource information from ad-hoc network nodes. However, all this work is also based on experiments performed in simulators. Ngo et al. (2003), on the other hand, demonstrated a prototype of WANMon, "a monitoring tool that allows the user to monitor the resource consumption by ad-hoc wireless network based activities." It is able to monitor four categories of statistics: network, power, memory and CPU.

In the area of wireless link monitoring, Yeo et al. (2004) developed a framework for monitoring of wireless networks, conducting measurements by sniffing of the wireless medium. Yeo et al. (2005) showed that wireless monitoring is a reliable method of capturing traces for wireless traffic analysis. They also demonstrated how to improve the accuracy of wireless monitoring by merging traces from multiple monitoring devices. Bianchi et al. (2006) did a performance analysis of outdoor 802.11 wireless links based on link layer measurements. The link layer measurement data was obtained by modifying the open source MADWiFi driver. The work by Mahanti et al. (2007) focuses on how to assess the quality of wireless traces, and the placement of wireless sensors for performing wireless sniffing.

While the work on wireless monitoring mentioned so far has focused on monitoring of wireless networks in infrastructure mode, there has also been research on link layer monitoring in multi-hop networks. Aguayo et al. (2004) used link layer measurements to analyze the causes of packet loss in an 802.11b multi-hop network. The data was provided through per-frame measurements obtained through special features of the Prism2 wireless chipset. Das et al. (2007), on the other hand, compared the performance of various link quality metrics in multi-hop wireless networks.

On the topic of multimedia stream monitoring, van der Merwe et al. (2000) did work on analyzing packet captures of multimedia data, and developed the tool mmdump to aid in parsing said data. Zhou et al. (2007) have developed the Portable MultiMedia Monitor (p3m), for kernel space monitoring of multimedia streams. p3m is extensible, server independent, and is claimed to greatly reduce the resource requirements associated with parsing and analyzing multimedia streams. Both of these tools are designed with larger scale, wired IP networks in mind, however.

Kuang and Williamson (2002) were perhaps the first to perform a measurement study of video streaming performance on a wireless network, using a combination of packet capturing with tcpdump at each node and wireless sniffing. Koucheryavy et al. (2004) evaluated the performance of RealVideo streaming in a wireless network using end-to-end measurements, performing experiments with different signal-to-noise ratios and competing cross-traffic. However, both works perform their measurements on a regular 802.11 WLAN in infrastructure mode. Sun et al. (2005) performed a performance evaluation of both video and voice traffic in a real-world mesh network. Still, these experiments were not performed using actual video and voice streaming software, but by replaying an RTP stream using RTPtools. It thus only looks at the networking aspect, and does not take into account the resource requirements of video playback on the client node, for instance. Xue and Chandra (2006) performed real-world experiments with video streaming in mobile ad-hoc networks. Their analysis focuses on hardware and hardware resource management. In contrast to the work presented in this thesis, the nodes in their experiments were quite powerful laptops running primarily closed-source software.

Evaluation of image and video quality is perhaps a more mature research area. A good overview of the topic of video quality assessment can be found in Wang et al. (2003). Reibman et al. (2004) have also done some work on predicting video quality based on network measurements. They have developed NoParse, QuickParse and FullParse; three methods for monitoring video quality in a network, each with varying degrees of complexity and measurement requirements. However, their work is mostly theoretical and the experimental results are obtained through simulations.

Finally, there has been a small amount of work on creating real-world testbeds for mobile ad-hoc network experiments. Kaba and Raichle (2001) used signal attenuation and external antennas to limit the transmission range of wireless nodes to create multi-hop topologies, while Kaul et al. (2006) accomplished the same using noise injection. However, their work was primarily motivated by routing protocol development.

1.7 Outline

This thesis is divided into two parts: Part I gives an introduction to the problem area and discusses the different concepts. Part II then describes the experiments we performed and presents the results.

In Part I, Chapter 2 looks at mobile ad-hoc networks and mobile ad-hoc network routing protocols. Chapter 3 introduces multimedia streaming and the concept of quality of service requirements, while Chapter 4 discusses methods for measuring quality of service in video streaming.

In Part II, Chapter 5 gives an introduction to the Nokia 770 Internet Tablet, and discusses how to perform quality of service measurements using this device. Chapters 6 and 7 then present our wireless networking experiment and multimedia streaming experiments, respectively, and discuss the results of the measurements that were performed. Finally, we present a summary of our results and experiences, and an evaluation of the methods and solutions we chose for our work, as well as some ideas for future work in this area.

2 Mobile ad-hoc networks

This chapter provides an introduction to mobile ad-hoc networks, and their characteristics and applications. We also look at mobile ad-hoc network routing protocols, in particular OLSR and AODV.

2.1 Characteristics

A mobile ad-hoc network (MANET) is a network consisting of heterogeneous, self-organizing, mobile, wireless nodes. It is characterized by the absence of infrastructure; systems are both end-nodes and routers. The nodes in a MANET will typically be small devices, such as a mobile phone or a PDA, but also larger mobile devices, such as laptops, can potentially participate in a MANET. Mobility requires that the devices use wireless communication. The devices may support a number of different wireless interfaces, and communicate over the one most suitable in a particular situation.

As there is no fixed infrastructure in a MANET, the nodes in a MANET need to be self-organizing and have significant autonomy. Each device acts as both an end-node and a router. The consequence of these added responsibilities is an increased load on both CPU and bandwidth for each node. It also raises some questions regarding security. As any node may be an intermediate node along the path from source to destination, there are many opportunities for man-in-the-middle attacks, where an intermediate node can eavesdrop and/or modify data that is passing through it. There is also the possibility of spoofing attacks, where a node can provide false routing information to the rest of the network.

A MANET differs from a normal ad-hoc network in that in a regular ad-hoc network, nodes are limited to direct node-to-node communication, while a MANET allows communication through a multi-hop path.

MANETs bear some resemblance to the more well-known peer-to-peer networks. They both consist of autonomous nodes, organizing themselves without the presence of central control. Also, the nodes are heterogeneous, with varying hardware specifications and bandwidth. The main difference is that while peer-to-peer networks operate at the application layer, MANETs operate at the network layer (Hollick and Steinmetz, 2006, slide 10). Nodes in a MANET will also generally have more severe resource limitations than nodes in a P2P network.

2.2 MANET Applications

Mobile ad-hoc networks provide flexible, adaptive and low-cost networking solutions. They are suitable where communication infrastructure is unavailable, either because it does not exist, or because it has been damaged. There are also cases where deployment of infrastructure is not cost-effective or even impossible. As already mentioned in Section 1.2, an example of an area where MANETs may be particularly useful is in emergency and rescue operations, where the normal communication infrastructure is non-operational or non-existent. The use of mobile devices connected in a mobile ad-hoc network and operating independently of any existing infrastructure can help provide connectivity despite large-scale disasters.

Applications for multimedia streaming over MANETs are of course not limited to emergency and rescue scenarios. It might also be used for entertainment purposes, or as a way to provide location-based, self-scalable multimedia services, such as guided tours at a museum.

However, the volatile nature of MANETs requires special consideration when running many of the services that we take for granted in regular infrastructure networks. This thesis considers video streaming in particular, and we take a closer look at the issues surrounding video streaming services in MANETs in Section 3.5.

2.3 Routing in MANETs

Feeney (1999) suggests that MANET routing protocols can be classified according to the following criteria:

Communication model: Single-channel vs. multi-channel

Structure: Uniform vs. non-uniform

State information: Topology based vs. destination based

Scheduling: Proactive vs. reactive (on-demand)

2.3.1 Single-channel vs. multi-channel

This refers to the communication model of the wireless physical layer. With single-channel routing protocols, all nodes communicate over the same wireless channel. Multi-channel protocols, on the other hand, combine channel assignment and routing functionality. (Feeney, 1999, p. 2)

2.3.2 Uniform vs. non-uniform

In uniform routing protocols all nodes are treated and behave in the same manner, while non-uniform protocols impose a form of structure on the network by partitioning nodes into different groups. For instance, a non-uniform protocol may feature "special" nodes with extended responsibilities. (Feeney, 1999, p. 3)

2.3.3 Topology based vs. destination based

Topology based protocols maintain information about the network topology. The class of link state routing protocols falls into this category.

Destination based protocols, on the other hand, do not attempt to maintain topology information. The classic distance vector routing protocols are the best known examples of this. (Feeney, 1999, p. 4)

2.3.4 Proactive vs. reactive protocols

The difference between proactive and reactive routing protocols pertains to when the routing protocol obtains the route to the destination node.

Proactive protocols attempt to maintain an updated routing table with routes to all known destination nodes in the MANET. The nodes participating in the network exchange routing information periodically, or in response to topology changes, in order to keep an updated view of the network. This has the advantage of minimizing the delay during route look-up, at the expense of increased network traffic to maintain the routing tables.

Reactive protocols only update the routing table in response to a routing request. This has the advantage of minimizing network traffic overhead, at the expense of an increased delay during route look-up. (Feeney, 1999, p. 5)

2.4 Examples of MANET routing protocols

As an example, we present two MANET routing protocols: OLSR and AODV. These two protocols were chosen because they are two of the most common routing protocols used in MANETs and MANET research today. In addition, they are based on very different routing strategies. While they are both single-channel protocols, they differ in most other aspects.

Figure 1: OLSR: Example illustrating the use of MPRs

2.4.1 Optimized Link State Routing (OLSR)

Optimized Link State Routing (OLSR) is a proactive routing protocol for mobile ad-hoc networks. It is based on the classic link state algorithm, but has been tailored to suit the requirements of a MANET. It is therefore a topology based routing protocol. The protocol is optimal in terms of the number of hops.

In general, link state routing functions by each node periodically flooding the state of its links to the network. Every node stores the link state information received from all nodes in the network. From this information, each node gets a global view of the network and independently builds up a routing table using shortest-path-first (Dijkstra's algorithm).

OLSR includes an optimization to limit the overhead resulting from flooding link state information and other control traffic through the network. This optimization is based on the idea of multi-point relays (MPRs) (Clausen and Jacquet, 2003, p. 14). Every node selects a set of its neighbor nodes as multi-point relays. Only nodes selected as MPRs forward control traffic and other traffic meant to be distributed to the entire network. The MPRs are also used during route calculation, where they are used to form routes from a node to any destination in the network. The number of MPRs per node should be minimized, but any 2-hop neighbor must be covered by at least one MPR. The use of MPRs is illustrated in Figure 1.
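To make the MPR selection concrete, the following Python sketch computes a greedy cover of the strict 2-hop neighborhood. This is our own illustration, not the exact heuristic in the OLSR specification (which also takes node willingness into account and always selects neighbors that are the sole cover for some 2-hop node); all function and variable names are ours.

def select_mprs(node, one_hop, neighbours_of):
    """Greedy MPR selection sketch: repeatedly add the 1-hop neighbour
    that covers the most still-uncovered strict 2-hop neighbours."""
    two_hop = set()
    for n in one_hop:
        two_hop |= neighbours_of[n]
    two_hop -= one_hop          # keep strict 2-hop neighbours only
    two_hop.discard(node)

    mprs, uncovered = set(), set(two_hop)
    while uncovered:
        best = max(one_hop - mprs,
                   key=lambda n: len(neighbours_of[n] & uncovered))
        mprs.add(best)
        uncovered -= neighbours_of[best]
    return mprs

# Example: node "a" has neighbours b, c and d; only b and c reach further nodes.
neighbours_of = {"b": {"a", "e", "f"}, "c": {"a", "f", "g"}, "d": {"a"}}
print(select_mprs("a", {"b", "c", "d"}, neighbours_of))   # -> {'b', 'c'}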

2.4.2 Ad-hoc On-demand Distance Vector Routing (AODV)

Ad-hoc On-Demand Distance Vector Routing (AODV) is, as its name suggests, a reactive (on-demand) routing protocol based on the distance vector routing paradigm. It is therefore also a destination based protocol. AODV is a uniform protocol, as all nodes behave in the same manner and are treated equally by all other nodes.

Upon transmission of a data packet to a destination for which no route is known, a Route Request (RREQ) is broadcast by the sending node. This RREQ is rebroadcast by any neighboring nodes until it eventually reaches the destination. A reverse path pointing towards the source is set up by the nodes along the path to the destination. All routes are cached in a routing table for a certain amount of time to avoid excessive flooding of Route Requests. (Hollick and Steinmetz, 2006)
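The following Python sketch illustrates the route discovery idea: a RREQ flood spreads outward from the source while every node remembers which neighbour it first heard the request from, and this chain of reverse pointers is the path a Route Reply (RREP) would follow back. It is our own simplified illustration, ignoring sequence numbers, route lifetimes and RREQ rate limiting, and the function and variable names are ours.

from collections import deque

def route_discovery(adjacency, source, destination):
    """AODV-style sketch: flood a RREQ and record reverse-path pointers."""
    reverse_hop = {source: None}       # node -> neighbour towards the source
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:
            break
        for neighbour in adjacency[node]:
            if neighbour not in reverse_hop:   # duplicate RREQs are ignored
                reverse_hop[neighbour] = node
                queue.append(neighbour)
    if destination not in reverse_hop:
        return None                            # no route to the destination
    # Walk the reverse pointers back to the source to obtain the full path.
    path, node = [], destination
    while node is not None:
        path.append(node)
        node = reverse_hop[node]
    return list(reversed(path))

# Example topology: a - b - c - d (a and d are not in direct range).
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(route_discovery(adjacency, "a", "d"))   # -> ['a', 'b', 'c', 'd']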

3 Multimedia streaming

This chapter provides an overview of multimedia streaming and an introduction to the concept of quality of service (QoS). We also give a short description of some of the most common protocols used when streaming multimedia data, and discuss the issues related to multimedia streaming in mobile ad-hoc networks.

3.1 Introduction

Multimedia streaming refers to the consumption of multimedia data while it is being delivered, e.g. listening to a sound clip while it is being downloaded. A multimedia stream can be live or on demand. Live streaming refers to streaming of data from a live event, where the media data is not stored for a period of time before it is distributed. Streaming is necessary to enable viewing of live multimedia content. On demand streaming refers to streaming of non-live media, where the media data is stored and transmitted to the user upon request.

The demand for multimedia content in general has steadily increased since the introduction of broadband Internet services for home customers, and as bandwidth continues to increase, so does the demand for multimedia content. During the last few years, downloading of songs, and eventually movies, has become commonplace. Eventually, high bandwidth and low latency have made streaming of these media feasible, and live multimedia applications such as Internet telephony, or Voice over IP (VoIP), have become popular through applications such as Skype. The introduction of 3G mobile telephone networks and growing concerns about global warming have also brought increased attention to video telephony and video conferencing.

But the increasing demand for streaming multimedia content places new demands on the network infrastructure and end-systems. This is because, in contrast to traditional content, multimedia streaming requires a certain Quality of Service (QoS). The concept of QoS is described further in the next section.

3.2 Quality of Service (QoS) requirements

To properly support streaming of multimedia, parameters such as available memory, processing power and bandwidth must be taken into account. Transmission of multimedia streams across channels that are unable to fulfill the resource requirements results in choppy playback and a bad user experience. We say that multimedia streaming requires a certain Quality of Service (QoS). This thesis will use the following understanding of the meaning of the term QoS, used in Steinmetz and Nahrstedt (2004, p. 16):

Quality of Service indicates the defined and controlling behavior of a service expressed through quantitative measurable parameter(s)

According to Steinmetz and Nahrstedt (2004, pp. 14-15), the QoS requirements of a multimedia application can be specified by these parameters:

• Throughput - Determined from the needed data rate and data unit size

• Delay - Both local and end-to-end

• Jitter - The maximum allowed variance in end-to-end delay

• Reliability - Of transmissions and error detection and correction

The importance of each of these parameters depends on the nature of the multimedia application. For a telephone-conferencing application, end-to-end delay will be very important, as it is very difficult to have a conversation with large delays. On the other hand, in a video-on-demand system, end-to-end delay will be almost insignificant, while throughput and jitter will be the variables of most concern. In any case, meeting the QoS demands of an application requires that QoS demands are met at each layer in every component along the data path.

Most current computing and communication services are so-called "best effort services", based on either no guarantees or partial guarantees (Steinmetz and Nahrstedt, 2004, p. 24). Implementation of multimedia streaming in such systems is done by over-provisioning: instead of allocating scarce resources fairly among the different services, one makes sure that there are always enough resources available for each service.

3.3 Transport protocols

The transport protocol is responsible for end-to-end data transmission. There are several transport protocols in use in the Internet today, and they each have their strengths and weaknesses. Not all are suited for streaming of multimedia data.

                          TCP   UDP   RTP
Session-oriented          Yes   No    No
Reliable                  Yes   No    No
Sequence numbering        Yes   No    Yes
Time-stamping             No    No    Yes
Multicast                 No    No    Yes
Designed for multimedia   No    No    Yes
In widespread use         Yes   Yes   Yes

Table 1: Comparison of transport protocols

According to Steinmetz and Nahrstedt (2004, p. 258), to support multimedia transmission, transport protocols need to provide the following functions:

• Timing information

• Semi-reliability

• Multicasting

• NAK-based error recovery mechanism

• Rate control

The two transport protocols in the Internet Protocol Suite, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), are the two most common transport protocols in the Internet today. However, both TCP and UDP lack important features required for multimedia streaming. We will therefore also have a closer look at RTP, which is designed for streaming of multimedia data. The differences between the three protocols are summarized in Table 1.

3.3.1 Transmission Control Protocol (TCP)

TCP is the most widely used transport protocol in the Internet today. It is a connection-oriented protocol that provides reliable delivery of a stream of bytes. The data is guaranteed to be delivered in order. This reliability makes it well suited for applications such as regular file transfer, e-mail, etc. On the other hand, this also makes it unsuited for streaming of multimedia data, as the failed delivery of a single packet can introduce large delays in the reception of the media stream.

3.3.2 User Datagram Protocol (UDP)

UDP is a very simple protocol that facilitates end-to-end delivery of a single packet of data, a so-called datagram. There is no connection setup prior to the data transmission. UDP provides no delivery guarantees, sequence numbering or acknowledgements of received data packets.

3.3.3 Real-Time Transport Protocol (RTP)

RTP is an end-to-end protocol that provides network transport functions to facilitate transmission of real-time data over multicast or unicast network services. It is very flexible, and can be used over any packet-based lower-layer protocol, but UDP is the usual choice.

Components called translators and mixers can be used to change the characteristics of a media stream in order to satisfy heterogeneous requirements. Through mixers, RTP supports mixing of multiple media streams from several sources. Translators provide transformations such as format and rate conversion.

RTP is accompanied by RTCP, which provides feedback to senders and receivers about the on-going media stream. Media senders and receivers periodically send RTCP packets to the same multicast group. These RTCP packets contain information about the media stream. This information can be used by the senders to adjust their sending behavior to adapt to changing network conditions. (Steinmetz and Nahrstedt, 2004, ch. 6.5.2)
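To make the RTCP feedback concrete, the Python sketch below accumulates two of the quantities a receiver report carries: cumulative packet loss, which also appears in our streaming experiments, and interarrival jitter, following the formulas in RFC 3550. It is a deliberately simplified illustration of our own, ignoring sequence-number wraparound and packet reordering, and the class and method names are ours.

class ReceiverReportStats:
    """Per-stream receiver statistics in the spirit of an RTCP receiver report."""

    def __init__(self):
        self.jitter = 0.0          # interarrival jitter, in RTP timestamp units
        self.received = 0
        self.base_seq = None
        self.highest_seq = None
        self.prev_transit = None

    def on_packet(self, seq, rtp_timestamp, arrival_timestamp):
        """Update the statistics for one received RTP packet. Both timestamps
        are assumed to be expressed in RTP timestamp units."""
        self.received += 1
        if self.base_seq is None:
            self.base_seq = seq
        self.highest_seq = seq if self.highest_seq is None else max(self.highest_seq, seq)
        transit = arrival_timestamp - rtp_timestamp
        if self.prev_transit is not None:
            d = abs(transit - self.prev_transit)
            self.jitter += (d - self.jitter) / 16.0   # RFC 3550, section 6.4.1
        self.prev_transit = transit

    def cumulative_lost(self):
        """Expected packets minus received packets (no wraparound handling)."""
        if self.base_seq is None:
            return 0
        expected = self.highest_seq - self.base_seq + 1
        return expected - self.received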

3.4 Control protocols

In addition to a protocol to transport the data from the source to the destination(s), we need a control protocol to set up, control and tear down the streaming session. The control protocol is responsible for retrieving a description of the offered multimedia stream(s), and the negotiation of transmission parameters.

We will have a look at three of the most common control protocols in use in the Internet today: Hypertext Transfer Protocol (HTTP), Session Initiation Protocol (SIP) and Real-Time Streaming Protocol (RTSP). Table 2 contains a quick comparison of the protocols.

                                  HTTP   SIP   RTSP
Designed for multimedia           No     Yes   Yes
Multicast                         No     Yes   Yes
Simple                            Yes    No    Yes
Supports control of video stream  No     No    Yes
In widespread use                 Yes    Yes   Yes

Table 2: Comparison of control protocols

3.4.1 Hypertext Transfer Protocol (HTTP)

The HTTP protocol is most commonly used to transfer data on the World Wide Web (WWW). While originally designed to transfer hypertext pages, it is now used for transmission of other media types as well. For instance, it is often used to set up and tear down multimedia stream sessions. However, it lacks the features of the more specialized protocols, such as SIP and RTSP.

3.4.2 Session Initiation Protocol (SIP)

SIP is a signaling protocol for setting up, controlling and tearing down media streams. It is commonly used in Voice over IP (VoIP) and video conferencing. Although it is designed for a different purpose, it has many similarities to the HTTP protocol.

3.4.3 Real-Time Streaming Protocol (RTSP)

RTSP is designed to control real-time multimedia streams over IP networks. It provides media player functionality, with functions to set up and terminate data streams, and to start or stop playback or recording. It can also be extended for special requirements.

RTSP operates independently of the underlying transport protocol, and can therefore be used both over UDP and TCP. As it is based on HTTP 1.1, it is similar to HTTP in several mechanisms. But one important difference is that while HTTP is a stateless protocol, RTSP stores state information for each session. (Steinmetz and Nahrstedt, 2004, p. 308)

3.5 Multimedia streaming issues in MANETs

As mentioned, the traditional method of obtaining acceptable quality of service in a wired network is over-provisioning. But the characteristics of mobile ad-hoc network communications make this approach impossible.

There are several aspects of mobile ad-hoc networks that complicate the issue of multimedia streaming. The nodes in the network are wireless; they have resource constraints; the nodes in the network are heterogeneous; and the topology is constantly changing due to node mobility and the lack of a fixed infrastructure. In addition, there is the possibility of sabotage by malicious nodes. See Section 2.1 for more information about these issues.

3.5.1 Wireless medium

Operating on a wireless medium, MANETs are susceptible to the traditional problems with wireless communications.

Wireless transmissions are subject to various transmission errors, caused by interference from other electrical equipment, multi-path fading, or colliding transmissions by other nodes. Recovering from such errors may require retransmission of data. This leads to an increase in delay and jitter, impacting the quality of the multimedia stream.

Each node has a limited transmission range. This range depends upon many factors, such as the wireless transmission protocol, antenna size, energy use, obstacles and weather conditions. This limited range means that data must be routed through several other nodes to reach the destination. Each hop adds processing delay and increases the possibility of introducing a bottleneck into the network path. For each hop, there is also the added possibility of a transmission error occurring, which adds delay and increases jitter.

The wireless medium is a shared medium. While a wireless node is transmitting, all other nodes within range of the sender or within range of the receiver must hold their transmissions. This raises some interesting questions regarding streaming of data across multiple wireless hops. For instance, a multimedia stream from node A to node D, passing through nodes B and C, requires continuous transmission of packets from node A to node B, from node B to node C, and from node C to node D. But while e.g. node B is transmitting data to node C, both node A and node D must hold all their transmissions to avoid interfering with the transmission between B and C. The question is how this will affect delay and jitter for a multimedia stream.

Conditions affecting the quality of the wireless transmission may also change suddenly and unexpectedly. Thus, even if sender and receiver are able to agree on QoS parameter values that are optimal at one point in time, those values may be sub-optimal only seconds later.

3.5.2 Resource constraints

The devices participating in a MANET will predominantly be small devices, which implies limited processing power, memory and storage capacity. Being small mobile devices, they will normally be battery powered, which means energy consumption must be kept at a minimum.

Wireless communication will often mean limited bandwidth, and as mentioned, the nature of wireless communications means that this bandwidth is shared by all devices in the surrounding area. Additionally, an increase in network traffic places additional load on the nodes in the network, which in turn increases energy consumption. It is therefore important to keep network traffic overhead at a minimum.

3.5.3 Heterogeneity

While most or all of the nodes in a MANET will have some resource constraints imposed on them, the capabilities and resources of each node can vary greatly. A laptop computer, for example, has several times the processing power and storage capacity of e.g. a cellular phone, while a cellular phone has communication capabilities usually not available to a computer. Link bandwidth may also vary dramatically depending on the type of wireless interface used for communication.

3.5.4 Lack of fixed infrastructure

The lack of a fixed infrastructure requires that nodes function as routers in the network. This can introduce large bottlenecks if a lot of responsibility is assigned to a node with very limited resources.

3.5.5 Topology changes

The node mobility leads to continuous changes in topology, which means that routes may be formed and broken rapidly. When a route breaks, the discovery of a new route will most likely introduce delays, which will affect the quality of an ongoing media stream. In addition, the topology change may introduce new bottleneck links in the network path, leading to a reduction in bandwidth.

In the worst case, parts of the network may even separate in such a way that there is no route from one part of the network to another. This is known as partitioning. If source and destination nodes wind up in separate partitions, the media stream will be broken.

3.5.6 Malicious nodes

There is also a real concern that malicious nodes may delay or disrupt network traffic by providing false routing information to neighboring nodes. One type of malicious node is the so-called "black hole", which masquerades as a bogus destination. Models show that using the AODV routing protocol, if only 0.8% of nodes are "black holes", the resulting packet loss is close to 50%. If the fraction of "black holes" increases to 4.0%, the resulting packet loss is close to 80% (Hollick and Steinmetz, 2006, slide 84). Such high packet losses would adversely affect a multimedia stream.

3.5.7 Transport protocol design issues

Standard TCP has proved very inefficient in a MANET, due to its inability to distinguish between link failure and congestion. TCP congestion control works under the assumption that packet loss is an indication of congestion, rather than link errors. In mobile ad-hoc networks, however, link errors will happen much more frequently than in traditional, wired networks, due to factors such as mobility. Holland and Vaidya (1999) showed that as node speed approaches 10 m/s, throughput quickly drops down to 50%. TCP is also not particularly suited to multimedia streaming, as discussed above.

Because of the complex movement patterns that may arise in a MANET, it may be better to use a simple protocol such as UDP, rather than a protocol that tries to infer and adapt to network conditions from incomplete information. RTP may be a good compromise, as it is a relatively simple protocol, which can use UDP for data transport, with added features for streaming of real-time data.

                  Network level   Media level   Content level
Technical (QoS)   Yes             Yes           Yes
User (QoP)        No              Yes           Yes

Table 3: Quality of Service perspectives

4 Measuring Quality of Service

As mentioned, meeting the QoS demands of an application requires that QoS demands are met at each layer in every component along the data path. Multimedia streaming sessions require constant monitoring to ensure that QoS demands are met. Changes in network conditions or an increase in processing load may lead to broken QoS guarantees. Resource monitoring is thus an essential part of QoS enforcement (Steinmetz and Nahrstedt, 2004, p. 69).

Wikstrand (in Gulliver and Ghinea (2006)) proposes a model that segregates quality into three discrete levels: the network level, the media level, and the content level. The network level is concerned with all issues related to the flow of data in the network, e.g. bandwidth, delay, jitter and packet loss. The media level concerns quality issues related to the methods used to transform the network data back into media information, such as video and audio data. This includes parameters such as frame rate, bit rate, color depth and compression methods. Finally, the content level concerns quality factors that influence the presentation and perception of media data to the user.

Gulliver and Ghinea (2006) also propose that the media level and the content level can be viewed from two perspectives: the technical perspective, and the user perspective. The technical perspective represents what is traditionally viewed as Quality of Service (QoS), while a new term introduced by the authors, Quality of Perception (QoP), considers the user perspective of the multimedia experience. See Table 3. Any results of a measurement greatly depend on where the measurements are performed (Gulliver and Ghinea, 2006). We will not concern ourselves with measurements at the content level, as this falls outside the scope of this thesis.

4.1 Measurements at the network level
When performing network QoS measurements, one needs to consider at which layer(s) to perform the measurements, and in particular whether to perform end-to-end or hop-by-hop measurements. Hop-by-hop measurements allow greater insight into where along the path any problems occur, but they also require measurements to be performed at each node along the network path. Depending on the network, one may not have the required privileges to perform such measurements. End-to-end measurements, on the other hand, will only tell us when something is wrong, but not where along the network path the problems occur. This means that to get a complete picture, one needs to perform measurements at as many layers as possible.

4.2 Link quality
Please bear in mind that since this thesis focuses on mobile ad-hoc networks, several of the link quality measurement methods discussed in this section apply only to wireless networks. Also note that when discussing wireless networks, the term signal quality is often used interchangeably with link quality. Even so, the only mention of signal quality in the IEEE 802.11 standard is that

signal quality = PN code correlation strength

which is only defined when using DSSS modulation, and DSSS is only one of several modulation schemes used in the physical layer of the 802.11abg standards (Bardwell, 2004; IEEE-SA Standards Board, 1999). To avoid confusion, we will avoid the term "signal quality" and instead use the term "link quality", which is consistent with the term used in common wireless LAN tools, such as iwconfig on Linux.

4.2.1 Received Signal Strength Indicator (RSSI) RSSI is the only measure of signal strength defined in the 802.11 standard (Bardwell, 2004, p. 8), and is designed for internal use in wireless interfaces. ”The receive signal strength indicator (RSSI) is an optional parameter that has a value of 0 through RSSI Max....RSSI is intended to be used in a relative manner. Absolute accuracy of the RSSI reading is not specified.”(IEEE-SA Standards Board, 1999, p. 150) As pointed out in Bardwell (2004, p. 10), 802.11 requires no specific relationship between RSSI and dBm or mW. This means that vendors are free to, and certainly do, calculate the signal strength from the RSSI in different ways.

4.2.2 Signal-to-Noise Ratio (SNR)
Another measure of link quality is the Signal-to-Noise Ratio (SNR), which is defined as the ratio between the power of a signal and the power of the corrupting noise. Because this ratio can span a very wide range, it is common to express the SNR on a logarithmic decibel scale.

SNR = 10 × log10(P_signal / P_noise)

4.2.3 Expected Transmission Count (ETX)
The Expected Transmission Count (ETX) denotes the expected number of transmissions needed to successfully deliver a packet across a given 802.11 network link. It is designed as a metric to be used in routing protocols. Let p_f and p_r be the probability of packet loss during packet transmission over a given link in the forward and reverse direction, respectively. Then the probability that a transmission is not successful is:

p = 1 − (1 − p_f) × (1 − p_r)

The ETX is then defined as:

ETX = 1 / (1 − p)

(Das et al., 2007; Draves et al., 2004).
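As a concrete illustration, the ETX of a link can be computed directly from measured forward and reverse loss probabilities. The following shell snippet is a minimal sketch; the loss values 0.10 and 0.20 are made up for the example:

    # Hypothetical loss probabilities: 10% forward, 20% reverse
    pf=0.10; pr=0.20
    echo "$pf $pr" | awk '{ p = 1 - (1 - $1) * (1 - $2); printf "ETX = %.2f\n", 1 / (1 - p) }'
    # prints: ETX = 1.39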

4.2.4 Expected Transmission Time (ETT)
Draves et al. (2004) suggests a new link quality metric, building on the ETX. The Expected Transmission Time (ETT) is a "bandwidth-adjusted" ETX, where the ETX is multiplied by the ratio of the packet size S to the link bandwidth B, giving the expected time spent transmitting the packet over the link. As with ETX, it is designed to be used in routing protocols.

ETT = ETX × S / B

4.3 Route quality

4.3.1 Expected Transmission Count (ETX)
The ETX of a route is just the sum of the ETX values of the individual links along the path. That is:

ETX_path = Σ_{i=1}^{n} ETX_i

ETX is designed for a homogeneous environment, as it does not take bandwidth into account. It does not perform so well when the links have different data rates, or the nodes have multiple radios.

(Draves et al., 2004, p. 115)

4.3.2 Weighted Cumulative ETT (WCETT) To compute the quality of an entire route, Draves et al. (2004) introduces the Weighted Cumulative ETT (WCETT). In its simplest form, WCETT is defined as the sum of ETT values for all hops on the path:

WCETT = Σ_{i=1}^{n} ETT_i

Draves et al. (2004) showed that WCETT significantly outperforms ETX in a heterogeneous environment, where the nodes have multiple radios or the links have different data rates.

4.3.3 Route stability
Route changes can have a major impact on the QoS, as they can introduce sudden delays, packet loss and even disconnection. As route changes are quite common in mobile ad-hoc networks due to mobility, they must be taken into consideration during QoS monitoring in MANETs. By choosing a stable route, one can deliver more predictable QoS. Routing stability can be analyzed using the metrics prevalence, persistence and route flapping:

Prevalence is the probability of observing a given route. For a given pair of source and destination, the prevalence p_d of the route d can be computed as

p_d = k_p / n_p

where k_p is the number of times the dominant route was observed, and n_p is the number of times the given route d was observed.

Persistence is the duration for which a route lasts before a route change occurs.

Route flap refers to a change in route.

(Ramachandran et al., 2007)
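To make the prevalence metric concrete, the sketch below computes it from a hypothetical log file, routes.log, in which each line records the route (for instance as a list of hop addresses) observed at one sample time; the file name and format are assumptions made for the example:

    # Count how often the dominant (most frequently observed) route occurs,
    # then divide by the total number of observations.
    total=$(wc -l < routes.log)
    dominant=$(sort routes.log | uniq -c | sort -rn | head -n 1 | awk '{ print $1 }')
    echo "$dominant $total" | awk '{ printf "prevalence = %.2f\n", $1 / $2 }'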

4.4 Measurements at the media level Evaluation of media quality can be done through subjective or objective measurements. Each approach has its own advantages and disadvantages.

4.4.1 Subjective vs. objective measurement methods
Subjective QoS measurement requires evaluation of the media quality by humans. While it has proved to be the most reliable way of evaluating video quality, subjective evaluation has a number of problems that limit its usefulness. The most obvious is the amount of time and resources required, which limits the number of measurements that can be performed. Also, the results of the experiments performed in (Zink, 2003, p. 61) contain some evidence that the content of a video clip influences the perceived quality during subjective quality measurements. However, further research is required to draw any conclusions. Recommended reading is The ITU Radiocommunication Assembly (2002), which contains several specifications on how to conduct subjective video quality measurements. Objective video quality measurement methods can be categorized into three different classes, based on the type of measurements they use:

Full reference Compare the measured video stream to the original stream.

Reduced/partial reference Uses only parts of the information contained in the original stream.

No reference Uses only data from the measured video stream, without comparison to the original.

(Reibman et al., 2004, p. 328)

4.4.2 Mean Square Error (MSE) The MSE is defined as:

MSE = (1/N) × Σ_{i=1}^{N} (x_i − y_i)^2

where N is the number of pixels in the image or video signal, and x_i and y_i are the i-th pixels in the original and distorted signals, respectively (Wang et al., 2003, p. 1042). As it compares the signal to the original, the MSE is a full reference method.

4.4.3 Peak Signal-to-Noise Ratio (PSNR) The Peak Signal-to-Noise Ratio (PSNR) estimates the quality of a recon- structed image compared to the original, and is the most commonly used objective measurement technique for video and image quality. It is a full reference method, and is relatively easy to compute.

PSNR = 10 × log10(MAX^2 / MSE)

where MAX is the maximum possible pixel value of the image.

(Wang et al., 2003, p. 1042).
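As a small worked example (the MSE value is made up): a frame with 8-bit pixels (MAX = 255) and a measured MSE of 100 gives a PSNR of roughly 28 dB, which the following one-liner confirms:

    awk 'BEGIN { max = 255; mse = 100; printf "PSNR = %.2f dB\n", 10 * log(max * max / mse) / log(10) }'
    # prints: PSNR = 28.13 dB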

4.4.4 Relative Peak Signal-to-Noise Ratio (rPSNR)
Tao et al. (2005) proposes a new video quality metric based on the PSNR metric, called relative PSNR (rPSNR). The rPSNR is calculated relative to the quality of video transmitted over a network path that meets the desired QoS requirements. This means that the video quality can be estimated using only network statistics and some basic configuration parameters of the video application. This metric is thus a no reference method.

4.4.5 Summary of measurement methods
The majority of objective image quality assessment models are based on the MSE measure, or modifications of the MSE measure, such as the PSNR. Unfortunately, these methods have been proved to be unreliable measures of perceived visual quality. Wang et al. (2003, p. 1043) points out that the fundamental problem with these methods is the assumption "that the loss of perceptual quality is directly related to the visibility of the error signal". Wang et al. (2004) suggests that objective quality measures should consider structural information, as the human visual system is highly adapted to extracting structural information from a scene. Still, the traditional MSE and PSNR approaches remain popular, as they are easier to compute and have a clear physical meaning (Wang et al., 2004). Also, results of experiments performed by Zink (2003) suggest that the traditional method of using average PSNR to measure objective quality is particularly unsuited for layer-encoded video. The author proposes an alternative metric called the spectrum, which is designed to better capture the properties of layered video. This shows that the choice of video encoding must be taken into account when deciding upon which method of objective quality measurement to use.

4.5 QoS measurement in MANETs The process of QoS monitoring can add overhead. Coupled with the limited resources of a typical node in a MANET, this overhead may be signific- ant compared to the available resources and the resources demanded by the multimedia application. One must therefore take care so that the monitor- ing process itself is not responsible for lowering the QoS of the multimedia application. There will often be a trade-off between the accuracy of the mon- itoring, and the resource requirements of the monitoring. Due to the resource constraints in a MANET, it is imperative that all monitoring processes be lightweight. Another question is how often to perform the measurements. Resource monitoring can be performed proactively or on-demand. Proactive monitor- ing delivers measurements on a regular basis, while on-demand monitoring is initiated upon request from a user or a service. (Steinmetz and Nahrstedt, 2004, p. 70) On-demand monitoring may lower the resource consumption and allow measurements to be performed at times when it is the least intrusive. On the other hand, proactive monitoring provides a steady stream of meas- urement data that might better capture resource consumption history which can be valuable during analysis and research.

40 Figure 2: The Nokia 770 Internet Tablet

CPU           252 MHz TI OMAP 1710
Display       800x480x16 touch-screen
Connectivity  802.11g WLAN, Bluetooth 1.2
Memory        64 MB RAM
Storage       128 MB flash, 64 MB RS-MMC (upgradable to 1 GB)
Dimensions    14.1 cm × 7.9 cm × 1.9 cm
Weight        230 g

Table 4: Nokia 770 hardware specifications (maemo.org, g)

Part II Experiments

5 The Nokia 770 Internet Tablet

This chapter gives an introduction to the Nokia 770 Internet Tablet and its capabilities. We will also have a look at how to perform network measure- ments using the device.

5.1 About the device
The Nokia 770 Internet Tablet is a wireless tablet computer designed to provide Internet connectivity, such as web browsing and e-mail, as well as basic media player functionality. It has an 800x480 touch-screen, capable of displaying up to 65536 simultaneous colors. A summary of the Nokia 770's

Figure 3: Main components of the Maemo platform (maemo.org, g)

hardware specifications is presented in Table 4 on the preceding page. The Nokia 770 runs an operating system called OS 2006, which is a modified version of Debian/Linux with a graphical user interface. This was the most attractive feature of the device, as it enabled us to use software written for the Linux platform to conduct our research. The complete platform is an open source platform known as Maemo version 2. Figure 3 gives a quick overview of the components of the Maemo platform. (maemo.org, f; Nokia, 2006).

5.2 Basic software An effort was made to use standard Linux software wherever possible. How- ever, the Nokia 770 comes with a very limited selection of software by de- fault. In addition, many of the common utilities that are included, are just BusyBox versions. BusyBox is a utility that combines many common UNIX utilities into a single executable. The versions of the included utilities are often limited, but the included functionality works just as in their original UNIX counterparts. We therefore had to install some additional software on our devices.

5.2.1 Operating system
The OS version preinstalled on our devices was 1.2006.26-8. Nokia has since released two updates to the 2006 version of the operating system, and the latest version is 3.2006.49-2.1 We used both version 1.2006 and 3.2006 in our experiments. More details are given in the chapters dedicated to each set of experiments. For details on flashing the Nokia 770 with a new OS image, please see Appendix A.1. Version 1.2006.26-8 features kernel version 2.6.16-omap1, CX3110x driver version 0.8, and CX3110x firmware version 2.13.0.0.a.13.9. Version 3.2006.49-2 features kernel version 2.6.16.27-omap1, CX3110x driver version 0.8, and CX3110x firmware version 2.13.0.0.a.13.14. The 1.1 and 1.2 versions of the CX3110x driver, which are used in the newer Nokia N800, have been open source since the start of 2007, but the 0.8.1 version used in OS 3.2006.49-2 for the Nokia 770 was not open-sourced until 2007-10-29 (CX3110x Linux 2.6 driver).

5.2.2 xterm
The X Terminal Emulator is a port of the traditional xterm utility for the UNIX platform, and provided us with a graphical interface to the Nokia 770 command line. We used the osso-xterm package version 0.13.mh24. Access to a command line is necessary to perform many of the administrative tasks required to set up the device with mobile ad-hoc networking capabilities. Also, most of the applications we used provided only a command line interface.

5.2.3 SSH2 server To provide remote access to the Nokia devices, we installed OpenSSH version 1:4.6p1-5.maemo1. This way, we could get a remote shell through SSH and perform file transfers to and from the devices through SFTP. Installation of an SSH server is also required to gain root access to the devices. See Appendix A.2 for details.

5.2.4 Wireless Tools We installed version 28-1 of Wireless Tools for Linux to provide information about the physical and link layers. The Linux Wireless Tools are a set of

1Later, Nokia has also released an OS 2007 ”Hacker’s edition” for the 770, but it is not officially supported, and was not stable enough for our use at the time.

43 utilities to configure the wireless networking interfaces in Linux, and they are commonly included with most Linux distributions.

5.2.5 NTP client We installed the package ntpdate to set the system clock of the Nokias using the Network Time Protocol (NTP). By synchronizing the clocks of our measurement devices, we could properly compare the measurements from each device. Note that ntpdate does not compensate for clock drift, but only performs a one-time synchronization with the NTP servers. Even so, the relatively short duration of our experiments should make this irrelevant.
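For reference, synchronizing a device in this way is a single command; the server name below is just an example, and any reachable NTP server will do:

    # One-time clock synchronization (requires network access and root privileges)
    ntpdate pool.ntp.org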

5.2.6 OLSR daemon We will be using version 0.4.10 of OLSRd for the Maemo platform. This is just a cross-compiled and packaged version of the OLSRd provided at the official homepage www.olsr.org. OLSRd version 0.4.10 is RFC3626 compliant, and implements all auxiliary functions specified in the RFC, with the exception of Link Layer Notifications.

5.3 Network measurements In this section, we will present methods for extracting information from the physical and link layers of wireless interfaces on the Nokia 770. Most of these methods are common for all Linux distributions, however. In order to keep things as simple as possible, we preferred software that is available pre-packaged for the Nokia 770. Due to the resource constraints imposed on us by the platform, the software to perform these measurements had to be light-weight, both in size on disk, as well as in memory and CPU time consumption. Of course, in order to perform analysis we had to be able to log the output from these utilities, and the output format should be easy to parse.
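As an illustration of the kind of lightweight logging we have in mind, the following shell loop appends timestamped snapshots of the kernel's network statistics to a log file once per second. It is only a minimal sketch (the log path is an assumption); the measurement script actually used in our experiments, wlantest.sh, is listed in Appendix B.1.

    #!/bin/sh
    # Minimal sketch: periodically snapshot wireless and interface statistics.
    LOG=/media/mmc1/netstats.log
    while true; do
        date '+%s' >> "$LOG"            # timestamp in seconds since the epoch
        cat /proc/net/wireless >> "$LOG"
        cat /proc/net/dev >> "$LOG"
        sleep 1
    done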

5.3.1 The proc filesystem (procfs) The proc filesystem, or process information pseudo-filesystem, is a pseudo- filesystem that is used as an interface to kernel data structures. It is com- monly mounted at /proc. (Linux Programmer’s Manual, b) The proc filesystem enables common tools that need access to kernel structures, such as ps, top, lspci, free, etc., to be implemented entirely in user space.

Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:      5507      67    0    0    0     0          0       0      5507      67    0    0    0     0       0          0
  eth0:1489016188 2541166    0    0    0     0          0     131  56984652  598733    0    0    0     0       0          0
  eth1:     73997       0    0  798    0     0          0       0    326214       0    0    0    0     0       0          0

Figure 4: Sample output: /proc/net/dev

eth1 Link encap:Ethernet HWaddr 00:13:02:CA:01:43 inet addr:10.0.0.1 Bcast:10.255.255.255 Mask:255.0.0.0 inet6 addr: fe80::213:2ff:feca:143/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4 errors:0 dropped:142 overruns:0 frame:0 TX packets:939 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:256 (256.0 b) TX bytes:34417 (33.6 KiB) Interrupt:17 Base address:0xc000 Memory:ecfff000-ecffffff

Figure 5: Sample output: ifconfig.

Most of the network relevant information is found in the directories /proc/net/ and /proc/sys/net/. While the exact number depends on the Linux kernel version, modules, compilation options, etc., these directories contain about 500 files with information about the networking stack. For our purposes, however, we will focus on the files /proc/net/dev and /proc/net/wireless, which provide access to kernel data structures common for all network interfaces and data structures specific to wireless network interfaces, respectively. While the information in these files can be read directly, a better way is to use the utilities ifconfig and iwconfig, which are presented below.

5.3.2 ifconfig ifconfig is a standard UNIX utility used to configure the network interfaces of the system. It also provides information and statistics for each interface. The interface statistics are read from /proc/net/dev and presented in a more human-readable format. From the sample output in Figure 5, we see that ifconfig provides the following information:

Link encap    The type of network link.
HWaddr        The MAC address of the interface.
inet addr     The IP address associated with the interface.
Bcast         The broadcast address associated with the interface.
Mask          The network mask of the interface.
MTU           The maximum transfer unit set for the interface.
RX packets    Total number of packets received by the interface.
  errors      Number of packets received in error by the interface.
  dropped     Number of packets dropped by the interface.
  overruns    Number of packets dropped due to overrun.
  frame       Number of packets dropped due to frame errors.
TX packets    Total number of packets transmitted.
  errors      Number of transmission errors.
  dropped     Number of transmitted packets dropped.
  overruns    Number of transmission errors due to overrun.
  carrier     Number of transmission errors due to loss of carrier.
  collisions  Number of transmission errors due to collisions.
txqueuelen    The size of the transmission queue.
RX bytes      Number of bytes received in total.
TX bytes      Number of bytes transmitted in total.

One problem with these statistics, is that there appears to be no easy way of resetting these counters. Resetting the counters requires unloading the driver module for the network interface - simply reconnecting the link will not help.

5.3.3 netstat netstat is a utility for printing information about the Linux networking subsystems, such as current network connections, routing tables, interface statistics, and more (Eckenfels, 2001). Normally, running netstat with the -i parameter, will output the same statistics from /proc/net/dev as ifconfig -s, but the netstat command on the Nokia 770 does not support this parameter.

5.3.4 Linux Wireless Tools
The Linux Wireless Tools are tools to manipulate the Linux Wireless Extensions. There are several other tools for manipulating the wireless configuration through the Wireless Extensions API, but the Wireless Tools are

the reference implementation, and are usually installed by default on most common Linux distributions.

Inter-| sta-|   Quality        |   Discarded packets               | Missed | WE
 face | tus | link level noise |  nwid  crypt   frag  retry   misc | beacon | 21
  eth1: 0000   67.  233.  233.      0      0      0      0     13        0

Figure 6: Sample output from /proc/net/wireless on the Dell XPS m1210.

ESSID The Extended Service Set Identifier (i.e. the "name") of the wireless network.

Mode The transmission mode of the wireless interface. In an ad-hoc network, this is "Ad-Hoc".

Frequency The frequency band used by the wireless network with which the interface is currently associated.

Bit Rate The current bit rate used by the wireless interface.

Tx-Power The current transmission power.

Retry limit The retry limit of the MAC.

RTS thr The size of the smallest packet for which the node sends RTS.

Fragment thr The fragmentation threshold. IP packets larger than this will be split into several fragments before sending.

Access Point/Cell An address equal to 00:00:00:00:00:00 means that the card failed to associate with an Access Point (most likely a configuration issue). The Access Point parameter will be shown as Cell in ad-hoc mode (for obvious reasons), but otherwise works the same.

Link Quality Overall quality of the link. May be based on the level of contention or interference, the bit or frame error rate, how good the received signal is, some timing synchronization, or another hardware metric. This is an aggregate value and depends totally on the driver and hardware.

Signal level Received signal strength (RSSI - how strong the received signal is). May be arbitrary units or dBm; iwconfig uses driver meta information to interpret the raw value given by /proc/net/wireless and display the proper unit or maximum value (using 8-bit arithmetic). In Ad-Hoc mode, this may be undefined and you should use iwspy.

Noise level Background noise level (when no packet is transmitted). Similar comments as for Signal level.

Rx invalid nwid Number of packets received with a different NWID or ESSID. Used to detect configuration problems or the existence of an adjacent network (on the same frequency).

Rx invalid crypt Number of packets that the hardware was unable to decrypt. This can be used to detect invalid encryption settings.

Rx invalid frag Number of packets for which the hardware was not able to properly re-assemble the link layer fragments (most likely one was missing).

Tx excessive retries Number of packets that the hardware failed to deliver. Most MAC protocols will retry the packet a number of times before giving up.

Invalid misc Other packets lost in relation with specific wireless operations.

Missed beacon Number of beacons we have missed from the Cell or the Access Point. Beacons are sent at regular intervals to maintain the cell coordination; failure to receive them usually indicates that the card is out of range.

As mentioned earlier, all the parameters and statistics are device dependent, meaning that usually only a subset of them will be provided, depending on the hardware and driver support (Tourrilhes, 2006). As an example, compare the output from iwconfig on the Nokia 770 in Figure 8 to the output from iwconfig on the Dell laptop in Figure 7 on the next page. Some of the parameters have two modes: fixed mode and automatic mode.

48 eth1 IEEE 802.11b ESSID:"unikum" Mode:Ad-Hoc Frequency:2.412 GHz Cell: 5A:5C:CD:1D:BA:17 Bit Rate:2 Mb/s Tx-Power:15 dBm Retry limit:15 RTS thr:off Fragment thr:off Power Management:off Link Quality=67/100 Signal level=-31 dBm Noise level=-32 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:143 Missed beacon:0

Figure 7: Sample output: iwconfig on the Dell XPS m1210.

wlan0 IEEE 802.11b ESSID:"unikum" Mode:Ad-Hoc Frequency:2.417 GHz Cell: 22:61:B9:D0:DB:2E Bit Rate=11 Mb/s Tx-Power=19 dBm Sensitivity=0/200 RTS thr=2347 B Fragment thr=2346 B Encryption key:off Power Management:off Link Quality=69/0 Signal level=-26 dBm Noise level=-95 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:114 Invalid misc:0 Missed beacon:0

Figure 8: Sample output: iwconfig on the Nokia 770.

If the value of a parameter is prefixed by a "=", the value is fixed and forced to that value. If the value is prefixed by a ":", the parameter is in automatic mode, and the current value is shown, but may change. (Tourrilhes, 2006) Note that the values of both Signal level and Noise level may be undefined when in ad-hoc mode. To monitor the signal and noise levels of the nodes of an ad-hoc network, one should use iwspy instead (Tourrilhes, 2006). Unfortunately, the use of iwspy appears to be unsupported by the device driver on the Nokia 770 (see below). Still, iwconfig seems to report the correct value as long as the ad-hoc network consists of only two nodes.

iwgetid Depending on the supplied argument, iwgetid prints either the current access point address, channel, frequency, mode or protocol name. All the information reported is also reported by iwconfig, but iwgetid is designed to be easier to use for scripting purposes. (Linux Programmer's Manual, a)

iwlist delivers more detailed wireless information from a wireless interface. In particular, it supports scanning the wireless medium for wireless networks, with detailed information on the parameters of each network (see Figure 9 on the following page). During our trials, the scanning facility has been very unstable on the Nokia 770. Using it will frequently result in disconnection from the wireless

49 eth1 Scan completed : Cell 01 - Address: 02:13:02:BA:EB:8F ESSID:"unikum" Protocol:IEEE 802.11g Mode:Ad-Hoc Channel:3 Encryption key:off Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s 48 Mb/s; 54 Mb/s Quality=96/100 Signal level=-31 dBm Noise level=-31 dBm Extra: Last beacon: 132ms ago

Figure 9: Sample output: iwlist’s scan command.

eth1 Statistics collected: 00:14:A7:FA:85:34 : Quality:232/100 Signal level:-24 dBm Noise level:-24 dBm 00:19:4F:9E:B7:ED : Quality:240/100 Signal level:-16 dBm Noise level:-16 dBm Link/Cell/AP : Quality=98/100 Signal level=-26 dBm Noise level=-26 dBm Typical/Reference : Quality:70 Signal level:0 Noise level:0

Figure 10: Sample output: iwspy on the Dell XPS m1210.

network and/or the device crashing. This happens for OS 1.2006 as well as OS 3.2006.

iwspy is used to monitor a list of up to 8 addresses (Linux Programmer's Manual, c) in a wireless network. For each of these addresses, it monitors link quality, signal strength and noise level. See Figure 10 for a sample of the output from this command. Unfortunately, the features required by iwspy appear to be unsupported by the Nokia 770 WLAN driver. Running iwspy returns with the message "Interface doesn't support wireless statistic collection". This means that we have not been able to take advantage of this feature in our experiments.
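For completeness, the link statistics that iwconfig does report can be extracted with a small awk filter, for example for logging purposes. The interface name wlan0 and the "=" separators match the Nokia 770 output in Figure 8, but other drivers may use ":" instead, so treat this as a sketch:

    # Print link quality, signal level and noise level on one line.
    iwconfig wlan0 2>/dev/null | awk '/Link Quality/ {
        for (i = 1; i <= NF; i++) {
            if ($i ~ /^Quality/) { split($i, a, "=");       quality = a[2] }
            if ($i == "Signal")  { split($(i + 1), a, "="); signal  = a[2] }
            if ($i == "Noise")   { split($(i + 1), a, "="); noise   = a[2] }
        }
        print quality, signal, noise
    }'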

5.3.5 The sys filesystem (sysfs) Beginning with kernel version 2.6, much of the non-process related inform- ation has been moved from the proc-filesystem to a new pseudo-filesystem called sysfs, commonly mounted at /sys. Among other things, it is used by udev to handle device management. The file structure of sysfs reflects the physical devices and device drivers installed on the computer, and as such, the contents of /sys will vary de- pending on the hardware configuration.

50 Figure 11: The Nokia 770 connection manager

On the Nokia 770, information about the wireless network interface is located in the directory /sys/devices/platform/wlan-omap. However, the folder does not seem to include much interesting information beside what is already available in the proc filesystem. The files rssi and signal_quality contain the values for signal level and link quality, respectively, and both hold the same values as those found in the proc filesystem, and thus the same as those reported by iwconfig. On the other hand, unlike /proc/net/wireless, these variables are hardware dependent, and thus specific to the Nokia 770 WLAN adapter and device drivers. The only variables of interest that are not accessible through iwconfig are cal_iq, cal_output_limits, cal_pa_curve_data and cal_rssi, which seem to contain tuning/calibration values for the various parameters. E.g. cal_rssi contains tuning values for the RSSI (see Section 4.2.1). But without information on how to interpret the values, it is very hard to take advantage of the information contained in these variables. A request was sent to Nokia R&D, asking if they could provide some help in this regard, but we did not receive an answer. This unfortunately means that we have not been able to take any of these values into consideration.

5.3.6 Connection Manager The Connection Manager is the standard GUI tool for setting up wireless connections on the Nokia 770. When connected, it provides a graphical rep- resentation of the wireless link quality. However, this representation deviates from the intuitive interpretation of the link quality reported by iwconfig. Even when the link quality reported by iwconfig varies between 80 and 50, Connection Manager will still report the link to be of excellent quality, with 4 green bars. Figure 11 shows link quality reported by Connection Manager while iwconfig reported a link quality of 53/0. Our first idea was to examine the source code for Connection Manager,

Figure 12: Licences of Nokia 770 / Maemo 2.2 connectivity components (maemo.org, e).

in order to see how the link quality values were obtained and, if the values obtained were the same as the ones reported by iwconfig, how they were converted into the graphical representation used by Connection Manager. Unfortunately, Connection Manager turns out to be a closed-source component (in maemo connectivity UI), as demonstrated by Figure 12. We therefore had to look for other solutions. By monitoring DBUS messages using dbus-monitor, we hoped to intercept the link quality readings from Connection Manager. This is possible with Network Manager, which is a connection manager commonly used on regular Linux installs. However, it appears that the Nokia 770 Connection Manager only sends messages during connection and disconnection. We then investigated the possibility of retrieving the link quality values by sending DBUS messages to the Connection Manager using the dbus-send utility, but the values that were returned were incomprehensible. As a result, we had to abandon the hope of retrieving link quality readings from Connection Manager. Nonetheless, the results from the wireless experiment turned out to at least shed some light on the possible interpretation of the link quality values read from iwconfig. This is discussed further in Section 6.

5.3.7 OLSR daemon
The OLSR daemon offers an implementation of an ETX-like metric. In this implementation, the link quality between a node and its neighbor is defined as the probability of a successful packet transmission from the node to its neighbor. The neighbor link quality, on the other hand, is defined as the probability of a successful packet transmission from the neighbor to the node. OLSR does not support computing the ETX for a complete path.

52 *** olsr.org - 0.4.10 (Feb 16 2007) ***

--- 16:54:23.75 ------DIJKSTRA

10.0.0.2:1.00 (one-hop) 10.0.0.3:1.00 (one-hop)

--- 16:54:23.75 ------LINKS

IP address   hyst   LQ     lost  total  NLQ    ETX
10.0.0.3     0.000  1.000  0     10     1.000  1.00
10.0.0.2     0.000  1.000  0     10     1.000  1.00

--- 16:54:23.75 ------NEIGHBORS

IP address   LQ     NLQ    SYM  MPR  MPRS  will
10.0.0.2     1.000  1.000  YES  NO   NO    3
10.0.0.3     1.000  1.000  YES  NO   NO    3

--- 16:54:23.75 ------TOPOLOGY

Source IP addr  Dest IP addr  LQ     ILQ    ETX
10.0.0.2        10.0.0.1      1.000  1.000  1.00
10.0.0.2        10.0.0.3      1.000  1.000  1.00
10.0.0.3        10.0.0.1      1.000  1.000  1.00
10.0.0.3        10.0.0.2      1.000  1.000  1.00

Figure 13: Sample output: OLSR daemon olsrd at debugging level 2

It is worth noting that enabling these Link Quality Extensions breaks RFC3626 compatibility, as the OLSR daemon distributes link quality information by modifying the HELLO and TC messages of the OLSR protocol. (Lopatic, 2004)
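For reference, the Link Quality Extensions are enabled through the OLSRd configuration file. The excerpt below is only a sketch of the relevant olsrd.conf entries; the interface name and window size are examples, not the exact configuration used in our experiments:

    # /etc/olsrd.conf (excerpt)
    DebugLevel          2     # print the link/neighbor/topology tables shown in Figure 13
    LinkQualityLevel    2     # use the ETX-like metric for both MPR selection and routing
    LinkQualityWinSize  10    # number of packets in the link quality window

    Interface "wlan0"
    {
    }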

DIJKSTRA This table displays the best routes for each destination that OLSRd knows about. The leftmost IP address is the destination, and the remaining IP addresses on each line are the addresses of the nodes on the route to the destination.

LINKS This table describes each link between this node and its neighbors, using the following fields:

IP address The IP address of the interface over which we have contact with the neighbor.

hyst The current hysteresis value for the link. Hysteresis is not used when Link Quality Extensions are enabled.

LQ The link quality value for this link determined at our end.

lost The number of lost packets within the current link quality window size.

total The total number of packets received. The value is capped at the link quality window size.

NLQ The neighbor link quality value for this link.

ETX This is the expected transmission count for this link.

NEIGHBORS

IP address The IP address of the main interface of this neighbor.

LQ The link quality value of the best link we have to this neighbor.

NLQ The neighbor link quality value of the best link we have to this neigh- bor.

SYM Shows whether or not the link to this neighbor is considered symmet- ric.

MPR Indicates if this neighbor has been selected to act as an MPR for this node.

MPRS Indicates if the neighboring node has selected this node to act as an MPR for it.

will Shows the neighbor’s willingness.

TOPOLOGY This table shows topology information for the entire MANET.

Source IP addr The node that reports a link.

Dest IP addr The node to which the source node reports a link.

LQ The quality of the link as determined by the source node.

ILQ The quality of the link as determined by the destination node.

ETX The expected transmission count for this link.

54 Figure 14: Nokia 770 multimedia architecture. (maemo.org, f)

5.4 The Nokia 770 multimedia architecture
Because its multimedia architecture is built around GStreamer (Figure 14), the Nokia 770 comes with out-of-the-box support for a number of media formats.

5.4.1 GStreamer
GStreamer is the core of the Nokia 770 multimedia architecture. It is an open source multimedia framework that supports "construction of graphs of media-handling components". The multimedia data passes through a pipeline of media-handling components, each responsible for a small part of the processing required to read the media from an input and send it to the desired output in the desired format. By building on the GStreamer framework, multimedia applications can thus take advantage of advances in codec and filter technology transparently. (GStreamer: open source multimedia framework)

MIME type        Extension  Format                                                       Application
video/3gpp       *.3gp      3GPP video, MPEG4 Visual Simple Profile Level 2, H.263       Video Player
                            profile 0 level 10, 3GPP demux based on spec 3GPP TS 26.244
                            v6.0.0 (2004-03): Basic profile, excluding timed text,
                            audio-only clips not supported
video/3gpp       *.3gpp     (same as for *.3gp above)                                    Video Player
video/mpeg       *.mpe      MPEG-1 Video, up to CIF resolution @30 fps                   Video Player
video/mpeg       *.mpeg     MPEG-1 Video, up to CIF resolution @30 fps                   Video Player
video/mpeg       *.mpg      MPEG-1 Video, up to CIF resolution @30 fps                   Video Player
video/x-msvideo  *.avi      AVI file format (containing MPEG4 Visual Simple Profile      Video Player
                            Level 2/H.263 profile 0 level 10, MPEG audio layer III)
video/x-real     *.ra       RealAudio 8,9,10                                             Video Player
video/x-real     *.ram      RealNetworks Metafile                                        Video Player
video/x-real     *.rm       RealAudio, RealVideo 8,9,10 (QCIF @15 fps)                   Video Player
video/x-real     *.rmj      Real Jukebox file                                            Video Player
video/x-real     *.rmvb     RealVideo variable bitrate                                   Video Player
video/x-real     *.rpm      RealNetworks Metafile                                        Video Player
video/x-real     *.rv       RealVideo 8,9,10 (QCIF @15 fps)                              Video Player
video/x-real     *.sdp      Session Description Protocol File                            Video Player

Table 5: File formats with built-in support on the Nokia 770

The 770 comes with GStreamer version 0.10 installed, together with some device-specific elements to access the DSP. The video and audio formats supported out-of-the-box by the Nokia 770 are listed in Table 5.

5.4.2 The digital signal processor (DSP)
The Nokia 770 does not have a separate audio card. Instead, audio streams are handled by a digital signal processor (DSP). In addition to basic PCM, the DSP can also handle encoded audio streams, such as MP3. This has the advantage of saving valuable clock cycles in the ARM processor core, which means that higher quality videos can be played. The DSP can assist in the decoding of some video formats, listed in Table 7 on the next page. The decoding of multimedia data using the DSP requires special support by the decoder, however. In the Nokia 770, the decoder support is handled through the GStreamer plug-ins. See Table 6 on the following page for a list of supported formats. We also note that the available DSP tasks can be read from sysfs, in the directory /sys/devices/platform/dsp. See Figure 15. For more information about sysfs, please see Section 5.3.5.

Format        Decoded with
PCM           N/A    Raw PCM audio
MP2           DSP    MPEG audio layer-2
MP3           DSP    MPEG audio layer-3
AAC           DSP    Advanced Audio Coding, only LC and LTP profiles supported
AMR-NB        DSP    Adaptive Multi-Rate narrowband
AMR-WB        DSP    Adaptive Multi-Rate wideband
IMA ADPCM     CPU    Adaptive Differential Pulse Code Modulation
G.711 a-law   DSP    ITU-T standard for audio companding
G.711 mu-law  DSP    ITU-T standard for audio companding
WAV - MP3     DSP    MP3 audio in WAV container
WAV - PCM     DSP    PCM audio in WAV container
RM - RA10     DSP    RealAudio in RealMedia container. Uses closed source software, no support in GStreamer.

Table 6: Audio formats with built-in support on the Nokia 770 (maemo.org, b)

Container  Video format  Audio format  Video decoded with
MPG        MPEG1         MP2           CPU
AVI        MPEG4         MP3           DSP
AVI        H263          MP3           DSP
3GP        MPEG4         AAC           DSP
3GP        MPEG4         AMR-NB        DSP
3GP        MPEG4         AMR-WB        DSP
3GP        H263          AAC           DSP
3GP        H263          AMR-NB        DSP
3GP        H263          AMR-WB        DSP
RM         RV10          RA10          CPU

Table 7: Supported audio/video format combinations (maemo.org, b)

57 Nokia770-01:~# cat /sys/devices/platform/dsp/*/devname pcm0 pcm1 videofb mp2dec pcm_rec aep g711_enc g711_dec ilbc_enc ilbc_dec avsync audiopp pcm2 aacdec amrnb amrwb mp3dec mpeg4dec

Figure 15: Output of DSP tasks from sysfs

5.4.3 Video streaming
A lot of information is available on the Internet about streaming multimedia to the Nokia 770. There are several approaches to accomplish this, depending on what kind of media streams one wants to set up. Examples can be found in Ikke's Blog; Brown (2007); maemo.org (i). On the other hand, very little has been written on streaming video from the Nokia 770. Lifton (2007) has an article on how to stream live audio and video from a camera connected to a Nokia N800 to Second Life. However, the approach depends on an external QuickTime streaming server to actually distribute the stream to Second Life and other interested viewers. As we are concentrating on setting up video streaming in a MANET, our solution cannot be dependent upon existing infrastructure. We will therefore have to look for alternative solutions that enable us to stream video from a single Nokia 770. Both maemo.org (j) and maemo.org (d) contain useful information on encoding video for optimal playback on the Nokia 770 (as well as on the N800). They state that the standard Nokia 770 Video Player has some limitations due to the internal hardware of the device:

In particular:

• The maximum bandwidth the Nokia 770 can handle properly is about 352 × 288 × 15 = 1.52 megapixels per second, depending on the complexity of the action.

• The 770 is not capable of decoding video with bandwidth greater than 800 Kbps.

• It can only decode video where horizontal and vertical dimensions are multiples of 16.

These limitations have spurred the development of several tools to convert videos to Nokia 770 compatible formats. See Section 5.7 for more information about such tools. maemo.org (j) mentions that "MPlayer can be used to overcome some of the default player limitations and play full frame rate video." From the list above, it is not entirely clear which limitations are due to the internal hardware and which are due to the video playing software, but the only limitations mentioned by MPlayer for Maemo relate to the hardware pixel doubling capabilities of the Nokia 770. The display on the Nokia 770 has a resolution of 800x480 pixels. When running in full screen mode, video is played at the native resolution of Maemo, which is 800x480 pixels. When running in windowed mode, the video is played at a resolution of 600x360 pixels. Both display modes are 15:9 format, as opposed to the wide-screen broadcasting standard of 16:9. As mentioned, the Nokia 770 supports hardware pixel doubling, and the MPlayer documentation recommends a video resolution of 400x224 (16:9), 400x240 (15:9) or 320x240 (4:3) to make the most efficient use of these capabilities, resulting in improved image quality as well as reduced CPU load and battery consumption (MPlayer for Maemo). In addition, maemo.org (d) adds 352x288 (CIF), 352x208, 240x144 (15:9) and 176x144 (QCIF) to the list of recommended resolutions.
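As an illustration of these recommendations, a command along the following lines can be used on a desktop machine to bring a source video down to one of the suggested resolutions and a bit rate the device can handle. The exact option and codec names depend on the FFmpeg build (for instance, libmp3lame may be required instead of mp3), so this is a sketch rather than a recipe:

    # Transcode to 400x240 at 15 fps, MPEG-4 video at roughly 400 kbit/s (all values are examples)
    ffmpeg -i input.avi -s 400x240 -r 15 -vcodec mpeg4 -b 400k -acodec mp3 -ab 64k output.avi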

5.5 Video streaming software For the purpose of constructing a video streaming solution we required three main pieces of software: • Streaming server

• Multimedia player

• Monitoring software

On the server side, we required a streaming server that supports streaming of video using the RTP protocol. On the client side, we required video playing software capable of playing streaming video through the RTP protocol. Due to the resource constraints of the Nokia 770, all the software

should be as light-weight as possible. To allow for scripting of test cases, the software should allow configuration through the command line. In both cases we preferred software that was available pre-packaged for the Nokia 770, to avoid issues with cross-compilation. The availability of debugging information during playback of video streams is also important, particularly on the client side, as this is where the effect of the network conditions can be observed.

5.5.1 gst-launch (GStreamer) The GStreamer package (see Section 5.4.1 for more information about GStreamer) includes the tool gst-launch, which can be used to create such pipelines from the command line. Although this tool is not meant to be used for production systems, it can be used to easily create pipelines for testing purposes. On the Nokia 770, gst-launch is contained in the gst-tools software. The Farsight RTP plug-in version 0.1, which provides RTP support for GStreamer, as well as decoders for a wide variety of formats, was installed by default.
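As a simple illustration of what such a pipeline looks like, the command below decodes a local file and displays it. The element names are the generic GStreamer 0.10 ones and may need to be adjusted to the elements actually available on the device, so treat it as a sketch:

    # Play a local video file through a manually constructed pipeline
    gst-launch filesrc location=test.avi ! decodebin ! ffmpegcolorspace ! xvimagesink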

Homepage: http://gstreamer.freedesktop.org/

5.5.2 Flumotion Developed with the backing of Fluendo, Flumotion is an open source stream- ing server. While it is written in Python, the development is focused on the Linux platform, and uses the GStreamer framework for all its low-level func- tionality. No packaged version of Flumotion was available for the Nokia 770. In ad- dition, the build procedure when compiling from source is very complicated, and doing so is strongly discouraged by the developers.

Homepage: http://www.flumotion.net

5.5.3 Helix DNA Server
The Helix DNA Server is an open source streaming server from the Helix Community, founded by Real. The only media format supported in addition to the RealAudio and RealVideo formats (*.rm, *.ra, *.rv) is MP3 audio. Support for additional formats requires purchase of the commercial Helix Server. The Helix DNA Server supports RTSP/RTP streaming, as well as HTTP and TCP, UDP unicast and multicast.

No package exists for the Nokia 770, and the impression is also that it is too big and resource hungry for our needs.

Homepage: https://helix-server.helixcommunity.org

5.5.4 Icecast
Icecast is a free and open source streaming server, available for Linux and Windows. It started out as an audio streaming server, but starting with version 2.2.0, Icecast supports streaming of video via the Theora format, an open and free video codec. Both Icecast and Theora are developed by the Xiph.org Foundation, also known for the FLAC and Ogg Vorbis audio codecs. Ikke's Blog describes how to set up Icecast to stream sound to the Nokia 770, but using a desktop computer as the server. We were not able to find a version of Icecast packaged for the Nokia 770.

Homepage: http://www.icecast.org

5.5.5 FFmpeg
The FFmpeg project is a part of the MPlayer project, and is made up of several components:

ffmpeg A command line tool for converting videos.

ffserver A multimedia streaming server.

ffplay A simple media player based on SDL and the FFmpeg libraries.

libavcodec A library of all the FFmpeg audio/video encoders and decoders.

libavformat A library of parsers and generators for audio/video formats.

Several other projects are based on, or incorporate a significant amount of work from, FFmpeg. Among these are most of the other video players mentioned in this section. The FFmpeg package contains a simple streaming server, named FFserver. It also turns out that FFmpeg itself can be set up to send its output to an RTP stream. This way, it can actually function as a simple streaming server (even simpler than FFserver). Unfortunately, even after several hours of searching on the Internet, we were not able to locate a pre-packaged version of FFserver for the Nokia 770.

However, compiling from source is encouraged by the developers, and the build process is very simple, in stark contrast to several of the other projects mentioned here, such as VLC and Flumotion. See Section 7.2.1 for more information.
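To give an idea of how FFmpeg can act as the minimal RTP sender mentioned above, a command of the following form reads a file at its native frame rate and sends a re-encoded, video-only stream to a receiver. The addresses, port and bit rate are placeholders, and the exact options again depend on the FFmpeg build:

    # Stream video-only MPEG-4 over RTP to 10.0.0.2:5004 (addresses are examples)
    ffmpeg -re -i video.avi -an -vcodec mpeg4 -b 384k -f rtp rtp://10.0.0.2:5004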

Homepage: http://ffmpeg.mplayerhq.hu.

5.5.6 VideoLAN The VideoLAN project produces open-source video software. It is most known for its cross-platform video player, the VideoLAN Client (VLC), and perhaps also for the library libdvdcss, which allows reading of encrypted DVDs.

VideoLAN Client (VLC) The VideoLAN Client is a free, cross-platform media player available for Linux, Windows and MacOS X, released under the GPL (GNU Public Licence) version 2. It is extremely feature rich, and can play a wide array of media formats. In addition to being a media player, it includes streaming server functionality, with the ability to do on-the-fly transcoding of the media stream. Streams can be set up both using a graphical user interface and from the command line. Unfortunately, all these features come at a price. VLC is a rather large application, both on disk and in memory. There have been efforts to port VLC to the Maemo platform (Torra, 2006), but we were not able to locate a version packaged for the Nokia 770.

VideoLAN server (VLS) The VideoLAN Server is a legacy streaming server, which has now been mostly replaced by VLC. In fact, most of the functionality provided by VLS can be found in VLC, and the VideoLAN team recommends using VLC instead of VLS. Yet, its much smaller size and lower complexity, makes it interesting for our use. Whereas VLC supports streaming from practically any imaginable source, VLS is limited to streaming from MPEG-1, MPEG-2 and MPEG-4 files, DVDs, MPEG encoding card and DV camcorders.

Figure 16 gives an overview of the capabilities of the VideoLAN streaming solution.

Homepage: http://www.videolan.org

62 Figure 16: Overview of the VideoLAN streaming solution (Source: www.videolan.org)

5.5.7 Video Player The Nokia 770 comes with a simple video player pre-installed, aptly named Video Player. It is based on the GStreamer framework, and can play local video files as well as streaming video from the network. However, experience has shown that it is very picky in terms of which media streams it will play. We were also not able to make it play videos that did not contain an audio stream. Also, it only supports stream setup through HTTP, not RTSP.

5.5.8 MPlayer MPlayer is a video player that is widely renowned in the open source com- munity. It has been ported to a multitude of platforms, including Maemo. It has a pluggable architecture, and supports an impressive number of input formats and audio and video codecs. The Nokia 770 version of MPlayer includes a special GStreamer sound output module, which enables it to use the DSP core to decode MP2 and MP3 audio. This makes it possible to play videos of higher bit rate, as the ARM processor core does not have to decode audio data. It also includes a video output driver developed specifically for the Nokia 770. This driver uses direct framebuffer access to benefit from the hardware accelerated YUV support. (MPlayer for Maemo)

MPlayer is very configurable, and allows the output of verbose debugging information from playback and streaming sessions. It even has a "slave mode", which allows remote control of the player. Ideally for us, such a feature-rich player would include some streaming server functionality, as VideoLAN does. Alas, there does not seem to be any way of hosting a streaming session using MPlayer. The lack of streaming server functionality is confirmed in a response to a question regarding this on the MPlayer mailing list.
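For illustration, playing a network stream with verbose output enabled looks roughly as follows; the URL is a placeholder, and the most suitable video output driver (-vo) depends on the MPlayer build for the device:

    # Play an RTSP stream and print verbose debugging information
    mplayer -v rtsp://10.0.0.1:8554/test.mp4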

Homepage: http://www.mplayerhq.hu

5.5.9 Xine Together with MPlayer, Xine is the most well-known media player in the open source community. It is licenced under the GPL 2, and can play back most common, and uncommon, multimedia formats available. Xine is designed to be portable, and supports a range of platforms, in- cluding Linux, FreeBSD, Solaris, Irix and Darwin/MacOS X. Like MPlayer, it has a modular architecture, and is composed of an engine core with several different input, de-muxer, decoder and output plug-ins. We were unfortunately not able to locate a Xine version compiled and packaged for the Nokia 770.

Homepage: http://www.xinehq.de

5.6 Resource monitoring As mentioned in Section 1.6, there has been some work on resource monitor- ing of MANET nodes. However, the only tool that could be of interest to us, was WANMon. According to Ngo et al. (2003), a prototype was developed for the RedHat Linux Platform in 2003, but we were not able to find any more current information about this tool. We were therefore forced to find other solutions to perform resource monitoring on the Nokia 770.

5.6.1 The /proc filesystem As with other hardware information and statistics mentioned earlier in this section, CPU load and memory statistics can be read directly from the proc- filesystem. This is where the utilities in this section read their information. As an example, memory statistics can be accessed through /proc/meminfo and system load information can be read from /proc/loadavg, though the

64 output is less readable and harder to parse than the output of utilities de- signed for this purpose. We therefore wanted to use such utilities, if possible.

5.6.2 ps To monitor CPU usage, we initially planned to use the standard POSIX utility ps, which can output a list of processes on the system, together with the resource utilization for each process. However, as pointed out in Section 5, the utilities provided by the BusyBox shell on the Nokia 770 are not full- featured equivalents to what would be provided with a regular Linux install, for instance. The BusyBox version of ps only has a single output format (equivalent to running ps -eo pid,user,vsize,stat,cmd), which does not include CPU utilization and memory utilization statistics.

5.6.3 top
top is another standard POSIX utility that normally can be configured through command line parameters to provide customized information about all or selected processes on the system, similar to ps, but at regular intervals. Unfortunately, BusyBox again only provides a limited and non-configurable version of the utility. We could of course have installed full-featured versions of these utilities, but as we had to rely on installation of third-party software anyway, we might as well look around for something even more suited to our use.

5.6.4 sysstat The sysstat package comprises a set of utilities for monitoring the hardware resources of a system: sar is an incredibly powerful logging utility that reports statistics and coun- ters for several operating system variables. A very useful feature is that all statistics can be stored in a file and replayed with various parameters at a later time, showing only the statistics of interest. Some of the areas that sar collects statistics on are: I/O and transfer rate statistics; paging statistics; task creation activity; block device activity; IRQ statistics; network interface statistics; CPU statistics; memory statistics; and more. Most statistics reported by the other utilities in the sysstat package, are also logged by sar.

Interestingly, the network interface statistics that are reported are the same as those reported by ifconfig. But ifconfig only reports cumulative counts, and does not have the same features for logging of the statistics as sar.

pidstat reports statistics for the running tasks on a Linux system. It can be used for monitoring all tasks, or just selected tasks on the system.

iostat reports I/O statistics for devices as well as CPU statistics.

mpstat reports statistics for each individual processor in a multiprocessor system.

vmstat reports virtual memory statistics.

isag is a tool for creating graphs from resource data logged by sar. This tool is really designed for plotting daily, weekly and monthly resource graphs when sar is run as a daemon on the system. It does not work very well for plotting graphs of resource data gathered when running sar "ad-hoc".
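As an example of how we envisage using sar, the first command below records CPU, memory and network-interface statistics once per second for five minutes into a binary file, and the second replays only the network statistics from that file afterwards. The log path is an assumption, and the exact options may vary between sysstat versions:

    # Record CPU (-u), memory (-r) and network interface (-n DEV) statistics
    sar -u -r -n DEV -o /media/mmc1/sar.bin 1 300

    # Later: replay only the network interface statistics from the recorded file
    sar -n DEV -f /media/mmc1/sar.bin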

5.7 Other software In this section we list various utilities and software packages that cannot be categorized as streaming servers or playback software, but which may still be of use when working with video and multimedia streaming on the Nokia 770.

5.7.1 Mediautils Mediautils are a collection of open source utilities for transcoding, managing and watching videos on Nokia 770, N800 and N810 Internet Tablets. At the moment, Mediautils consists of tablet-encode and mediaserv. tablet-encode converts videos to Nokia compatible formats using MEn- coder. It was previously known as 770-encode. mediaserv builds on tablet-encode, to provide a web interface for on-demand transcoding and watching of videos in a media library

Homepage: http://mediautils.garage.maemo.org/

66 5.7.2 MediaConverter MediaConverter is a Java based tool to convert videos for the Nokia 770.

Homepage: https://garage.maemo.org/projects/mediaconverter/

5.7.3 VidConvert
VidConvert is a web site that offers online conversion of video files to a Nokia 770 compatible format. However, at the moment the site says that the service is offline due to the bandwidth requirements.

Homepage: http://www.bleb.org/services/vidconvert/

5.7.4 RTPTools
RTPTools is a set of applications to process RTP data, often used to conduct experiments and analyze RTP streams.

rtpdump parses and prints RTP packets.

rtpplay replays RTP sessions that have been recorded by rtpdump.

rtpsend generates RTP packets from a textual description.

rtptrans functions as a translator between unicast and multicast networks.
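For example, capturing an incoming RTP session to a file and replaying it later can be done roughly as follows; the addresses and port are placeholders for wherever the stream is actually received and replayed to:

    # Record RTP packets arriving on port 5004 to a dump file
    rtpdump -F dump -o session.rtp 10.0.0.1/5004

    # Replay the recorded session towards another host
    rtpplay -f session.rtp 10.0.0.2/5004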

Homepage: http://www.cs.columbia.edu/IRT/software/rtptools/
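As a hedged sketch of typical usage (addresses, ports and file names are illustrative, and the exact option spellings should be checked against the rtptools manual pages):

    # Record the RTP session arriving on UDP port 5004 to a dump file.
    rtpdump -F dump -o session.rtp 0.0.0.0/5004

    # Replay the recorded session towards a receiver at a later time.
    rtpplay -f session.rtp 192.168.1.20/5004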

5.7.5 LIVE555 test programs

LIVE555 is a set of C++ libraries for multimedia streaming using standard protocols, intended for building multimedia streaming applications. Among other things, it is used to implement the streaming support in MPlayer. The library includes a set of test programs designed to demonstrate how to develop applications using the LIVE555 library. The test software includes a command line RTSP client, openRTSP, which can be used to open, stream and record RTSP media streams. It also includes a small RTSP server, testOnDemandRTSPServer, which can stream from various types of media files on demand, using RTP or raw UDP.

Homepage: http://www.live555.com/liveMedia/
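A small usage sketch of openRTSP (the URL is illustrative; as far as we know, -d limits the receiving duration and -t carries the RTP/RTCP traffic over TCP):

    # Open the RTSP session, receive the media for 60 seconds and write each
    # received stream to a file in the current directory.
    openRTSP -d 60 rtsp://192.168.1.10:5454/test.mpg

    # The same request, but with the RTP/RTCP traffic tunnelled over TCP.
    openRTSP -t -d 60 rtsp://192.168.1.10:5454/test.mpg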

6 Wireless monitoring experiment

In spite of the issues with the link quality readings mentioned in Section 5, we hoped to find a correlation between the wireless link statistics provided by the operating system and the performance of the wireless link. This chapter describes how we investigated this matter by means of experiments with wireless transmission.

6.1 Motivation and purpose

Since much of the instability in mobile ad-hoc networks is due to features of the wireless medium mentioned in Section 3.5, link layer measurements are a valuable supplement to measurements at the network and application layers. However, in order to implement such measurements, one needs to decide upon a metric for link quality measurements. In addition, one needs to understand how variations in the chosen metric affect the link quality. Das et al. (2007) studied several of the common link quality metrics discussed in Section 4.2, and their performance in real-world mesh network testbeds. The purpose of this experiment was two-fold:

• Firstly, to see what kind of link layer information is available in a Linux environment generally and on the Nokia 770 specifically, and as an exercise in developing the means to extract and log these measurements in an effective manner.

• Secondly, to gain greater insight into the reliability of these measurement methods and the quality of the measurement data.

The general idea was to start out with two devices at close range, and gradually increase the distance between them, while logging as much wireless measurement data as possible on the units. By gradually increasing the distance in this manner, we changed the networking conditions in a way that we knew would affect the quality of the wireless network link. By plotting graphs and performing visual analysis of the results, we hoped to see which measurement data we could use in further experiments. We also hoped to be able to identify correlations between some of the parameters, which could form the basis for further investigations.

6.2 Experiment setup

We performed the measurements using Nokia 770 Internet Tablets. See Section 5.1 for more general information about this device. We switched the supplied 64 MB memory card with a 1 GB memory card, to provide extended storage capacity for multimedia files and log files. In the experiment, we used the 802.11b/g interface, which provides a bandwidth of up to 54 Mb/s, depending on the conditions. The interface is based on the STLC4370 chipset from STMicroelectronics, which is a variant of the CX3110x chipset from Conexant.

The experiment was performed by connecting two devices using the WLAN interface in 802.11 ad-hoc mode. The script wlantest.sh (see Appendix B.1) was running on each of the devices for the duration of each measurement, reading and storing the link quality, signal level and noise level values reported by iwconfig at 1-second intervals. The script also logged the output of ifconfig, as well as the contents of /proc/net/wireless and /proc/net/dev.

The initial idea was to vary the signal strength by manipulating the transmission power of the wireless network interface of the devices, thereby simulating changing conditions. Unfortunately, our attempts to manually adjust this were largely unsuccessful. Preliminary testing showed that lowering the transmission power using iwconfig did not seem to have any noticeable effect on reported link quality, signal level, noise level, or the automatic bit rate setting. Still, from looking at the options in the Connection Manager, it seems that the WLAN interface should support setting the transmission power. However, only two settings are available, 10 mW and 100 mW, with the default value being 100 mW. But even setting the power using the Connection Manager did not reveal any noticeable differences between the two settings. We therefore concluded that setting the transmission power in this way did not have the desired effect, or at least that the effect was not large enough to make an impact for the purposes of our experiment. The decision was then made to perform the experiment by physically increasing the distance between the devices.

We started out by placing the two devices next to each other, and gradually increased the distance between them. This way, we hoped to see how the link quality measurements varied according to the changing conditions. We performed measurements at the following distances: 0 meters, 1 meter, 2 meters, 4 meters, 8 meters, 16 meters, 32 meters and 64 meters. Care was taken to perform the measurements in an environment with as little interference as possible.
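For reference, a minimal sketch of the kind of logging loop wlantest.sh implements (the interface name and log path are illustrative; see Appendix B.1 for the actual script):

    #!/bin/sh
    # Log wireless link statistics once per second until interrupted.
    IFACE=wlan0
    LOG=/media/mmc1/wlantest.log

    while true; do
        date '+%s'             >> "$LOG"   # timestamp for later plotting
        iwconfig "$IFACE"      >> "$LOG"   # link quality, signal/noise level, bit rate
        cat /proc/net/wireless >> "$LOG"   # the same values in tabular form
        cat /proc/net/dev      >> "$LOG"   # per-interface packet and error counters
        ifconfig "$IFACE"      >> "$LOG"   # cumulative interface counters
        sleep 1
    done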

             Laptop to Nokia              Nokia to laptop
Distance     Time (mm:ss)  Average KB/s   Time (mm:ss)  Average KB/s
0 m          03:05         276.8          02:55         292.6
1 m          03:07         273.8          02:43         314.1
2 m          03:05         276.7          02:37         326.1
4 m          03:03         279.8          02:57         289.3
8 m          03:14         263.9          02:35         330.3
16 m         04:35         186.2          02:16         376.5
32 m         03:04         278.3          02:34         332.5
64 m         03:35         238.1          06:36         129.3

Table 8: Results of transferring a 50 MB file

The measurements were performed outside on a dry, sunny summer's day, in level terrain on a large open field (a soccer field). A scan of the wireless medium using iwlist was performed prior to each measurement, to reveal any other wireless networks that could cause interference. These scans had to be made on the laptop, due to the problems with iwlist scanning on the Nokias mentioned in Section 5.3.4. No other wireless networks were detected by these scans.

6.3 Measurements with laptop and Nokia 770

For the initial measurements, we used a Nokia device and a laptop computer. The laptop was a Dell XPS 1210m, running Ubuntu Linux 7.04, kernel v. 2.6.20-16. The wireless network adapter in the laptop was an Intel PRO/Wireless 3945abg, using the ipw3945 driver, version 1.2.0mp and firmware version 14.2. The Nokia device used in this set of measurements was the upgraded one running OS version 3.2006.49-2.

File transfer In Table 8 we see the results of the file transfer tests. On most occasions, transferring files from the Nokia to the laptop was significantly faster than the other way around. We suspect this is because the performance of TCP is heavily influenced by the performance of the receiver, and the Nokia is not powerful enough to receive at higher speeds.

Bit rate The Nokia reports a bit rate of 54 Mbps at all distances, while the laptop reports a constant bit rate of 11 Mbps up to and including 8 meters, and then appears to adapt its bit rate according to the changing channel conditions. From Figures 17, 18 and 19 we see how the fluctuations in the laptop's reported bit rate increase with the distance.

Figure 17: Reported bit rate at a distance of 16 meters.

Link quality For link quality, there seems to be a consistent difference of 35 to 40 dBm between the link qualities reported by the two devices. The laptop always reports the higher link quality, with the exception of a distance of 0 meters, where both devices report the same link quality. We also see that at shorter distances, the Nokia reports much more variable link quality values than the laptop. See Figures 20, 21, 22 and 23.

Noise level From Figures 24 and 25 we see that the two devices report wildly different values when it comes to noise level. This is true for all distances. The laptop's noise level values fluctuate greatly at the shorter distances, and gradually even out as the distance between the two devices increases. Intuitively, the high noise level experienced at short distances is explained by interference caused by the Nokia. As the distance between the two devices increases, this interference gradually diminishes. The noise level values reported by the Nokia remain around -95 dBm regardless of distance and conditions.


Figure 18: Reported bit rate at a distance of 32 meters.


Figure 19: Reported bit rate at a distance of 64 meters.


Figure 20: Link quality at a distance of 0 meters.


Figure 21: Link quality at a distance of 2 meters.


Figure 22: Link quality at a distance of 4 meters.


Figure 23: Link quality at a distance of 64 meters.


Figure 24: Noise level at a distance of 0 meters.


Figure 25: Noise level at a distance of 64 meters.


Figure 26: Signal level at a distance of 0 meters.

Signal level Interestingly, the devices seem to agree on the signal level. From Figures 26 and 27, we see that the reported signal levels are very similar, although, in general, the laptop reports a signal level 1 to 5 dBm higher than the Nokia.

Errors No transfer errors are reported by any of the devices at distances up to and including 32 meters, but at 64 meters we see a sharp increase in transmission errors on the Nokia. If we compare Figures 28 and 30, we see how the values for Tx errors and Tx excessive retries correlate. The curve for Tx errors follows the shape of the Tx excessive retries curve, while staying slightly above it. From Table 8 we also see how the transmission speed suffers, as it drops by 60% compared to the result from a distance of 32 meters.

Tx Excessive Retries At distances from 1 to 4 meters, the laptop and the Nokia are able to communicate without any packets exceeding the retry limit. At greater distances, we experience varying numbers of excessive retries on the Nokia 770, with the largest number of undelivered packets at a distance of 64 meters. The laptop reports no undelivered packets at any of the distances. See Figures 29 and 30.


Figure 27: Signal level at a distance of 64 meters.


Figure 28: Link layer transmission errors at a distance of 64 meters.


Figure 29: Tx excessive retries at a distance of 16 meters.


Figure 30: Tx excessive retries at a distance of 64 meters.

             3.2006 to 1.2006             1.2006 to 3.2006
Distance     Time (mm:ss)  Average KB/s   Time (mm:ss)  Average KB/s
0 m          03:41         231.7          04:16         200.0
1 m          03:07         273.8          05:16         162.0
2 m          03:55         217.9          04:28         191.0
4 m          03:14         263.9          04:39         183.5
8 m          03:46         226.6          04:35         186.2
16 m         03:36         237.0          04:30         189.6
32 m         03:03         279.8          04:10         204.8
64 m         N/A           N/A            N/A           N/A

Table 9: Results of transferring a 50 MB file

6.4 Measurements with two Nokia 770s

The second set of measurements was performed with two Nokia devices. One of the Nokias had the original OS 1.2006 installed, while the other had been upgraded to OS 3.2006. As the release notes for the 3.2006.49-2 version of OS 2006 state that it improves the quality of wireless LAN connections, we hoped to identify differences in the behavior of, and the results reported by, the two operating system versions.

It was apparent that the connection between the Nokias was generally much more unstable than in the case with one laptop and one Nokia 770. In addition, xterm would frequently crash, and in some cases it was necessary to reboot the device. This meant that several of the measurements had to be restarted. When interacting with the tablet during the measurements, trying to open menus, etc., performance was noticeably degraded by the open xterm tabs running the logging script and the ssh sessions. The range of transmission also seemed to be more limited in the case with two Nokias. At a distance of 64 meters, we were not able to establish a connection at all between the two devices.

File transfer Again, the file transfer results show a clear asymmetry, with higher speeds in one direction than in the other. This time, the higher speeds are achieved when transferring from the tablet running OS 3.2006 to the one running OS 1.2006.


Figure 31: Signal level at a distance of 0 meters.

Bit rate While the laptop and the Nokia disagreed on the bit rate on almost all occasions, the two Nokias always reported the same bit rate of 11 Mbps. The results of the file transfer tests, on the other hand, suggest that this value may not be entirely accurate.

Noise level The two Nokias also reported more or less the same noise level of -95 dBm in all the measurements, just as the Nokia 770 did in the first set of measurements.

Signal level As for the signal level, the Nokias also seem to agree, although the Nokia running OS 1.2006 reports several sharp drops in signal level that do not occur on the Nokia running OS 3.2006. It also reports slightly higher values at greater distances. See Figures 31 and 32.

Link quality The interesting part is when we compare the link quality and signal level graphs. Aside from the actual values, the curves seem almost identical. The drivers for the Nokia 770 apparently report link quality as a roughly constant offset from the signal level. Hence, we make the same observations for link quality as for signal level.

Tx Excessive Retries Unlike the previous set of measurements, excessive link layer retries occur at all distances. Not surprisingly, as the distance increases, so does the number of retries. As one can see from Figures 35, 36 and 37, the Nokia running OS 1.2006 seems to suffer from a greater number of excessive retries than the Nokia running OS 3.2006.


Figure 32: Signal level at a distance of 32 meters.


Figure 33: Link quality at a distance of 0 meters.


Figure 34: Link quality at a distance of 32 meters.

Errors Despite the number of reported excessive link layer retries, the only distance at which we see transmission errors occurring in the two-Nokia case is 32 meters. See Figure 38. Comparing with Figure 37, we see that there is a sharp increase in the number of excessive link layer retries at the same point in time. Comparing with the link quality graph in Figure 34, we see that the errors occurred just as both nodes reported a drop in link quality. For some reason, we also see that the node running OS 3.2006 reports a spike in link quality just before this drop.

6.5 Lessons learned

When starting the logging script on the Nokia, it was apparent that it did not have the processing power to run the logging at full speed. Although the script was configured to log results every second, the Nokia could only output results every 2-3 seconds.

The two Nokia 770s had trouble transmitting at a distance of 64 meters. Trial and error suggested that a connection could be established up to about 60 meters.


Figure 35: Tx excessive retries at a distance of 0 meters.


Figure 36: Tx excessive retries at a distance of 16 meters.


Figure 37: Tx excessive retries at a distance of 32 meters.


Figure 38: Link layer transmission errors at a distance of 32 meters.

In both sets of measurements there is an obvious asymmetry in the transmission speeds. The Nokia 770 running OS 3.2006 is much slower at receiving than both the laptop and the Nokia running OS 1.2006. We noticed that all transmissions started out much faster than the average speed, but gradually dropped to about half the initial rate. We suspect that this is caused by TCP's rate control, due to the receiver's buffers filling up.

It is apparent that the bit rate reported by the Nokia 770 wireless driver cannot be trusted. Comparisons with the bit rate reported by the wireless driver of the laptop and the results of the file transfers show that this value is inaccurate.

In addition, the noise levels reported by the Nokia wireless driver are obviously incorrect. The almost constant noise level reported by the Nokias, compared to the highly fluctuating noise levels reported by the laptop, leads us to conclude that noise detection on the wireless channel is broken on the Nokia 770. The erroneous noise level reporting could be an explanation for the Nokia's failure to adjust the bit rate with varying channel conditions, which in turn would explain the low transmission speeds experienced at the longer distances.

Finally, there definitely seems to be a problem with the link quality measurements reported by the Nokia devices. The values reported are consistently below 60, even when the device is sitting right next to the transmitter. Also, the maximum link quality is reported to be 0, instead of 100. There also seems to be little correlation with the link quality reported by other devices, such as laptops. Looking at the figures for link quality and signal level reported by the Nokia, we see that the graphs are identical, aside from the actual values reported. This relationship between link quality and signal strength explains why the reported link quality drops so sharply with distance. Digging a little further, we examined the log of iwconfig output from the Nokias, and discovered that the difference between the link quality and the signal level was not constant, but seemed to vary slightly, by just a couple of units. Looking at the noise level values, we noticed the correlation. The Nokia 770 wireless drivers apparently calculate link quality simply as:

Link quality = Signal level − Noise level

Due to the issues with the reported noise level already mentioned, this causes the relationship between link quality and signal level to be an almost constant difference. It also explains the strange spike in reported link quality in Figure 34. By examining the logs for the exact values reported by iwconfig, we found that the Nokia reported a signal level of −96 dBm and a noise level of −95 dBm, resulting in a link quality value of -1. The link quality value is apparently stored as an unsigned byte, causing an underflow and a reported link quality of 255.

As the distance increases, the signal strength does not decrease linearly. Rather, it decreases according to the inverse square law. This means that as the distance doubles, the signal strength decreases by a factor of four (Bardwell, 2004, p. 6). In the case of the Nokia 770, where the difference between reported link quality and signal strength is nearly constant, this means that the link quality value also follows the inverse square law. With this in mind, we understand why a link can still function very well despite reporting very low link quality values. Knowing this, we suspect that the Connection Manager is designed to interpret and convert the link quality values reported by the interface accordingly. We were unable to find any official documentation on the receive threshold for the Conexant CX3110x that the Nokia 770 uses. However, according to Bardwell (2004, p. 7), the most sensitive 802.11 cards have a receive threshold of about −96 dBm.

Looking at all of the results, the only parameter that seems relatively consistent between all three devices is the signal level. While the WLAN interface in the laptop almost always reports a higher signal level than the interface in the Nokia, the difference between the values is very slight. The exception is the sharp drops in reported signal level at short distances on the Nokia running OS 1.2006. The release notes of version 3.2006.49-2 state that it improves the quality of wireless LAN connections; these drops may be a symptom of such issues. At the shorter distances, we see a correlation between the increases in Tx excessive retries reported by the Nokia and fluctuations in signal level.

Intuitively, one would think that the number of Tx excessive retries would be reflected in the number of reported transmission errors, in that repeated errors when transmitting a packet would eventually exceed the retry limit and be counted as a Tx excessive retry. But after studying the results, this does not seem to be the case, as the total number of transmission errors remains constant throughout all measurements, with one exception.

The retry limit on the Nokia 770 is not reported by iwconfig. We also checked the output of iwlist wlan0 retry, but this only outputs the minimum and maximum values for this limit, which are 1 and 65535 respectively. We therefore have to assume that the retry limit is set to the default values. The default MAC retry limit in the 802.11 standard is 4 for long data frames with a length greater than the RTS/CTS threshold, and 7 for short frames with a length less than or equal to this threshold (IEEE-SA Standards Board, 1999, p. 484). According to the statistics we gathered, it appears that link layer retransmissions only occur in our setup when communicating at distances close to the maximum range, that is, at 64 meters for the combination of laptop and Nokia 770, and 32 meters for the combination of two Nokias.

Finally, we also note that the batteries of the Nokias lasted about 2.5 hours into the experiment, at which time we had to recharge them to complete it. Bearing in mind that the units were transmitting at full speed about half of the time, often at large distances requiring high power, this is not that bad, but not very impressive either.
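As an aside, the unsigned-byte underflow described above is easy to reproduce with plain shell arithmetic; the 0xFF mask below mimics storing the result in an 8-bit unsigned field:

    signal=-96
    noise=-95
    quality=$(( signal - noise ))   # evaluates to -1
    echo $(( quality & 0xFF ))      # prints 255, the value reported by the driver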

7 Video streaming experiment

This chapter describes our setup for conducting some initial video streaming measurements with the Nokia 770, and discusses the experiences and results of these measurements.

7.1 Network monitoring

For the monitoring of the network we built upon the experiences from Section 6. But as sar also handles monitoring of the network interfaces of the system, there was no longer any need to log the output of ifconfig or the contents of /proc/net/dev. We still logged the output from iwconfig, to store general data about the wireless link, which sar does not log.

Often when monitoring wireless networks, one will use devices with wireless network cards that have special features enabling enhanced and simplified monitoring of the wireless medium. Examples are cards based on the Prism2 chipset, or chipsets supported by the MadWiFi driver or the newer ath5k driver. Due to the closed nature of our experimental devices, however, we did not have the option of hand-picking a wireless chipset; we had to make do with the capabilities of the interfaces in the devices available to us. In order to capture complete 802.11 frames, a wireless interface has to be in monitor mode, which means that the node cannot participate actively in the network. This meant that we could not use the Nokias participating in the streaming session to also capture the wireless traffic.

7.1.1 Kismet

Kismet is a well-known wireless network sniffer, with the ability to passively monitor and log the traffic of wireless networks. We were also able to locate a version of Kismet packaged for the Nokia 770.

Homepage: http://www.kismetwireless.net/

7.1.2 tcpdump

tcpdump is a utility to dump traffic on a network by capturing packets at layer 2. It is included in most Linux distributions, and could easily be installed on the Nokia 770 from the repositories listed in Appendix A.5.

tcpdump has the ability to store the captured packets to a file using the libpcap format. libpcap is a packet capturing library, and files stored using this library can be interpreted by several other network analysis tools, such as Ethereal/Wireshark, mentioned below. By capturing the traffic to a file, the traffic can later be "replayed". tcpdump has the ability to filter based on various parameters, such as incoming network interface, source and destination ports, etc. It can also decipher traffic from some protocols, and display the contents to the user. As Grewal and DeDourek (2005) point out, there can be some issues when using tcpdump to capture packets on a wireless interface. In particular, there is a risk of logging duplicate packets due to the interface running in promiscuous mode.

Homepage: http://www.tcpdump.org/
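A sketch of the kind of invocation this amounts to (interface name, snap length and file name are illustrative):

    # Capture UDP traffic on the wireless interface, keeping only the first
    # 128 bytes of each packet, and store it in libpcap format for later
    # analysis in Wireshark/TShark.
    tcpdump -i wlan0 -s 128 -w /media/mmc1/capture.pcap udp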

7.1.3 mmdump

van der Merwe et al. (2000) present a tool called mmdump, which extends tcpdump with support for parsing multimedia traffic traces. For example, it supports setting up dynamic filters based on the port numbers announced by RTSP during the setup of a streaming session. Unfortunately, we were not able to find a version of this tool available for download anywhere to test it.

7.1.4 Wireshark

Wireshark is an open source network protocol analyzer, capable of performing deep inspection of several hundred protocols. Older versions of this software were released under the name Ethereal. The ability to plot graphs of the values of various protocol fields in a packet capture is also very convenient. Unfortunately, the functionality to customize these plots and export the results is still very limited. On the other hand, the "read filter" mechanism, which allows filtering the output from a capture file based on fields in the protocol headers of many of the hundreds of supported protocols, is very powerful. This allowed us to use the command line version of Wireshark, TShark, to output this data to a text file, which we could then feed to Gnuplot in order to create proper graphs. In addition, Wireshark contains functionality for analyzing captured RTP streams. Using this functionality, we could easily compute statistics from the RTP streams captured in scenarios 2 and 3, and export the results to a CSV file. The results could then be plotted using Gnuplot.

Homepage: http://www.wireshark.org
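As an illustration of this workflow, a sketch of extracting RTP header fields into a Gnuplot-friendly text file (the field names come from Wireshark's RTP dissector; the file names are ours):

    # Print arrival time, sequence number and RTP timestamp for every RTP
    # packet in a capture, as whitespace-separated columns.
    tshark -r capture.pcap -R rtp -T fields \
           -e frame.time_relative -e rtp.seq -e rtp.timestamp > rtp.dat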

7.2 Experiment setup

In this section, we discuss the choice of software we used to conduct our video streaming experiment, as well as the choice of codecs and protocols and the physical layout of the network.

7.2.1 Choice of streaming server

Even after several days of scouring the web, we were not able to locate a streaming server that had been compiled and packaged for the Nokia 770. Our second choice was then to try to set up a GStreamer pipeline to stream video from the Nokias. Unfortunately, it became apparent that gst-launch was not suitable for setting up RTP pipelines, as the RTP component requires configuration options which cannot be passed on the command line. We would therefore be required to create an application in order to send RTP using GStreamer. GStreamer bindings exist for several languages, including Python. We had hoped that this would be a quick way to create a simple application for setting up an RTP stream for testing purposes, but the documentation for the Python language bindings was in its current state very incomplete (Python GStreamer Documents), and GStreamer is a complex framework. Since the development of applications using the GStreamer framework is outside the scope of this thesis, we concluded that this would have been too time consuming given the time frame.

We therefore decided to try to cross-compile an existing streaming application ourselves, using the Scratchbox environment. See Appendix A.3 for information about installing Scratchbox. We went with FFserver from the FFmpeg package, as this had the fewest dependencies and is designed to be small and lightweight. Also, while FFmpeg is still in active development, VLS has not had a release since late 2004. VLC, on the other hand, depends on libraries from FFmpeg, so in order to compile it, we would have had to compile FFmpeg first anyway. There was some trouble getting it to compile and run, as there were bugs in both the build script and the RTP stream setup code. After some research and tweaking, we were eventually able to cross-compile FFserver and make it run on the Nokia 770. See Appendix A.4 for details.2

Another problem with FFserver and FFmpeg is that the official documentation is somewhat lacking. Fortunately, due to the widespread use of these components, it is not very hard to find answers to a lot of common questions from unofficial sources on the Internet, such as various technical forums.

2 We eventually also managed to compile versions of both VLS and VLC for the Nokia 770, but had some issues getting them to run properly.

Also, the example configuration file provided a good starting point.
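To give an idea of what such a configuration looks like, here is a minimal sketch of an FFserver setup for on-demand RTSP/RTP streaming; the directive names follow the sample ffserver.conf shipped with FFmpeg at the time, while ports, paths and stream names are illustrative.

    cat > /media/mmc1/ffserver.conf <<'EOF'
    Port 8090
    BindAddress 0.0.0.0
    RTSPPort 5454
    MaxClients 10
    MaxBandwidth 2000

    # One on-demand RTP/RTSP stream per pre-encoded clip.
    <Stream clip1.mpg>
    Format rtp
    File "/media/mmc1/videos/clip1.mpg"
    NoAudio
    </Stream>
    EOF

    ffserver -f /media/mmc1/ffserver.conf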

7.2.2 Choice of video player

While the standard Nokia 770 video player is able to play media streams, it has a number of limitations regarding the types of videos it is able to play. It also does not have a command line interface that allows for easy scripting of tests, nor the ability to output information about the playback of the media stream. Testing also revealed incompatibilities between FFserver and the RTSP streaming functionality of Video Player. Furthermore, it appears that Video Player does not support playing video files without an audio stream, as it would hang when trying to load our MPEG-1 video files, and refuse to load the MPEG-4 files with the error message "audio codec not supported".

We therefore decided to go with MPlayer, as a Nokia 770-optimized version of it is easily downloadable from online package repositories. The version we used was 1.0rc1-maemo.18.n770.

7.2.3 Choice of video

The FFserver/MPlayer combination appears to be much more sensitive to errors in the video stream than when the video file is played locally. Fortunately, this was not really an issue, as we had to re-encode all our test video clips anyway, and the results produced by FFmpeg streamed perfectly.

Given the screen size of the Nokia 770, we wanted to perform the experiment with full screen video playback, as we imagine that this is the most useful way to view video on the device. Initially, we were planning to also try videos that the Nokia 770 was not able to scale to full screen using hardware scaling. However, after trying to play videos encoded in several resolutions, both listed and unlisted, we were not able to find one that MPlayer would not report as being scaled to full screen using hardware scaling. This also verified that there is no hardware limitation preventing MPlayer from playing videos with resolutions that are not a multiple of 16, as reported in Section 5.4. This could mean that the resolution limit only pertains to the Nokia Video Player. Since we would not primarily be using Video Player for our experiment, we found it best to choose two resolutions that we were sure would scale nicely to full screen on the device, with no or minimal loss of quality, and without the need to add black bands.

Since the display of the device is 15:9 at 800x480 pixels, we decided to encode the video in the two listed 15:9 resolutions that are recommended: 400x240 and 240x144. 240x144 was chosen because we could see from initial trials that it yields acceptable quality on the 770, with only 1/3 of the number of pixels of the 400x240 resolution, thus requiring a much lower bandwidth. This would also test the Nokia 770's stated limitation of a maximum of 1.52 megapixels per second, as with 25 frames per second, the high-quality videos would have a total of 2.4 megapixels per second. With a bandwidth of 1000 Kbps, it would also test the stated bandwidth limitation of 800 Kbps. See Section 5 for more information about these limitations.

We made some initial tests to decide what codecs to use for the videos in the experiment. Table 7 on page 57 shows which combinations of video, audio and container formats are supported by the Nokia 770, and whether they can be decoded by the DSP or not. According to LIVE555, the RTP media types currently known to be working in MPlayer are MPEG-1, MPEG-2 and MPEG-4 elementary streams, H.263+ and MJPEG. We see that most of the video formats listed can be decoded by the DSP, possibly saving the resources of the main CPU and increasing battery life. As mentioned in Section 5.5.8, MPlayer supports DSP decoding of both MP2 and MP3 audio. Still, the status of MPlayer's support for DSP decoding of video was somewhat of a question mark. MPlayer does not integrate into the GStreamer stack of the Nokia 770, so it could not be taken for granted that MPlayer would provide DSP decoding support for all, or even any, of the formats on the list. The MPlayer for Maemo documentation does not mention the status of DSP video decoding support. We therefore decided to select two different codecs, where one could be decoded by the DSP and the other could not, and then analyze the performance difference between the two. If one performed much better than the other, we could tell whether or not there was any DSP support for that format in MPlayer, and how large an impact the DSP had on the performance.

In the search for two codecs to use for our experiment, testing revealed an incompatibility between MPlayer and FFserver when streaming H.263+ encoded video. There were also issues with getting MPEG-2 working with FFserver. Since MJPEG is more of a special-purpose video codec, we therefore chose to go with MPEG-1 and MPEG-4, as they both worked flawlessly in both FFserver and MPlayer. They are also two of the most widely used codecs in the world of digital video.

For the measurements, we used two different clips encoded in several different combinations of resolutions, bit rates and encodings. As we wanted to focus on video streaming performance, all of the clips were encoded without sound.

Clip number  Clip type  Resolution  Bit rate (Kbps)  Codec
1            Action     400x240     1000             MPEG-4
2            Action     240x144     500              MPEG-4
3            Action     240x144     200              MPEG-4
4            Action     400x240     1000             MPEG-1
5            Action     240x144     500              MPEG-1
6            Action     240x144     200              MPEG-1
7            Interview  400x240     1000             MPEG-4
8            Interview  240x144     500              MPEG-4
9            Interview  240x144     200              MPEG-4
10           Interview  400x240     1000             MPEG-1
11           Interview  240x144     500              MPEG-1
12           Interview  240x144     200              MPEG-1

Table 10: Video clips used in the experiment

As the audio and video streams are transmitted in two separate RTP streams, this meant that we would only have a single media stream to worry about when monitoring the network. The video clips were encoded from the DVD original using FFmpeg with 2-pass encoding. In order to see whether the level of action in a clip would influence the results, the two clips were chosen such that one had a high level of action and the other a very low level of action. The high-action clip (Figure 39) is from the last action scene in the movie Top Gun, which incidentally is one of the author's favorite movies. The low-action clip (Figure 40) is from an interview with Tom Cruise included in the extra material featured on the same Top Gun DVD. This clip contains very little motion and little cutting. The different combinations of video encoding parameters used are listed in Table 10. All the clips were encoded with a frame rate of 25 fps, and all of the following measurements were performed using these two clips, in the listed combinations of bit rates and resolutions. The only exception is the local playback scenario, as explained in Section 7.3.1. Note that demuxing of the container format is always done on the main CPU, so it should not have any influence on the results (maemo.org, b).
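A sketch of the kind of two-pass encoding commands used to produce such clips; the option syntax follows the FFmpeg versions of that period, and the input and output file names are illustrative.

    # Pass 1: analyse the source; the encoded output is discarded.
    ffmpeg -y -i source.vob -an -vcodec mpeg4 -s 400x240 -r 25 -b 1000000 \
           -pass 1 -f avi /dev/null

    # Pass 2: produce the actual clip using the statistics gathered in pass 1.
    ffmpeg -i source.vob -an -vcodec mpeg4 -s 400x240 -r 25 -b 1000000 \
           -pass 2 clip1.avi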

Figure 39: The action video clip

Figure 40: The interview video clip

7.2.4 Choice of resource monitoring software

We decided to use the sysstat package to handle resource monitoring on each node. The only available packages of sysstat were for Maemo versions 3.2 and 4, so we had to compile it ourselves using Scratchbox, but this was straightforward.

Monitoring of per-process CPU utilization turned out to be a CPU intensive task in itself. Initial testing showed that the pidstat utility uses up to 20% CPU time when monitoring all tasks on the system at 1-second intervals. With this in mind, we also did a quick test with top, and found that it consumed about 10% CPU when running with a 1-second update interval. However, we also found that the CPU utilization was heavily dominated by the applications we were running for our experiment, FFserver and MPlayer. We therefore settled on monitoring the resources of our Nokias with only the sar utility, running at 1-second intervals. We wrote the script resourcelogger.sh (see Appendix B.4), which was run on each node before the start of each measurement to set up and start logging the system resource utilization.
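For completeness, a sketch of the per-process monitoring we experimented with before settling on sar alone (the process selection, interval and log path are illustrative):

    # Report the CPU usage of the MPlayer process once per second.
    pidstat -u -p "$(pidof mplayer)" 1 > /media/mmc1/pidstat-mplayer.log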

7.2.5 Wireless testbed

A major hurdle was now to design and set up a multi-hop mobile ad-hoc network in a limited area. There has not been very much research on the subject, and existing solutions rely on using large outdoor areas, signal attenuation or noise injection (Kaul et al., 2006; Kaba and Raichle, 2001; Jadhav et al., 2005). All these solutions were too complex for our needs, and rely on custom hardware.

We chose to set up our ad-hoc network using the OLSR protocol (see Section 2.4.1). Intuitively, a proactive routing protocol is what best caters to the delay requirements of multimedia streaming. Also, a version of the OLSR daemon packaged for the Nokia 770 was easily available, and preliminary testing showed that it functioned very well. Through a little trial and error, we found that the bomb shelter in the basement of the department building was a good location to conduct our streaming experiment. By carefully placing the nodes in a "zigzag" pattern partly behind the thick walls, and then turning down the transmission power on the nodes on both edges, we were able to set up a multi-hop, three-node MANET. The proactive nature of the OLSR protocol simplified the setup, in that we could easily tell whether or not a link existed between any two nodes. A reactive protocol would have required us to try to establish a connection in order to test this.

Also, by logging the output of OLSR, we would easily be able to tell if a link between any of our nodes went down, causing changes in the topology. The two-node scenario was also performed within this shelter, as it provided good isolation from any other wireless networks that could interfere with our measurements.

To monitor system hardware resources, we decided to use the sar utility from the sysstat package. This was running on all nodes in the experiment, enabling us to track the resource consumption on the server, the client and the intermediate node. The properties of the wireless adapter were monitored using a slightly modified version of the script from the wireless experiment, logging the output from iwconfig. We also logged the output from OLSRd, running with debug level 1. This logs all topology changes happening in the MANET. By passing the parameters -dispin and -dispout to OLSRd, we could easily have logged all incoming and outgoing OLSR packets as well. Even so, we decided to capture the OLSR traffic by means of tcpdump instead, by logging UDP traffic on port 698. In this way, the OLSR packet traces could be inspected in the same manner as the other network traffic. Also, we wanted the output from OLSR to be printed to the screen as well as to the log file, in order to monitor the topology of the MANET. Adding the options -dispin and -dispout causes a lot of extra console output, which leads to a considerable slowdown on the 770. A quick investigation showed that the central process maemo-launcher then uses about 50% CPU time, which clearly would have affected the results of our measurements. We thus concluded that capturing the OLSR packets with tcpdump would be more reliable, and less intrusive.

To capture raw 802.11 frames, we used Kismet running on the Dell XPS 1210m laptop also used in Section 6. Our solution was to use this laptop to sniff the wireless medium during the experiment, and store all the traffic to a file for later analysis. Although Wireshark is also able to capture wireless traffic, we chose Kismet, as Wireshark produced incomplete traces during initial testing. Since our ad-hoc network was a single-frequency network, we only had to concern ourselves with sniffing a single wireless channel (see Deshpande et al. (2006) for information on monitoring multiple channels). By placing the sniffer node next to the intermediate node in our 3-node MANET, the sniffer was able to hear all three nodes in the network. This way, we avoided trouble with merging logs from several sniffers.

Each Nokia 770 participating in the measurements was running tcpdump to capture the OLSR, RTP and RTCP traffic to and from the device. Due to the limited resources of the Nokias, only the first 128 bytes of each packet were captured, which is enough to store all the protocol headers contained in the packets, but leaves out the payload.
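A sketch of what this per-node logging amounts to (interface names, ports and file names are illustrative; OLSR traffic uses UDP port 698, and the RTP/RTCP ports are pinned on the client as described below):

    # Run the OLSR daemon on the ad-hoc interface with debug level 1, so that
    # topology changes are both shown on the console and written to a log file.
    olsrd -i wlan0 -d 1 | tee /media/mmc1/olsrd.log &

    # Capture OLSR and streaming traffic with a 128-byte snap length.
    tcpdump -i wlan0 -s 128 -w /media/mmc1/node.pcap \
            'udp port 698 or udp port 5000 or udp port 5001' &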

In order to easily set up the network-related logging on each node, we wrote the script networklogger.sh (Appendix B.5). However, while performing the third and final set of measurements, we experienced more problems with the sniffing, which we unfortunately were unable to solve in time. Thus, the third set of measurements had to be performed without wireless sniffing enabled.

Like any other network traffic, RTCP packets can be captured using tcpdump. One difficulty with setting up capturing of RTP/RTCP traffic, though, is that RTSP by default picks a random even port number for the RTP stream. The RTCP traffic corresponding to each RTP stream will then be sent on the odd port number following the chosen RTP port. Fortunately, MPlayer has an option, -rtsp-port, which allows the user to direct the RTP traffic to a specific port number, thus making the task of capturing the traffic with tcpdump very simple. Finally, we logged all output from the streaming server, thereby logging all the requests made by the client.

The plan was initially to also perform a subjective evaluation of the streaming video quality, by using the "editing list" feature in MPlayer to mark low-quality parts. However, the lack of a hardware keyboard made this impossible, and we did not have access to an external Bluetooth keyboard at the time. Instead, we had to settle for a general observation of the quality of each streaming session.

The packet captures made by Kismet and tcpdump were analysed using Wireshark and its command line version, TShark, as these are capable of inspecting RTP and RTCP packets, OLSR, TCP and UDP packets, as well as 802.1 and 802.11 frames. The information that is accessible at the different protocol layers is summarized in Table 11. The headers of several of the protocols contain information that is not shown; only protocol fields that would be considered interesting during an analysis are listed.

Table 12 summarizes which of the network-related QoS metrics mentioned in Section 4 we are able to measure with our current setup. From the experiment in Section 6, we learned that the noise level reported by the Nokia 770 wireless driver cannot be trusted. This also means that we are unable to compute the SNR of the wireless link. As mentioned in Section 5.3.7, OLSRd supports computation of ETX for the wireless links. But as this breaks RFC compatibility and could possibly interfere with our measurements, we chose to disable it. As we are logging the output of the OLSR daemon, we also have the ability to compute the route stability metrics prevalence, persistence and route flap. Our cross-layer monitoring strategy is summarized in Table 13.

802.11: Frame type and subtype; More fragments; Retry; Duration; BSSID address; Source MAC address; Receiver MAC address; Transmitter MAC address; Fragment number; Sequence number

IP: Total length; Identification; Time to live; Next level protocol; Source IP address; Destination IP address

UDP: Source port; Destination port; Length

OLSR: Packet length; Packet sequence number; Message type; Message size; Originator address; Hop count; Message sequence number

RTP: Payload type; Sequence number; Timestamp; Synchronization source (SSRC) identifier; Contributing source (CSRC) identifier

RTCP: Sender SSRC; NTP timestamp; RTP timestamp; Sender's packet count (SR only); Sender's octet count (SR only); per source: Timestamp of last SR, Delay since last SR (RR only), Interarrival jitter (RR only), Cumulative number of packets lost (RR only), Fraction lost (RR only), Highest sequence number received (RR only)

RTSP: Media control information; Media location; Transport protocol

Table 11: Information accessible from the different protocol layers

Metric           Can measure?  Method
Signal strength  Yes           iwconfig
Noise level      No*           (iwconfig)
SNR              No*
ETX              Yes           OLSRd
ETT              No
WCETT            No
Prevalence       Yes           OLSRd
Persistence      Yes           OLSRd
Route flap       Yes           OLSRd

Table 12: Summary of network measurement possibilities

Layer           Monitoring method
Hardware        sar
Link            iwconfig, Kismet
Network/MANET   OLSRd, tcpdump
Transport       tcpdump
Session         FFserver
Application     FFserver, MPlayer
Media           Simple subjective evaluation

Table 13: Monitoring methods by layer

7.3 Performing the experiment

Before starting with the experiments, we measured the average CPU utilization when idle to be about 9% each for user and system CPU, totalling about 18% CPU utilization when idle.

7.3.1 Scenario 1: Local playback

The purpose of this scenario was to see how much of the resources are consumed by the playback of the media itself, without involving data transmission, routing and other complicating factors. It also served as a test of the resource monitoring software. Each of the 6 different videos was played back using MPlayer. For each video, the resource consumption was monitored using the script in Appendix B.4 on page 153, which logs output from sar and pidstat.

Figures 41, 42, 43 and 44 show graphs of the CPU utilization for the 400x240 1000 Kbps clip and the 240x144 200 Kbps clip in both MPEG-4 and MPEG-1 encoding. From the graphs we can see how the total CPU utilization is close to 100% for the 1000 Kbps clip in the case of both MPEG-4 and MPEG-1. The outlook is somewhat better for the 200 Kbps clips, where the total CPU utilization is around 60% on average. In both cases, we see that the CPU utilization for each clip is slightly higher when playing the MPEG-4 version. This is not surprising, since MPEG-4 is more complex to decode than MPEG-1. The results for the 500 Kbps clips are not shown here, but were somewhere in between.


Figure 41: CPU utilization: Local playback of action clip at 400x240 1000 Kbps with MPEG-4 encoding.

When comparing the graphs for the action and interview clips, we see that they both have about the same CPU utilization, despite the fact that the level of action in the interview clip is very small compared to the action clip. One difference, though, is the sharp drops in CPU utilization that happen 4 times during playback of the interview clip. These drops occurred for all the tested combinations of resolution and bit rate, but are most prominent for the 400x240 1000 Kbps clip. The cause of these sharp drops was quite a puzzle at first, until the obvious reason occurred to us: the interview clip is divided into 4 sequences, each separated by a still frame showing the topic of the next section of the interview. These frames are displayed for about 6 seconds each. Thus, we see that the content of the video clip being played does indeed affect the CPU utilization. Still, very little movement is needed to bring the CPU utilization up to the level of the action clip.

Still unsure of MPlayer's support for DSP video decoding, we decided to compare the performance of MPlayer to the Nokia 770 Video Player. As mentioned in Section 5, decoding through the DSP is handled automatically through GStreamer plug-ins in the Nokia 770 multimedia architecture. The standard Video Player included with the device uses the GStreamer architecture, so by measuring the CPU utilization when playing a video using MPlayer, and comparing it to the CPU utilization when playing it in the standard Nokia 770 Video Player, we hoped to be able to see the difference between regular and DSP-optimized decoding.


Figure 42: CPU utilization: Local playback of action clip at 400x240 1000 Kbps with MPEG-1 encoding.


Figure 43: CPU utilization: Local playback of action clip at 240x144 200 Kbps with MPEG-4 encoding.


Figure 44: CPU utilization: Local playback of interview clip at 400x240 1000 Kbps with MPEG-4 encoding.

But, as mentioned in Section 5.5.7, the Nokia 770 Video Player apparently does not support playback of video files that do not contain an audio stream. We were therefore not able to use our regular video clips for this test. Instead, we chose to use the video clip Discovery.avi, which is included with the Nokia 770 by default. This clip was chosen under the assumption that it is reasonably well optimized for playback on the Nokia 770 using the standard video player. By examining the interrupt statistics from the two experiments, we learned that the number of interrupts per second for interrupt number 10 was twice as high in the case of the Nokia Video Player. The output from /proc/interrupts told us that interrupt 10 is a DSP interrupt, which suggests an increase in DSP utilization, which in turn would explain the much lower CPU utilization.

Afterwards, we performed the measurements again, using the same video clip, Discovery.avi, but converted to MPEG-1 video with MPEG-1 layer 2 audio. According to maemo.org (b), the DSP does not support decoding of MPEG-1 video. Note that the original video had 15 frames per second, but due to limitations in MPEG-1, this had to be increased to 30 in the transcoded video, although the bit rate was kept very close to the original. From Figures 51 on page 106 and 52 on page 107, we see that playing this video in Video Player causes an even higher number of DSP interrupts per second, while the number for MPlayer remains similar. However, when looking at Figures 49 on page 105 and 50 on page 106, we see that the CPU utilization for Video Player has increased drastically, and is now much higher than for MPlayer.


Figure 45: CPU utilization: Playing MPEG-4 video in MPlayer


Figure 46: CPU utilization: Playing MPEG-4 video in Video Player


Figure 47: DSP interrupts/second: Playing MPEG-4 video in MPlayer


Figure 48: DSP interrupts/second: Playing MPEG-4 video in Video Player


Figure 49: CPU utilization: Playing MPEG-1 video in MPlayer


Figure 50: CPU utilization: Playing MPEG-1 video in Video Player


Figure 51: DSP interrupts/second: Playing MPEG-1 video in MPlayer


Figure 52: DSP interrupts/second: Playing MPEG-1 video in Video Player

Figure 53: Node-to-node experiment setup

Summary All of the streams played smoothly. This shows that the maximum number of megapixels per second that the 770 is able to handle is somewhat higher than the 1.52 megapixels per second reported by maemo.org (d). Our test videos with a resolution of 400x240 have 400x240x25 = 2.4 megapixels per second, and played without problems. Even so, the graphs show that the CPU load is close to 100% when playing the most demanding video clips, and that use of the DSP for decoding of video data significantly reduces this CPU load. Subjectively, the quality of all the clips was good enough to watch on the Nokia 770, although the 240x144 200 Kbps video may be too grainy to show some details in the image, which could be important in some practical scenarios. In this respect, 240x144 at 500 Kbps probably has a better bandwidth/quality ratio. We can also see that the difference in resource consumption between MPEG-1 and MPEG-4 when not decoding using the DSP is slightly in favor of MPEG-1.

7.3.2 Scenario 2: Node-to-node

In this scenario, we wanted to see the result of streaming video over a single wireless link between two neighboring nodes, without the complicating factors of routing and route management at the mobile ad-hoc network layer. This was also an opportunity to see how the wireless data transmission impacts video playback on the Nokia 770, and to test the performance of the video streaming software on both the server and the client. There were some initial difficulties in getting a complete capture of the wireless traffic, but these problems were solved by making sure that the WLAN adapter was not associated with an ESSID when launching Kismet.
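One way to do this is to clear the association with the wireless tools before starting the capture; a minimal sketch, assuming the interface is called wlan0 and that iwconfig is used (the exact steps depend on the driver and the Kismet configuration):

# Disable ESSID checking so the adapter is not associated when Kismet starts
# (interface name is an assumption).
iwconfig wlan0 essid off
kismet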

Figure 54: CPU utilization: Server node streaming action clip at 400x240 1000 Kbps with MPEG-4 encoding.

After redoing most of the second set of measurements, we were able to capture a complete trace of the traffic.

The hardware layer From Figures 54 and 55 on the next page we see that the client has about the same CPU load as in the local playback scenario, though slightly higher, resulting in a total CPU load of 100% at certain moments and leading to somewhat choppy playback in the case of the 1000 Kbps bit rate at 400x240 resolution. The server has an average CPU load of about 30%. Unlike on the client, where the user CPU time from the playback is the dominating factor, the system CPU time is about twice as high as the user CPU time on the server.

The difference in CPU utilization between streaming MPEG-4 and MPEG-1 video was small or non-existent on both the server and the client side. When we decreased the resolution and bit rate of the video stream, the difference was most noticeable on the client side, where the CPU utilization dropped by about 30%. On the server, the CPU utilization dropped by 10%, totalling about half the utilization of the 1000 Kbps case. See Figures 56 and 57.

By comparing the results for client and server, we are beginning to see evidence that the resources of the streaming client can easily become a bottleneck in a MANET video streaming solution.

Just as in the local playback scenario, the content of the video clip did not significantly affect the CPU utilization of the client, with the exception of certain points in the interview clip, as noted in Section 7.3.1. The same was also true for the server.

Figure 55: CPU utilization: Client node playing action clip at 400x240 1000 Kbps with MPEG-4 encoding.

Figure 56: CPU utilization: Server node streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

Figure 57: CPU utilization: Client node playing action clip at 240x144 200 Kbps with MPEG-4 encoding.

When looking at the report for block device activity, we can see how data was read from and written to disk during the streaming session. The tall, crossed bars correspond to logging data being written to disk at a set interval of about every 30 seconds. When looking at the graphs for the client, we can see that this is the only block device activity happening on the system during the streaming session. See Figure 61 on page 113 as an example. Graphs for other combinations of resolutions and bit rates were similar.

From Figures 58, 59 and 60, we see that the server reads the file from disk in bursts of 1000 Kbps, regardless of the bit rate of the stream or the size of the file. On the other hand, when we examine graphs of the network transfer statistics, we see that the throughput varies considerably throughout the stream. For the 1000 Kbps clips, we can see variations in transmission speed as large as 1500 Kbps, while for 500 Kbps and 200 Kbps, we see variations of about 700 Kbps and 200 Kbps respectively. The results for MPEG-1 encoding are similar to the results for the MPEG-4 streams, and the graphs for the client showing the amount of data received are also similar.

It thus appears that although FFserver reads the clip from disk at an almost constant data rate, it does not transmit it over the network at a constant rate. Rather, the fluctuations in transmission rate seem to increase with the bit rate of the video stream. This suggests that increasing the bit rate of the video stream increases the bandwidth requirements by more than the increase in bit rate. In fact, from our figures, the maximum throughput requirements seem to be about 150% of the bit rate of the video stream.

Figure 58: Block device activity: Server streaming action clip at 400x240 1000 Kbps with MPEG-4 encoding.

Figure 59: Block device activity: Server streaming action clip at 240x144 500 Kbps with MPEG-4 encoding.

Figure 60: Block device activity: Server streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

Figure 61: Block device activity: Client streaming action clip at 400x240 1000 Kbps with MPEG-4 encoding.

Figure 62: Network activity: Server streaming action clip at 400x240 1000 Kbps with MPEG-4 encoding.

The link layer The link quality was reported to be around 60 by both the server and the client node throughout the entire session, with only a couple of exceptions, noted below. In Figure 65 on page 116 we see how the client reports a sudden drop in link quality while streaming the action clip at 400x240 1000 Kbps with MPEG-1 encoding. This drop in link quality is not reported by the server. Such a drop is also reported by the client while streaming the action clip at 240x144 200 Kbps with MPEG-1 encoding, and again, the drop is not reported by the server node. See Figure 66 on page 116. In all other measurements, the link quality is reported by both server and client to be relatively stable.

These two drops in link quality do not seem to have any impact on the network transmission speed, and analysis of the traces captured by the wireless network sniffer shows that retransmissions at the link layer are almost non-existent.

Figure 63: Network activity: Server streaming action clip at 240x144 500 Kbps with MPEG-4 encoding.

Figure 64: Network activity: Server streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

Figure 65: Link quality: Client streaming action clip at 400x240 1000 Kbps with MPEG-1 encoding.

Figure 66: Link quality: Client streaming action clip at 240x144 200 Kbps with MPEG-1 encoding.

Figure 67: Cumulative packet loss as reported by RTCP (client capture): Streaming action clip at 400x240 1000 Kbps

Clip no.   Retransmissions   Total frames   % Retransmissions
1          22                65533          0.03
2          17                66616          0.03
3          18                38124          0.05
4          13                38638          0.03
5          5                 21065          0.02
6          6                 21559          0.03
7          54                89828          0.06
8          37                91585          0.04
9          12                52556          0.02
10         24                54430          0.04
11         10                32193          0.03
12         3                 31249          0.01

The transport layer We then analyzed the RTP traffic using Wireshark's RTP stream analysis. Figures 67 and 68 show a comparison of the cumulative packet loss for the two different encodings, as reported by RTCP packets captured on the client. We see that the packet loss for the MPEG-4 stream is consistently much higher than for the MPEG-1 stream; the MPEG-4 stream has about 30% higher packet loss than the MPEG-1 stream of the same video.

Figure 68: Cumulative packet loss as reported by RTCP (client capture): Streaming interview clip at 400x240 1000 Kbps

The graphs for the interview clip at 400x240 resolution show the same trend. For all of the remaining clips, the packet loss reported by RTCP was 0 or close to 0. Despite the packet loss reported by RTCP, the wireless captures show very few link layer retransmissions. This is consistent with the results from the wireless experiment in Section 6 on page 68, where we could see that link layer retransmissions almost only occurred when devices transmitted at close to the maximum possible communication distance.

The network error statistics reported by sar showed 0 transmission errors and 0 collisions. One might expect that, since the system was operating at close to 100% CPU utilization, the packet loss reported by RTCP would be due to an overflow in the receive buffers on the client. But, as mentioned in the analysis of the hardware layer, sar also reports 0 buffer overflows, so this is not the case. This leads us to believe that the packet loss is due to the RTP stack in the streaming client software, MPlayer, dropping packets.

RTCP also has a field that reports the fraction of RTP packets lost since the last Sender Report or Receiver Report was received. Note that since the reports do not arrive at a set interval, they cannot be used directly to see if or how the packet loss rate changes during the streaming session. However, the loss rate per second can be obtained by dividing the loss fraction by the difference between the NTP timestamps of consecutive reports, expressed in seconds. Unfortunately, time prevented us from further in-depth analysis of this RTP/RTCP data.
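As a simple illustration of this calculation (the numbers are hypothetical and not taken from our traces), the per-second loss rate between two consecutive reports can be computed as follows:

# Loss fraction from the latest Receiver Report, and the NTP timestamps
# (in seconds) of the previous and current reports; all values are hypothetical.
awk -v frac=0.12 -v ntp_prev=1000.0 -v ntp_cur=1002.5 \
    'BEGIN { printf "loss rate: %.3f per second\n", frac / (ntp_cur - ntp_prev) }'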

The IP bandwidth measurements produced by the RTP stream analysis in Wireshark are similar to the network utilization measured by sar, although in our case, the measured RTP stream is the only traffic on the wireless interface. If we had multiple streams or a lot of other traffic, this analysis would provide a better view of the bandwidth consumed by the stream itself. As a note, these measurements are much more detailed, as they are made per RTP packet and not just sampled once per second, as in our sar logging setup.

Unfortunately, it seems that the jitter calculations fail for the MPEG-4 streams, as we got results in the 7-digit range. It turns out that the reason is that Wireshark is able to detect the payload of our MPEG-1 streams, but not the MPEG-4 streams. As the jitter calculations require knowledge of the sampling rate of the RTP payload (Wireshark), which is MPEG-4 video in this case, the calculations fail. According to Wireshark, the sampling frequency is 8 kHz for most audio codecs and 90 kHz for most video codecs. To be sure, we consulted Kikuchi et al. (2000), which confirms that the timestamp resolution is set to a default value of 90 kHz, unless otherwise specified by a parameter during stream setup. van der Meer et al. (2003) also recommends a sampling rate of 90 kHz for MPEG-4 video streams.

The logs from FFserver unfortunately provide only excerpts of the RTSP messages, so we could not see the detailed parameters of the stream setup there. We therefore decided to inspect the packet captures of our RTP streams using Wireshark. By looking at the contents of the RTSP packet containing the session description (see Figure 69), we see that the first "Media Attribute (a)" is set to "rtpmap:96 MP4V-ES/90000" - in other words, an MPEG-4 video elementary stream with a 90 kHz sampling rate. By dividing the jitter values for the MPEG-4 streams provided by Wireshark's RTP analysis by the sampling rate of 90000 Hz, we got the correct jitter values. Unfortunately, we were also unable to perform analysis of the jitter measurements, due to time constraints.
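A minimal sketch of this conversion (the raw jitter value below is hypothetical): dividing the value reported in RTP timestamp units by the 90 kHz clock gives the jitter in seconds, which can then be expressed in milliseconds:

# Convert a jitter value reported in RTP timestamp units to milliseconds,
# assuming the 90 kHz clock of the MP4V-ES payload.
awk -v jitter_units=2700 -v clock=90000 \
    'BEGIN { printf "jitter: %.2f ms\n", jitter_units / clock * 1000 }'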

Media layer During the measurements, the playback of the 400x240 1000 Kbps clips was choppy. Playback of all other clips was smooth, without noticeable glitches or choppiness. Between the MPEG-4 and MPEG-1 versions of the 400x240 clip, there were notable differences in how the quality of the video was affected when frames were dropped. The result of packet loss in the MPEG-4 stream seemed to be more or less limited to the parts of the frame where there were objects in motion, and resulted in these parts being "smudged" across the image as they moved.

Frame 81 (471 bytes on wire, 471 bytes captured)
IEEE 802.11 Data, Flags: ......
Logical-Link Control
Internet Protocol, Src: 10.0.0.1 (10.0.0.1), Dst: 10.0.0.2 (10.0.0.2)
Transmission Control Protocol, Src Port: apc-5454 (5454), Dst Port: giop-ssl (...)
Real Time Streaming Protocol
    Response: RTSP/1.0 200 OK\r\n
    Status: 200
    CSeq: 1\r\n
    Date: Sun Dec 2 22:29:14 2007 GMT\r\n
    Content-type: application/sdp
    Content-length: 271
    \r\n
    Session Description Protocol
        Session Description Protocol Version (v): 0
        Owner/Creator, Session Id (o): - 0 0 IN IPV4 127.0.0.1
        Time Description, active time (t): 0 0
        Session Name (s): No Title
        Session Attribute (a): tool:libavformat
        Media Description, name and address (m): video 0 RTP/AVP 96
        Media Attribute (a): rtpmap:96 MP4V-ES/90000
        Media Attribute (a): fmtp:96 profile-level-id=1; config=0000000000 (...)
        Media Attribute (a): control:streamid=0

Figure 69: Excerpt of the decoded packet headers from Wireshark

On the other hand, packet loss in the MPEG-1 stream resulted in randomly colored square artifacts at various places all over the video frames. The effect on the MPEG-1 video stream was arguably more objectionable than what was visible in the MPEG-4 video.

Using Wireshark, we saved the payload of the captured RTP streams to file, and then calculated an MD5 checksum for each frame in each of the streams using MPlayer. We then compared these checksums to the checksums for each of the original video clips. Each checksum mismatch would then indicate that a frame was corrupted in some way. But when playing the saved streams using MPlayer, we quickly noticed that they looked much worse than when they were played back during the experiment. This means that the streams somehow got more corrupted when captured by the sniffer, and/or that Wireshark corrupted the streams when extracting them from the captured packets and saving them to file. We therefore disregarded the results of this analysis.
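A sketch of how such a comparison can be scripted (file names are placeholders, and we assume that MPlayer's md5sum video output driver writes one checksum per line, in the same frame order for both files):

# Write per-frame MD5 checksums for the original clip and the saved stream.
mplayer -nosound -vo md5sum:outfile=original.md5 original_clip.avi
mplayer -nosound -vo md5sum:outfile=captured.md5 captured_stream.avi
# Count the frames whose checksums differ.
awk 'NR==FNR { orig[FNR]=$1; next } $1 != orig[FNR] { n++ }
     END { print n+0, "corrupted frames" }' original.md5 captured.md5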

Summary In this scenario, we see that the streaming server does not really consume a lot of resources per stream. Playback of the video on the client is much more expensive. Coupled with the overhead of network transmission and streaming, playback of the 1000 Kbps streams is choppy.

Figure 70: MANET experiment setup

7.3.3 Scenario 3: MANET

The measurement equipment in the third scenario consisted of three Nokia 770s and one Dell laptop. The Nokias formed a three-node mobile ad-hoc network, connected as in Figure 70; please see Section 7.2.5 for details. To further ensure that the two end-nodes would be out of radio range of each other, the transmission power was lowered to 10 mW in the end-nodes, while the power at the intermediate node was kept at 100 mW. The Dell laptop was placed next to the intermediate node, to sniff and store all wireless transmissions in the MANET. There was no mobility in the network, so the routes were static. Even so, observing the behavior of the intermediate node, as well as whether and how the OLSR daemon affected the streaming performance, was interesting in itself.

As in the second scenario, we streamed all 12 videos in sequence while performing measurements on all nodes of the network. Unfortunately, there were issues with the wireless sniffing node. For some reason it would only capture a fraction of the packets on the network, and we were not able to rectify this. Hence, we do not have a complete capture of the wireless traffic of the MANET in this scenario.
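The transmit power adjustment described above can be applied with the wireless tools; a minimal sketch, assuming the interface is called wlan0 and that the driver accepts these power levels:

# On the two end-nodes:
iwconfig wlan0 txpower 10mW
# On the intermediate node:
iwconfig wlan0 txpower 100mW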

Figure 71: CPU utilization: Server node streaming action clip at 400x240 1000 Kbps with MPEG-4 encoding.

The hardware layer Figures 71 and 72 show that the CPU utilization for both server and client is comparable to that in the second scenario. Results for the other clips are also similar to the ones from scenario 2. In Figure 73 on the next page, we can see the graph of the CPU utilization for the intermediate node in the 3-node MANET. We see that the user CPU load is at the idle level, as no extra user processes are running apart from the basic services. When looking at the system CPU load, however, we see that it is even higher than on the server node, sometimes spiking up to 60%.

From Figure 74 on page 124, we see that the results for the low-bandwidth clip give a somewhat more positive outlook. Comparing it to the graph for the server in Figure 75 on page 124 shows that the CPU utilization when handling the low-bandwidth stream is about the same for the server and the intermediate node. Figure 76 on page 125 shows that the client is still under quite heavy load, even when playing a low-bandwidth stream.

From the figures, we also see that something happened at about 02:55, which caused the clip to stop playing until 03:25, resulting in a sharp drop in CPU utilization on all nodes in the MANET. The network utilization for all three nodes also drops at the same point in time. The graph for the server is shown in Figure 77, and the graphs for the intermediate node and client look similar.

Figure 72: CPU utilization: Client node streaming action clip at 400x240 1000 Kbps with MPEG-4 encoding.

Figure 73: CPU utilization: Intermediate node streaming action clip at 400x240 1000 Kbps with MPEG-4 encoding.

Figure 74: CPU utilization: Intermediate node streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

Figure 75: CPU utilization: Server node streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

Figure 76: CPU utilization: Client node streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

Figure 77: Network activity: Server node streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

Figure 78: Link quality: Intermediate node streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

The link layer When looking at the link quality reported by each of the nodes, we see that the intermediate node reports an all-time low in the same interval (Figure 78). The graphs of the link quality reported by the server and client can be seen in Figures 79 and 80 respectively. As we can see, the server node reports a drop in link quality during the interval in which the streaming was interrupted, although it is not as low as on the intermediate node, and not any lower than reported at other times during the streaming session. The link quality reported by the client node does not show such a pronounced drop in the interval in question. This could suggest that the interrupted transmission was due to a degraded wireless link between the server and the intermediate node. Note, though, that as discussed in Section 6, the link quality value reported by iwconfig is undefined when in ad-hoc mode. Nonetheless, at least in this case, there seems to be a correlation.
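For reference, the link quality values discussed here can be sampled once per second from the wireless statistics exposed by the kernel; a minimal sketch, assuming the interface is called wlan0:

# Print a timestamped link quality sample every second
# (field 3 of the interface line in /proc/net/wireless).
while true; do
    quality=`grep wlan0 /proc/net/wireless | awk '{ print $3 }'`
    echo "`date +%H:%M:%S` $quality"
    sleep 1
done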

The network layer As the routes are supposed to be static throughout our measurements, there is not much routing-related activity at the network layer most of the time. By examining the OLSR logs, we could confirm that there was no direct link between the server and the client nodes at any point during the measurements. But the logs also reveal that the routing table was not very stable. According to the OLSR log on the server, the route between the server and the client appears to have timed out and been deleted from the routing table 35 times during the MANET measurements, due to HELLO timeouts. Still, as only one video stream was interrupted during playback, it appears that this has for the most part not affected the routing of the video stream. In fact, the reason for the interruption of clip 6 seems to be that the link between the intermediate node and the client went down. Just as the link came back up, the streaming resumed.

Figure 79: Link quality: Server node streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

Figure 80: Link quality: Client node streaming action clip at 240x144 200 Kbps with MPEG-4 encoding.

The transport layer Despite the failed link, we note that neither sar nor RTCP reports any packet loss. We also discovered that in order for Wireshark to perform the IP bandwidth calculation, it requires a complete trace of the RTP packets including their payloads. Since we were unable to make the wireless sniffer function correctly in this scenario, and the nodes only capture the packet headers, we could unfortunately not perform the bandwidth calculations. On the other hand, the experience from scenario 2 shows that with our RTP video stream being the only traffic transmitted over the network, this bandwidth will be equal to what is reported by sar. In this scenario we also have some OLSR traffic, but it is reasonably safe to assume that this only accounts for a small fraction of the total data traffic. Thus, the bandwidth figures reported by sar should be a reasonable approximation of the total bandwidth consumed by the RTP stream.

Media layer Subjectively, the quality of the media stream was very similar to, if not indistinguishable from, the results in scenario 2. The only exception was when the link between the intermediate node and the client node went down during the streaming of clip number 6, at which point the playback froze. As the link came back up, playback resumed at the point where it would normally have been if the streaming had not been interrupted. That is, it appears that the server did not pause the stream even though the only client became unreachable.

Because of the trouble with the wireless sniffing in this scenario, we were not able to compare the actual payload of the RTP streams to the original video clips in order to objectively estimate the quality of the transmitted video stream. This also meant that we could not compare the results to those from scenario 2.

Summary Again we see that video playback causes the client to have the highest CPU utilization of all three nodes in the network, although the intermediate node, which has to receive and forward each packet of the stream, also experiences a fairly high load. While it is easily able to support a single video stream, multiple streams could quite easily cause problems. The client, while able to play a single video stream, does not have the resources to do much else. One could easily imagine that it would cause trouble if it were required to forward traffic for other nodes in the MANET, for instance. The load on all nodes also seems rather insensitive to the encoding scheme.

7.4 Lessons learned

In general, we see that CPU utilization is an obvious issue in all the scenarios, and that the client and intermediate nodes suffer from the highest loads. This will of course continue to improve with the introduction of newer and more powerful devices, such as the Nokia N800 and N810, but it is clear that conservation of CPU time must be a priority when designing a streaming solution for MANETs.

To make sure that the performance of the 1000 Kbps stream was not limited by the streaming server, we did a quick test by streaming to our laptop computer, also using MPlayer as the client. The stream played flawlessly. In fact, the server was able to serve a total of four 1000 Kbps streams to our laptop without any degradation in the quality of the streams. This shows that the bottleneck in our streaming experiment was in fact the Nokia 770 client.

With regard to the monitoring, we see that sar provides us with large amounts of resource data, which is very helpful in diagnosing performance issues on the Nokia devices. It was also able to consistently log statistics at the requested 1-second intervals, unlike our logging setup in the wireless experiment in Section 6 (a sketch of such an invocation is given below).

As the measurements in this experiment only took about 1.5 hours per scenario, there were no issues with battery life as in the wireless experiment in Section 6.
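A minimal sketch of the kind of sar invocation used for this logging (the exact options depend on the sysstat build installed on the device):

# Record system activity once per second for one hour into a binary file.
sar -o streaming.sar 1 3600 > /dev/null 2>&1 &
# Afterwards, extract CPU, network and block device statistics from the file.
sar -u -f streaming.sar
sar -n DEV -f streaming.sar
sar -d -f streaming.sar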

7.5 Playback on the Nokia N800

During the thesis work, we were also able to borrow a Nokia N800 to perform some tests and compare the results to our experiences with the Nokia 770. Due to time constraints, we had to limit our testing to a single scenario. We chose to measure the performance of the device during local playback of video, as this turned out to be by far the most resource-consuming task on the 770.

CPU           320 MHz TI OMAP
Display       800x480x16 touch-screen
Connectivity  802.11g WLAN, Bluetooth 2.0
Memory        128 MB DDR DRAM
Storage       256 MB flash, two memory card slots (SD, MicroSD, MiniSD, MMC, and RS-MMC - upgradable to 8 GB each)
Camera        640x480 web camera
Dimensions    75 x 144 x 13 mm
Other         Built-in FM radio

Table 14: Nokia N800 hardware specifications (maemo.org, h)

7.5.1 About the Nokia N800

The Nokia N800 is the second generation of devices in Nokia's Internet Tablet series, and was released at the end of 2006. It is about the same size as the 770, but has a much more robust and finished look. It has a more powerful CPU and twice as much memory as the 770. In addition, it supports two memory cards of up to 8 GB capacity each. The device also has a built-in camera and an undocumented FM radio receiver. A summary of the N800's hardware specifications is provided in Table 14. The N800 runs OS2007, which is based on the 3.x series of the Maemo platform.

7.5.2 Performing the measurements

Before testing, we upgraded the device to the latest firmware, which at the time was version 4.2007.38-2. This firmware update reportedly includes a bug fix relating to the memory cards (maemo.org, f). The monitoring scripts and software worked without modification or recompilation. FFserver also ran without problems, but was not used in this small experiment. The MPlayer version we installed on the Nokia N800 was 1.0rc1-maemo.18.n800 armel.

In contrast to the Nokia 770 Video Player, the Nokia N800 Media Player is also able to play videos without audio streams. We were therefore able to conduct a performance comparison between MPlayer and the N800 Media Player using our regular experiment videos. It is worth noting that the Nokia N800 does not support DSP decoding of any video format, although decoding of most common audio formats is still supported (maemo.org, c).

Figure 81: The Nokia N800

7.5.3 Results

Compared to the 770 version, MPlayer seems to consume considerably fewer resources on the N800. In fact, it even consumes fewer resources than the Nokia N800 Media Player for both MPEG-4 and MPEG-1 video, the difference being most substantial for MPEG-1 video. An interesting observation is that the Media Player consumes considerably more resources when playing MPEG-1 video clips compared to MPlayer. The performance difference is still present when playing MPEG-4 clips, albeit less significant. This is exemplified in Figures 84, 85, 82 and 83.

We also noticed that playback appeared somewhat choppy when the resource logging was enabled and the log file was written to the same memory card that the videos were stored on and played from. The CPU utilization and I/O logs did not reveal anything conclusive, but it could have been a performance limitation of the memory card.

Figure 82: CPU utilization: Playing MPEG-4 video in MPlayer on the N800.

Figure 83: CPU utilization: Playing MPEG-4 in MediaPlayer on the N800.

Figure 84: CPU utilization: Playing MPEG-1 video in MPlayer on the N800.

Figure 85: CPU utilization: Playing MPEG-1 in MediaPlayer on the N800.

Figure 86: CPU utilization: Local playback of action clip in 400x240 1000 Kbps with MPEG-4 encoding in MPlayer on the N800.

Figure 87: CPU utilization: Local playback of action clip in 400x240 1000 Kbps with MPEG-4 encoding in MediaPlayer on the N800.

Figure 88: CPU utilization: Local playback of action clip in 400x240 1000 Kbps with MPEG-1 encoding in MPlayer on the N800.

Figure 89: CPU utilization: Local playback of action clip in 400x240 1000 Kbps with MPEG-1 encoding in MediaPlayer on the N800.

7.5.4 Lessons learned

From our measurements, we see that one generation of development has resulted in a considerable improvement in CPU power. This is good news, as the results from our video streaming experiment with the Nokia 770 suggest that the CPU is the greatest bottleneck on the device with regard to video streaming performance.

As a side note relating to the wireless driver: although the driver version used in the N800 is 1.2, compared to 0.8.1 in the 770, the output from iwconfig indicates that it uses the same method of calculating link quality as the 770 (see Section 6.5).

8 Conclusions

8.1 Summary

This thesis has presented a working solution for serving and consuming streaming video using the Nokia 770 Internet Tablet that can serve as a starting point for future work on video streaming in mobile ad-hoc networks. We have also shown how to perform cross-layer monitoring of this video streaming solution through a strategy of combining hardware resource monitoring and different network measurement methods. The complete platform is based entirely on open-source software, and can thus be freely used and modified for future research.

Through a set of experiments, we have demonstrated our MANET streaming solution, and used our monitoring strategies to analyze the performance of video streaming with the Nokia 770 in three different scenarios. As part of our experiments, we have shown an example of how to set up a small-scale, real-world testbed for mobile ad-hoc networks. In addition, our experiments provided valuable information regarding the Nokia 770's hardware capabilities. Some of the observations made can probably also be extended to ad-hoc network nodes in general.

The results show that playback of the video stream consumes a large amount of resources on the client. In the three-node scenario, we could also see that the intermediate node consumes about twice as much CPU per video stream as the server, as forwarding requires both receiving and sending at the same time. To avoid degrading the normal operation of the devices or the network during streaming, conservation of CPU resources must therefore be a priority when designing a video streaming solution for mobile ad-hoc networks. For the Nokia 770 and similar devices, this implies that the DSP core should be used for decoding multimedia data, in order to offload the CPU core. Otherwise, one runs the risk that a node playing a video stream does not have spare CPU resources to perform other important tasks.

While MPlayer did not provide the best performance on the Nokia 770, we have seen that when running on the N800, it performs better during local playback of video files than the standard MediaPlayer that is bundled with the device. From our experiments with the 770, we have seen that the difference between local playback and streaming is very small in terms of CPU utilization, which suggests that MPlayer will also perform much better in a streaming solution on the N800.

Also, when comparing the 770 to the N800, we see that one generation of development has resulted in a considerable improvement in terms of CPU power. Since our results show that the CPU of the 770 was the greatest performance bottleneck in the video streaming solution, this strengthens the position of the Nokia Internet Tablet series as a good choice of devices for future experiments.

8.2 Critical evaluation

During the wireless experiment in Section 6, we could have used a more reliable method of bandwidth measurement, as the performance of the file transfers is heavily dependent on features of TCP. Also, as multimedia streams are normally transported using unreliable protocols, such as RTP, the results of these measurements are not very representative of the performance of video streaming. A more reliable measurement of the available bandwidth could be obtained using a dedicated tool, such as Iperf (a sketch of such a measurement is given at the end of this section).

Looking back at the experiences from the video streaming experiment in Section 7 and the results produced, the amount of trouble we experienced with the wireless sniffing was very disappointing. Unfortunately, time did not permit further investigation of the cause of these issues. The traces we were able to capture in the node-to-node streaming experiment turned out to be very useful, and they would probably have been an even greater source of link layer information in the MANET experiment.

With regard to the video quality evaluation, our method of comparing the original clips to the stream captured by the wireless sniffer turned out to be unsuccessful. This could perhaps have been avoided by letting the client save the stream to disk while playing. On the other hand, there is a risk of this requiring too many resources, influencing the playback of the stream and affecting the results. As for subjective evaluation of quality, simply noting the general impression of the video quality while performing the measurements turned out to be adequate for the purposes of this thesis, although using a Bluetooth keyboard would have enabled us to use MPlayer's edit decision list feature to mark low-quality parts.

In conclusion, the hardware measurements performed with sar turned out to be the most successful. An important hardware-related parameter that was not measured, however, was power consumption.
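A sketch of such a measurement with Iperf (addresses and parameters are placeholders): one node runs a UDP server while the other sends a constant-rate UDP stream, which resembles an RTP video stream more closely than a TCP file transfer does:

# On the receiving node:
iperf -s -u -i 1
# On the sending node: send a 1 Mbit/s UDP stream for 60 seconds.
iperf -c 10.0.0.2 -u -b 1M -t 60 -i 1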

8.3 Open problems and future work

Due to time constraints, we were not able to properly analyze all statistics gathered by RTP/RTCP. Time also prevented us from computing and analyzing route stability metrics. Other open problems include finding a reliable way of capturing wireless traces so that the link layer mechanics in the small MANET streaming scenario can be properly analyzed. Also, finding out what caused the corruption of the captured video stream and solving this would enable us to compare the captured stream to the original, providing an objective QoS measurement of the received video stream.

Furthermore, examining the performance of video streaming on the newer Nokia N800, as well as the N810, which was released in late 2007, would be interesting. These devices are faster, more stable and have better software. The possibility of using the standard MediaPlayer on these devices for playback of the video streams could also be investigated. This would have the benefit of using a video player that is more integrated into the standard desktop, avoiding the installation of extra software. However, as mentioned, initial trials suggest that, in contrast to the situation on the 770, MPlayer may actually outperform the standard MediaPlayer on the N800.

It would also be interesting to devise a solution to stream video from the built-in camera of these devices, instead of just streaming from existing video files stored on disk, as this would more closely resemble how the solution is envisioned to work in e.g. real-world emergency and rescue scenarios. The combination of FFmpeg and FFserver should support this, and there exist several examples of how to set this up on regular desktop computers (a rough sketch is given at the end of this section).

Future work includes finding a scalable method of building real-world MANET testbeds, and incorporating mobility into the testbed. This would allow us to explore the effects of mobility and route changes during a MANET streaming experiment. The trials we did with a static setup are not very realistic, so this is very interesting and important work. As the size of the network increases and mobility is introduced, it will also become increasingly difficult to obtain complete wireless traces of the network in a reliable manner. One would need to use multiple sniffer nodes and be able to merge the logs in order to obtain the complete picture. A method of capturing link layer information locally on each node would be preferable, but it is unclear whether this is possible.
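As a rough sketch of such a setup (the device node, frame size, feed name and ffserver.conf contents are assumptions, and we have not verified that the tablet cameras expose a Video4Linux device), a live source could be fed to FFserver like this:

# Start FFserver with a configuration defining a feed (e.g. camera.ffm)
# and an RTP/RTSP stream based on it.
./ffserver -f ffserver.conf &
# Grab video from the camera and push it into the feed.
./ffmpeg -f video4linux2 -s 320x240 -r 15 -i /dev/video0 http://localhost:8090/camera.ffm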

References

Daniel Aguayo, John Bicket, Sanjit Biswas, Glenn Judd, and Robert Morris. Link-level measurements from an 802.11b mesh network. SIGCOMM Comput. Commun. Rev., 34(4):121–132, 2004. ISSN 0146-4833.

Joshua Bardwell. You Believe You Understand What You Think I Said..., The Truth About 802.11 Signal And Noise Metrics, 2004.

G. Bianchi, F. Formisano, and D. Giustiniano. 802.11b/g link level measurements for an outdoor wireless campus network. World of Wireless, Mobile and Multimedia Networks, 2006. WoWMoM 2006. International Symposium on a, pages 6 pp., June 2006.

Oliver Brown. Streaming video to the nokia 770 - oliver brown, January 2007. URL http://www.oliverbrown.me.uk/2007/01/22/streaming-video-to-the-nokia-770/.

Wenli Chen, Nitin Jain, and S. Singh. Anmp: ad hoc network management protocol. Selected Areas in Communications, IEEE Journal on, 17(8):1506–1531, August 1999. ISSN 0733-8716.

T. Clausen and P. Jacquet. Optimized Link State Routing Protocol (OLSR). RFC 3626 (Experimental), October 2003. URL http://www.ietf.org/rfc/rfc3626.txt.

CX3110x Linux 2.6 driver. garage: Cx3110x linux 2.6 driver, January 2008. URL https://garage.maemo.org/projects/cx3110x/.

Saumitra M. Das, Himabindu Pucha, Konstantina Papagiannaki, and Y. Charlie Hu. Studying wireless routing link metric dynamics. In IMC '07: Proceedings of the 7th ACM SIGCOMM conference on Internet measurement, pages 327–332, October 2007. ISBN 978-1-59593-908-1.

U. Deshpande, T. Henderson, and D. Kotz. Channel sampling strategies for monitoring wireless net- works. Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, 2006 4th International Symposium on, pages 1–7, April 2006.

Richard Draves, Jitendra Padhye, and Brian Zill. Routing in multi-radio, multi-hop wireless mesh net- works. In MobiCom ’04: Proceedings of the 10th annual international conference on Mobile computing and networking, pages 114–128, New York, NY, USA, 2004. ACM Press. ISBN 1-58113-868-7.

Bernd Eckenfels. Linux Programmer’s Manual, November 2001. netstat - Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.

Laura Marie Feeney. A taxonomy for routing protocols in mobile ad hoc networks. Technical Report T99–07, Swedish Institute of Computer Science, October 1999.

Sana Ghannay, Sonia Mettali Gammar, Farouk Kamoun, and Davor Males. The monitoring of ad hoc net- works based on routing. In MEDHOCNET ’04: The Third Annual Mediterranean Ad Hoc Networking Workshop, June 2004.

Jasminder Grewal and John M. DeDourek. A framework for quality of service in wireless networks. In CNSR ’05: Proceedings of the 3rd Annual Communication Networks and Services Research Conference (CNSR’05), pages 231–236, Washington, DC, USA, 2005. IEEE Computer Society. ISBN 0-7695-2333- 1.

GStreamer: open source multimedia framework. GStreamer: open source multimedia framework, January 2008. URL http://gstreamer.freedesktop.org.

Stephen R. Gulliver and Gheorghita Ghinea. Real-time monitoring of video quality in ip networks. ACM Transactions on Multimedia Communications and Applications, 2:241–257, November 2006.

Gavin Holland and Nitin Vaidya. Analysis of tcp performance over mobile ad-hoc networks. Technical Report 99-004, Texas A&M University - Dept. of Computer Science, February 1999.

Mathias Hollick and Ralf Steinmetz. Lecture notes in advanced topics in distributed systems - mobile ad-hoc networking, March 2006. URL http://www.uio.no/studier/emner/matnat/ifi/INF5090/v06/undervisningsmateriale/MANETs-2006.pdf.

IEEE-SA Standards Board. ANSI/IEEE Std 802.11, 1999.

Ikke’s Blog. Ikke’s blog - post details: Maemo/nokia 770 sound streaming, June 2006. URL http://blog.eikke.com/index.php/ikke/2006/06/14/.

Sushant Jadhav, Timothy Brown, Sheetalkumar Doshi, Daniel Henkel, and Roshan Thekkekunnel. Lessons learned constructing a wireless ad hoc network test bed. In First Workshop on Wireless Network Measurements, April 2005.

James T. Kaba and Douglas R. Raichle. Testbed on a desktop: strategies and techniques to support multi- hop manet routing protocol development. In MobiHoc ’01: Proceedings of the 2nd ACM international symposium on Mobile ad hoc networking & computing, pages 164–172, New York, NY, USA, 2001. ACM. ISBN 1-58113-428-2.

S.K. Kaul, M. Gruteser, and I. Seskar. Creating wireless multi-hop topologies on space-constrained in- door testbeds through noise injection. Testbeds and Research Infrastructures for the Development of Networks and Communities, 2006. TRIDENTCOM 2006. 2nd International Conference on, pages 10 pp.–, March 2006.

Y. Kikuchi, T. Nomura, S. Fukunaga, Y. Matsui, and H. Kimata. RTP Payload Format for MPEG-4 Audio/Visual Streams. RFC 3016 (Proposed Standard), November 2000. URL http://www.ietf.org/rfc/rfc3016.txt.

Yevgeni Koucheryavy, Dmitri Moltchanov, and Jarmo Harju. Performance evaluation of live video streaming service in 802.11b wlan environment under different load conditions. Technical report, Institute of Communication Engineering, Tampere University of Technology, September 2004. URL http://www.cs.tut.fi/tlt/npg/icefin/documents/mips2003moltch.pdf.

Tianbo Kuang and Carey Williamson. Realmedia streaming performance on an ieee 802.11b wireless lan. Technical report, Department of Computer Science, University of Calgary, 2002.

Ted Taekyoung Kwon, Mario Gerla, Sajal Das, and Subir Das. Mobility management for voip service: Mobile ip vs. sip. IEEE Wireless Communications, 9:66 – 75, October 2002.

Baochun Li and K.H. Wang. Nonstop: continuous multimedia streaming in wireless ad hoc networks with node mobility. Selected Areas in Communications, IEEE Journal on, 21(10):1627–1641, December 2003. ISSN 0733-8716.

Joshua Lifton. Streaming live audio and video from a nokia n800 to second life, April 2007. URL http://web.media.mit.edu/~lifton/snippets/n800_to_sl/.

Linux Programmer’s Manual. Linux Programmer’s Manual, December 2003a. iwgetid - Report ESSID, NWID or AP/Cell Address of wireless network.

Linux Programmer’s Manual. Linux Programmer’s Manual, May 2005b. proc - process information pseudo-filesystem.

Linux Programmer’s Manual. Linux Programmer’s Manual, March 2006c. iwspy - Get wireless statistics from specific nodes.

LIVE555. Rtsp/rtp streaming support for MPlayer, January 2008. URL http://www.live555.com/mplayer/.

Thomas Lopatic. olsrd Link Quality Extensions, December 2004. URL http://www.olsr.org/docs/README-Link-Quality.html.

maemo.org. HOWTO Flash latest Nokia image with Linux, January 2008a. URL http://maemo.org/community/wiki/HOWTO_FlashLatestNokiaImageWithLinux.

maemo.org. Maemo 2.x multimedia architecture, January 2008b. URL http://maemo.org/development/documentation/how-tos/2-x/multimedia_architecture.html.

maemo.org. Maemo 3.x multimedia architecture, January 2008c. URL http://maemo.org/development/documentation/how-tos/3-x/multimedia_architecture.html.

maemo.org. Maemo 3.x: Transcoding how-to, January 2008d. URL http://maemo.org/development/documentation/how-tos/3-x/transcoding_how-to.html.

maemo.org. Maemo Connectivity Architecture, January 2008e. URL http://maemo.org/development/documentation/how-tos/2-x/howto_connectivity_guide.html.

maemo.org. Maemo.org, January 2008f. URL http://www.maemo.org.

maemo.org. Nokia 770 Hardware Specification, January 2008g. URL http://maemo.org/community/wiki/nokia_770_hardware_specification/.

maemo.org. Nokia N800 Hardware Specification, January 2008h. URL http://maemo.org/community/wiki/nokia_800_hardware_specification/.

maemo.org. Real Video Streams, January 2008i. URL http://maemo.org/community/wiki/realvideostreams.

maemo.org. Video Encoding, January 2008j. URL http://maemo.org/community/wiki/VideoEncoding.

Aniket Mahanti, Martin Arlitt, and Carey Williamson. Assessing the completeness of wireless-side tracing mechanisms. World of Wireless, Mobile and Multimedia Networks, 2007. WoWMoM 2007. IEEE International Symposium on a, pages 1–10, June 2007.

MPlayer for Maemo. MPlayer for maemo (Nokia 770/N800), January 2008. URL http://mplayer.garage.maemo.org.

Don Ngo, Naveed Hussain, Mahbub Hassan, and Jim Wu. Wanmon: A resource usage monitoring tool for ad hoc wireless networks. In LCN ’03: Proceedings of the 28th Annual IEEE International Conference on Local Computer Networks, page 738, Washington, DC, USA, 2003. IEEE Computer Society. ISBN 0-7695-2037-5.

Nokia. Internet Tablet 2006 edition User’s Guide, 2006.

Python GStreamer Documents. Python GStreamer Documents, January 2008. URL http://pygstdocs.berlios.de.

Krishna Ramachandran, Irfan Sheriff, Elizabeth Belding, and Kevin Almeroth. Routing stability in static wireless mesh networks. Technical report, University of California, Santa Barbara, 2007. URL http://www.cs.ucsb.edu/~isheriff/PAM07.pdf.

Amy R. Reibman, Vinay A. Vaishampayan, and Yegnaswamy Sermadevi. Quality monitoring of video over a packet network. IEEE Transactions on Multimedia, 6:327–334, April 2004.

K. Rojviboonchai, Fan Yang, Qian Zhang, H. Aida, and Wenwu Zhu. Amtp: a multipath multime- dia streaming protocol for mobile ad hoc networks. Communications, 2005. ICC 2005. 2005 IEEE International Conference on, 2:1246–1250 Vol. 2, May 2005.

Norun Christine Sanderson, Katrine Stemland Skjelsvik, Ovidiu Valentin Drugan, Matija Puzar, Vera Goebel, Ellen Munthe-Kaas, and Thomas Plagemann. Developing mobile middleware - an analysis of rescue and emergency operations. Technical Report 358, Department of Informatics, University of Oslo, June 2007.

Ralf Steinmetz and Klara Nahrstedt. Multimedia Systems. Springer, Germany, 2004.

Yuan Sun, Irfan Sheriff, Elizabeth M. Belding-Royer, and Kevin C. Almeroth. An experimental study of multimedia traffic performance in mesh networks. In WiTMeMo '05: Papers presented at the 2005 workshop on Wireless traffic measurements and modeling, pages 25–30, Berkeley, CA, USA, 2005. USENIX Association. ISBN 1-931971-33-1.

Shu Tao, John Apostolopoulos, and Roch Guérin. Real-time monitoring of video quality in ip networks. In NOSSDAV '05: Proceedings of the international workshop on Network and operating systems support for digital audio and video, pages 129–134, New York, NY, USA, 2005. ACM Press. ISBN 1-58113-987-X.

The ITU Radiocommunication Assembly. Methodology for the subjective assessment of the quality of television pictures, 2002.

Josep Torra. The hitchhiker’s guide to the n770 galaxy, February 2006. URL n770galaxy.blogspot.com.

Jean Tourrilhes. Linux Programmer’s Manual, March 2006. iwconfig - configure a wireless network interface.

Jean Tourrilhes. Wireless tools for linux, October 2007. URL http://www.hpl.hp.com/personal/Jean Tourrilhes/Linux/Tools.html.

J. van der Meer, D. Mackie, V. Swaminathan, D. Singer, and P. Gentric. RTP Payload Format for Transport of MPEG-4 Elementary Streams. RFC 3640 (Proposed Standard), November 2003. URL http://www.ietf.org/rfc/rfc3640.txt.

Jacobus van der Merwe, Ramón Cáceres, Yang-hua Chu, and Cormac Sreenan. mmdump: a tool for monitoring internet multimedia traffic. SIGCOMM Comput. Commun. Rev., 30(5):48–59, 2000. ISSN 0146-4833.

Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13:600–612, April 2004.

Zhou Wang, Hamid R. Sheikh, and Alan C. Bovik. Objective video quality assessment. In B. Furht and O. Marqure, editors, The Handbook of Video Databases: Design and Applications, chapter 41, pages 1041–1078. CRC Press, 2003.

Wireshark. The Wireshark Wiki, January 2008. URL http://wiki.wireshark.org.

Peng Xue and Surendar Chandra. Revisiting multimedia streaming in mobile ad hoc networks. In NOSS- DAV ’06. ACM, 2006. ISBN 1-59593-285-2/06/0005.

Jihwang Yeo, Moustafa Youssef, and Ashok Agrawala. A framework for wireless lan monitoring and its applications. In WiSe ’04: Proceedings of the 3rd ACM workshop on Wireless security, pages 70–79, New York, NY, USA, 2004. ACM. ISBN 1-58113-925-X.

Jihwang Yeo, Moustafa Youssef, Tristan Henderson, and Ashok Agrawala. An accurate technique for measuring the wireless side of wireless networks. In WiTMeMo ’05: Papers presented at the 2005 workshop on Wireless traffic measurements and modeling, pages 13–18, Berkeley, CA, USA, 2005. USENIX Association. ISBN 1-931971-33-1.

Qingguo Zhou, Shuwei Bai, Rui Zhou, Guanghui Cheng, Nicholas MC Guire, and Lian Li. p3m: A kernel space tool for multimedia stream monitoring. Pervasive Computing and Applications, 2007. ICPCA 2007. 2nd International Conference on, pages 679–684, July 2007.

Michael Zink. Scalable Internet Video-on-Demand Systems. PhD thesis, Technische Universität Darmstadt, 2003.

A Configuration of software

A.1 Flashing the latest Nokia image

The flasher utility provided on the product page for the Nokia 770 on the Nokia web site is for Windows only. However, a flasher for Linux and MacOS X can be found on http://tablets-dev.nokia.com/. Note that for the 770, both version 2.0 and version 3.0 of the flasher utility can be used. For the N800 (and the N810), version 3.0 is required. Download the latest image from http://tablets-dev.nokia.com/. At the time of writing, this was SU-18_2006SE_3.2006.49-2_PR_F5_MR0_ARM.bin. Switch off the Nokia 770, and connect it to the computer using the USB cable. Then execute as root:

./flasher-3.0 -F SU-18_2006SE_3.2006.49-2_PR_F5_MR0_ARM.bin -f -R

The message ”Suitable USB device not found, waiting” will be displayed. Now, while holding the Home-button on the Nokia 770 pressed, switch it on using the power button or by connecting the charger. The image should now be loaded onto the 770. The device should reboot automatically when it is done. (maemo.org, a)

A.2 Gaining local root access on the Nokia 770

In order to gain root access on the Nokia 770, one needs to install an SSH server on the device. One can choose between the two SSH servers OpenSSH and Dropbear, both installable from the repositories listed in Appendix A.5. Also, in order to be able to gain root access locally on the device, one must install xterm, which is also available from the listed repositories. After the installation of the SSH server, connect to the device using an SSH client, and log in as root with the default password rootme. Once logged in, change the root password using passwd. Then, open the file /usr/sbin/gainroot using the text editor vi, and change the line that says:

MODE=`/usr/sbin/chroot /mnt/initfs cal-tool --get-rd-mode`

so that it just reads:

MODE=enabled

You should now be able to gain root access by opening an Xterm window, and simply executing the command:

sudo gainroot

A.3 Setting up Scratchbox

Scratchbox is a toolkit used, among other things, to cross-compile software for the Maemo platform. It can be downloaded and installed from the Scratchbox APT source. At the time of writing, the latest version was 1.0.8. Install the following packages:

• scratchbox-core

• scratchbox-devkit-cputransp

• scratchbox-devkit-debian

• scratchbox-libs

• scratchbox-toolchain-cs2005q3.2-glibc2.5-arm

• scratchbox-toolchain-cs2005q3.2-glibc2.5-i386

Set up a scratchbox user account for an existing Linux user by running (as root):

/scratchbox/sbin/sbox_adduser yes

As root, start the Scratchbox control daemon with the command:

/scratchbox/sbin/sbox_ctl start

Users with Scratchbox user accounts can now log in to the Scratchbox envir- onment with the command:

/scratchbox/login

To configure the compilation target, log in to Scratchbox and run the com- mand:

sb-menu

Choose Setup, and enter a name for the compilation target, e.g. armel. Select the cs2005q3.2-glibc2.5-arm compiler, and then the devkits cputransp and debian-lenny, and select DONE. For CPU-transparency method, select qemu-arm-0.8.2-sb2.

When you are asked to extract a rootstrap on the target, choose No. When you are asked if you wish to install files to the target, choose No. When you are asked if you wish to select the target, select Yes. Download the ARMEL SDK rootstrap from the Maemo repositories at http://repository.maemo.org/stable/gregale/armel/Maemo_Dev_Platform_v2.2_armel-rootstrap.tgz, and copy it to the directory /scratchbox/packages.

Finally, install the rootstrap by logging into Scratchbox, and running the command:

sbox-config -er /scratchbox/packages/Maemo_Dev_Platform_v2.2_armel-rootstrap.tgz

A.4 Compiling and running FFserver

Check out FFmpeg from SVN. FFmpeg does not follow the convention of having official release versions, but maintains a stable trunk version. The FFmpeg team encourages everyone who needs to compile from source to compile from SVN.

svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg

At the time we checked out and compiled, the code was at revision 10835.

The compile process was configured using the following configure command:

./configure --arch=armv4l --enable-pp --enable-gpl --disable-ipv6

In order for FFmpeg to compile, we had to specify the architecture armv4l (the last character is the lowercase letter "l", not the digit "1"). A bug in FFserver required disabling IPv6 support at compile time to allow streaming over RTP, hence the configure option --disable-ipv6; otherwise, FFserver reported the error "Bind: port already bound..." when trying to set up an RTP connection. A patch has been submitted on the FFserver mailing list, but since we do not need IPv6 support in our experiments, we found it easier to simply compile FFserver with IPv6 disabled. Some of the post-processing filters are not enabled by default and must be enabled with the option --enable-pp. These libraries are licensed under the GPL, so in order to use them you also need to include the option --enable-gpl. Afterwards, the code could successfully be compiled by running make. The resulting ffserver executable could then be copied to the Nokia 770.

A sample ffserver.conf to configure the server is included with the source code.
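For example, the binary can be copied to the device over SSH together with a configuration file. By default, ffserver reads /etc/ffserver.conf, which is also what the start_ffserver.sh script in Appendix B.7 relies on. The device IP address below is only an example, and the sample configuration may be located elsewhere in the source tree:

scp ffserver root@192.168.1.10:/usr/bin/
scp doc/ffserver.conf root@192.168.1.10:/etc/ffserver.conf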

A.5 Software repositories
The Nokia 770 software used in this thesis work can be installed from the following repositories:

URL                                                   Distribution   Components
http://catalogue.tableteer.nokia.com/certified/       mistral        user
http://catalogue.tableteer.nokia.com/non-certified/   mistral        user
http://repository.maemo.org                           mistral        free non-free
http://repository.maemo.org/extras                    mistral        free non-free
http://www.mulliner.org/nokia770/repository           maemo2         free
http://maemo-hackers.org/apt                          mistral        main
http://mg.pov.lt/770                                  mistral        user other experimental
http://www.math.ucla.edu/~jimc/nokia770               mistral        user
http://scratchbox.org/debian                          stable         main
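Each repository corresponds to one line in the device's APT configuration (or one catalogue entry in the Application Manager). As an example, the first four repositories translate into the following /etc/apt/sources.list entries:

deb http://catalogue.tableteer.nokia.com/certified/ mistral user
deb http://catalogue.tableteer.nokia.com/non-certified/ mistral user
deb http://repository.maemo.org mistral free non-free
deb http://repository.maemo.org/extras mistral free non-free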

B Source code

B.1 wlantest.sh

#!/bin/sh

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

print_help()
{
    echo "Usage: `basename $0` interface interval";
}

# Check for correct number of command line arguments.
if [ "$#" != "2" ]
then
    print_help;
    exit 1;
fi

INTERFACE=$1;
INTERVAL=$2;

# Start the logging process.
LOG_HOSTINFO="hostinfo.log";
LOG_SCANNING="scanning.log";
LOG_IWCONFIG="iwconfig.log";
LOG_WIRELESS="wireless.log";
LOG_DEV="dev.log";
LOG_IFCONFIG="ifconfig.log";

# Get some information about the host.
COMMAND_LINE="$0 $1 $2 $3";
IPADDRESS=`ifconfig | grep $INTERFACE -A1 | grep "inet addr:" | \
    cut -d: -f2 | cut -d' ' -f1`;
OSPLATFORM=`uname -srm`;
STARTTIME=`date`;
HOSTNAME=`hostname`;

# Create and cd into log directory
LOGDIR="log_${HOSTNAME}_`date +"%Y%m%d_%H%M%S"`"
mkdir $LOGDIR
cd $LOGDIR

# Print and log host information.
printf "Command line: '$COMMAND_LINE'\n";
printf "Command line: '$COMMAND_LINE'\n" >> $LOG_HOSTINFO;
printf "IP/Hostname: $IPADDRESS ($HOSTNAME)\n";
printf "IP/Hostname: $IPADDRESS ($HOSTNAME)\n" >> $LOG_HOSTINFO;
printf "OS/Platform: $OSPLATFORM\n";
printf "OS/Platform: $OSPLATFORM\n" >> $LOG_HOSTINFO;
printf "Logging started at: $STARTTIME\n\n";
printf "Logging started at: $STARTTIME\n\n" >> $LOG_HOSTINFO;

# Print and log scanning of wireless frequencies to detect other networks.
# EDIT:
# This will often disconnect or crash the Nokia devices, so we skip this part.
#OUTPUT_SCANNING=`iwlist $INTERFACE scanning`
#printf "${OUTPUT_SCANNING}\n\n";

#printf "${OUTPUT_SCANNING}\n\n" >> $LOG_SCANNING;

# Start logging the results.
while true
do
    # Timestamp every log entry.
    TIME=`date +"%F %H:%M:%S"`;
    LOGLINE_PREFIX="Time: ${TIME}\n";

    # Run the commands and store the output.
    OUTPUT_IWCONFIG=`iwconfig $INTERFACE`;
    OUTPUT_IFCONFIG=`ifconfig $INTERFACE`;
    OUTPUT_WIRELESS=`cat /proc/net/wireless | grep $INTERFACE`;
    OUTPUT_DEV=`cat /proc/net/dev | grep $INTERFACE`;

    # Print output to terminal and log files.
    printf "${LOGLINE_PREFIX}${OUTPUT_IWCONFIG}\n\n";
    printf "${LOGLINE_PREFIX}${OUTPUT_IWCONFIG}\n\n" >> $LOG_IWCONFIG;

    printf "${LOGLINE_PREFIX}${OUTPUT_IFCONFIG}\n\n";
    printf "${LOGLINE_PREFIX}${OUTPUT_IFCONFIG}\n\n" >> $LOG_IFCONFIG;

    printf "${LOGLINE_PREFIX}${OUTPUT_WIRELESS}\n\n";
    printf "${LOGLINE_PREFIX}${OUTPUT_WIRELESS}\n\n" >> $LOG_WIRELESS;

    printf "${LOGLINE_PREFIX}${OUTPUT_DEV}\n\n";
    printf "${LOGLINE_PREFIX}${OUTPUT_DEV}\n\n" >> $LOG_DEV;

    sleep $INTERVAL;
done

# Remember to cd back out of the log directory.
cd ..
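For example, to log the state of the wireless interface wlan0 once per second (the interface name depends on the device):

./wlantest.sh wlan0 1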

B.2 generate_plotdata.py

#!/usr/bin/python

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

# Parses log files created by 'wlantest.sh' and outputs data for plotting.

from datetime import datetime
import re
import sys

def get_next_timestamp(f):

    match = None

    # Search for next timestamp.
    while not match:
        line = f.readline()

        if line == "":
            # Reached end of log file. Stop parsing.
            return None

        # Search line for start of next log entry.
        match = re_timestamp.search(line)

    # Parse and return timestamp.
    return datetime(int(match.group('year')), int(match.group('month')),
                    int(match.group('day')), int(match.group('hour')),
                    int(match.group('minute')), int(match.group('second')))

if len(sys.argv) != 3:

    # Print usage help.
    print "Usage: " + sys.argv[0] + " regexp logfile"
    sys.exit(1)

re_timestamp = re.compile(r"Time: (?P<year>\d+)-(?P<month>\d+)-(?P<day>\d+)\s+"
                          r"(?P<hour>\d+):(?P<minute>\d+):(?P<second>\d+)")
re_regexp = re.compile(sys.argv[1])

# Open log file specified as parameter for parsing.
logfile = open(sys.argv[2], "r")

# Contains the timestamp of the next log entry to be parsed for each file.
current_timestamp = {}

parsing = True

# Print a header for the output.
header = "# Regexp: " + sys.argv[1] + "\n"

print header + "\n"

while parsing:

    # Line to be output after processing the current entry of each log file.
    line_to_print = ""

    current_timestamp = get_next_timestamp(logfile)

    if current_timestamp == None:

        # End of file. Stop parsing.
        break

    print "%02d:%02d:%02d\t" % \
        (current_timestamp.hour, current_timestamp.minute, current_timestamp.second),

    match = None

    # Find the next line that contains a specified pattern.
    while match == None:

        line = logfile.readline()

        if line == "":
            # Reached end of log file. Stop parsing.
            parsing = False
            break

        # Search line for the specified pattern.
        match = re_regexp.search(line)

    if not parsing:
        break

    print match.group(1)


logfile.close()
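For example, to extract the link quality values from the iwconfig.log file produced by wlantest.sh into a data file for gnuplot (the regular expression is only an example and must contain exactly one capture group):

./generate_plotdata.py 'Link Quality[:=](\d+)' iwconfig.log > linkquality.data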

B.3 generate_all_plots.sh

#!/bin/sh

print_help()
{
    echo "Runs gnuplot on all .gnuplot files in a directory.";
    echo "Usage: `basename $0` DIRECTORY";
}

# Check for correct number of command line arguments.
if [ "$#" != "1" ]
then
    print_help;
    exit 1;
fi

# Move into directory.
cd "$1";

# Run gnuplot.
for i in `ls *.gnuplot`;
do
    gnuplot "$i";
done

B.4 resourcelogger.sh

#!/bin/sh

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

print_help()
{
    echo "Usage: `basename $0` logdir [interval]";
}

# Check for correct number of command line arguments.
if [ "$#" = "1" ]
then
    INTERVAL="1";

elif [ "$#" = "2" ]
then
    INTERVAL=$2;
else
    print_help;
    exit 1;
fi

LOG_DIR=$1;

# Start the logging process.
LOG_HOSTINFO="hostinfo.log";
LOG_SAR="sar.log";
LOG_PIDSTAT="pidstat.log";

# Get some information about the host.
COMMAND_LINE="$0 $*";
INTERFACE=`iwconfig 2> /dev/null | grep "ESSID" | cut -d' ' -f1`;
IPADDRESS=`ifconfig | grep $INTERFACE -A1 | grep "inet addr:" | \
    cut -d: -f2 | cut -d' ' -f1`;
OSPLATFORM=`uname -srm`;
STARTTIME=`date`;
HOSTNAME=`hostname`;

# Create log directory
LOG_DIR="${LOG_DIR}/log_resource_${HOSTNAME}_`date +"%Y%m%d_%H%M%S"`"
mkdir -p ${LOG_DIR}

# Print and log host information.
printf "Command line: '$COMMAND_LINE'\n";
printf "Command line: '$COMMAND_LINE'\n" >> ${LOG_DIR}/${LOG_HOSTINFO};
printf "IP/Hostname: $IPADDRESS ($HOSTNAME)\n";
printf "IP/Hostname: $IPADDRESS ($HOSTNAME)\n" >> \
    ${LOG_DIR}/${LOG_HOSTINFO};
printf "OS/Platform: $OSPLATFORM\n";
printf "OS/Platform: $OSPLATFORM\n" >> ${LOG_DIR}/${LOG_HOSTINFO};
printf "Logging started at: $STARTTIME\n";
printf "Logging started at: $STARTTIME\n" >> ${LOG_DIR}/${LOG_HOSTINFO};

# Start logging.
sar -o ${LOG_DIR}/${LOG_SAR} -Ap ${INTERVAL} 0 > /dev/null 2>&1 &
PID_SAR=$!;

# Stop logging when a key is pressed.

read -p "Press any key to stop..." reply
printf "Killing logging processes..."
kill $PID_SAR
wait
printf "Done.\n"
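For example, to log resource usage once per second to a directory on the memory card (the path is only an example):

./resourcelogger.sh /media/mmc1/logs 1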

B.5 networklogger.sh

#!/bin/sh

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

print_help()
{
    echo "Usage: `basename $0` logdir interface rtp_port [interval]";
}

# Check for correct number of command line arguments.
if [ "$#" = "3" ]
then
    INTERVAL="1";

elif [ "$#" = "4" ]
then
    INTERVAL=$4;
else
    print_help;
    exit 1;
fi

LOG_DIR=$1;
INTERFACE=$2;
RTP_PORT=$3;
RTCP_PORT=`expr ${RTP_PORT} + 1`
OLSR_PORT="698";

LOG_HOSTINFO="hostinfo.log";
LOG_IWCONFIG="iwconfig.log";
LOG_TCPDUMP="tcpdump.pcap";

# Get some information about the host.
COMMAND_LINE="$0 $*";
IPADDRESS=`ifconfig | grep $INTERFACE -A1 | grep "inet addr:" | \
    cut -d: -f2 | cut -d' ' -f1`;
OSPLATFORM=`uname -srm`;
STARTTIME=`date`;
HOSTNAME=`hostname`;

# Create log directory
LOG_DIR="${LOG_DIR}/log_network_${HOSTNAME}_`date +"%Y%m%d_%H%M%S"`";
mkdir -p ${LOG_DIR};

# Print and log host information.
printf "Command line: '$COMMAND_LINE'\n";
printf "Command line: '$COMMAND_LINE'\n" >> ${LOG_DIR}/${LOG_HOSTINFO};
printf "IP/Hostname: $IPADDRESS ($HOSTNAME)\n";
printf "IP/Hostname: $IPADDRESS ($HOSTNAME)\n" >> \
    ${LOG_DIR}/${LOG_HOSTINFO};
printf "OS/Platform: $OSPLATFORM\n";
printf "OS/Platform: $OSPLATFORM\n" >> ${LOG_DIR}/${LOG_HOSTINFO};
printf "Logging started at: $STARTTIME\n";
printf "Logging started at: $STARTTIME\n" >> ${LOG_DIR}/${LOG_HOSTINFO};

# Start logging WLAN information.
wlanmonitor.sh ${INTERVAL} >> ${LOG_DIR}/${LOG_IWCONFIG} &
PID_WLANMONITOR=$!;

# Start logging OLSR, RTP and RTCP traffic.
tcpdump -w ${LOG_DIR}/${LOG_TCPDUMP} -i ${INTERFACE} -s 128 \
    -n port ${OLSR_PORT} or port ${RTP_PORT} or port ${RTCP_PORT} > \
    /dev/null 2>&1 &
PID_TCPDUMP=$!;

# Stop logging when a key is pressed.
read -p "Press any key to stop..." reply
printf "Killing logging processes..."
kill $PID_WLANMONITOR $PID_TCPDUMP;
wait
printf "Done.\n"
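For example, to capture OLSR traffic and an RTP/RTCP stream using ports 3000/3001 on wlan0 (the port number and log directory are only examples; parse_rtcp_data.sh in Appendix B.10 assumes RTCP on port 3001):

./networklogger.sh /media/mmc1/logs wlan0 3000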

B.6 wlanmonitor.sh

#!/bin/sh

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

print_help()
{
    echo "Usage: `basename $0` [interval]";
}

# Check for correct number of command line arguments.
if [ "$#" = "0" ]
then
    INTERVAL="1";

elif [ "$#" = "1" ]
then
    INTERVAL=$1;
else
    print_help;
    exit 1;
fi

while true
do
    # Timestamp every log entry.
    printf "Time: `date +'%F %H:%M:%S'`\n";
    iwconfig 2> /dev/null;

    sleep ${INTERVAL}
done

B.7 start_ffserver.sh

#!/bin/sh

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

print_help()
{
    echo "Usage: `basename $0` <logdir>";
}

# Check for correct number of command line arguments.
if [ "$#" != "1" ]
then
    print_help;
    exit 1;
fi

LOG_DIR=$1;
mkdir -p ${LOG_DIR};

ffserver | tee ${LOG_DIR}/log_ffserver_`date +"%Y%m%d_%H%M%S"`

B.8 start_olsrd.sh

#!/bin/sh

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

print_help()
{
    echo "Usage: `basename $0` <logdir>";
}

# Check for correct number of command line arguments.
if [ "$#" != "1" ]
then
    print_help;
    exit 1;
fi

LOG_DIR=$1;
mkdir -p ${LOG_DIR};

olsrd > ${LOG_DIR}/log_olsrd_`date +"%Y%m%d_%H%M%S"`

B.9 playallstreams.sh

#!/bin/sh

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

print_help()
{
    echo "Usage: `basename $0` <logdir> <server> <rtp_port>";
}

# Check for correct number of command line arguments.
if [ "$#" != "3" ]
then
    print_help;
    exit 1;
fi

LOG_DIR=$1;
SERVER=$2;
RTP_PORT=$3;

printf "\n\n"
read -p "Press any key to start playing the first stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/action_400x240_1000k_mpeg4.avi > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/action_400x240_1000k_mpeg1.mpg > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/action_240x144_500k_mpeg4.avi > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/action_240x144_500k_mpeg1.mpg > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/action_240x144_200k_mpeg4.avi > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/action_240x144_200k_mpeg1.mpg > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply

mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/interview_400x240_1000k_mpeg4.avi > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/interview_400x240_1000k_mpeg1.mpg > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/interview_240x144_500k_mpeg4.avi > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/interview_240x144_500k_mpeg1.mpg > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the next stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/interview_240x144_200k_mpeg4.avi > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`

printf "\n\n"
read -p "Press any key to start playing the last stream..." reply
mplayer -rtsp-port ${RTP_PORT} \
    rtsp://${SERVER}:5454/interview_240x144_200k_mpeg1.mpg > \
    ${LOG_DIR}/log_mplayer_`date +"%Y%m%d_%H%M%S"`
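For example, with FFserver running on 192.168.1.10 and 3000 as the local RTP port (both values are only examples):

./playallstreams.sh /media/mmc1/logs 192.168.1.10 3000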

B.10 parse_rtcp_data.sh

#!/bin/sh

# Author:
# Magnus E. Halvorsen (magnuseh@ifi.uio.no)

print_help()
{
    echo "Usage: `basename $0` {capturefile}";
}

# Check for correct number of command line arguments.
if [ "$#" != "1" ]
then
    print_help;
    exit 1;
fi


tshark -r $1 -d udp.port==3001,rtcp -R "rtcp.pt == 201" -T fields \
    -E "header=y" -e "frame.time_relative" -e "rtcp.ssrc.cum_nr" \
    -e "rtcp.ssrc.fraction"
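The output is tab-separated and can be redirected to a data file for plotting, for example using the capture file produced by networklogger.sh:

./parse_rtcp_data.sh tcpdump.pcap > rtcp.data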

B.11 generate_resource_plots.sh

#!/bin/sh

sar -f sar.log > cpu.data
sar -f sar.log -r > memory.data
sar -f sar.log -d > blockdevices.data
sar -f sar.log -I 10 > dsp.data
sar -f sar.log -n DEV | grep wlan0 > network.data
sar -f sar.log -n EDEV | grep wlan0 > networkerrors.data

# Remove the last line, which contains average values, so it doesn't show up in the
# plotted graph.
sed -i '$d' *.data

# Generate plots.
gnuplot ~/UiO/master/Experiments/cpu.gnuplot
gnuplot ~/UiO/master/Experiments/memory.gnuplot
gnuplot ~/UiO/master/Experiments/blockdevices.gnuplot
gnuplot ~/UiO/master/Experiments/dsp.gnuplot
gnuplot ~/UiO/master/Experiments/network.gnuplot
gnuplot ~/UiO/master/Experiments/networkerrors.gnuplot
