Report submitted in partial fulfillment of the requirements for the degree of B.Sc.

In Electrical and Electronic Engineering

Under the Supervision of Dr. Idris Ahmed Ali

By Duha Mohammed Hassanein Yousif

To

Department of Electrical and Electronic Engineering
University of Khartoum

July 2005

Dedication

To my father, whose timely support is the basis of my career. To my mother, for her sacrifice and encouragement. To my sisters and brothers, for their great support. To Yousif, for being there for me. To everyone who provided help and support throughout my life.

Acknowledgement:

I am very grateful to Dr. Idris for his conscientious help throughout the project, his excellent supervision, and his thoughtful suggestions for the improvement of my work. Great thanks to Dr. Izzeldin Kamil Amin and Dr. Salah Elhaggaz for their great contribution to making this project. I also want to gratefully thank my colleague Swaid Ali Elbakri for his valuable help in finishing this work. Finally, I want to thank my sister and my friend Hind for her patience and her assistance in making this a better piece of work, and also everyone who worked behind the scenes to finish this project.

Table of Contents

Part One
Chapter One: Introduction

Part Two: Literature Review
Chapter Two: Transmission Media
2.1 Guided Transmission Media
2.1.1 Twisted Pair
2.1.2 Coaxial Cable
2.1.3 Optical Fiber
2.1.4 Comparison
2.2 Unguided Transmission Media
2.2.1 Microwave Frequencies
2.2.2 Infrared Frequencies
2.2.3 Radio Frequencies
Chapter Three: Broadband Wired Networks
3.1 Copper Networks
3.1.1 Integrated Services Digital Network (ISDN)
3.1.1.1 Background
3.1.1.2 The Analog-to-Digital Conversion
3.1.1.3 ISDN Components
3.1.1.4 ISDN Access Interfaces
3.1.1.5 Broadband ISDN
3.1.2 Digital Subscriber Line (DSL)
3.1.2.1 Introduction
3.1.2.2 DSL Technologies
3.2 Cable Networks
3.2.1 Cable Modem
3.2.2 Cable Modem Components
3.2.3 Cable Modem Termination System (CMTS)
3.2.4 Cable Modem Applications
3.2.5 Cable Modem Limitations
3.3 Fiber Optic Networks
3.3.1 Introduction
3.3.2 Network Types
3.3.2.1 Ethernet
3.3.2.2 Fast Ethernet
3.3.2.3 Gigabit Ethernet
3.3.2.4 10 Gigabit Ethernet
3.3.2.5 Fiber Channel
3.3.2.5.1 Key Features of Fiber Channel
3.3.2.5.2 How Fiber Channel Works
3.3.2.5.3 Classes of Services
3.3.2.5.4 Applications of Storage Area Networks
3.3.2.6 Fiber Distributed Data Interface (FDDI)
3.3.2.7 Synchronous Optical Networks (SONET)
3.3.2.8 Fiber to the Home (FTTH)
Chapter Four: Wireless Networks
4.1 Wireless Access
4.1.1 Spread Spectrum Modulation
4.1.1.1 Frequency Hopping Spread Spectrum (FHSS)
4.1.1.2 Direct Sequence Spread Spectrum (DSSS)
4.1.2 The Role of WLL
4.1.3 Advantages of WLL over the Wired Approach
4.1.4 Alternatives to WLL
4.1.5 Propagation Considerations for WLL
4.1.6 Wireless Services
4.1.7 Wireless Technologies
4.1.7.1 Wi-Fi
4.1.7.2 WiMAX
4.1.7.3 Third Generation (3G)
4.1.7.4 Ultra Wide Band (UWB)
4.2 Satellite Access
4.2.1 Background
4.2.2 Two-Way Satellite Internet
4.2.3 Why Satellites
4.2.4 Satellite Frequency Bands
4.2.5 Satellite Types
4.2.5.1 Geosynchronous Earth Orbit (GEO) Satellites
4.2.5.2 Medium Earth Orbit (MEO) Satellites
4.2.5.3 Low Earth Orbit (LEO) Satellites
4.2.6 Communications Equipment
4.2.7 Space Security Unit
4.2.8 Satellite Characteristics
4.2.9 Advantages of Broadband Satellite
4.2.10 Disadvantages of Broadband Satellite
4.2.11 Applications of High-Speed Satellite Communication
Chapter Five: Video Streaming
5.1 Why Video Streaming as a Broadband Connectivity Application
5.2 Streaming vs. Downloading
5.3 Classes of Streaming
5.3.1 Streaming Stored Video
5.3.2 Streaming Live Video
5.3.3 Web Server
5.3.4 Streaming Server
5.4 Video Compression and Decompression
5.4.1 Codecs
5.4.2 Video Compression Algorithms
5.4.2.1 Joint Photographic Experts Group (JPEG)
5.4.2.2 Moving Picture Experts Group (MPEG)
5.4.2.3 H.261
5.4.2.4 H.263
5.5 Internet Transport Protocols
5.5.1 Transmission Control Protocol (TCP)
5.5.2 User Datagram Protocol (UDP)
5.5.3 Real Time Protocol (RTP)
5.5.4 VDP
5.5.5 Real Time Streaming Protocol (RTSP)
5.5.6 RSVP
5.6 Ways of Delivering the Streamed Data
5.7 Projecting Bandwidth Requirements
5.8 Projecting the Capacity of Available Bandwidth

Part Three: Design, Analysis and Results
Chapter Six: Methods and Techniques
6.1 Installing and Running a Streaming Server
6.2 Helix Universal Server
6.2.1 Streaming Media Encoders
6.2.2 Protocols Compatible with Helix Universal Server
6.2.3 Start Requirements
6.3 RealProducer
6.3.1 Targeting Audiences
6.3.2 RealProducer Video Codecs
6.4 Bandwidth Requirements
6.5 Analysis
Chapter Seven: Results and Discussion
Chapter Eight: Conclusion
References

Abstract

The term "broadband" has become commonplace for describing the future of digital communications. It is widely used to refer to a range of technologies being offered or developed for the delivery of data communications services. Broadband refers most commonly to a new generation of high-speed transmission services aimed at residential and small business users. On its face, the term refers to the substantial bandwidth that a high-speed connection can provide to a user. Broadband refers to the very high-speed transfer of information through technologies such as fiber optics, satellites, wireless transmission, and co-axial cable. Data-rich applications such as full-motion video require more telecommunication capacity, or bandwidth, than low-bandwidth applications such as e-mail or still imagery.


Part One

Chapter One: Introduction

Broadband connections have an obvious technological superiority over dial-up Internet access. First of all, the most important characteristic of broadband is high speed, or large bandwidth. However, there is no generally accepted definition of broadband. As defined by the FCC, the speed of an "advanced telecommunications service" should be at least 200 kbps in each direction; the ITU defines broadband as transmission capacity faster than ISDN's primary rate (1.5 Mbps). Despite the lack of a consistent definition, broadband is generally considered able to provide sufficient connection speed for acceptable performance with most Internet applications, such as browsing the Web, transferring files and streaming media. Moreover, the definition of broadband should keep up with the advance of technologies and the increasing bandwidth demands of various applications. Broadband does not merely mean large downstream bandwidth, but also large upstream bandwidth; however, the downstream and upstream bandwidths are not necessarily symmetric. In addition to higher speed, a broadband connection also generally provides an "always-on" connection to the Internet. The always-on characteristic is especially crucial to applications that need connections to be constant, or capable of being initiated immediately at any time. Broadband is also superior to a dial-up connection in terms of latency and jitter. Latency, or delay, is a measure of how long it takes to deliver a packet across the network to its destination. Jitter measures the variance in latency. These two parameters are affected by many factors, such as congestion within the network, streaming technology and network management, not merely network bandwidth. These two additional parameters are crucial for assessing the performance of applications that depend on real-time delivery of information or interaction, such as online video or interactive video conferencing. There are multiple transmission media or technologies that can be used to provide broadband access. These include cable, an enhanced telephone service called digital subscriber line (DSL), satellite, fixed wireless, and others. In this thesis various broadband techniques are introduced. Chapter two introduces the different transmission media, guided and unguided, with a brief description of the transmission characteristics and limitations of each medium. Chapters three and four deal with the technology of broadband delivery through fixed wires and by wireless connections.

Chapter five introduces the concepts of video streaming, showing how a broadband connection provides a strong platform for video applications, which have always suffered from limited bandwidth. Chapter six explains all the hardware and software needed for a complete media process, from the creation step until delivery to the audience. Chapter seven shows graphical results given by the network analyzer "Ethereal"; the results were taken while delivering from a web server and from a streaming server.
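The latency and jitter measures defined in this chapter are easy to compute once per-packet delays have been captured. The short Python sketch below is only an illustration with made-up delay values, not a measurement taken in this project; it treats jitter as the standard deviation of the delays, which is one common way of quantifying their variance.

    # Minimal sketch: mean latency and jitter from a list of per-packet delays.
    # The delay values are hypothetical examples, not measurements from this project.
    from statistics import mean, pstdev

    delays_ms = [42.0, 45.5, 41.2, 60.3, 43.8]   # one-way packet delays in milliseconds

    latency_ms = mean(delays_ms)   # average latency
    jitter_ms = pstdev(delays_ms)  # spread of the delays, used here as the jitter figure

    print(f"mean latency = {latency_ms:.1f} ms, jitter = {jitter_ms:.1f} ms")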

Part Two: Literature Review

Chapter Two: Transmission Media

In data transmission systems, the transmission medium is the physical path between transmitter and receiver. The transmission media used to convey information can be classified as guided or unguided.

2.1 Guided transmission media: Guided media provide a physical path along which the signals are propagated; these include twisted pair, coaxial cable, and optical fiber. Unguided media employ an antenna for transmitting through air, vacuum, or water. Traditionally, twisted pair has been the workhorse for communications of all sorts. Higher data rates over longer distances can be achieved with coaxial cable, so coaxial cable has often been used for high-speed local area networks and for high-capacity long-distance trunk applications. However, the tremendous capacity of optical fiber has made that medium more attractive than coaxial cable, and thus optical fiber has taken over much of the market for high-speed LANs and for long-distance applications. Unguided transmission techniques commonly used for information communications include broadcast radio, terrestrial microwave, and satellite; infrared transmission is used in some LAN applications. The characteristics and quality of a data transmission are determined both by the characteristics of the medium and by the characteristics of the signal. In the case of guided media, the medium itself is more important in determining the limitations of transmission. For unguided media, the bandwidth of the signal produced by the transmitting antenna is more important than the medium in determining transmission characteristics. One key property of signals transmitted by antenna is directionality. In general, signals at lower frequencies are omnidirectional; that is, the signal propagates in all directions from the antenna. At higher frequencies, it is possible to focus the signal into a directional beam.

2.1.1 Twisted Pair:
The least expensive and most widely used guided transmission medium is twisted pair.
Physical Description
A twisted pair consists of two insulated copper wires arranged in a regular spiral pattern (figure 2.1). A wire pair acts as a single communication link. Typically, a number of these pairs are bundled together into a cable by wrapping them in a tough protective sheath. Over longer distances, cables may contain hundreds of pairs. The twisting tends to decrease the crosstalk interference between adjacent pairs in a cable. Neighboring pairs in a bundle typically have somewhat different twist lengths to reduce crosstalk interference. On long-distance links, the twist length typically varies from 5 to 15 cm. The wires in a pair have thicknesses from 0.4 to 0.9 mm.

Applications
1. In telephone networks, to connect subscribers to their exchanges.
2. Handling digital data traffic at modest data rates by means of a modem.
3. Digital signaling for connecting to a digital switch or digital PBX; 64 kbps is most common.
4. Within buildings, for local area networks supporting personal computers. Data rates for such products are typically in the neighborhood of 10 Mbps.

Transmission Characteristics
Twisted pair may be used for both analog and digital transmission. For analog signals, amplifiers are required about every 5 to 6 km. For digital transmission (using either analog or digital signals), repeaters are required every 2 or 3 km. Compared to other commonly used guided transmission media (coaxial cable, optical fiber), twisted pair is limited in distance, bandwidth, and data rate. The medium is quite susceptible to interference and noise because of its easy coupling with electromagnetic fields.

Figure 2.1

Unshielded and Shielded Twisted Pair
Twisted pair comes in two varieties: unshielded and shielded.

Unshielded twisted pair (UTP) is ordinary telephone wire. Office buildings, by universal practice, are prewired with excess unshielded twisted pair, more than is needed for simple telephone support. This is the least expensive of all the transmission media commonly used for local area networks, and it is easy to work with and easy to install. Unshielded twisted pair is subject to external electromagnetic interference, including interference from nearby twisted pair and from noise generated in the environment. A way to improve the characteristics of this medium is to shield the twisted pair with a metallic braid or sheathing that reduces interference. This shielded twisted pair (STP) provides better performance at higher data rates. However, it is more expensive and more difficult to work with than unshielded twisted pair.

2.1.2 Coaxial Cable
Physical Description
Coaxial cable, like twisted pair, consists of two conductors, but it is constructed differently to permit operation over a wider range of frequencies. It consists of a hollow outer cylindrical conductor that surrounds a single inner wire conductor (Figure 2.2).

Figure 2.2

A single coaxial cable has a diameter of 1 to 2.5 cm. Coaxial cable can be used over longer distances and support more stations on a shared line than twisted pair.
Applications
• Television distribution
• Long-distance telephone transmission
• Short-run computer system links

• Local area networks

Transmission Characteristics
Coaxial cable is used to transmit both analog and digital signals. Coaxial cable has frequency characteristics that are superior to those of twisted pair, and can hence be used effectively at higher frequencies and data rates. Because of its shielded, concentric construction, coaxial cable is much less susceptible to interference and crosstalk than twisted pair. The principal constraints on performance are attenuation, thermal noise, and intermodulation noise; the latter is present only when several channels (FDM) or frequency bands are in use on the cable. For long-distance transmission of analog signals, amplifiers are needed every few kilometers, with closer spacing required if higher frequencies are used. The usable spectrum for analog signaling extends to about 500 MHz. For digital signaling, repeaters are needed every kilometer or so, with closer spacing needed for higher data rates.

2.1.3 Optical Fiber
Physical Description
An optical fiber is a thin (roughly 2 to 125 µm), flexible medium capable of guiding an optical ray. Various glasses and plastics can be used to make optical fibers. The lowest losses have been obtained using fibers of ultrapure fused silica. Ultrapure fiber is difficult to manufacture; higher-loss multicomponent glass fibers are more economical and still provide good performance. Plastic fiber is even less costly and can be used for short-haul links, for which moderately high losses are acceptable. An optical fiber cable has a cylindrical shape and consists of three concentric sections: the core, the cladding, and the jacket (Figure 2.3).
Applications
• Long-haul trunks
• Metropolitan trunks
• Rural exchange trunks
• Subscriber loops
• Local area networks

Figure 2.3

Transmission Characteristics
Two types of propagation exist here:
• Single-mode propagation: provides superior performance because there is a single transmission path.
• Multimode propagation: multiple propagation paths exist, each with a different path length; this causes signal elements (light pulses) to spread out in time, which limits the rate at which data can be accurately received (figure 2.4).

Figure 2.4

The following characteristics distinguish optical fiber from twisted pair and coaxial cable:
• Greater capacity
• Smaller size and lighter weight
• Lower attenuation
• Electromagnetic isolation

• Greater repeater spacing

2.1.4 Comparison
Table 2.1: Point-to-point transmission characteristics of guided media

Medium            Data rate   Bandwidth   Repeater distance
Twisted pair      4 Mbps      3 MHz       2-10 km
Coaxial cable     500 Mbps    350 MHz     1-10 km
Optical fibre     2 Gbps      2 GHz       10-100 km

2.2 Unguided transmission media:

Three general ranges of frequencies are of interest in our discussion of wireless transmission:

2.2.1 Microwave frequencies: Microwave transmission is line-of-sight transmission: the transmitting station must be in visible contact with the receiving station. This sets a limit on the distance between stations, depending on the local geography. Typically the line of sight, due to the Earth's curvature, is only about 50 km to the horizon. Repeater stations must be placed so that the data signal can hop, skip and jump across the country (figure 2.5).

Figure 2.5

Microwaves operate at high frequencies, typically 3 to 10 GHz, which allows them to carry large quantities of data due to their large bandwidth.
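The 50 km horizon figure quoted above depends on antenna height. A common rule of thumb, not taken from this report, puts the geometric horizon at roughly d ≈ 3.57·√h kilometers for an antenna h meters above flat ground; the sketch below simply evaluates it for a few tower heights (a roughly 200 m tower gives about the 50 km quoted above).

    # Sketch: approximate line-of-sight distance to the horizon versus antenna height.
    # Uses the rule of thumb d [km] ~ 3.57 * sqrt(h [m]); the constant is an approximation,
    # not a figure from this report.
    from math import sqrt

    def horizon_km(height_m: float) -> float:
        return 3.57 * sqrt(height_m)

    for h in (10, 50, 100, 200):   # antenna heights in meters
        print(f"{h:>4} m tower -> horizon at about {horizon_km(h):.0f} km")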

2.2.2 Infrared frequencies: These cover the range from 3 × 10^11 to 2 × 10^14 Hz. Infrared systems are

simple in design and therefore inexpensive. They use the same signal frequencies used on fiber optic links. IR systems detect only the amplitude of the signal, so interference is greatly reduced. These systems are not bandwidth limited and thus can achieve transmission speeds greater than the other systems. IR technology was initially very popular because it delivered high data rates at a relatively cheap price. The drawbacks of IR systems are that the transmission spectrum is shared with the sun and other sources such as fluorescent lights; if there is enough interference from other sources, it can render the LAN useless. IR systems require an unobstructed line of sight (LOS): IR signals cannot penetrate opaque objects, which means that walls, dividers, curtains, or even fog can obstruct the signal.

2.2.3 Radio frequencies: There are three types of RF (radio frequency) propagation: Ground Wave, Ionospheric and Line of Sight (LOS).

Chapter Three: Broadband Wired Networks

3.1 Copper Networks:

3.1.1 Integrated Services Digital Network (ISDN):
3.1.1.1 Background:
The back end of the network (the interoffice trunks) was practically all digital and the switching systems were becoming digital very quickly, so it was expected that the only part of the network that would remain analog was the local loop, or last mile (the loop from the central office to the

customer). This created what is often called the local loop problem: much of the local loop plant was installed 40 to 60 years ago and had been designed for ordinary voice communications. The existing voice networks did not deal well with data for the following reasons:
• Modems had to be used to transmit data.
• Data rates were around 9600 b/s.
• Connections were unreliable.
The problem was therefore how to run high-speed digital data on the local loop. This problem existed until ISDN was conceived in the 1980s by the world's telephone companies as the next-generation network. ISDN is the replacement of traditional analog plain old telephone service (POTS) equipment and wiring schemes with higher-speed digital equipment. The transition from POTS to ISDN changed the way connections in the local loop area were processed.

3.1.1.2 The Analog-to-Digital Conversion:
To carry a voice signal on a digital carrier, the telephone companies had to develop equipment that could convert the voice signal. They did this by sampling the signal; the resulting numerical data was then transmitted over the digital link and converted back into analog form at the receiving switch. Engineers found that most of what we hear lies in the range between 300 and 3400 Hz, so the phone companies needed to provide 3 kHz of bandwidth for each voice signal, plus 1 kHz for separation between calls, for a total of 4 kHz. They determined that, for an accurate reproduction of the signal, they needed to sample at twice that rate, or 8,000 times per second. Each sample resulted in a number represented as an 8-bit digital "word". So, to transmit the human voice digitally, the company needed to provide enough bandwidth to send 8 bits of data 8,000 times per second, or 64 kbps. This became the foundation for the architecture of ISDN. The concept of ISDN was introduced as a possible solution. By moving the analog-to-digital conversion equipment onto the customer premises, the phone company could provide both data and voice services over a single line. The voice service would be digitized at the customer premises, combined with any data services, and then these integrated services would be transmitted to the phone company's central office.

3.1.1.3 ISDN Components:
The ISDN standards define five groups of functional devices: the NT1, NT2, TE1, TE2, and the TA.
Network Terminator 1: The Network Terminator 1 (NT1) is the device that communicates directly with the central office switch. The NT1 receives a "U" interface connection from the phone company, and puts out a "T" interface connection for the NT2, which is often in the same

piece of physical hardware. The NT1 handles the physical layer responsibilities of the connection, including physical and electrical termination, line monitoring and diagnostics, and multiplexing of the D and B channels.
Network Terminator 2: The Network Terminator 2 (NT2) sits between an NT1 device and any terminal equipment or adapters. An NT2 accepts a "T" interface from the NT1 and provides an "S" interface. In most small installations, the NT1 and NT2 functions reside in the same piece of hardware. In larger installations, including all PRI installations, a separate NT2 may be used. ISDN network routers and digital PBXs are examples of common NT2 devices. The NT2 handles data-link and network layer responsibilities in ISDN installations with many devices, including routing and contention monitoring (figure 3.1).
Terminal Equipment 1: A Terminal Equipment 1 (TE1) device is a piece of user equipment that speaks the "S" interface language natively and can connect directly to the NT devices. Examples of a TE1 device would be an ISDN workstation (such as the SGI Indy), an ISDN fax, or an ISDN-ready telephone.
Terminal Equipment 2: Terminal Equipment 2 (TE2) devices are far more common; in fact, every telecommunications device that is not in the TE1 category is a TE2 device. An analog phone, a PC, and a fax machine are all examples of TE2 devices. To attach a TE2 device to the ISDN network, you need the appropriate terminal adapter. A TE2 device attaches to the terminal adapter through the "R" interface.
Terminal Adapters: These devices connect a TE2 device to the ISDN network. The Terminal Adapter (TA) connects to the NT device using the "S" interface and connects to a TE2 device using the "R" interface. Terminal adapters are often combined with an NT1 for use with personal computers. Because of this, they are often referred to as ISDN modems. This is not really accurate, because TAs do not perform analog-to-digital conversion like modems.

Figure 3.1
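As a quick check of the arithmetic in section 3.1.1.2, the 64 kbps bearer rate follows directly from sampling the 4 kHz voice allocation at twice that rate with 8-bit samples. The following sketch only restates that calculation.

    # Sketch: the 64 kbps ISDN bearer rate from the sampling parameters in section 3.1.1.2.
    voice_band_hz = 4_000             # 3 kHz voice band plus 1 kHz separation per call
    sample_rate = 2 * voice_band_hz   # sample at twice the band: 8,000 samples per second
    bits_per_sample = 8               # each sample stored as an 8-bit "word"

    bit_rate = sample_rate * bits_per_sample
    print(f"{bit_rate} bit/s = {bit_rate // 1000} kbps")   # 64000 bit/s = 64 kbps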

ISDN specifies a number of reference points that define logical interfaces between the functional groupings. ISDN reference points include the following:
• R—the reference point between non-ISDN equipment and a TA.
• S—the reference point between user terminals and the NT2.
• T—the reference point between NT1 and NT2 devices.
• U—the reference point between NT1 devices and line-termination equipment in the carrier network.

3.1.1.4 ISDN Access Interfaces:
ISDN interfaces can be either PRI (primary rate interface) or BRI (basic rate interface); they differ mainly in the number of channels. ISDN channels are usually divided into two different types, B and D:

• The bearer channel (B): The core of any ISDN interface is the bearer channel, or B channel. A single B channel carries 64 kbps of digital traffic. This traffic can be a digitized voice signal, digitized video, or raw data. The 64 kbps throughput is the perfect amount of bandwidth for a sampled voice signal. B channels are usually used in groups of 2, 23, or even more, to provide additional bandwidth or voice lines. To control the transmission of this data, they are always combined with a D channel.
• The data channel (D): This is used to convey signaling requests to an ISDN switch; in essence it provides a signaling path over the local loop to the telephone company's central office. The router uses the D channel to dial the destination phone number. It has a bandwidth of 16 kbps for BRI and 64 kbps for PRI. Although the D channel is used mainly for signaling, it can also carry packet-switched data.

Basic Rate Interface (BRI)

The BRI was intended to become the standard subscriber interface. It specifies two bearer channels and one data channel. The two bearer channels carry the customer information, while the D channel is a 16 kbps channel that provides signaling and framing for the B-channel payload. There is also an additional 48 kbps used for overhead that is not seen by the consumer; this constitutes the remainder of the line capacity not used by the B and D channels. When both B channels are active, the aggregate bandwidth becomes 128 kbps. While some people will promote the BRI as being a 144 kbps data

channel (64 + 64 + 16 = 144), only the 128 kbps B-channel bandwidth (64 + 64 = 128) is commonly available to the user (figure 3.2). The 16 kbps data channel is reserved for signaling in most circumstances. The BRI was intended for residential or home-office use. A BRI allows users to access both voice and data services simultaneously. Depending on the hardware, one can connect up to eight distinct devices to a BRI. This allows one to build a network of phone, data, and video devices, and to use any three of them at the same time.

Figure 3.2

Primary Rate Interface (PRI)
The PRI provides 23B+D, and all channels are 64 kbps, so the physical interface of a PRI is simply a T1 (24 channels of 64 kbps each); technically it is a T1 with extended superframe framing. PRI is frequently used as a PBX interface where the full signaling capability of the D channel is needed. If one router simply needs to be connected to another, an ordinary T1 will do fine and cost less. Circumstances do occur, however, when we need both a router and another device on the same interface and want to periodically adjust the amount of bandwidth given to each; for this application, PRI is ideal.
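The interface arithmetic described above (64 kbps B channels plus a 16 or 64 kbps D channel) can be written out in a few lines. This is purely illustrative arithmetic; the helper function is this report's own construction, not part of any ISDN software.

    # Sketch: aggregate ISDN interface rates from the channel structure described above.
    def isdn_rate_kbps(b_channels: int, d_channel_kbps: int) -> int:
        return b_channels * 64 + d_channel_kbps

    print("BRI (2B+D): ", isdn_rate_kbps(2, 16), "kbps")    # 144 kbps, of which 128 kbps is B-channel capacity
    print("PRI (23B+D):", isdn_rate_kbps(23, 64), "kbps")   # 1536 kbps, carried on a T1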

Figure 3.3

Euro-ISDN makes use of additional line bandwidth by assigning thirty B channels to each PRI instead of the North American standard of 23 (figure 3.3).

One of the most useful features of the ISDN network design is the ability to combine 64 kbps B channels for applications that require larger amounts of bandwidth. When the ITU originally developed the ISDN standards, it included several common B-channel aggregations, called H channels; for example, H0 means 384 kbps, i.e. six B channels.

3.1.1.5 Broadband ISDN
B-ISDN supplies bandwidth in excess of 1.544 Mbps, i.e. faster than a T1. B-ISDN is found in three common forms:
• Frame Relay: available today in 56 kbps and 1.5 Mbps lines, frame relay service is an example of a B-ISDN solution. Frame relay uses packet switching instead of the circuit-switching protocols commonly used in BRIs.
• SMDS: the Switched Multimegabit Data Service provides packet-switched connectivity from 1.5 Mbps to 45 Mbps.
• ATM: Asynchronous Transfer Mode communication uses small, 53-byte "cells" to transfer data at rates of up to 155 Mbps and 622 Mbps. ATM is commonly held to be the future direction of B-ISDN services.

3.1.2 Digital Subscriber Line (DSL):

3.1.2.1 Introduction:
Although the Public Switched Telephone Network (PSTN) uses a bandwidth ranging from 400 Hz up to 3400 Hz to carry its acoustic signals, the actual frequency spectrum of the copper line extends up to 1.1 MHz. Thus a new idea emerged: to use the full frequency bandwidth of the copper lines by squeezing data through these lines using special DSL modems. These modems use modern digital signal processing (DSP) technologies to divide the full bandwidth of the line into a number of carrier subchannels. Each of these subchannels occupies a certain frequency band and is separated from the others, so interference between the channels is averted. One of these channels is allocated to the normal plain old telephone service (POTS), with a frequency range of 0-4 kHz. The remaining channels are distributed, some for upstream and the others for downstream. Most DSL technologies require that a signal splitter be installed at the customer premises, but it is possible to manage the splitting remotely from the central office, which is known as splitterless DSL. The number of subchannels depends on the DSP technology being used. All transmission of data over wires involves coding the data in some way consistent with the carrying capacity and noise conditions of the wire. The familiar dial-up modem encodes and decodes data in such a way that the data can pass through the traditional switches and transmission links, which were designed to carry voice; this more or less limits speeds to 56 kbps. DSL uses an advanced coding scheme that is not compatible with existing switches. Consequently, new equipment known as a DSL access

multiplexer (DSLAM) has to be installed in any central office where DSL is to be offered. The DSLAM must in turn be connected to a switched data network that ultimately connects the central office to the Internet (figure 3.4). DSL service enables the transmission of packet-switched traffic over the twisted copper pairs at much higher speeds than a dial-up Internet access service can offer. DSL can operate at megabits per second, depending on the quality and length of the particular cable. Because DSL uses frequencies much higher than those used for voice communication, both voice and data can be sent over the same telephone line; thus, customers can talk on their telephone while they are online, and voice service will continue even if the DSL service goes down. Theoretically, DSL services are supposed to provide high data rates of about 1.5 Mbps up to 52 Mbps downstream and 384 kbps up to 2.3 Mbps upstream.

Figure 3.4

In practice, however, there are several factors on which these data rates depend:

• The DSL type: there are many types of digital subscriber line, each with its own specifications and limitations. These types are discussed in this chapter.
• The distance between the customer premises and the telco's central office: this is the length of the telephone line that connects the subscriber to the central office (the length the signal actually traverses). Roughly, this distance must not be more than 5 kilometers.

• The quality of the telephone line and the preparation of the equipment on the customer side.
• The services provided by the Internet service provider (ISP).
• The quality of the Internet backbone.

3.1.2.2 DSL Technologies:
DSL technologies can be broken into two fundamental classifications:
• Asymmetric (ADSL)
• Symmetric (SDSL)
As the name implies, ADSL uses higher downstream rates and lower upstream rates; in contrast, SDSL uses the same rate for downstream and upstream. The term xDSL covers a number of similar yet competing forms of ADSL and SDSL technologies. xDSL is drawing significant attention from implementers and service providers because it promises to deliver high-bandwidth data rates to dispersed locations with relatively small changes to the existing telco infrastructure. xDSL services are dedicated, point-to-point, public network access services carried over twisted-pair copper wire on the local loop (last mile) between a network service provider's (NSP) central office and the customer site, or on other local loops.

1/ Asymmetric Digital Subscriber Line (ADSL):
ADSL technology is asymmetric: it allows more bandwidth downstream than upstream. This asymmetry, combined with always-on access (which eliminates call setup), makes ADSL ideal for Internet surfing, video-on-demand, and remote LAN access. Users of these applications typically download much more information than they send. ADSL succeeds because it takes advantage of the fact that most of its target applications (video on demand, Internet access, multimedia and PC services) function perfectly well with a low upstream data rate; such applications do not need high upstream bit rates. For example, MPEG movies require 1.5 or 3.0 Mbps downstream while needing only between 16 kbps and 64 kbps upstream (iec.org). Downstream data rates depend on a number of factors, including the length of the copper line, its wire gauge, the presence of bridged taps, and cross-coupled interference. Line attenuation increases with line length. Ignoring bridged taps, ADSL performs as shown in Table 3.1.

Table 3.1

Data rate (Mbps)   Wire gauge (AWG)   Wire size (mm)   Distance (km)
1.5 or 2           24                 0.5              5.5
1.5 or 2           26                 0.4              4.6
6.1                24                 0.5              3.7
6.1                26                 0.4              2.7
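Table 3.1 can be read as a small lookup: given a loop length and wire gauge, pick the highest rate whose reach covers the loop. The sketch below does exactly that with the table values above; it is only an illustration, not a line-qualification procedure used by any operator, and real performance also depends on the other factors listed earlier (bridged taps, line quality, and so on).

    # Sketch: choosing an achievable ADSL downstream rate from Table 3.1.
    # The "1.5 or 2" rows are represented here as 2.0 Mbps for simplicity.
    table_3_1 = [
        (6.1, 24, 3.7),   # (rate in Mbps, wire gauge in AWG, reach in km)
        (6.1, 26, 2.7),
        (2.0, 24, 5.5),
        (2.0, 26, 4.6),
    ]

    def best_rate_mbps(loop_km: float, gauge_awg: int) -> float:
        rates = [rate for rate, awg, reach in table_3_1 if awg == gauge_awg and loop_km <= reach]
        return max(rates) if rates else 0.0   # 0.0: loop is longer than any reach in the table

    print(best_rate_mbps(3.0, 24))   # -> 6.1
    print(best_rate_mbps(5.0, 26))   # -> 0.0 (beyond the distances listed for 26 AWG)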

ADSL Modulation: Two choices are widely available: Carrierless Amplitude Phase (CAP) and Discrete MultiTone (DMT).

1/ Carrierless Amplitude Phase (CAP)
CAP divides the available spectrum into three regions: 0 to 4 kHz is allocated for POTS transmission, 25 kHz to 160 kHz for upstream data traffic, and 240 kHz to 1.5 MHz for downstream data traffic (figure 3.5).

Figure 3.5

Discrete MultiTone (DMT)
DMT describes a version of multicarrier DSL modulation in which incoming data is collected and then distributed over a large number of small individual carriers, each using a form of QAM. DMT divides the signal into 256 equally sized pieces or channels (figure 3.6). These channels can be modulated with a maximum of 15 bits/s per Hz. Each channel is monitored constantly; should the quality become overly impaired, the signal is reallocated to another channel. DMT has the capability to step up or

down in 32 kbps increments to maintain quality. This ability to adjust speed, correct errors, relocate channels, and so on generates a higher rate of power consumption. DMT is more complex than CAP because of the processes and resources involved in monitoring and allocating information, but it allows more flexibility than CAP.

Figure 3.6
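To make the DMT figures concrete, the back-of-the-envelope sketch below spreads the roughly 1.1 MHz of usable copper spectrum mentioned in section 3.1.2.1 across the 256 subchannels and applies the quoted ceiling of 15 bits per hertz. It is an upper bound only; a real line reaches far less because of noise, attenuation and the bands reserved for POTS and upstream traffic.

    # Sketch: rough upper bound on DMT throughput from the figures quoted in the text.
    line_bandwidth_hz = 1.1e6     # usable copper spectrum (section 3.1.2.1)
    subchannels = 256             # DMT carriers
    max_bits_per_hz = 15          # quoted modulation ceiling per carrier

    subchannel_width_hz = line_bandwidth_hz / subchannels   # ~4.3 kHz per carrier
    ceiling_bps = line_bandwidth_hz * max_bits_per_hz       # same as summing all carriers

    print(f"each subchannel is about {subchannel_width_hz / 1e3:.1f} kHz wide")
    print(f"theoretical ceiling is about {ceiling_bps / 1e6:.1f} Mbit/s")   # ~16.5 Mbit/s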

2/ Very-High-Data-Rate Digital Subscriber Line (VDSL)

VDSL is the highest-rate DSL technology available, running at speeds of up to 52 Mbps. It is the second generation of DSL, with higher throughput and simpler implementation requirements than ADSL. The relationship between downstream speed and the distance between the user and the service provider is shown in Table 3.2.

Table 3.2

Target range (Mbps)   Distance (meters)
12.96-13.8            1500
25.92-27.6            1000
51.84-55.2            300

3/ G.lite ADSL
G.lite is a medium-bandwidth version of ADSL that allows up to 1.5 Mbps downstream and up to 512 kbps upstream, and allows voice and data to coexist on the wire without the use of splitters. Typical telco implementations currently provide 1.5 Mbps downstream and 160 kbps upstream.

4/ Rate-Adaptive DSL (RADSL)
A nonstandard version of ADSL that automatically adjusts the connection speed to suit the quality of the telephone line. This allows RADSL to function over longer distances than ADSL.

Symmetric DSL flavors
SDSL methodologies are not as widespread as those of ADSL. SDSL is available in the following forms:
1/ SDSL
SDSL offers equivalent traffic flow in each direction but, like SHDSL, it cannot share the line with analogue signals, thus posing significant installation/modification costs in the local loop. SDSL is best suited to sites that require significant upload speeds, such as web/FTP servers and business applications. The capacity of SDSL is adjusted according to signal quality, and speed and distance combinations ranging from 160 kbit/s over 7 km to 1.5 Mbit/s over 3 km are offered. Higher speeds are possible by combining multiple twisted-pair wires.
2/ Symmetric High Data Rate DSL (SHDSL)
SHDSL connections are best suited for servers (web, FTP, file) and other business uses, such as video conferencing, that require high speeds in both directions. SHDSL uses a copper pair to send and receive data through two bands, which allow for speeds of up to 2.3 Mbit/s in both directions. By including a second copper pair, SHDSL speeds can reach 4.6 Mbit/s in each direction. These speeds are possible over a 3 km range, with data rates attenuating over longer distances. The two SHDSL bands send data over the low frequencies to extend the reach of the loop, making it impossible for SHDSL to carry a voice channel (POTS) as ADSL does. This lack of voice capability imposes significant installation costs in the local loop, a cost that is passed on to the consumer through higher subscription costs. Therefore, SHDSL is more suitable as a replacement for traditional leased lines for business (businesses can usually absorb higher subscription rates than private users) rather than for the consumer market.
3/ High Data Rate DSL (HDSL)
HDSL is meant to deliver symmetric service at up to 2.3 Mbps, although it is also available at 1.5 Mbps. This symmetric service does not allow for standard telephone service on the same copper pair.
4/ HDSL-2
HDSL2 differs from HDSL in that it uses one pair of wires to convey 1.5 Mbps, whereas ANSI HDSL uses two wire pairs.
5/ ISDN DSL (IDSL)
IDSL supports downstream and upstream rates of up to 144 kbps using existing phone lines. It is unique in that it has the ability to deliver services through a DLC (digital loop carrier), a remote device located in remote terminals placed in newer housing developments to simplify the

distribution of wiring from the telco. IDSL differs from ISDN in that it is an always-available service rather than a dial-up service. It is, however, capable of using the same TA used in ISDN installations. IDSL is designed to extend DSL to locations that are a long distance from a telephone central office.

3.2 Cable Networks
3.2.1 Cable Modem
The television broadcast signal, regardless of the standard used, is one of the most complex signals used in commercial communications. The signal consists of a combination of amplitude, frequency, phase and pulse modulation techniques, all on a 6 MHz channel, with a single-sideband transmission process called vestigial sideband. Television signals that are broadcast over the air are transmitted in 6 MHz channels that are allocated to broadcasters by the Federal Communications Commission (FCC), which ensures that stations using the same channel are sufficiently far apart that they do not interfere with one another in any areas where they may be received. But preventing interference may also require that two stations not use adjacent channels in the same area, for a more subtle reason. In the air, broadcast signal strength falls off rapidly with distance from the transmitter, and as a result the signal from a nearby transmitter can be several orders of magnitude stronger than that from a distant transmitter. Because transmitters cannot contain their signals perfectly within their designated bandwidth, and because receivers cannot perfectly discriminate between signals from adjacent channels, a stronger nearby channel can interfere with a weak distant signal. This is known as the near-far problem, and the result is that it is not always practical to use adjacent television channels in one area. Television signals delivered over traditional cable networks are sent the same way as they are over the air: by dividing the cable spectrum into 6 MHz channels of bandwidth and modulating each television signal onto one channel (frequency division multiplexing). But cable systems can carry more channels than broadcast television for two reasons:
1. Since all channels can be transmitted at the same power level, the near-far problem does not exist, allowing adjacent channels to be used on the cable.
2. Cable can carry as many channels as the infrastructure will permit (100 channels or more); cable systems are not limited to the bandwidth designated by the FCC.

Cable TV appeared as an industry during the early 1960s. The initial networks used a basic tree architecture in which all signals emanated from the head-end location and were distributed to individual subscribers via a series of main trunks (trees), sub-trunks (branches), and feeders (twigs). This topology requires analog amplifiers to periodically boost signals to acceptable levels based on the service area being covered. However, the benefits of solving the gain/loss problem were offset by the noise introduced by, and directly attributable to, the amplifiers; analog amplifiers, as noted in any communications discussion, do nothing to eliminate noise. These cable systems were one-way distribution systems built with coaxial cable and microwave radio systems. But in early 1988 the CATV companies discovered that fiber optic cables could be used as a means of improving the cable infrastructure in both quality and capacity. The initial deployments used a Fiber Backbone (FBB) overlay placed on top of the existing tree networks to do the following:
• Improve performance
• Reduce cascading-amplifier problems
• Increase reliability
• Segment systems into smaller, regional areas
• Facilitate targeted programming
• Improve upstream performance
Over the past decade, CATV networks migrated from the tree architecture, to the FBB, to the current HFC platform. These Hybrid Fiber Coaxial (HFC) networks drive the fiber closer to the consumer's door. They still use the conventional tree architecture, which branches off at the last mile from the node to the subscriber; unfortunately, amplifiers may still be placed inefficiently. Comparing the number of nodes over this period of evolution, the industry has reduced node service areas from 5,000-20,000 homes per node to approximately 500 homes. Operators must continually consider a system that will improve reliability for the end user while reducing the initial cost and the ongoing operating costs. The cable companies and manufacturers came together formally in December 1995 to begin working toward an open standard. The resultant specification is called the Data Over Cable Service Interface Specification (DOCSIS) (figure 3.7).

Figure 3.7

Television signals are delivered over CATV networks by dividing up the cable spectrum into 6 MHz (in the American system) or 8 MHz (in the European system) channels. One or more of these channels are dedicated to high-speed Internet access. To deliver data services over a cable network, one television channel (in the 50-750 MHz range) is typically allocated for downstream traffic to subscribers and another channel (in the 5-42 MHz band) is used to carry upstream signals. Putting both upstream and downstream data on the cable television system requires two types of equipment: a cable modem on the customer end and a cable modem termination system (CMTS) at the cable provider's end (figure 3.8).

Figure 3.8

Basically, cable modems and their termination systems use the preexisting cable network. Cable modems in the home can receive data at speeds as high as 40 million bits a second and can transmit at speeds up to 10 million bits a second. To implement high-speed access over a cable system, the system operator must dedicate some of the system's capacity, typically the equivalent of a single television channel, to the cable modem service. One television channel provides a pool of about 40 million bits a second of downstream capacity. A fundamental problem for cable systems arises from the fact that the capacity of a single cable is shared by many subscribers. In a traditional cable network, the same signal is delivered to every household, and the tuner in the television set or in the set-top box selects the specific program that the subscriber views on the television set. Cable modems work the same way. The downstream data (from the head end to the subscriber) is broadcast over the cable just like a television signal. The interface specifications include an encryption element to prevent anyone other than the subscriber from reading the data in these broadcast packets. In a large cable system, say one with 100,000 subscribers, a downstream capacity of 40 million bits a second would work out to an average capacity of 400 bits a second per subscriber. Hybrid fiber-coaxial cable networks permit different data streams to be sent to each neighborhood node. Thus if a cable system serves an average of 2,000 homes for each fiber node, the average downstream capacity would be 20,000 bits a second for each home. If only 10 percent of the homes have cable modems, then the average downstream capacity would be 200,000 bits a second. A cable modem system capable of delivering average traffic levels this high would provide an acceptable web-browsing service for most subscribers. The data capacity is not rigidly divided among subscribers; rather, the capacity resides in a pool and is allocated as users need to communicate. Notice, however,

that if many consumers choose to use the web in a fashion that requires continuous transmission, say downloading lots of music files or watching a video conference or a football game, the shared capacity will be exhausted. When traffic levels grow, cable operators have two alternatives for expanding capacity. First, they could dedicate more of the cable's transmission capacity to data services and less to television; the least-watched of a hundred television channels could be sacrificed to expand data capacity. Second, they could reduce the number of households served by each fiber node, essentially moving the fiber closer to the home.
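The capacity-sharing arithmetic in the last two paragraphs can be written out directly. The figures below are the ones used in the text (a 40 Mbps downstream channel, 2,000 homes per fiber node, 10 percent cable-modem take-up); the function itself is only an illustration.

    # Sketch of the shared-capacity arithmetic described above: one downstream television
    # channel (~40 Mbit/s) is pooled among the cable-modem subscribers behind a node.
    def avg_downstream_bps(channel_bps: float, homes: int, takeup: float) -> float:
        return channel_bps / (homes * takeup)

    pool = 40e6                                   # one 6 MHz channel -> about 40 Mbit/s
    print(avg_downstream_bps(pool, 100_000, 1.0)) # whole 100,000-subscriber system: 400 bit/s each
    print(avg_downstream_bps(pool, 2_000, 1.0))   # per fiber node, every home subscribed: 20 kbit/s
    print(avg_downstream_bps(pool, 2_000, 0.10))  # 10% take-up: 200 kbit/s per subscriber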

Cable modem technology basically depends on the "cable modem" equipment at the customer premises; it is connected to the computer on one side and to a passive splitter on the other (figure 3.9). The function of the splitter is to split the incoming cable into two sub-paths, one to the cable modem and the other to the set-top box. The set-top box is a piece of equipment used to achieve compatibility between the incoming signal and the analog television. With the new developments in the cable TV network towards digital transmission and the appearance of new interactive services, the set-top box was improved into the "interactive set-top box", which is able to do the following:
1. Enable analog televisions to receive digital TV broadcasts.
2. Enable subscribers to access the Internet through the television screen, since it contains a complete computer with a cable modem inside it.

3.2.2 Cable Modem Components:
A cable modem consists of (figure 3.9):

• A tuner
• A demodulator
• A modulator
• A media access control (MAC) device
• A microprocessor

Figure 3.9

• Tuner: Once the cable modem is connected, the tuner is set to the channel frequency dedicated to Internet access. The tuner contains a diplexer, which allows it to make use of one set of frequencies (generally between 42 and 850 MHz) for downstream traffic and another set of frequencies (between 5 and 42 MHz) for the upstream data.
• Demodulator: A quadrature amplitude modulation (QAM) demodulator takes a radio-frequency signal and turns it into a simple signal that can be processed by the analog-to-digital (A/D) converter. An error correction module then checks the received information against a known standard, so that problems in transmission can be found and fixed.
• Modulator: A modulator is used to convert the digital computer network data into radio-frequency signals for transmission. It consists of a section that inserts information used for error correction at the receiving end, a QAM modulator, and a digital-to-analog (D/A) converter.

• Media access control (MAC) device: The MAC sits between the upstream and downstream portions of the cable modem and acts as the interface between the hardware and software portions of the various network protocols. All computer network devices have MACs, but in the case of a cable modem the tasks are more complex than those of a normal network interface card. For this reason, in most cases, some of the MAC

functions will be assigned to a central processing unit (CPU), either the CPU in the cable modem or the CPU of the user's system.
• Data and control logic (CPU): In systems where the cable modem is the sole unit required for Internet access, the microprocessor picks up the MAC slack and much more.

3.2.3 Cable Modem Termination System (CMTS):
A cable modem termination system is located at the local office of a cable television company. The CMTS takes the traffic coming in from a group of customers on a single channel and routes it to an Internet service provider (ISP) for connection to the Internet. At the head-end, the cable providers will have, or lease space for a third-party ISP to have, servers for accounting and logging, Dynamic Host Configuration Protocol (DHCP) servers for assigning and administering the IP addresses of all the cable system's users, and control servers for a protocol called CableLabs Certified Cable Modems, formerly Data Over Cable Service Interface Specification (DOCSIS), the major standard used by cable systems in providing Internet access to users. On the upstream side, information is sent from the user to the CMTS; other users do not see that data at all. The narrower upstream bandwidth is divided into slices of time, measured in milliseconds, in which users can transmit one "burst" at a time to the Internet. The division by time works well for the very short commands, queries and addresses that form the bulk of most users' traffic back to the Internet. A CMTS will enable as many as 1,000 users to connect to the Internet through a single 6 MHz channel, since a single channel is capable of 30 to 40 megabits per second (Mbps) of total throughput.
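The time-sliced upstream sharing described above can be sketched in the same spirit. The burst length below is an illustrative assumption (the text only says the slices are measured in milliseconds); the 10 Mbps and 1,000-user figures are the ones quoted earlier in this chapter.

    # Sketch: sharing an upstream cable channel by dividing it into short time slots.
    # The 2 ms slot length is an assumption for illustration, not a DOCSIS parameter.
    upstream_bps = 10e6        # "up to 10 million bits a second" upstream figure quoted earlier
    slot_ms = 2.0              # assumed duration of one transmit opportunity
    users = 1_000              # modems sharing the channel, as quoted above

    bits_per_burst = upstream_bps * (slot_ms / 1000)
    print(f"one burst carries about {bits_per_burst / 8:.0f} bytes")
    print(f"long-run average per user: {upstream_bps / users / 1000:.0f} kbit/s")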

3.2.4 Cable Modem Applications:
Cable modems open the door for customers to enjoy a range of high-speed data services, all at speeds hundreds of times faster than telephone modem calls. Among these services are:
• Internet access: electronic mail, discussion groups, and the World Wide Web.
• Business applications: interconnecting LANs or supporting collaborative work.
• Cable commuting: enabling the already popular notion of working from home.
• Education: allowing students to access educational resources from home.

3.2.5 Cable Modem Limitations:
1. The shared downstream path in cable systems limits the use of a cable modem system for bandwidth-intensive services such as streaming media.

2. For cable companies to double their streaming capacity, they must double the number of neighborhood nodes in the hybrid fiber-coaxial network or give up some of their video capacity. Doubling the number of nodes is expensive.
3. Not every cable system can immediately deploy cable modem service. Rather, the cable system has to be upgraded to two-way capability.
4. The shared downstream path also presents possible privacy problems. When the equipment is set up correctly, the encryption in the Data Over Cable Service Interface Specification should adequately protect a consumer. However, mistakes in the configuration of the encryption equipment can make traffic vulnerable to interception.

3.3 Fiber Optic Networks:

3.3.1 Introduction:
The need for speed and greater bandwidth continues to grow, and people are turning to fiber optics to meet their bandwidth demands. Fiber optic transmission provides a wide range of benefits not offered by traditional copper cable. These benefits include higher transmission rates, which satisfy the needs of today's consumer. Since the bandwidth of fiber far surpasses that of copper, it is suitable for today's high-speed multimedia needs. The cable is smaller and easier to install because it is lighter and occupies less space. These features make it very adaptable, and as a result it is increasingly found in residential installations. Given these benefits, fiber cable, including its connectivity parts, is competitively priced compared to copper; one must design for the future and weigh today's cost against future growth requirements. Also, fiber optic cable is not affected by interference such as lightning or atmospheric conditions and poses no danger of electric shock or fire hazard. Today's fiber optic cable offers almost unlimited bandwidth and unique advantages over all previously developed transmission media. Fiber optic cable can support higher data rates at greater distances and delivers information with greater fidelity than copper, making it an ideal choice for the home office with high bandwidth requirements. Presently, with advances in digital technology and the further development of standards, fiber optic technology has become an integral part of today's networks and a foundation for the future.

3.3.2 Network Types:
3.3.2.1 Ethernet:
The term Ethernet refers to a family of protocols and standards that together define the physical and data link layers of the world's most popular type of LAN. Many variations of

Ethernet exist; they differ mainly in cabling details, and consequently in speed and distance. Four types of cabling are commonly in use:
10Base5: popularly called thick Ethernet, came first. It operates at 10 Mbps, uses baseband signaling over thick coaxial cable, and can support segments of up to 500 meters.
10Base2: or thin Ethernet, uses thin coaxial cable. It operates at 10 Mbps with baseband signaling, but it can run for only 185 meters.
10Base-T: uses telephone-company twisted pairs, so it is the cheapest system; it can run for 100 meters.
10Base-F: uses fiber optics. It is expensive but has excellent noise immunity; runs of up to 2000 meters are allowed.

3.3.2.2 Fast Ethernet:
To pump up the speed, two ring-based optical LANs were proposed: Fiber Distributed Data Interface (FDDI) and Fiber Channel (FC). They had the problem of complexity and high price. The counter-proposal was to keep 802.3 as it was but make it faster. The basic idea behind Fast Ethernet was to keep all the old frame formats, interfaces, and procedural rules but reduce the bit time from 100 nsec to 10 nsec. The original Fast Ethernet cablings are:
100Base-T4: the category 3 UTP scheme (four twisted pairs). It uses a signaling speed of 25 MHz.
100Base-TX: uses category 5 UTP (two twisted pairs) at a signaling speed of 125 MHz; it is a full-duplex system.
100Base-FX: a full-duplex system that uses two strands of multimode optical fiber, one for each direction. Runs of up to 2 km at 100 Mbps are available.

3.3.2.3 Gigabit Ethernet:
Gigabit Ethernet builds on top of the Ethernet protocol but increases the speed to 1000 Mbps, or 1 Gbps. This protocol, which was standardized in June 1998, promises to be a dominant player in high-speed local area network backbones and server connectivity. Since Gigabit Ethernet builds so heavily on Ethernet, customers are able to leverage their existing knowledge base to manage and maintain gigabit networks.

In order to accelerate speeds from 100 Mbps Fast Ethernet up to 1 Gbps, several changes needed to be made to the physical interface. It was decided that Gigabit Ethernet would look identical to Ethernet from the data link layer upward. The challenges involved in accelerating to 1 Gbps were resolved by merging two technologies: IEEE 802.3 Ethernet and ANSI X3T11

Fiber Channel. Leveraging these two technologies means that the standard can take advantage of the existing high-speed physical interface technology of Fiber Channel while maintaining the IEEE 802.3 Ethernet frame format, backward compatibility for installed media, and use of full- or half-duplex carrier sense multiple access with collision detection (CSMA/CD). This scenario helps minimize the technology's complexity, resulting in a stable technology that can be developed quickly.

Gigabit Ethernet supports both copper and fiber cabling. Signaling at or near 1 Gbps over fiber means that the light source has to be turned on and off in under 1 nsec. LEDs simply cannot operate this fast, so lasers are required.
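The nanosecond figure above, like the 100 nsec and 10 nsec bit times quoted for classic and Fast Ethernet in section 3.3.2.2, is simply the reciprocal of the line rate; a one-line check:

    # Sketch: bit time (reciprocal of the line rate) for the Ethernet generations discussed above.
    for name, rate_bps in [("Ethernet", 10e6), ("Fast Ethernet", 100e6), ("Gigabit Ethernet", 1e9)]:
        print(f"{name:17s} {1 / rate_bps * 1e9:6.0f} ns per bit")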

3.3.2.4 10 Gigabit Ethernet:

As the demand for high-speed networks continues to grow, the need for a faster Ethernet technology is apparent. In March 1999, a working group was formed to develop a standard for 10-Gigabit Ethernet. The preliminary objectives of the working group are:

• Support 10 Gbps Ethernet at roughly 3-4 times the cost of Gigabit Ethernet.
• Maintain the Ethernet frame formats.
• Simple forwarding between all speeds.
• Maintain compatibility with Ethernet.
• Full-duplex operation only.
• A speed-independent medium access control layer to support 10 Gbps in the LAN and about 10 Gbps in the MAN.
• Support star-wired LAN topologies.
• Specify a family of physical layers that support a link distance of at least 200 m on multi-mode fiber (MMF) and at least 3 km on single-mode fiber (SMF).
• Support the existing cabling infrastructure as well as new infrastructure.

10-Gigabit Ethernet is basically the newest and fastest version of Ethernet, and it offers benefits similar to those of the preceding Ethernet standards. However, it will not support the half-duplex operation mode. One of the main benefits of the 10-Gigabit standard is that it offers a low-cost solution to the demand for bandwidth: not only is the cost of installation low, but the cost of network maintenance and management is minimal as well.

10-Gigabit Ethernet also offers straightforward scalability (10/100/1000/10000 Mb/s). Upgrading to 10-Gigabit Ethernet should be simple, since the upgrade paths are similar to those of 1-Gigabit Ethernet, which should be familiar by the time the 10-Gigabit Ethernet standard is released. The main issues are that 10-Gigabit Ethernet is optimized for data and that it does not provide built-in quality of service. However, quality of service may be provided in the higher layers.

In the Ethernet standard there are two modes of operation: half-duplex and full-duplex. In the half-duplex mode, data are transmitted using the popular Carrier-Sense Multiple Access/Collision Detection (CSMA/CD) protocol on a shared medium. The main disadvantages of the half-duplex mode are its efficiency and distance limitations. At a transmission rate of 10 Gbps, the half-duplex mode is not an attractive option, and no market currently exists for half-duplex operation at this rate. Most links at this rate are point-to-point over optical fibers, in which case full-duplex operation is the preferred option, and it can be expected that the standard for 10-Gigabit Ethernet will specify only full-duplex operation. In full-duplex operation there is no contention: the MAC layer entity can transmit whenever it wants, provided that its peer is ready to receive. The distance of the link is limited by the characteristics of the physical medium and devices, power budgets, and modulation.

3.3.2.5 Fiber Channel:

Fiber Channel (FC) is a way to connect storage devices. A storage area network (SAN) is a group of interconnected devices and servers that use a common communication infrastructure; the protocol used for this infrastructure is Fiber Channel. Its goal is to carry different types of traffic for applications that require the first-rate capabilities of both storage and network technologies. Fiber Channel is a high-speed data transfer technology which can be utilized by networks and mass storage. It is an open standard, defined by ANSI, and it supports the most important higher-level protocols, such as the Internet Protocol, ATM (Asynchronous Transfer Mode), and IEEE 802 (Institute of Electrical and Electronics Engineers) standards. Fiber Channel does not make use of its own command set, but merely facilitates the data transfer between the individual FC devices. Fiber Channel is not limited to the transmission of optical signals through fibers; it can also utilize cost-effective copper cables (twisted pair or coaxial).

Fiber Channel completely separates the delivery of data from its content. It is concerned only with delivery and is blind to content, so it is very flexible in the types of data it transports. It works by defining only a method for transmitting data from one node to another, regardless of the type of data transmitted. The nodes can be two computers exchanging network data, a computer sending file data to a peripheral device, or two peripherals.
3.3.2.5.1 Key Features of Fiber Channel: Fiber Channel defines a data transport service that is:
• Fast: most current implementations operate at 1 Gbps, or about 100 MBps. Speeds of 2 Gbps and 4 Gbps (200 and 400 MBps) are already approved and in development.
• Long-distance: up to 10 km between fiber optic connections makes remote mirroring or backup possible, and less expensive copper cables for near-vicinity connections can still span up to 30 meters.
• Scalable: users can start with a basic setup and expand in many ways as a system's requirements evolve.
• Reliable: superior data encoding and error checking, and the improved reliability of serial communications.
• Dependable: redundancies ensure that storage or network access is always available.
• Flexible in many ways:
o Carries any kind of data traffic; mapping protocols have already been defined that enable sending SCSI, IP, ATM, and other higher-level protocols.
o Supports a variety of system topologies.
o Can be carried over various fiber optic or copper cables, and different cable types can be mixed in a single topology.
o Compatible with older technologies such as Ethernet.
• Standardized: unlike proprietary solutions, Fiber Channel is emerging as an industry-wide standard, so many different vendors can develop products that all work together.

3.3.2.5.2 How Fiber Channel Works: Fiber Channel standards specify the physical characteristics of the cables, connectors, and signals that link devices. Since Fiber Channel only provides a data transport mechanism, the standards also define how to map upper-level protocols to the Fiber Channel data format. Fiber Channel carries data over many types of electrical and optical cable; in many cases this permits converting cables already installed for other interconnects into Fiber Channel cables.

Cables connect to ports on the devices to form a link. Each port has two connections, one dedicated to transmitting data and the other to receiving (figure 3.10). In a simple link between two devices, A and B, one cable connects the transmit side of the port on device A with the receive side of the port on device B, and another cable connects device B's transmit side with device A's receive side. Several kinds of acceptable copper wire and multimode or single-mode fiber optic cables have been approved for Fiber Channel. Fiber Channel is a serial data transfer protocol, meaning it transfers data one bit at a time. As a result, each direction of a link requires only one optical fiber or one pair of copper wires, so Fiber Channel cables are very small and simple.

3.3.2.5.3 Classes of Service:

Just like other communication protocols, Fiber channel offers many classes of service, which are usually chosen according to the higher-layer protocol mapped on the Fiber channel protocol. Currently, there are five classes of service, defined as follows:

Figure 3.10

• Class 1: a connection-oriented service that provides a dedicated channel between two devices. In this configuration, if a host and a device are connected, no other host can use that connection. The advantages of using Class 1 are speed and reliability.

• Class 2: known as a "connectionless" service. It is a frame-switched link that guarantees delivery of packets from device to device and acknowledges packet receipt.
• Class 3: called unacknowledged connectionless service, and good for broadcasts. This configuration allows multiple transmissions to be sent across the fiber channel fabric to multiple devices.
• Class 4: called "intermix", which not only creates a dedicated connection, but also allows Class 2 traffic to access the link. This method is very efficient and allows for greater bandwidth because more than one connection can access the system at any time.
• Class 6: dedicated to multicast. It differs from Class 3 in that full channel bandwidth is guaranteed and the destination ports generate responses, which are aggregated into a single frame to the source port.

Class 3 is the most common class used. Data reliability is left up to the higher-level protocol mapped on the fiber channel protocol.

3.3.2.5.4 Applications of Storage Area Networks: Although Fiber Channel defines a common interconnect method for all types of data traffic, including network and peripheral device communications, most development for Fiber Channel technology right now is for data storage devices. One reason for this is that the early development of Fiber-Channel-to-SCSI bridges makes it possible to add Fiber Channel storage devices to a system while maintaining compatibility with older SCSI storage devices. The networking capacity of Fiber Channel has led to a new concept in network data storage: the Storage Area Network (SAN). The high-performance network capacity of Fiber Channel can combine with its peripheral I/O nature to create a network of storage devices, separate from the common LAN, with one or more servers acting as a gateway for storage access. A SAN provides high-speed data storage and retrieval that is scalable and highly available. The SAN concept also provides reliable data access: with a dual arbitrated loop, the SAN can be connected to two servers, providing redundant access to all storage in case one server goes down.

3.3.2.6 Fiber Distributed Data Interface (FDDI):

The Fiber Distributed Data Interface (FDDI) is a high-speed, token-based technology using fiber optic links. With a data transfer rate of 100 Mbps, the ring can support up to 500 nodes with as much as 2 km of spacing between adjacent nodes. In addition, FDDI permits a large number of devices to exploit the extra transmission speed: it allows more than one frame to circulate on the ring at a time, which means that many different packets of data can be in transit over the LAN simultaneously.

FDDI may be used as a LAN, but due to the historically higher costs of its adapters and fiber optics, it has been used primarily in backbone networks and for high-speed communications between host processors.

Technology overview: The continuing rapid increase in the speed and performance of personal computers and workstations has created a demand for a corresponding increase in LAN bandwidth. To meet this need, the FDDI standard, developed by an ANSI committee, has come into widespread use.

The basic operation of FDDI is similar to that of token ring. Since it is very difficult to tap into a fiber, a ring was the logical solution. A station must be in possession of a token before it can transmit an information frame. Once it has seen the frame go around the ring it can then regenerate the token allowing someone else to transmit. However, the potentially large size of the FDDI ring means that it has a higher latency than token ring and so more than one frame may be circulating around the ring at a given time. FDDI groups stations-including workstations, bridges and routers-into a ring, with each station having an input fiber from the previous station and an output to the following one. The last station connects back to the first to complete the ring.

FDDI is based on dual counter-rotating 100 Mbps token-passing rings. Dual counter-rotating rings are used to improve reliability. The rings are labeled the primary ring and the secondary ring. Stations attached to the FDDI network may be connected to both rings, as dual-attach stations (DASs), or only to the primary ring, as single-attach stations (SASs), via FDDI nodes. Data flows in opposite directions along each ring. During normal operation, in the absence of faults, the secondary ring remains inactive. If a node is attached to both rings, as in dual-attach, the node can heal itself if the primary cable fails. The disadvantage of this is that an extra fiber-optic interface is required, adding to the total cost. The alternative is the single-attached node connected to just one of the rings, which is cheaper but does not offer the same protection against cable faults. With single-attached

connections, a cable break could isolate the node from the network, rendering it unable to communicate with other nodes on the network.

3.3.2.7 Synchronous Optical Networks (SONET):

Basic SONET defines a technology for carrying many signals of different capacities through a synchronous, flexible, optical hierarchy. This is accomplished by means of a byte-interleaved multiplexing scheme.

The increased configuration flexibility and bandwidth availability of SONET provide significant advantages over the older telecommunications system. These advantages include:
• a reduction in equipment requirements and an increase in network reliability;
• provision of overhead and payload bytes – the overhead bytes permit management of the payload bytes on an individual basis and facilitate centralized fault sectionalization;
• definition of a synchronous multiplexing format for carrying lower-level digital signals, and a synchronous structure.

Traditionally, transmission systems have been asynchronous, with each terminal in the network running on its own clock. In digital transmission, clocking is one of the most important considerations. Clocking means using a series of repetitive pulses to keep the bit rate of data constant and to indicate where the ones and zeroes are located in a data stream.

In a synchronous system such as SONET, the average frequency of all clocks in the system will be the same or nearly the same. Every clock can be traced back to a highly stable reference supply.
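As a rough illustration of this byte-interleaved multiplexing, the short Python sketch below computes the line rates implied by the standard STS-1 frame figures (a 9-row by 90-column byte frame repeated every 125 microseconds); these figures come from the published SONET standard and are not stated in the text above.

# Hypothetical illustration using the standard STS-1 frame figures
# (9 rows x 90 columns of bytes, one frame every 125 microseconds).
ROWS, COLS = 9, 90
FRAME_PERIOD = 125e-6            # seconds; an 8 kHz frame rate

sts1_bps = ROWS * COLS * 8 / FRAME_PERIOD
print(f"STS-1 line rate: {sts1_bps/1e6:.2f} Mbps")      # 51.84 Mbps

# Byte-interleaving N STS-1 signals simply multiplies the rate.
for n in (3, 12, 48):
    print(f"STS-{n}: {n*sts1_bps/1e6:.2f} Mbps")        # 155.52, 622.08, 2488.32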

3.3.2.8 Fiber to the Home (FTTH): In an FTTH system, equipment at the head end or CO is interfaced into the public switched telephone network (PSTN) using DS–1s and is connected to ATM or Ethernet interfaces. Video services enter the system from the cable television (CATV) head end or from a satellite feed.

All of these signals are then combined onto a single fiber using WDM techniques and transmitted to the end user via a passive optical splitter. The splitter is typically placed

approximately 30,000 feet from the central office (CO). The split ratio may range from 2 to 32 users and is done without using any active components in the network. The signal is then delivered another 3,000 feet to the home over a single fiber. An ideal FTTH system would have the ability to provide all of the services users are currently paying for, such as circuit-switched telephony, high-speed data, and broadcast video services.

At the home, the optical signal is converted into an electrical signal using an optical electrical converter (OEC). The OEC then splits the signal into the services required by the end user. Ideally, the OEC will have standard user interfaces so that special set-top boxes are not needed to provide service.

3.3.2.8.1 Advantages of FTTH:

There are several advantages associated with FTTH, including the following:

• It is a passive network, so there are no active components from the CO to the end user. This dramatically minimizes the network maintenance cost and requirements, as well as eliminating the need for a DC power network.
• It is a single fiber to the end user, providing revenue-generating services with industry standard user interfaces, including voice, high-speed data, analog or digital CATV, DBS, and video on demand.
• FTTH features local battery backup and low-power consumption.
• FTTH is reliable, scalable, and secure.
• The FTTH network is a future-proof architecture.

Chapter 4 Wireless Networks

4.1 Wireless Access:

Traditionally, voice and data communications have been provided to the end user over the local loop by wired systems. Higher data rates are obtainable with broadband wireless technologies, but these were not widely used because of their high cost until the arrival of techniques such as CDMA and TDMA, which allow operators to deploy the

service faster and without the cost of a cable plant. When the local loop is made wireless, the technology is referred to as the Wireless Local Loop (WLL). The radio technology on which broadband wireless access is based is known as Spread Spectrum modulation.

4.1.1 Spread Spectrum Modulation: In any spread spectrum system the input is fed into a channel encoder that produces an analog signal with a relatively narrow bandwidth around some centre frequency. This signal is further modulated using a sequence of digits known as a spreading code, generated by a pseudonoise generator; the effect of this modulation is to increase the bandwidth (spread the spectrum) of the signal to be transmitted. At the receiving end the same digit sequence is used to demodulate the spread spectrum signal, and finally the signal is fed into a channel decoder to recover the data. What is gained from this apparent waste of spectrum:
• Immunity from various kinds of noise.
• The ability to hide and encrypt signals.
• Several users can share the same wider bandwidth with very little interference, using the technique known as Code Division Multiple Access (CDMA).
• Resistance to multipath and fading effects.
As a result, spread spectrum systems can coexist with other radio systems without being disturbed by their presence and without disturbing their activity. The immediate effect of this behavior is that spread spectrum systems may be operated without the need for a license, which made spread spectrum modulation the chosen technology for broadband wireless access operation. However, as mentioned above, spread spectrum technologies have many other advantages, making them an excellent option for the operation of systems in licensed bands, too. There are basically two types of spread spectrum modulation techniques: Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS).

4.1.1.1 Frequency Hopping Spread Spectrum (FHSS): - Process 1 - Spreading code modulation: the frequency of the carrier is periodically modified (hopped) following a specific sequence of frequencies. In FHSS systems, the spreading code is a

list of frequencies to be used for the carrier signal, the "hopping sequence". The amount of time spent on each hop is known as the dwell time and is typically in the range of 100 ms. - Process 2 - Message modulation: the message modulates the (hopping) carrier (FSK), generating a narrowband signal for the duration of each dwell but a wideband signal when the process is viewed over periods of time in the range of seconds. - Redundancy is achieved through the possibility of executing re-transmissions on different carrier frequencies (hops).
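The following minimal Python sketch, offered only as an illustration, shows how a transmitter and a receiver sharing a seed can derive the same pseudo-random hopping sequence; the channel plan and the exact dwell value are assumptions, since the text only says that the dwell time is on the order of 100 ms.

import random

CHANNELS_MHZ = [2402 + 2 * k for k in range(40)]   # illustrative channel set
DWELL_S = 0.1                                      # ~100 ms per hop, as above

def hop_sequence(seed, n_hops):
    rng = random.Random(seed)          # same seed -> same pseudo-random hops
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_hops)]

tx_hops = hop_sequence(seed=1234, n_hops=8)
rx_hops = hop_sequence(seed=1234, n_hops=8)
assert tx_hops == rx_hops              # both ends stay on the same carrier

for i, f in enumerate(tx_hops):
    print(f"hop {i}: carrier at {f} MHz for {DWELL_S*1000:.0f} ms")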

4.1.1.2 Direct Sequence Spread Spectrum (DSSS):

- Process 1 - Spreading code modulation: for the duration of every message bit, the carrier is modulated (PSK) following a specific sequence of bits (known as chips). The process is known as "chipping" and results in the substitution of every message bit by the (same) sequence of chips. In DSSS systems, the spreading code is the chip sequence used to represent message bits. - Process 2 - Message modulation: for message bits "0", the sequence of chips used to represent the bit remains as dictated by process 1 above; for message bits "1", the sequence of chips dictated by process 1 is inverted. In this way message bits "0" and "1" are represented (over the air) by different chip sequences, one being the inverted version of the other. - Redundancy is achieved by the presence of the message bit on each chip of the spreading code: even if some of the chips are affected by noise, the receiver may still recognize the sequence and make a correct decision regarding the received message bit.
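A minimal Python sketch of this chipping idea is given below. The 11-chip Barker code is used purely as an example spreading code; the receiver correlates each received block against the code and decides the bit from the sign of the result, so it tolerates a few corrupted chips.

BARKER_11 = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]   # example spreading code

def spread(bits):
    # replace every message bit by the chip sequence; invert it for a '1'
    chips = []
    for b in bits:
        chips.extend([-c for c in BARKER_11] if b == 1 else BARKER_11)
    return chips

def despread(chips):
    bits = []
    for i in range(0, len(chips), len(BARKER_11)):
        block = chips[i:i + len(BARKER_11)]
        corr = sum(c * r for c, r in zip(block, BARKER_11))  # correlate with the code
        bits.append(0 if corr > 0 else 1)                    # sign gives the bit
    return bits

message = [0, 1, 1, 0]
tx = spread(message)
tx[3], tx[17] = -tx[3], -tx[17]      # corrupt two chips to mimic noise
print(despread(tx))                  # still recovers [0, 1, 1, 0]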

4.1.2 The Role of WLL:

Wireless broadband services are often called "fixed wireless" because the transmitting and receiving stations (of both the service provider and the customers) are in fixed, stationary positions. Typically, a company offering fixed or terrestrial wireless services will operate one or more master microwave antennas (base station antennas) installed on top of tall buildings or possibly mountains adjacent to populated areas, while the users rely on relatively small antennas on top of their office buildings or homes (figure 4.1). From the base station there is a link – wired or wireless – to a switching center, which represents a telephone company office providing connection to the local and long distance telephone networks. In the implementations most likely to compete with cable and DSL services, users' antennas act as both receivers of downstream Internet data and transmitters of upstream data.

4.1.3 Advantages of WLL over Wired Approaches:
Cost: wireless systems are less expensive than wired ones; although the electronics of the wireless transmitter/receiver may be more expensive, with WLL the cost of installing kilometers of cable is avoided, as well as the cost of maintaining the wired infrastructure.
Installation time: WLL systems can typically be installed rapidly once permission to use a given frequency band has been obtained and an elevated site for the base station antennas has been found.

Figure 4.1

Selective installation: radio units are installed only for those subscribers who want the service at a given time, while with a wired system a cable is laid out in anticipation of serving every subscriber in a local area.

4.1.4 Alternatives to WLL:
• Wired scheme using existing installed cable: a large fraction of the earth's inhabitants either do not have a telephone line, because they are too far from any central office, or have a line that

48 don’t have sufficient quality for high speed applications and many of them also don’t have cable TV providing two way data services. • Mobile Cellular Technology: Current :cellular systems are too expensive and don’t provide sufficient facilities to act as a realistic alternative to WLL.A major advantage of WLL above them is that the subscriber unit is fixed and so the subscriber can use a directional antenna pointed at the base station antenna providing an improved signal quality in both directions.

4.1.5 Propagation considerations for WLL:

A signal radiated from an antenna travels along one of three routes: ground wave, sky wave, or line of sight (LOS). We are exclusively concerned with LOS communication because all of these wireless services require line of sight between a central antenna and a customer's antenna (figure 4.2). Above 30 MHz, neither ground wave nor sky wave propagation modes operate, and communication must be by line of sight.

Figure 4.2

For ground-based communication, the transmitting and receiving antennas must be within an effective line of sight of each other. The term effective is used because microwaves are bent or refracted by the atmosphere. The amount and even the direction of the bend depend on conditions, but generally microwaves are bent with the curvature of the earth and will therefore propagate further than the optical line of sight.

For most WLL schemes the frequencies used are in the range of 30 GHz to 300 GHz. The reasons for using frequencies in this range are:
• There are wide unused frequency bands available above 25 GHz.
• At these high frequencies wide channel bandwidths can be used, providing high data rates.
• Small transceivers and adaptive antenna arrays can be used.

But this range of frequencies has some undesirable propagation characteristics:
• Losses are much higher, because free space loss increases with the square of the frequency (see the numerical sketch below).
• Above 10 GHz attenuation effects are large.
• Multipath losses can be quite high.

Because of these negative characteristics, WLL systems can only serve cells of limited radius, usually just a few kilometers. There are some important effects that limit the range and availability of WLL systems:
Fresnel zone: for effective communication there must be an unobstructed line of sight between transmitter and receiver, as shown above.
Atmospheric absorption: at frequencies above 10 GHz, radio waves propagating through the atmosphere are subject to molecular absorption.
Effect of rain: the presence of raindrops can severely degrade the reliability and performance of communication links.
Effect of vegetation: in some small areas obstacles cannot be avoided, which leads to multipath fading.
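To put numbers on the free-space loss and Fresnel-zone points above, the short Python sketch below evaluates the standard free-space path loss and first-Fresnel-zone formulas; the 2 km link length and the two example frequencies are illustrative assumptions, not values taken from the text.

import math

C = 3e8   # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    # free-space path loss (4*pi*d*f/c)^2, expressed in dB: grows with d^2 and f^2
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def first_fresnel_radius_m(freq_hz, d1_m, d2_m):
    # radius of the first Fresnel zone at a point d1 from one end, d2 from the other
    lam = C / freq_hz
    return math.sqrt(lam * d1_m * d2_m / (d1_m + d2_m))

for f in (2.5e9, 28e9):    # illustrative frequencies for a 2 km link
    print(f"{f/1e9:.1f} GHz: path loss {fspl_db(2000, f):.1f} dB, "
          f"mid-path Fresnel radius {first_fresnel_radius_m(f, 1000, 1000):.2f} m")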

4.1.6 Wireless Services: The two leading types of wireless services that can support broadband access to the Internet are:
• Local Multipoint Distribution Service (LMDS)
• Multichannel Multipoint Distribution Service (MMDS)
LMDS and MMDS generally both require a line of sight between a central antenna and a customer's antenna. LMDS provides faster speeds than MMDS, but can only support customers within two or three miles of a central antenna; LMDS is therefore best suited for businesses located in dense urban areas. LMDS is, however, susceptible to interference from rain and snow. In contrast, MMDS technology cannot support such high speeds, but it can reach customers located much further from an antenna than would be possible using an LMDS system.

Technically, MMDS' zone of coverage could extend 35 miles in every direction from a central tower (covering over 3,500 square miles, compared to less than 50 square miles covered by a single LMDS station). MMDS can be deployed more cheaply and can reach more efficiently into suburban and rural areas. On the whole, wireless technology will probably be an important aspect of widespread broadband availability, but it is unlikely that wireless will ever become a ubiquitous option for all consumers.
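A quick check of those coverage figures, assuming simple circular cells, is shown below; the 3-mile LMDS radius is taken from the "two or three miles" mentioned above.

import math

mmds_radius_mi = 35      # from the text
lmds_radius_mi = 3       # "two or three miles" from the text

print(f"MMDS: ~{math.pi * mmds_radius_mi**2:,.0f} sq mi")   # ~3,848 (over 3,500)
print(f"LMDS: ~{math.pi * lmds_radius_mi**2:.0f} sq mi")    # ~28 (less than 50)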

4.1.7 Wireless Technologies: 4.1.7.1 Wi-Fi:

Wi-Fi stands for Wireless Fidelity. Wi-Fi is the industry name for wireless LAN (WLAN) communication technology related to the IEEE 802.11 family of wireless networking standards. It offers the opportunity to link a user to a network anywhere within a specific area or "hot spot" where a connection is available, with peak operating speeds of around 54 Mbps, so it is able to compete with many wired systems. As a result of the flexibility and performance of the system, many Wi-Fi standards have been set up and more are following. Wi-Fi enables people to use their laptop computers as they wait in hotels, airport lounges, cafes, and many other places, using a wireless link rather than needing a cable. The only disadvantage is that once you move out of the area of network coverage, you no longer have the Wi-Fi or network connection.
Wi-Fi standards:
802.11a - Wireless network bearer operating in the 5 GHz ISM band with data rates up to 54 Mbps.
802.11b - Wireless network bearer operating in the 2.4 GHz ISM band with data rates up to 11 Mbps.
802.11e - Quality of service and prioritization.
802.11f - Handover.
802.11g - Wireless network bearer operating in the 2.4 GHz ISM band with data rates up to 54 Mbps.
802.11h - Power control.
802.11i - Authentication and encryption.
802.11j - Interworking.
802.11k - Measurement reporting.
802.11n - Stream multiplexing.
802.11s - Mesh networking.

Of these, the standards that are most widely known are the network bearer standards: 802.11a, 802.11b, and 802.11g.

All the 802.11 standards operate within the ISM (Industrial, Scientific and Medical) frequency bands. These are shared by a variety of other users, but no license is required for operation within these frequencies, which makes them ideal for a general system for widespread use.
The first of the 802.11 standards to gain widespread acceptance was IEEE 802.11b. This supports data rates up to a maximum of 11 Mbps. The transmission standard uses direct sequence spread spectrum (DSSS), with a total of 14 channel centre frequencies defined within the 2.4 GHz ISM band. A number of modulation types are used: to achieve what is termed the basic rate of 1 Mbps, the carrier is modulated using differential binary phase shift keying (DBPSK); for a rate of 2 Mbps (the extended rate), differential quadrature phase shift keying (DQPSK) is used; and to achieve the maximum or enhanced rate of 11 Mbps, complementary code keying (CCK), built on quadrature phase shift keying (QPSK), is employed.
A further specification, 802.11a, provides higher performance, but its transmissions are located in the 5 GHz ISM band. Using orthogonal frequency division multiplexing (OFDM) instead of DSSS, it achieves data transmission rates of up to 54 Mbps. It is not as popular as the "b" version of the standard, since it operates at higher frequencies where chips are a little more expensive.
A further standard, 802.11g, was released in June 2003. Offering data rates of 54 Mbps in the 2.4 GHz ISM band, it has been designed to be completely backward compatible with 802.11b and will work seamlessly with existing Wi-Fi cards or access points. The new standard achieves its higher speed by employing OFDM for the higher data transfer rates, while enhanced protocols enable the backward compatibility. The launch of the new standard was eagerly awaited by the industry. Being compatible with the existing "b" standard, many chips were designed and available before it was officially launched, so it is anticipated that it will gain a significant foothold in the market. Already many Wi-Fi hotspots have adopted it, so that as the cards become more widespread they will be able to be used to gain the higher data transfer speeds.

4.1.7.2 Wi-Max:

Wireless Metropolitan Area Networks (WMANs) cover a much greater distance than WLANs, connecting buildings to one another over a broader geographic area. The emerging WiMax technology will further enable mobility and reduce reliance on wired connections. WiMax is standardized under the specification IEEE 802.16, but the working group has implemented a

number of extensions to enable variants and other facilities or qualities to be defined. In this way the WiMax standard will be able to meet the requirements of a variety of users.
IEEE 802.16a - This version of WiMax is for use in licensed and license-exempt frequency bands between 2 and 11 GHz. It supports mesh deployment, in which transceivers can act as relay stations, passing messages on from one station to the next and thereby increasing the range. The use of the lower frequencies allows more flexible implementations of the technology, as signals at these frequencies can penetrate walls and other barriers without the levels of attenuation experienced at higher frequencies. As a result, most interest is currently in these frequencies.
IEEE 802.16b - This increases the spectrum that is specified to include frequencies between 5 and 6 GHz, while also providing for quality of service.
IEEE 802.16c - This provides a system profile for operating between 10 and 66 GHz and gives more details for operation within this range. The aim is to enable greater levels of interoperability.
IEEE 802.16d - This provides minor improvements and fixes to 802.16a, including the use of 256-carrier OFDM. Profiles for compliance testing are also provided.
IEEE 802.16e - This standard harmonizes networking between fixed base stations and mobile devices, rather than just between base stations and fixed recipients. It will enable activities such as handovers, allowing mobile users to receive a high-quality continuous service as their vehicles move.

4.1.7.3 Third Generation (3G): The term 3G is short for third-generation wireless technology – a technology that helped to bring together two of the world's fastest-growing industries: mobile communications and the wireless Internet. The first generation of cellular phones was based on frequency modulated (FM) analog technology. Most countries developed their own systems; while these phones allowed for roaming within one region, they could not be used across different countries. This was especially a problem in Europe, where each country had its own standard. To address this problem, the European Telecommunications Standards Institute (ETSI) created the first second-generation (2G) digital technology, called the Global System for Mobile Communications (GSM). Japan took a different route by deploying Personal Digital Cellular (PDC) technology. These systems were designed to increase the voice capacity of the original analog systems, as the first-generation

analog systems were becoming capacity-limited due to the explosive growth of the wireless industry. PDC is a TDMA-based technology operating in the 800 MHz and 1500 MHz frequency bands. To get the world on track for the deployment of 3G standards, the International Telecommunications Union (ITU) started the technical framework for 3G; the standard, called International Mobile Telecommunications-2000 (IMT-2000), consists of five operating modes, including three based on Code Division Multiple Access (CDMA) technology. These 3G CDMA modes are most commonly known as CDMA2000, WCDMA and TD-SCDMA. Since 3G CDMA efficiently provides high-quality voice services and high-speed packet data access, it is the preferred technology for 3G. The goal was to establish one worldwide global standard for the next generation of mobile communications. 3G wireless networks are capable of transferring data at speeds of up to 384 Kbps. 3G is considered high-speed or broadband mobile Internet access, and in the future 3G networks are expected to reach speeds of more than 2 Mbps. 3G technologies are turning phones and other devices into multimedia players, making it possible to download music and video clips. The new service is called freedom of mobile multimedia access (FOMA), and it uses wideband code division multiple access (W-CDMA) technology to transfer data over its networks. W-CDMA sends data in a digital format over a range of frequencies, which makes the data move faster but also uses more bandwidth than digital voice services. W-CDMA is not the only 3G technology; competing technologies such as CDMA2000 differ technically but should provide similar services.

4.1.7.4 Ultra Wide Band (UWB): Ultra Wide Band is a wireless communications technology that can currently transmit data at speeds between 40 and 60 megabits per second, and eventually up to 1 gigabit per second. UWB brings the convenience and mobility of wireless communications to high-speed interconnects in devices throughout the digital home and office. Designed for short-range wireless personal area networks (WPANs), UWB is the leading technology for freeing people from wires, enabling wireless connection of multiple devices for transmission of video, audio and other high-bandwidth data. UWB, a short-range radio technology, complements other longer-range radio technologies such as Wi-Fi, WiMax, and cellular wide area communications. It is used to relay data from a host device to other devices in the immediate area (up to 10 meters, or 30 feet).
How UWB works: A traditional UWB transmitter works by sending billions of pulses across a very wide spectrum of frequencies, several GHz in bandwidth. The corresponding receiver then translates the

pulses into data by listening for a familiar pulse sequence sent by the transmitter. Specifically, UWB is defined as any radio technology having a spectrum that occupies a bandwidth greater than 20 percent of the center frequency, or a bandwidth of at least 500 MHz. Modern UWB systems use other modulation techniques, such as orthogonal frequency division multiplexing (OFDM), to occupy these extremely wide bandwidths. In addition, the use of multiple bands in combination with OFDM modulation can provide significant advantages over traditional UWB systems. UWB's combination of broader spectrum and lower power improves speed and reduces interference with other wireless spectra. In the United States, the Federal Communications Commission (FCC) has mandated that UWB radio transmissions can legally operate in the range from 3.1 GHz up to 10.6 GHz, at a limited transmit power of -41 dBm/MHz. Consequently, UWB provides dramatic channel capacity at short range while limiting interference.
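The fractional-bandwidth definition above can be captured in a few lines of Python; the helper below is only an illustration, and the example band edges are assumptions rather than figures from the text.

def is_uwb(f_low_hz, f_high_hz):
    # UWB: bandwidth > 20% of the center frequency, or at least 500 MHz
    bw = f_high_hz - f_low_hz
    f_center = (f_high_hz + f_low_hz) / 2
    return bw / f_center > 0.20 or bw >= 500e6

print(is_uwb(3.1e9, 10.6e9))        # True  - the FCC-mandated UWB band itself
print(is_uwb(2.400e9, 2.4835e9))    # False - the 2.4 GHz ISM band as a whole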

4.2 Satellite Access:

4.2.1 Background: Satellite access technology uses satellites to deliver Internet access to homes and businesses. Until recently, however, only one satellite technology – Hughes Network System's DirecPC – had been available to residential customers for data transmission (Internet access). With an 18-inch satellite dish installed at the subscriber's home and aimed at a geostationary satellite located above the equator, downloading speeds of up to 400 Kbps could be achieved. As of December 2000, uploading via satellite had not been possible: to transmit data, subscribers used a 56K modem to dial up an Internet connection over their telephone line. Unlike cable or DSL technology, DirecPC was not "two-way" or "always on", because users had to dial up an Internet connection to hook into the system. Both cable and DSL offered faster and cheaper broadband access than the satellite Internet access offered to residential customers. Also, like cable and unlike DSL, satellite is a shared medium, meaning that privacy may be compromised and performance speeds may vary depending upon the volume of simultaneous use. On the other hand, the big advantage of satellite is its universal availability: satellite connections can be accessed by anyone with a satellite dish, which makes satellite Internet access a possible solution for rural or remote areas not served by other technologies. The satellite industry has been working to develop upgraded systems that allow two-way and higher-speed satellite Internet connections. On November 6, 2000, Starband Communications announced the first two-way Internet access satellite service for the home, offering 500 Kbps downstream and 150 Kbps upstream. On December 21, 2000, Hughes announced the first

shipments of its new two-way broadband satellite service, with advertised download rates of 400 Kbps and upload rates of up to 256 Kbps. Other satellite companies are planning to offer two-way Internet access at transmission speeds of millions of bits per second.

4.2.2 2-Way Satellite Internet: Two-way satellite systems consist of asymmetric satellite paths: a broadband downstream path for delivery of the actual content and a smaller upstream path carrying the user's requests to the Internet. Internet via satellite provides a permanent two-way connection to the Internet, which saves the delays involved in logging on, downloading large files, surfing the web, downloading e-mail and waiting for the connection to respond. All that is needed is a discrete box and a small satellite dish connected to the computer or network (figure 4.3).

Figure 4.3

4.2.3 Why Satellite?

Satellite access is a mature technology, and recent technological advancements have given it new life. Three key developments have helped satellite become a viable solution:
1) Broadband speeds: Better use of the satellite spectrum, in both the downstream and upstream directions, allows higher-speed access. Since satellite is a broadcast-oriented technology, it relies on a shared pathway; in order to achieve higher-speed access, especially in the upstream direction, it becomes necessary to optimize the allocation of spectrum. Key technology improvements that have helped to drive broadband satellite access are:
• Improved data encoding and signal processing techniques that allow precious radio spectrum to be used most efficiently to deliver the maximum amount of data.
• Packet prioritization and rate control to manage bandwidth allocation among users and applications, and to implement quality of service (QoS) controls.
• Link fidelity that maintains a quality radio connection end-to-end, thus minimizing or eliminating errors.
2) Smaller, more affordable CPE: Satellite services equipment with price points closer to those of other broadband technologies has been a strong driving force in increasing its use. Additionally, having a small outdoor dish overcomes some zoning issues (figure 9), not to mention aesthetic concerns. Typically, dishes smaller than 1 meter in diameter do not require special zoning for residential deployment, and businesses can forego most zoning requirements with dishes of less than 1.5 meters in diameter. A negative outcome of reducing the size of the dish is that it limits the effective radiated power, which in turn limits access speeds; however, better signal processing and data encoding have helped to offset the effect of having a small-diameter dish.
3) IP acceleration: TCP/IP protocols are not well suited for use over a satellite network, given the inherent latency. To minimize the effects of latency, many IP acceleration innovations have been developed that create a better overall experience for the end user. Techniques employed include acknowledgement (ACK) spoofing, compression, pre-fetching, protocol proxies, proprietary encapsulation, and content caching. Though all of these techniques help to improve performance, not all are compatible with every application. Business users need to look for standards-based IP acceleration that preserves the TCP/IP protocol packets end-to-end, without modifying any part of the packet or generating its own "spoofed" acknowledgements. This is key for customers utilizing IP VPNs, remote terminal

applications, voice over IP (VoIP), and other business software. Simple web surfing and e-mail are often less affected and may be more tolerant of some of these schemes.
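To see why IP acceleration matters, recall that classic TCP throughput is bounded by the receive window divided by the round-trip time. The sketch below uses an assumed 64 KB default window and a roughly 600 ms round trip (propagation plus processing); both numbers are illustrative, not taken from the text.

WINDOW_BYTES = 64 * 1024     # assumed default TCP receive window
RTT_S = 0.6                  # assumed GEO round trip incl. processing

max_tcp_bps = WINDOW_BYTES * 8 / RTT_S
print(f"Unaccelerated TCP ceiling: ~{max_tcp_bps/1e3:.0f} kbps")   # ~874 kbps
# The physical link may be far faster; window scaling, ACK spoofing and
# proxies exist precisely to close this gap.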

4.2.4 Satellite Frequency Bands: The most commonly used satellite bands are:
• C-band.
• Ku-band.
• Ka-band.

The C and Ku bands are the most common frequency spectra used by today's satellites. It is important to note that there is an inverse relationship between frequency and wavelength: as frequency increases, wavelength decreases. C-band satellites occupy the 4 to 8 GHz frequency range; these relatively low frequencies translate to large wavelengths, meaning that a large antenna is required to gather the minimum signal strength. The minimum size for a C-band antenna is 2 to 3 meters in diameter. Ku-band satellites occupy the 11 to 17 GHz frequency range; these relatively high-frequency transmissions correspond to shorter wavelengths, so a smaller antenna, about 18 inches in diameter, can be used. Ka-band satellites occupy the 20 to 30 GHz frequency range, which means very small wavelengths and so very small receiving antennas.
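The inverse relationship between frequency and wavelength can be made concrete with wavelength = c / f; the short Python sketch below applies it to the band edges listed above.

C = 3e8   # speed of light, m/s

bands = {"C (4-8 GHz)": (4e9, 8e9),
         "Ku (11-17 GHz)": (11e9, 17e9),
         "Ka (20-30 GHz)": (20e9, 30e9)}

for name, (f_lo, f_hi) in bands.items():
    # higher frequency -> shorter wavelength -> smaller dish
    print(f"{name}: wavelengths {100*C/f_hi:.1f} to {100*C/f_lo:.1f} cm")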

4.2.5 Satellite Types: Satellites are positioned in different orbits, at different distances from the earth:
4.2.5.1 Geosynchronous Earth Orbit (GEO) Satellites: The majority of satellites are positioned 22,300 miles above the earth's equator in geosynchronous earth orbit (GEO) (figure 4.4). A satellite here maintains an orbit with a period of rotation equal to the earth's own rotation period; since the satellites revolve at the same rotational speed as the earth, they appear stationary from the earth's surface, which is why most station antennas (satellite dishes) do not need to move after they have been properly aimed at a target satellite in the sky. A GEO link requires about 0.25 seconds for a round trip. GEO satellites are the ones used for Internet access.
4.2.5.2 Medium Earth Orbit (MEO) Satellites: MEO satellite networks have been proposed that will orbit at distances of about 8,000 miles. Signals transmitted from MEO satellites travel a shorter distance, which translates to

improved signal strength at the receiving end; there is also less transmission delay (the time for a signal to travel up to the satellite and back down to a receiving station). A MEO satellite requires less than 0.1 seconds to complete the trip, and for real-time communications the shorter the transmission delay the better.

Figure 4.4

4.2.5.3 Low Earth Orbit (LEO) Satellites: These are divided into three categories:
• Little LEOs, operating at 0.8 GHz.
• Big LEOs, operating at 2 GHz.
• Mega LEOs, operating at 20 to 30 GHz.
LEOs orbit at 500 to 1,000 miles above the earth. This short distance reduces the transmission delay to only 0.05 seconds and further reduces the need for sensitive and bulky receiving equipment. The higher frequencies associated with mega LEOs translate into more information-carrying capacity and the capability of real-time, low-delay video transmission. Speeds are expected to be 64 Mbps downlink and 2 Mbps uplink.
How are satellites in the same orbit kept from attempting to use the same location in space? With more than 200 satellites in geosynchronous orbit, satellites could run into each other. To tackle that problem, the ITU and FCC designate the locations in geosynchronous orbit where communications satellites can be located; these locations are specified in degrees and are known as orbital slots. The ITU and FCC have specified a 2-degree spacing between orbital slots for C-band and Ku-band satellites.

4.2.6 Communications Equipment: The equipment required to achieve good access to the Internet is:
1) Indoor Equipment:

• Indoor Receive Unit (IRU): a satellite receiver adapter or modem between the computer and the dish. It receives the data signals transmitted by the satellite and converts them into a form that can be processed by the computer.
• Indoor Transmit Unit (ITU): a satellite transmitter adapter or modem between the computer and the dish. It takes the data signals produced by the computer and converts them into a form suitable for the Radio Frequency (RF) transmitter fixed on the dish, which retransmits them to the satellite.
• Installation CD: required for system integrity and operability.
• Power feeder.
• Connection cables: coaxial cables that connect the indoor equipment to the outdoor equipment.

2) Outdoor Equipment: The outdoor equipment consists of an outdoor dish whose diameter is determined according to the field strength of the satellite signal directed at it. The dish is provided with a Low Noise Block (LNB), which amplifies the signal and reduces noise, and with an RF transmitter in the case of uplink transmission.

All components must be highly reliable and lightweight. Global receive and transmit horns receive and transmit signals over wide areas of the earth. Many satellites and ground stations have radio dishes that transmit and receive signals to communicate; the curved dishes reflect outgoing signals from the central horn and focus incoming signals onto it.

4.2.7 Space Security Unit: If data is packaged up and broadcast into space, anyone with a scanner can simply tune in to it. However, systems use air-interface technologies that make it difficult for anyone to eavesdrop; combined systems will use:
• Code Division Multiple Access (CDMA).
• Time Division Multiple Access (TDMA).

• Frequency Division Multiple Access (FDMA), and a range of other xDMA protocols.
On top of that, many networks offer an internal security encryption system.

4.2.8 Satellite Characteristics: Satellite communications have three characteristics that lead to interoperability problems with systems that have not been designed to accommodate them:
Latency: there is an inherent delay in the delivery of a message over a satellite link due to the finite speed of light and the altitude of communications satellites. As stated, there is a 250 millisecond delay over a GEO link; this delay is for one ground-station to satellite to ground-station route (hop), so the round trip for a message and its reply is 500 milliseconds, and the delay may increase if the link includes multiple hops or if intersatellite links are used (see the sketch after this list).
Noise: the strength of a radio signal falls in proportion to the square of the distance traveled; in satellite communication this distance is very large, so the signal becomes very weak. This results in a low signal-to-noise ratio; typical bit error rates might be 10^-7. Noise becomes less of a problem when error control coding is used.
Bandwidth: bandwidth is limited by nature, and the allocations for commercial communications are further limited by international agreements so that the scarce resource can be used fairly.
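The latency figures quoted above can be checked from the orbit altitudes given in section 4.2.5. The Python sketch below counts propagation delay only and assumes the ground stations sit directly beneath the satellite, so real delays are somewhat longer.

C_KM_S = 3e5    # speed of light, km/s
MI_TO_KM = 1.609

# altitudes from section 4.2.5 (LEO taken mid-range at 750 miles)
orbits_km = {"GEO": 22300 * MI_TO_KM,
             "MEO": 8000 * MI_TO_KM,
             "LEO": 750 * MI_TO_KM}

for name, alt_km in orbits_km.items():
    one_hop_s = 2 * alt_km / C_KM_S          # ground -> satellite -> ground
    print(f"{name}: one hop ~{one_hop_s*1e3:.0f} ms, "
          f"request + reply ~{2*one_hop_s*1e3:.0f} ms")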

Fiber optic communications systems have been treated as ideal, with very low latency, no noise and nearly infinite bandwidth; for satellite communications, these characteristics of fiber make it very difficult to provide cost-effective interoperability with land-based systems.

4.2.9 Advantages of Broadband Satellite:
Can be used almost anywhere: Satellites are wireless, so they can reach geographically remote areas. The service can be used not only where DSL is not available, but even where telephone lines cannot reach.
Fast, 2-way connection speeds: Two-way technology gives fast connection and transfer speeds both upstream and downstream. Actual speeds depend on the package chosen.

Always-on connection: No dialling and no waiting to connect; the service is available instantly at all times.
Flexible packages to suit users' needs: A range of satellite solutions is offered to suit the needs of differing users, from single home users to a demanding corporate network.
Unlimited usage: Because the system does not use telephone lines, there are no call charges from extended use. The system is always on and can be used as much as one likes for a fixed price.
Reliability: By utilizing the latest technologies, these solutions provide better than 99.9% reliability, with no need to worry about dropped connections during critical transactions or missed e-mails.

4.2.10 Disadvantages of Broadband Satellite:
• Latency – In order for a satellite to remain at the same location in the sky, it must orbit over the equator at a distance of 22,300 miles. Since the radio waves that travel to and from the satellite are constrained by the speed of light, information sent over the satellite network is subject to a delay, or latency. This is most noticeable with highly interactive applications such as VoIP.
• Weather-related outages – Adverse weather conditions, such as strong thunderstorms, can affect the performance of a broadband satellite network. Despite their unpredictability, these disruptions tend to be short-lived and may not completely disrupt communications. Additionally, ice may accumulate on the satellite dish during colder weather; de-icing equipment can prevent this.
• Solar-related outages – Even though the satellite remains relatively stationary in the sky, the sun does not. Around the spring and fall equinoxes, the sun travels across the sky directly above the equator, and over the course of several days, for a short period each day, it passes directly behind the satellite. When this occurs, the radio energy emitted from the sun can drown out the signal from the satellite, subjecting the receiving antenna on the ground to a great deal of interference. Unlike weather-related outages, solar outages can be easily predicted and planned for.
• Line of sight (LOS) – Each satellite antenna must have an unobstructed, line-of-sight view of the satellite. Trees, mountains, tall buildings, and other structures that block the view will prevent communication with the satellite.

4.2.11 Applications for High-Speed Satellite Communications:
• Desk-to-desk communications.
• Videoconferencing.
• High-speed Internet access.
• E-mail.
• Digital and colour fax.
• Telemedicine.
• Direct TV and video.
• Transaction processing.
• Interactive distance learning.

Chapter 5 Video Streaming

5.1 Why video streaming as a broadband connectivity application:

The recent advent of widely available broadband Internet access has resulted in an explosive growth of new video streaming applications and of research into methods to support such applications efficiently. Many approaches, including source and channel coding techniques, have been proposed to deal with the delay, loss, and time-varying characteristics of best-effort packet-switched networks, but none gave results as good as those made possible by the various broadband techniques. Broadband Internet solved major problems that video streaming faced over narrowband connections, because the bandwidth limitations of telephone modems always represented a major obstacle to putting video material to meaningful use. Problems of video streaming include:
1. Bandwidth: bandwidth cannot be reserved in the Internet, i.e. the available bandwidth is dynamic. If transmission is faster than the available bandwidth, congestion occurs and some packets are lost, which degrades video quality; the bit rate must therefore be matched to the available bandwidth.

2. Delay jitter: the end-to-end delay may fluctuate from packet to packet; this fluctuation is called jitter. The problem is largely solved by using a buffer at the receiver.
3. Packet loss: several solutions have been introduced to overcome this problem, such as:
– Forward Error Correction (FEC) (a minimal sketch is given after this list)
– Retransmission
– Error concealment
– Error-resilient video coding
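As noted in the list above, here is a minimal FEC sketch in Python: one XOR parity packet protects a group of equally sized media packets, so a single loss within the group can be rebuilt without any retransmission. The group size and packet contents are illustrative assumptions.

from functools import reduce

def xor_packets(packets):
    # byte-wise XOR of equally sized packets
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*packets))

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # equally sized media packets
parity = xor_packets(group)                    # one extra packet per group

# Suppose packet 2 is lost in the network: XOR of the survivors plus the
# parity packet reproduces it exactly.
received = [group[0], group[1], group[3], parity]
print(xor_packets(received) == group[2])       # True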

5.2 Streaming Vs Downloading: There are two ways to view video from the Internet: downloading and streaming. With downloading, the entire file is saved on the computer (usually in a temporary folder), which the user then opens and views. This has some advantages (such as quicker access to different parts of the file) but has the big disadvantage of having to wait for the whole file to download before any of it can be viewed. If the file is quite small this may not be too much of an inconvenience, but for large files and long presentations it can be an intolerable delay. After the download, the player software plays the file by reading it off the user's hard disk; in fact, some media players can play a file while it is downloading, as long as it downloads fast enough, but if the video bit rate is too high for the user's bandwidth the user may have to wait until it fully downloads. Streaming media works a bit differently – the end user can start watching the file almost as soon as it begins downloading. In effect, the file is sent to the user in a more or less constant stream, and the user watches it as it arrives. The obvious advantage of this method is that almost no waiting is involved.

How to stream: When video is streamed, a small buffer space is created on the user's computer, and data starts downloading into it (figure 5.1). As soon as the buffer is full (usually just a matter of seconds), the file starts to play. As the file plays, it uses up data from the buffer, but while it is playing, more data is being downloaded. As long as the data can be downloaded as fast as it is used up in playback, the file will play smoothly.

Figure 5.1
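The buffering rule just described can be expressed in a few lines of Python; the 5-second buffer, the 200 kbps clip and the link speeds in the sketch below are illustrative assumptions.

def startup_delay_s(buffer_s, video_kbps, link_kbps):
    # time to prefill `buffer_s` seconds of video before playback starts
    return buffer_s * video_kbps / link_kbps

for link_kbps in (500, 200, 56):
    delay = startup_delay_s(5, 200, link_kbps)
    smooth = link_kbps >= 200      # playback keeps up only if the link matches the bit rate
    print(f"{link_kbps:>3} kbps link: start after {delay:.1f} s, "
          f"{'plays smoothly' if smooth else 'will keep rebuffering'}")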

There are two things to understand: the video file format and the streaming method.

• File Format: There are many video file formats to choose from when creating video streams. The three most common:
o Windows Media
o RealMedia
o QuickTime
• Streaming Methods: There are two different ways to stream video:
o HTTP (Hyper Text Transfer Protocol)
o Specialized streaming servers

5.3 Classes of Streaming: 5.3.1 Streaming Stored Video: In this class, clients request video files that are stored on servers. There are some distinguishing features here:
Stored video: the video has been prerecorded and stored at the server, so the user can pause, rewind or fast-forward through the content.
Streaming: the user begins playing the video shortly after starting to receive the file from the server, which means the user starts watching while receiving the remaining parts of the video.

Continuous playout: once playout begins, it must proceed according to the original timing of the recording; network delays can therefore cause data to arrive after its playout time, in which case it is considered useless.

5.3.2 Streaming Live Video: This is similar to traditional broadcast television, except that transmission takes place over the Internet. Since the video is live, not stored, the user cannot pause, rewind or fast-forward through the content; however, with local storage of the received data these control operations become possible. Here delays of up to tens of seconds, from when the user requests the video until it starts to play, can be accepted.

Video can reside on an ordinary web server or on a special streaming server tailored for audio and video applications.

5.3.3 Web server: A web server delivers media files to the client over HTTP. When a user wants a media file residing on a web server, the user's host establishes a TCP connection with the web server and sends an HTTP message requesting the file. If the file is audio there is no problem, but in the case of a video file the video and audio may be stored in two different files, so two separate HTTP requests must be sent to the server; to keep the discussion simple we will assume that the audio and video are stored in one file. Delivering a video file using a web server is sometimes referred to as "progressive download" or "HTTP streaming". Let's say you have a video file encoded at 200 kbps: you place that file on your web server and put a link to the file on your web page. The web server does not know or care that it is a 200 kbps video file – it simply pushes the data out to the client as fast as it can. It may appear to be streaming, since playback can begin almost immediately; in fact it is not really streaming at all, but the player begins playing the file as soon as enough data has been downloaded. Of course, you cannot fast-forward to the end of the file until the whole file arrives from the server. If the actual network bandwidth is smaller than the 200 kbps that the file is encoded at, then you may have to wait a while before you can begin playing it. But even on a 56 kbps connection the video will look great – we are essentially trading waiting time for video quality. The temporary file is saved to the user's computer, so they can play it again

66 if they want to without having to download it again. Web servers use HTTP (Hypertext Transport Protocol) to transfer files over the network. One of the features of HTTP is that it operates on top of TCP (Transport Control Protocol), which controls the actual transport of data packets over the network. TCP is optimized for guaranteed delivery of data, regardless of its format or size. For example, if your browser or media player realizes that it's missing a data packet from the server, it will request a resend of that packet. Resend requests take time, take up more bandwidth, and can increase the load on the server and if the network connection is sketchy, you could begin to use more bandwidth for resends than you're using for the video itself! TCP is not designed for efficient real time delivery or careful bandwidth control, but for accurate and reliable delivery of every bit.
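The progressive-download behaviour described above can be sketched with Python's standard library: the Web server just pushes bytes over HTTP/TCP, the client saves them, and playback could in principle begin once enough data has arrived. The URL and the 200 KB pre-buffer threshold below are hypothetical placeholders, not a real clip or a real player policy.

# Progressive download over HTTP: the server pushes bytes as fast as TCP allows;
# the client could hand them to a player once "enough" has arrived.
import urllib.request

URL = "http://www.example.com/videos/clip-200kbps.wmv"   # hypothetical address
PREBUFFER_BYTES = 200_000     # begin playback after roughly 200 KB (assumption)

with urllib.request.urlopen(URL) as response, open("clip.wmv", "wb") as out:
    received = 0
    started = False
    while True:
        chunk = response.read(64 * 1024)     # read whatever TCP has delivered so far
        if not chunk:
            break
        out.write(chunk)
        received += len(chunk)
        if not started and received >= PREBUFFER_BYTES:
            started = True                   # a real player would begin decoding here
            print("enough data buffered - playback could begin")
print("download complete:", received, "bytes saved locally")

Because the entire file ends up on disk, the viewer can replay it later without downloading it again, exactly as noted above.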

5.3.4 Streaming Server: A streaming media server is a specialized piece of software that accepts requests for video files, knows about the format, bandwidth, and structure of those files, and in many cases pays attention to the performance of the player that is receiving the video. Streaming servers deliver just the amount of data necessary to play the video, at precisely the rate needed to play it. Unlike the Web server, which simply dumps as much video data onto the network as it can, the streaming server opens a conversation with the media player. There are two sides to this conversation: one to transfer the video and one for control messages between the player and the server. Because they continue to exchange these control messages with the player, streaming servers can adjust to changing network conditions as the video plays, improving the viewing experience. The control messages also include user actions such as play, pause, stop, and seeking to a particular part of the file. Since the server sends video data only as it is needed and at just the rate it is needed, it also allows precise control over the number of streams served and the maximum bandwidth consumed. A viewer with a 56kbps connection will not be able to receive that 200kbps video and will have to settle for a lower-quality version encoded for 56kbps connections. Streaming delivery of video data does, however, have some advantages:

• We can skip ahead in a video, or begin playback at a point somewhere in the middle. This is a convenience to users, but also a boon to the provider: it enables interactive applications such as video search and personalized playlists.
• It lets us monitor exactly what people are watching and for how long they are watching it.
• It makes more efficient use of bandwidth, since only the part of the file that is actually watched gets transferred.
• The video file is not stored on the viewer's computer. The video data is played and then discarded by the media player, so the provider keeps more control over the content.
Streaming servers can use HTTP and TCP to deliver video streams, but by default they use protocols better suited to streaming, such as RTSP (Real Time Streaming Protocol) and UDP (User Datagram Protocol). RTSP provides built-in support for the control messages and other features of streaming servers. UDP is a lightweight protocol that saves bandwidth by introducing less overhead than other protocols; it is more concerned with continuous delivery than with being 100% accurate, a feature that makes it well suited to real-time operations like streaming. Unlike TCP, it does not request resends of missing packets: if a packet gets dropped on the way from the server to the player, the server just keeps sending data. The idea behind UDP is that it is better to have a momentary glitch in the audio or video than to stop everything and wait for the missing data to arrive. Finally, a streaming server is necessary to deliver live webcasts and to use multicast. For networks that support it, multicast allows more than one client to tune in to a single stream, saving bandwidth at every part of the delivery chain. If we have a heavy traffic load, or if we want to webcast live events, we need a specialized streaming server: a type of Internet server built to handle large numbers of simultaneous connections from people viewing media content. One example is the Helix Universal Server from RealNetworks, which supports a variety of formats including RealMedia, Windows Media, QuickTime, and MPEG-4; this server is discussed briefly in Chapter 6, "Installing and Running a Streaming Server". All of this adds up to a few simple rules for when to use streaming and when to use HTTP downloading to deliver video. The main reason for downloading video from a Web server is that it is simple and can be done with infrastructure you already have. It is most useful when the videos are short, when delivering high-bit-rate encodings matters more than delivering in real time, or when viewers should be able to keep a copy of the video on their own computers. Streaming is the better solution when clips are more than a few minutes long, when interactive applications such as video search or linking deep into a file are wanted, or when statistics on what is actually being watched need to be collected. Streaming is also the way to go when the impact of video on the network must be controlled, or when large numbers of viewers must be supported. And of course, it is the only way to do live webcasts and multicasting. Obviously, even live video is not transmitted at its raw size; it is compressed and then sent to the streaming server for delivery to the users. A large number of compression techniques are available.
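The fire-and-forget nature of UDP delivery can be illustrated with a toy sender and receiver in Python. This is only a sketch of the idea (real servers add RTSP and RTP on top of UDP), and the address, port, packet size, and bitrate below are assumptions.

# Toy UDP "streaming" sender and receiver (illustrative only).
import socket, time

DEST = ("127.0.0.1", 5004)     # hypothetical receiver address
PACKET_BYTES = 1400            # keep datagrams under a typical Ethernet MTU
BITRATE = 200_000              # 200 kbps clip (assumption)

def send_stream(data: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = PACKET_BYTES * 8 / BITRATE     # pace packets to the encoded bitrate
    for seq, offset in enumerate(range(0, len(data), PACKET_BYTES)):
        payload = seq.to_bytes(4, "big") + data[offset:offset + PACKET_BYTES]
        sock.sendto(payload, DEST)            # fire and forget: no resends, ever
        time.sleep(interval)

def receive_stream(port=5004):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    expected = 0
    while True:
        packet, _ = sock.recvfrom(2048)
        seq = int.from_bytes(packet[:4], "big")
        if seq > expected:
            print(f"lost {seq - expected} packet(s) - brief glitch, keep playing")
        expected = seq + 1
        # packet[4:] would be handed to the decoder here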

5.4 Video Compression-Decompression:

5.4.1 Codecs: A codec (coder-decoder, or compressor-decompressor) is a software algorithm that performs compression of a file when transmitting it and decompression at the receiving end. Compression is the process of reducing the file size so that the file can be streamed over the Internet or any network. The average digital video size is about 4MB (4000KB) for one second of AVI; few computers can handle that many megabytes per second, and the average home 56K modem user may only receive around 34Kbps (about 4KB per second). Compression therefore has to reduce the size of the video for more effective delivery to the user. Streaming media encoders use codecs during encoding, while streaming media players must use the same codec to play the file back in its original, pre-compression form. When compressing video for Internet delivery we need to make some substantial compromises in quality, and the main question will always be which codec should be used. A number of factors guide that decision: the nature of the content, the quality requirements of the audience, and the fundamental bandwidth limitations of the audience. Codecs break down into two main types:
• Lossless codecs: the original file is exactly recreated. These codecs are ideal where quality is the main concern.
• Lossy codecs: the decompressed file is an approximation of the original, depending on the amount of compression used. These codecs achieve much higher rates of file-size reduction by discarding redundant or unnecessary data.

The choice of video codec involves trading off the computing power needed for compression, the computing power needed for decompression, the quality of the decompressed data relative to the input data, the compression ratio (output size to input size), and the time delay imposed by the compression scheme. The key in video streaming is a compression algorithm that produces an image data rate compatible with telephone communication links. In live streaming the compression must also have low delay or latency, since the video must stay synchronized with real time; the purpose of live video is to communicate "live", not by delayed broadcast or video mail.
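Using the figures quoted above (roughly 4 MB of AVI per second of video and about 34 kbps of usable modem throughput), a short calculation shows the order of magnitude of compression a streaming codec must achieve; both inputs are the approximations from the text, not measurements.

# Rough compression-ratio estimate from the figures quoted above (assumptions).
raw_bytes_per_second = 4_000_000        # ~4 MB of AVI per second of video
modem_bits_per_second = 34_000          # ~34 kbps usable on a 56K modem

raw_bits_per_second = raw_bytes_per_second * 8
ratio = raw_bits_per_second / modem_bits_per_second
print(f"required compression ratio: about {ratio:.0f}:1")   # roughly 940:1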

Codecs can be described by a wide range of parameters or considerations including:

Computational Complexity: Highly complex algorithms may require computing power not available in a desktop configuration. Obviously, there must be a match between the available horsepower and the requirements of the algorithm.

Output Bitstream: The output bitstream must be compatible with the transmission medium.

Output Quality: Quality has been one of the overriding concerns in multimedia streaming developments; fortunately, quality is a beneficiary of the continuous improvements being made both in algorithms and in computational horsepower.

Latency: A long latency between the time a signal is created to the time it is received at the other end creates a dysfunctional teleconference.

Cost: The cost to the end user includes the cost of the hardware and the cost of the software. Software may include royalties for licensing of codec algorithms. Cost plays a direct role in the marketability of any product.

Losslessness: Some codecs are lossless and some are lossy, as described above; some codecs, such as JPEG, can operate in either mode.

5.4.2 Video compression algorithms:

5.4.2.1 JPEG (Joint Photographic Experts Group): The compression ratio is typically about 16:1 with no visible degradation. If more compression is needed and noticeable degradation can be tolerated, as when downloading several images over a communications link that only need to be identified for selection purposes by the recipient, compression of up to 100:1 may be employed. JPEG compression involves several processing stages, starting with an image from a camera or other video source. The image frame consists of three 2-D patterns of pixels, one for luminance and two for chrominance. Because the human eye is less sensitive to high-frequency color information, JPEG calls for the coding of chrominance (color) information at a reduced resolution compared to the luminance (brightness) information. In the pixel format there is usually a large amount of low-spatial-frequency information and relatively little high-frequency information. The image information is then transformed from the pixel (spatial) domain to the frequency domain by a discrete cosine transform (DCT), a DSP algorithm similar to the fast Fourier transform (FFT). This produces two-dimensional spatial-frequency components, many of which will be zero and are discarded; near-zero components are truncated to zero and need not be sent either. This quantization step is where most of the actual compression takes place. The remaining components are then entropy coded with Huffman coding, which assigns short codes to frequent symbols and longer codes to infrequent symbols. This results in additional compression of about 3x.

Decompression reverses this procedure, beginning with Huffman decoding and followed by the inverse DCT, which transforms the image back to the pixel domain. Since the computational complexity is virtually identical in either direction, JPEG is considered a symmetrical compression method. JPEG, while designed for still images, is often applied to moving images, or video: Motion JPEG is possible if the compression/decompression algorithm is executed fast enough (on a fast chip or chip set) to keep up with the video data stream.
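The per-block pipeline described above (level shift, DCT, quantization, inverse DCT) can be sketched in a few lines of Python with NumPy and SciPy. The sketch uses a single uniform quantization step instead of the real JPEG quantization tables and omits the Huffman coding stage, so it only illustrates where the loss and the compression come from.

# JPEG-style processing of one 8x8 luminance block (simplified sketch):
# DCT -> quantize -> dequantize -> inverse DCT. Huffman coding is omitted,
# and a single uniform quantizer stands in for the real JPEG tables.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D DCT-II, applied along rows then columns (orthonormal scaling)
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

Q = 16  # uniform quantization step (assumption); larger Q = more compression, more loss

# A smooth 8x8 gradient standing in for one level-shifted luminance block.
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 8.0 - 128.0

coeffs = dct2(block)                   # pixel (spatial) domain -> frequency domain
quantized = np.round(coeffs / Q)       # most high-frequency terms round to zero
print("non-zero coefficients kept:", np.count_nonzero(quantized), "of 64")

reconstructed = idct2(quantized * Q) + 128.0   # decoder side: dequantize, inverse DCT
error = np.abs(reconstructed - (block + 128.0)).mean()
print(f"mean reconstruction error: {error:.2f} grey levels")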

5.4.2.2 MPEG (Moving Picture Experts Group): Compression ratios above 100:1 are common. The scheme is asymmetric: the MPEG encoder is very complex and carries a very heavy computational load for motion estimation, while decoding is much simpler and can be done by today's desktop CPUs or with low-cost decoder chips. The MPEG encoder may choose to make a prediction about an image and then transform and encode the difference between the prediction and the image. The prediction accounts for movement within an image by using motion estimation. Because a given image's prediction may be based on future images as well as past ones, the encoder must reorder images to put reference images before the predicted ones; the decoder puts the images back into display sequence. Real-time MPEG encoding takes on the order of 1.1-1.5 billion operations per second.
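The motion estimation step can be illustrated with a minimal exhaustive block-matching search: for one 16x16 macroblock of the current frame, the encoder looks for the best-matching block in the reference frame within a small search window. Real MPEG encoders use far more sophisticated search strategies; the block size, search range, and synthetic frames below are assumptions for illustration only.

# Exhaustive block-matching motion estimation for one 16x16 macroblock
# (illustrative sketch; real MPEG encoders use much smarter searches).
import numpy as np

def best_motion_vector(reference, current, bx, by, block=16, search=8):
    """Find the (dy, dx) offset in `reference` that best matches the macroblock
    of `current` whose top-left corner is (by, bx)."""
    target = current[by:by + block, bx:bx + block].astype(float)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > reference.shape[0] or x + block > reference.shape[1]:
                continue
            candidate = reference[y:y + block, x:x + block].astype(float)
            sad = np.abs(candidate - target).sum()   # sum of absolute differences
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

# Two synthetic frames: the second is the first shifted right by 3 pixels.
prev_frame = np.random.randint(0, 256, (64, 64))
curr_frame = np.roll(prev_frame, 3, axis=1)
print(best_motion_vector(prev_frame, curr_frame, bx=16, by=16))   # -> ((0, -3), 0.0)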

MPEG-1: Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to about 1.5 Mbps, an ISO International Standard completed in October 1992. MPEG-1 is intended primarily for stored interactive video applications (CD-ROM).
MPEG-2: MPEG-2 is a widely used, standardized video coding and compression technology, used in DVD movies and digital satellite distribution. A non-compressed video stream is roughly 200 Mbps, but with MPEG-2 the video can be encoded at 1.5-18 Mbps. DVD quality can be reached between 5 and 9 Mbps, while 2-3 Mbps is enough to exceed VHS quality.
MPEG-3: was merged into MPEG-2 and no longer exists.
MPEG-4: MPEG-4 is a compression/decompression technology that aims to achieve interactivity, efficiency, and stability in transmissions. The result of another international effort involving hundreds of researchers and engineers from all over the world, MPEG-4 offers higher video quality and resolution at a lower data rate than the other MPEG standards, and its stream encoding rate range is wider (5 kbps to 60 Mbps). MPEG-4 allows interactive objects in the stream, making it more multimedia-ready. On a broader level, MPEG-4 aims to pave the way toward a uniform, high-quality encoding and decoding standard that would replace the many proprietary streaming technologies in use on the Internet today. MPEG-4 is also designed for low bit-rate communications devices, and it supports scalable content, which means content is encoded once and automatically played back and transmitted at different rates depending on the available network connection.

5.4.2.3 H.261: H.261 is a motion compression algorithm developed specifically for videoconferencing, though it may be employed for any motion video compression task. H.261 allows for use with communication channels that are multiples of 64 kbps (P=1,2,3...30.), the same data structure as ISDN. H.261 is sometimes called Px64. H.261, intended for telephony, minimizes encoding and decoding delay while achieving a fixed data rate. H.261 implementations allow a tradeoff between frame rate and picture quality. As the motion content of the images increases (subject moves, for example), the CODEC has to do more computations and usually has to give up on image quality to maintain frame rate, or the reverse.

5.4.2.4 H.263: H.263 is a structurally similar refinement (a five-year update) of H.261 and is backward compatible with it. At bandwidths under 1000 kbps, H.263 picture quality is superior to that of H.261. Also new in H.263 are PB-frames. A PB-frame consists of two pictures coded as one unit: one P-picture, which is predicted from the last decoded P-picture, and one B-picture, which is predicted from both the last decoded P-picture and the P-picture currently being decoded. The latter is called a B-picture because parts of it may be bidirectionally predicted from the past and future P-pictures. With this coding option, the picture rate can be increased considerably without increasing the bitrate significantly. After the video has been encoded and compressed, it is ready to be transmitted over the Internet; a very important part of media streaming is the protocol used for transmission.

5.5 Internet Transport Protocols:

5.5.1 Transmission Control Protocol (TCP): HTTP (Hypertext Transfer Protocol) uses TCP as its protocol for reliable data transfer. If packets are delayed or damaged, TCP effectively stops traffic until either the original packets or replacement packets arrive. This makes it unsuitable for video and audio: TCP imposes its own flow control and windowing schemes on the data stream, effectively destroying the temporal relations between video frames and audio packets, and reliable message delivery is unnecessary for video and audio, since losses are tolerable and TCP retransmission causes further jitter.

5.5.2 User Datagram Protocol (UDP): UDP is the alternative to TCP. RealPlayer uses this approach (RealPlayer gives a choice of UDP or TCP, but the former is preferred). UDP forsakes TCP's error correction and allows packets to drop out if they are late or damaged. Despite the prospect of dropouts, this approach is arguably better for continuous media delivery; when broadcasting live events, everyone gets the same information simultaneously. One disadvantage of the UDP approach is that many network firewalls block UDP traffic, so some users may simply not be able to access UDP streams.

5.5.3 Real-time Transport Protocol (RTP): RTP is an Internet-standard protocol for the transport of real-time data, including audio and video. RTP consists of a data part and a control part called RTCP. The data part of RTP is a thin protocol providing support for applications with real-time properties such as continuous media (e.g., audio and video), including timing reconstruction, loss detection, security, and content identification. RTCP provides support for real-time conferencing of groups of any size within an internet. This support includes source identification and support for gateways such as audio and video bridges as well as multicast-to-unicast translators. It offers quality-of-service feedback from receivers to the multicast group as well as support for the synchronization of different media streams.
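As an illustration of what the "thin" RTP data part looks like on the wire, the sketch below packs the fixed 12-byte RTP header (version, payload type, sequence number, timestamp, SSRC) and sends a few packets over UDP. The payload type, clock values, and destination address are illustrative assumptions; no RTCP and no real media data are involved.

# Building a minimal RTP packet (fixed 12-byte header) and sending it over UDP.
# Field values below are illustrative assumptions, not a full RTP stack.
import socket, struct

def rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int,
               payload_type: int = 96) -> bytes:
    version, padding, extension, csrc_count, marker = 2, 0, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (marker << 7) | payload_type
    header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ssrc = 0x1234ABCD                      # arbitrary stream identifier
for seq, chunk in enumerate([b"frame-1", b"frame-2", b"frame-3"]):
    ts = seq * 3600                    # 90 kHz media clock, 25 fps -> 3600 ticks/frame
    sock.sendto(rtp_packet(chunk, seq, ts, ssrc), ("127.0.0.1", 5004))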

5.5.4 VDP: VDP is an augmented RTP, i.e., RTP with on-demand resend. VDP improves the reliability of the data stream by creating two channels between the client and the server: one is a control channel the two machines use to coordinate what information is being sent across the network, and the other carries the streaming data.

5.5.5 Real Time Streaming Protocol (RTSP): RTSP is a proposed open standard for the delivery of real-time media over the Internet. It is a communications protocol for the control and delivery of real-time media: it defines the connection between streaming media client and server software, and provides a standard way for clients and servers from multiple vendors to stream multimedia content. RTSP is built on top of Internet standard protocols, including UDP, TCP/IP, RTP, RTCP, SCP, and IP multicast. Media player products use RTSP to stream audio over the Internet.

5.5.6 RSVP: RSVP is an Internet Engineering Task Force (IETF) proposed standard for requesting defined quality-of-service levels over IP networks such as the Internet. The protocol was designed to allow the assignment of priorities to "streaming" applications, such as audio and video, which generate continuous traffic that requires predictable delivery. RSVP works by permitting an application transmitting data over a routed network to request and receive a given level of bandwidth. Two classes of reservation are defined: a controlled-load reservation provides service approximating "best effort" service under unloaded conditions, while a guaranteed-service reservation guarantees both bandwidth and delay.

5.6 Ways of delivering the streamed media: There are three ways:
• Unicast: a separate stream to each individual user (on-demand delivery).
• Broadcast: a single stream sent to many users simultaneously (live broadcasts).
• Multicast: a single stream sent to a special multicast IP address; players grab copies of the broadcast packets (live delivery only). Multicasting is the most efficient of the three, because a single copy is sent across a multicast-enabled network and each player takes its own copy of the data (a receiver sketch follows this list).
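A player "tunes in" to a multicast stream by joining the group address, after which the network delivers it a copy of every packet sent to that group. The following Python sketch shows such a join; the group address and port are examples only, not those of any real broadcast.

# Joining a multicast group so that one stream can be received by many players.
import socket, struct

GROUP, PORT = "239.1.1.1", 5004     # example administratively scoped group address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel (and, via IGMP, the routers) to deliver the group's packets here.
membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

while True:
    packet, sender = sock.recvfrom(2048)
    # the packet would be handed to the media player's decoder here
    print(f"received {len(packet)} bytes of the shared stream from {sender}")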

5.7 Projecting bandwidth requirements:

Number of simultaneous users × average bandwidth per stream = total streaming bandwidth required (a small worked example is sketched after this list). In the real world the calculation is more complex, and several factors must be included in it:
• Different users connect at different rates.
• Bandwidth projections must be based on peak usage, not on average usage: if a site's infrastructure cannot handle the higher traffic level, the result can be a catastrophe, because the site becomes unavailable exactly when demand is highest. For a live broadcast another factor matters: some programs draw more viewers than others, and during prime times the number of viewers is much higher than average. A further complication is the lack of geographical limitations for an Internet broadcast. Using figures for peak usage also means that a certain amount of computing resources will sit idle much of the time.
• Allow for network overhead: no network has 100% of its theoretical capacity available for data transfer; a practical limit is about 70-80% because of the way TCP/IP traffic is handled, i.e. theoretical maximum × 70% ≈ practical network capacity.
• Allow extra capacity for future growth. To avoid the self-defeating trap in which higher usage triggers service degradation, always err on the high side when forecasting user traffic.
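A minimal worked example of the projection, with every input value an illustrative assumption:

# Projecting streaming bandwidth from the rules of thumb above (assumed inputs).
peak_simultaneous_users = 200
average_stream_kbps = 225          # per-viewer encoding rate
network_efficiency = 0.75          # only ~70-80% of raw capacity is usable
growth_headroom = 1.5              # allow 50% extra for future growth

raw_kbps = peak_simultaneous_users * average_stream_kbps
required_kbps = raw_kbps / network_efficiency * growth_headroom
print(f"streaming traffic at peak: {raw_kbps / 1000:.1f} Mbps")       # 45.0 Mbps
print(f"link capacity to provision: {required_kbps / 1000:.1f} Mbps")  # 90.0 Mbps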

5.8 Projecting the capacity of available bandwidth: Practical network capacity / bit rate per stream = maximum simultaneous streams (a small example follows this list). When calculating the maximum number of streams, consider:
1) the capacity of the server;
2) the practical capacity of the internal network;
3) the practical capacity of the connection to the Internet.
Calculations 2 and 3 are straightforward; predicting the number of streams a server itself can support depends on the operating system, processor speed, amount of RAM, and so on.
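Correspondingly for capacity, again with assumed input values; the smallest figure along the path is the real limit:

# Maximum simultaneous streams each part of the path can support (assumed inputs).
stream_kbps = 225
capacities_kbps = {
    "server network interface": 100_000 * 0.75,   # 100 Mbps LAN at ~75% efficiency
    "internal network":         100_000 * 0.75,
    "internet connection":      2_000 * 0.75,     # 2 Mbps uplink at ~75% efficiency
}
for link, capacity in capacities_kbps.items():
    print(f"{link}: {int(capacity // stream_kbps)} streams")
print("overall limit:", int(min(capacities_kbps.values()) // stream_kbps), "streams")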

Part Three
Chapter Six: Methods and Techniques
6.1 Installing and running a streaming server: Two types of streaming, stored video and live broadcasting, have been handled. Both involve encoding the video with RealProducer and sending the stream directly to the Helix Universal Server, which in turn broadcasts it straight out to the audience. A full streaming media chain was set up: a video file was captured by a camera connected to the server, and the server ran the Helix Server and RealProducer software. RealProducer encoded the file and, via Helix Server, the file was broadcast to the connected audience. Two files, one of 1.5Mb and the other of 0.15Mb, were downloaded over three different media: a traditional 56Kbps dial-up modem, 128Kbps ISDN, and 384Kbps DSL. Two screen shots were also taken from the 0.15Mb file to show the difference in quality visibly.

6.2 Helix Universal Server
1. It is among the most capable server software available for streaming media files across an intranet or the Internet.
2. Helix Universal Server can stream on-demand clips and broadcast live events in a wider range of media formats than most other media servers. The following table lists the major media formats supported by Helix Universal Server:

RealNetworks:    RealAudio (.rm), RealVideo (.rm, .rmvb), RealPix (.rp), RealText (.rt)
Macromedia:      Flash (.swf)
Microsoft:       Windows Media (.asf, .wma, .wmv)
Apple:           QuickTime (.mov)
Standards-based: MPEG-1, MPEG-2, MPEG-4, MP3

Table 6-1

6.2.1 Streaming Media Encoders For delivering on-demand clips, the three major steps are encoding a clip with an encoding tool, streaming a clip through Helix Universal Server, and playing a clip with a media player. Many encoders also accept live input, encoding it as a stream that is sent to Helix Universal Server for live broadcast without being saved as a streaming clip first.

Figure 6-1

For each media type, a specific tool (or family of tools) can be used to encode audio and video as a streaming clip or live broadcast. Helix Producer, for example, turns files in formats such as AVI, WAV, and uncompressed QuickTime into RealAudio and RealVideo clips. It can also encode live input from a camera or microphone.

Figure 6-2

6.2.2 Protocols compatible with Helix Universal Server:
1. Real Time Streaming Protocol (RTSP).
2. Progressive Networks Audio (PNA).
3. Microsoft Media Services (MMS).
4. HyperText Transfer Protocol (HTTP).
Although HTTP is not a streaming media protocol, Helix Universal Server uses HTTP in a number of ways. For example, it uses HTTP to deliver the Helix Administrator HTML pages that allow you to configure and run Helix Universal Server.

6.2.3 Start Requirements:
Software and hardware on the computer running Helix Universal Server:
• Helix Universal Server
• Producer (sends clips to Helix Server to be broadcast)
• Camera and microphone when streaming live events (live broadcasting)
Software and hardware on a computer other than the one that runs Helix Universal Server:
• RealOne Player
• Multimedia equipment: CD player and software, sound card, speakers
• Web browser
• HTML editor or text editor for creating an HTML page (optional)
Helix Universal Server is ready to stream prerecorded clips right after installation. As with any Internet server, content is requested from Helix Universal Server using a hypertext URL, which is typically added to a Web page. A Web page hyperlink to content on Helix Universal Server launches a media player and streams some content, whether a prerecorded clip or a live broadcast. A typical link to a media clip or a broadcast served by Helix Universal Server includes the server address, protocol port (optional), mount point, path, and file name:
protocol://address:port/mount_points/path/file
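As a purely hypothetical illustration of this pattern (the host name, port, mount point, and file name below are invented, not an actual server):

rtsp://helix.example.com:554/broadcast/welcome.rm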

6.3 RealProducer: RealProducer creates streaming media data packets by "encoding". During encoding, the source media is transformed into streaming media using codecs. The entire process can be summed up in the following steps:
1. RealProducer receives the source media as a file or as live audio/video.
2. RealProducer uses a codec to compress the media source's data into packets.
3. The packets are sent to the Helix server.
4. The data packets are streamed via the Internet or the network to the user.
5. At the user's end, the same codecs are used to reassemble the media so that it can be played.

Figure 6-3

A complex task for RealProducer is to convert standard video into streaming media. A RealVideo clip is created by converting a video file or by capturing from a video source. RealProducer converts different attributes of the video, such as frame rate, type of motion, and image size, into a RealVideo clip using a video codec. If the video includes audio data, that must also be converted, using the audio codecs. RealProducer can run on the same machine as the Helix Server, in which case the server address is simply localhost; it can also run on a different machine, in which case the packets are sent to the IP address of the server machine. After the server receives the video, it is sent to the audience with the attributes chosen by the producer. The producer window shows the original file (in any of a variety of formats) in the left pane and, in the right pane, the file as it is being converted to a streaming media format. The program has numerous options for adjusting the quality of the video, the file size, and so on.

6.3.1 Targeting Audiences Before RealProducer can compress the input media data, it needs to know something about the audience that you will be targeting. An audience is defined by the bit rate at which they can connect. For example, a person using a 56 kbps dial-up modem to connect to your stream is a member of the 56K Modem audience. If you target only one audience, you are targeting a single bit rate.

Figure 6-4

Since compressing data loses some information, picking the correct audience is a key to deciding how much of your source's data you keep. With RealProducer's SureStream technology you can reach the widest possible audience, and provide all users with the best listening and viewing experience optimized for their bandwidth.

Figure 6-5

There are advantages to using SureStream. You can create a single RealMedia clip recorded for multiple target audiences, or you can create a clip that will automatically switch to a lower bandwidth during poor network conditions. SureStream RealMedia files can combine several different streams that take advantage of these features.
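The idea behind serving multiple encodings can be illustrated with a small Python sketch that simply picks the highest encoding rate a viewer's measured bandwidth can sustain. This is only an illustration of the principle, not RealNetworks' actual SureStream switching algorithm; the encoding ladder and headroom factor are assumptions.

# Simplified illustration of the multi-rate idea: keep several encodings of the
# same clip and serve the highest one the viewer's bandwidth can sustain.
ENCODINGS_KBPS = [34, 80, 150, 225, 450]      # assumed target audiences

def pick_stream(measured_kbps: float, headroom: float = 0.85) -> int:
    """Return the highest encoding rate that fits within the measured bandwidth,
    keeping some headroom for protocol overhead and fluctuations."""
    usable = measured_kbps * headroom
    suitable = [rate for rate in ENCODINGS_KBPS if rate <= usable]
    return max(suitable) if suitable else min(ENCODINGS_KBPS)

print(pick_stream(56))    # dial-up viewer -> 34 kbps stream
print(pick_stream(384))   # DSL viewer     -> 225 kbps stream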

6.3.2 RealProducer Video Codecs:
• RealVideo 10
• RealVideo 9
• RealVideo 8

RealVideo 10: RealVideo 10 is the default video codec for RealProducer 10 and provides significant quality advances over its predecessors, RealVideo 9 and RealVideo 8. Clips encoded with RealVideo 10 exhibit greater clarity and increased smoothness of motion, particularly in fast-action scenes. Whether used for download or streaming, the RealVideo 10 codec delivers high quality from narrowband to HDTV. By providing dramatically improved compression over previous-generation technologies, RealVideo 10 reduces bandwidth costs while enabling high-quality, rich media experiences at any bit rate and on any device. According to RealNetworks, it offers:
• The same quality at 30% lower bitrate than RealVideo 9
• The same quality at 80% lower bitrate than MPEG-2
• The same quality at 75% lower bitrate than HDTV
• The same quality at 45% lower bitrate than MPEG-4
• The same quality at 15% lower bitrate than H.264

Many video codecs employ "block-based" algorithms to compress and decompress video. These algorithms process several pixels of video together in blocks. As the compression ratio increases, block-based algorithms tend to represent individual blocks as simply as possible; a single block may be represented simply as a single color (e.g. the entire block is all "light blue"). Studying competing video codecs carefully, one can easily see this visual effect (so-called visual "artifacts"). With block-based algorithms, strong discontinuities, so-called block edges, can become very pronounced. RealVideo 10 avoids blockiness by employing sophisticated algorithms that compress the video more accurately. New proprietary analysis and synthesis algorithms (transforms), more sophisticated motion analysis, content-adaptive filtering technology, and other compression schemes built into RealVideo 10 allow it to provide a higher-fidelity reproduction of the video and maintain a more natural look and feel.

The RealVideo 10 Encoder: RealVideo 10 supports a wide range of video applications, from real-time streaming to download. To accommodate these applications, the RealVideo 10 encoder supports the following encoding modes:
• Constant Bitrate
• Variable Bitrate
• Quality-Based Encoding

In Constant Bitrate mode, the encoder maintains the target bitrate throughout the duration of the content; with a small allowed buffer for slight deviations in bit usage. The size of this buffer determines the pre-buffering time and is settable in the Helix Producer Plus using the “maximum startup latency” setting. This mode should be used for most real-time streaming applications to maximize visual quality over a constant bitrate connection. Using the Variable Bitrate mode, the encoder attempts to meet the target bitrate over the length of the content, but makes no particular effort to maintain a constant rate throughout. Variable Bitrate encoding should be used when the overall bitrate or file size needs to be constrained, but there are no instantaneous bitrate requirements, such as for downloaded content. Using Variable Bitrate, a maximum constrained bitrate can be set to limit the instantaneous bitrate.

Quality-Based Encoding compresses content without regard for bit usage, but instead maintains a constant level of visual quality throughout. This mode should be used when there is no need to maintain bitrate or file size, but a certain level of visual quality is desired. As in Variable Bitrate mode, a maximum constrained bitrate can be set to limit the instantaneous bitrate. Additionally, other related parameters such as frame rate, key frame rate, error protection and two-pass encoding modes are settable. Using two-pass encoding, the RealVideo 10 encoder is able to first analyze the video before compressing the content. That analysis allows the encoder to better maximize the visual quality while meeting the bit usage requirements in Constant and Variable Bitrate modes. Using Quality-Based Encoding, analysis is done to better maintain the targeted visual quality throughout the content.

RealVideo 10 Decoder: The encoder/decoder complexity is asymmetric; under normal (default) encoder and decoder operation the encoder is roughly 3-5 times more complex than the decoder.

6.4 Bandwidth Requirements: Bandwidth describes how fast data flows on a communications path at a given time. The quality of the output stream depends greatly on the bandwidth of the sending and receiving sides. Bandwidth, or connection speed, fluctuates with Internet usage conditions; at times when the Internet is in great demand it can be difficult to view any media. The table below shows typical connection speeds (bit rates) during average conditions for different bandwidths, along with the typical use of each bandwidth.

Table 6-2

6.5 Analysis (Ethereal network analyzer): Ethereal is a GUI network protocol analyzer. It allows browsing of packet data from a live network or from a previously saved capture file; there is no need to tell Ethereal the file type, as it determines it by itself. After capturing the traffic, Ethereal provides full statistics for the conversation between the source and the destination, including frame number, frame size, propagation delay, delay between frames, and end-to-end delay. First the 1.5Mb file was captured over 56Kbps dial-up modem, 128Kbps ISDN, and 384Kbps ADSL connections; then the 0.15Mb file was captured over the same three media.

Figure 6-6

Chapter Seven: Results and Discussion

First, in broadcasting, the quality was quite poor because the producer limited the bandwidth to 34kbps without regard to the real bandwidth of either the server or the audience. This problem was solved by the new version of the producer (RealProducer Plus 10), which gives the ability to set the bandwidth required by the video file. RealProducer offers many ways to control the stream: it manages the rate of transmission, matches the audience's bandwidth, and makes use of video filtering and encoding quality settings. This process was carried out on a 100Mbps Ethernet LAN, i.e. a bandwidth sufficient for a streaming media application.

Analysis: xDSL techniques are nowadays the fastest and most reliable of the connection techniques considered here. ISDN, the first technique developed to provide high-rate data delivery, is the second-fastest technique in these statistics. The dial-up modem, as the traditional narrowband connection technique, serves as the reference for a clear comparison between narrowband and broadband connections. The following table shows the results, obtained with Ethereal, for the time required to download a 1.5Mb file over the media shown:

Technique                    Download time (seconds)

56Kbps dial-up modem 292.305

128Kbps ISDN 96.127

384Kbps ADSL 2.025

Table 7-1

As mentioned earlier, the bandwidth of the connection is the factor that most strongly influences the characteristics of the stream. The following chart presents the tabulated results graphically: as the bandwidth of the connection increases, the download time decreases sharply, and vice versa.

[Bar chart: download time in seconds for the dial-up modem (56k), ISDN (128k), and ADSL (384k) connections]

Figure 7-1 Time required to download a 1.5Mb file

Table 7-2 shows the results obtained by the Ethereal network analyzer after capturing a file of 0.15Mb.

Technique Propagation Delay End-End Delay Delay Jitter

56Kbps dial-up modem 43.52 50.074906 0.002002

128Kbps ISDN 14.507 29.69702 0.0067

384Kbps ADSL 2.946 13.089 0.00044

Table 7-2

The network has to overcome many obstacles to deliver its traffic, and the routers struggle with packets and links to achieve appropriate transmission; these problems appear as side effects in the quality and reliability of the transmission. Propagation delay, end-to-end delay, and delay jitter are the major ones. They are shown in the following charts, which compare these effects across the different bandwidths.

[Bar chart: delay jitter versus connection bandwidth (dial-up modem 56k, ISDN 128k, ADSL 384k)]

[Bar chart: propagation delay versus connection bandwidth]

[Bar chart: end-to-end delay versus connection bandwidth]

[Screen shots comparing the two received streams]
The pictures above display the difference in quality between a connection of 225Kbps (first picture, as the arrow indicates) and one of 34Kbps (second picture).

Chapter Eight: Conclusion

In this thesis the main broadband technologies were introduced, showing their requirements, operation, available bandwidth, and limitations. As an implementation example showing the capabilities of broadband, video streaming was the natural choice, because video, as introduced in the thesis, is the most bandwidth-consuming application. From the results obtained it was observed that broadband connections greatly improved video quality, reducing the delay jitter, end-to-end delay, and so on. As explained in this thesis, the practical bandwidth does not equal the theoretical one; the screen shots obtained illustrate this fact, as the quality of the 225Kbps stream is not as high as would be expected theoretically.
