
WHITE PAPER

Bandwidth Transport, Optimization and Protection for Backhaul

Meeting 3G & 4G service delivery challenges

October 2009

Copyright ©2009 Exar Corporation. All rights reserved. BHWP1009


Contents

Executive Summary
Situation Overview
Evolution of Wireless RAN & Backhaul Networks
Ethernet over PDH (EoPDH)
Circuit Emulation Services
Challenges to achieving Wireless Network Evolution
Bandwidth Transport
Bandwidth Optimization and Security
Addressing the Challenges
Mapping of Ethernet onto T-1/E-1 Links
High Density T-1 – E-1 Physical Layer Connections
PDH over SONET / SDH
Compression and Security
Conclusions

List of Figures

Figure 1 Key Elements of a Cellular Network
Figure 2 Femtocells in 3G Cellular Networks
Figure 3 Backhaul aggregation using EoPDH
Figure 4 Backhaul Aggregation using Pseudowire Emulation
Figure 5 CPU Loading from Software Compression and Encryption
Figure 6 Network Latency Requirements by Cellular Standard
Figure 7 CopperNode Block Diagram
Figure 8 XRT86VX38 8 Channel Framer/LIU Combo
Figure 9 Voyager Block Diagram
Figure 10 Look-Aside Architecture Data Flow
Figure 11 FlowThrough Architecture Data Flow
Figure 12 Block Diagram of Hifn's FlowThrough Architecture
Figure 13 DS4050 Card - SerDes Backplane Interface Configuration


Executive Summary

The legacy TDM/PDH1 based network infrastructure is moving to a packet based model characterized by ubiquitous native Ethernet connectivity to an IMS2 based core for the switching and transport of voice, data, and multimedia traffic. As part of that transition, the 3GPP and 3GPP2 standards bodies3 have updated their physical layer specifications to include native Ethernet network interfaces on base stations. Meanwhile, user demand for bandwidth, driven by so-called “smart phones” that support multimedia, web access, and social networking applications, is saturating capacity constrained wireless backhaul networks – most of which are still serviced by some type of TDM access network. Operators and their equipment suppliers are caught in the middle: they must somehow leverage their current investments in SONET/SDH access networks and twisted pair copper to address both the insatiable bandwidth demands of their users and the security issues that unfortunately accompany packet traffic transported over the public Internet.

This white paper delves into the issues surrounding the transition of the wireless network to an Ethernet / IP architecture and discusses solutions that will enable equipment OEMs and their Service Provider customers to address the challenges they are facing in today’s rapidly changing environment.

Situation Overview

As of mid-2009, there were approximately 2.4 million cellular sites worldwide, with less than 3% of them serviced by some type of packet based backhaul network4. Many of these sites support multiple cellular technologies (2G, 3G, and soon 4G). In addition, a significant number of these sites are shared by multiple carriers, making an already dire situation worse by multiplying the already demanding backhaul bandwidth requirements of a single carrier by 2, 3, or even 4 times. The access medium for these sites depends upon geography and whether the site is Greenfield or has been in service for some time. Microwave and copper fed TDM are currently the dominant backhaul solutions, with fiber being retrofitted to key sites and deployed into new ones whenever possible. Still, this leaves millions of cellular sites in need of bandwidth upgrades and Ethernet connectivity, with the carriers constrained by the need to deliver these upgrades over TDM access to a SONET/SDH infrastructure. Even Microwave Ethernet solutions face challenges, as spectrum allocations limit the ability to rapidly scale the bandwidth available to a cell site. Data optimization is the key to achieving maximum effective bandwidth in this situation.

1 Time Division Multiplexing / Plesiochronous Digital Hierarchy
2 IP Multimedia Subsystem
3 3GPP is the standards body that sets standards for GSM, WCDMA, & LTE networks. 3GPP2 is responsible for CDMA based networks such as IS-95 and CDMA2000.
4 Packet Backhaul: Carrier Strategies & Real World Deployments, Heavy Reading


Back at the core, the transition from a circuit based to a packet based switching and transport architecture5 continues its progress towards the inevitable end state of an Ethernet / IP network – a process certain to take many years. As a result, even so-called “private network” data cannot be guaranteed to stay off the public internet – for example, a carrier may use CES6 to transport T-1s across its packet transport infrastructure. This means that security of the connectivity between network elements is no longer assured, necessitating the addition of IPsec (IP Security) capability throughout the network.

On the user side, multimedia applications and web browsing have become ubiquitous on mobile phones. Cellular data enabled laptops (the initial target of 4G deployments), “Netbooks” and special purpose appliances such as Amazon’s Kindle are also driving exponential growth in bandwidth requirements for cellular networks. Operators are responding aggressively, with seemingly daily announcements of UTRA7 radio technology upgrades and trials. However, all of this “over the air” bandwidth is useless if it immediately hits a backhaul bottleneck – as is happening today. In an ideal scenario (unlimited capital, favorable regulatory environment, investors willing to bear losses caused by writing down hundreds of billions of dollars of investment, unlimited installation resources and a supply chain capable of instantaneously ramping capacity), carriers could respond by ripping up the current infrastructure and installing a Fiber fed Ethernet / IP access network, immediately solving the problem. Obviously, the real world scenario faced by the carriers (limited capital, uncertain regulatory environment, investors who demand steady if not increasing profits, constrained craft resources and limited ability of the supply chain to ramp capacity in the short term) dictates a different solution. Somehow, that investment in copper cable and SONET/SDH equipment must be leveraged to provide Ethernet interfaces to the base stations – driving the need for Ethernet Transport solutions over PDH.

Evolution of Wireless RAN & Backhaul Networks

Wireless mobile technologies include 3GPP and 3GPP2 cellular and WiMAX8 networks. Cellular networks are in the midst of 2G (GSM / EDGE) and 3G (WCDMA, HSPA, EVDO) volume deployments, with 4G9 LTE equipment currently beginning initial carrier trials. LTE networks will eventually enable subscriber burst data download rates as high as several hundred Mbps. WiMAX is expected to evolve into a 4G mobile technology that competes with cellular; its peak user data rate will be around 70 Mbps. Table 1 below shows per-sector peak data rates, latency, and the number of users supported for various 3GPP standards releases.

5 The wireless Core Network architecture is known as “SAE” (System Architecture Evolution).
6 Circuit Emulation Services – The transport of TDM data over a packet network
7 UMTS Terrestrial Radio Access – The infrastructure that provides connectivity between a mobile device and the Base Station.
8 Worldwide Interoperability for Microwave Access, a technology based on the IEEE 802.16 standard
9 Technically, 4G standards have yet to be defined by the cellular standards bodies – a purist would call LTE a “3.9G” standard. This paper chooses to go with industry convention and refers to LTE & WiMAX as “4G”.

Releases 99 through 7 are 3G, while Release 8 is the first 4G release.

              Rel 99    Rel 5     Rel 6     Rel 7     Rel 8      Rel 8
              WCDMA     HSDPA     HSUPA     HSPA      LTE 2x2    LTE 4x4
DL BW (Mb/s)  0.384     3.6       14        28        144        288
UL BW (Mb/s)  0.64      0.384     1.4       5.8       48         96
Latency (ms)  ~150      ~75       ~50       ~25       ~10        ~10
Users         ~100      -         -         -         ~3000      ~3000

Table 1: Performance Metrics of various 3GPP Standards Releases
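To give a feel for what Table 1 implies at the backhaul link, the short sketch below sums per-sector peak downlink rates over an assumed three-sector site. The sector count and the simple summation are illustrative assumptions rather than figures from this paper; real dimensioning would use busy-hour traffic models and statistical overbooking rather than peak rates.

```python
# Rough backhaul demand per cell site, using the per-sector peak downlink
# rates from Table 1. Assumptions not taken from the paper: a three-sector
# macro site and simple summation of peak rates.

PEAK_DL_MBPS = {              # per sector, from Table 1
    "Rel 99 WCDMA": 0.384,
    "Rel 5 HSDPA": 3.6,
    "Rel 6 HSUPA": 14,
    "Rel 7 HSPA": 28,
    "Rel 8 LTE 2x2": 144,
    "Rel 8 LTE 4x4": 288,
}
SECTORS_PER_SITE = 3          # assumed typical macro site

for release, rate in PEAK_DL_MBPS.items():
    print(f"{release:>14}: {rate * SECTORS_PER_SITE:7.1f} Mb/s peak DL per site")
```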

Wireless backhaul refers to the connection from base stations to the remainder of the network. A base station in a cellular network is called either a base transceiver station (BTS), Node B, or eNode B, depending on whether 2G, 3G, or 4G technologies are being referred to10. The network controller responsible for coordinating handoffs between base stations is called either a base station controller (BSC) or a radio network controller (RNC). In 4G networks, the functionality of the RNC has been distributed to the base stations, dramatically increasing the protocol processing demands of the base station hardware and software. The following diagram illustrates the key elements of a cellular network.

Figure 1 Key Elements of a Cellular Network

10 Note that 3GPP2’s CDMA2000, although a 3G technology, uses the term BTS for base stations


Another version of a base station known as a Femtocell is being trialed by carriers as an alternative approach to increasing coverage inside buildings or other areas of limited coverage. Rather than adding a costly macro base station, a much smaller footprint femtocell (similar in size and coverage to a Wi-Fi access point) can be deployed inside the subscriber’s home. Each femtocell base station typically supports up to four users and connects to a femtocell Access Gateway via the Internet. Each Access Gateway aggregates the traffic of potentially hundreds of femtocell base stations before passing it to the RNC.

Since each femtocell is connected to the cellular Core Network via the public internet, standards mandate usage of IPsec security to protect the connection. This dramatically increases the number of IPsec connections required to support the same number of users: where one IPsec link between an RNC and a base station previously supported several hundred users, each link now supports just a few. The following diagram illustrates a cellular network before and after the addition of Femtocells.

Figure 2 Femtocells in 3G Cellular Networks
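A simple, illustrative calculation shows the scale of that increase. The subscriber count and the number of users assumed per macro IPsec link below are hypothetical; the four users per femtocell comes from the description above.

```python
# Illustrative tunnel-count comparison (subscriber figures are assumed, not
# taken from this paper): serving the same population through femtocells
# multiplies the number of IPsec tunnels the security gateway terminates.
import math

subscribers = 10_000
users_per_macro_link = 300     # assumed: one IPsec link per macro base station
users_per_femtocell = 4        # per the paper: a femtocell supports ~4 users

macro_tunnels = math.ceil(subscribers / users_per_macro_link)
femto_tunnels = math.ceil(subscribers / users_per_femtocell)

print(f"macro-only IPsec tunnels: {macro_tunnels}")
print(f"femtocell IPsec tunnels:  {femto_tunnels}")
print(f"scale-up factor:          {femto_tunnels / macro_tunnels:.0f}x")
```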

With the transition of the Core Network to a Packet based architecture interconnected by Ethernet, the challenge facing OEMs and operators is leveraging the current infrastructure to simultaneously deploy the T-1/E-1 interfaces required by legacy 2G base stations, IMA11 or Ethernet for current 3G deployments, and Ethernet interfaces for future 4G base stations and beyond. The question isn’t whether to transport, optimize, and secure Ethernet connectivity, but how.

11 IMA – Inverse Multiplexing over ATM. Inverse multiplexing refers to technology that groups several smaller bandwidth links into a single large virtual link.

Two complementary approaches stand out as solutions to this dilemma. Both recognize the practical need to leverage the current access infrastructure as much as possible. One approach is better suited to Greenfield or easily upgradeable installations (such as Microwave), while the other is best suited for cell sites currently serviced by copper TDM networks. Both require the use of new standards such as OTN and the leveraging of legacy SONET/SDH, differing only in exactly where Ethernet transport goes “native”.

Ethernet over PDH (EoPDH)

For cell sites serviced by legacy T-1/E-1 TDM interfaces where copper twisted pairs are the primary transmission medium, transport of Ethernet links over this medium makes the most sense. In this scenario, an MSPP12 drops copper T-1/E-1 pairs to the base station. Some of the T-1/E-1 pairs support 2G interface standards, while others are allocated for transport of the T-1/E-1 based IMA traffic destined for 3G base stations. Ethernet is also transported over T-1/E-1 pairs by mapping the Ethernet link onto multiple T-1/E-1 links. This is very similar to IMA, except that Ethernet frames instead of ATM cells are being inverse multiplexed. The EoPDH deployment scenario is shown in the figure below:

Figure 3 Backhaul aggregation using EoPDH
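The inverse-multiplexing idea can be illustrated with a deliberately simplified sketch: fragments of an Ethernet frame are distributed round-robin across several T-1 members and reassembled in order at the far end. Real EoPDH uses standardized encapsulation and virtual concatenation rather than this toy byte-slicing, so treat the code purely as a conceptual illustration.

```python
# Simplified illustration of inverse multiplexing: an Ethernet frame is
# carved into fragments, sent round-robin across several T-1 links, and
# reassembled in order at the far end using sequence numbers.

def spread(frame: bytes, links: int, chunk: int = 8):
    """Distribute frame fragments round-robin across `links` T-1 members."""
    lanes = [[] for _ in range(links)]
    fragments = [frame[i:i + chunk] for i in range(0, len(frame), chunk)]
    for seq, frag in enumerate(fragments):
        lanes[seq % links].append((seq, frag))   # tag with sequence number
    return lanes

def reassemble(lanes):
    """Merge fragments back into the original frame in sequence order."""
    tagged = [item for lane in lanes for item in lane]
    return b"".join(frag for _, frag in sorted(tagged))

frame = bytes(range(64))              # stand-in for an Ethernet frame
lanes = spread(frame, links=4)
assert reassemble(lanes) == frame
print([len(lane) for lane in lanes])  # fragments carried per T-1 link
```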

Circuit Emulation Services

Pseudowire emulation (PWE) has been defined by the IETF13 as the emulation of a TDM service (such as a T-1/E-1 link) over a packet switched network (such as IP). The aggregation of disparate links from 2G, 3G, and 4G base stations via PWE access equipment is an alternative approach for providing T-1/E-1 and IMA interfaces to legacy equipment. In this case, Ethernet connectivity is present at the base station site, where a PWE device converts the T-1/E-1 PDH traffic into packets that are transported over an Ethernet network to the RNC and converted back to T-1/E-1 links there.

12 Multi Service Provisioning Platform – A Network Element that can add / drop PDH or Packet Services 13 Internet Engineering Task Force

Ethernet traffic destined for 4G base stations requires no interworking – it is carried natively.

Figure 4 Backhaul Aggregation using Pseudowire Emulation
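To make the emulation concrete, the back-of-the-envelope sketch below estimates the packet rate, wire bandwidth, and packetization delay of carrying a single T-1 over a pseudowire for a few packing choices. The header overhead figure is an assumed value for a typical Ethernet/IP/UDP-style encapsulation, not a number taken from this paper or from a specific standard.

```python
# Pseudowire sizing sketch: packing more T-1 frames into each packet lowers
# the packet rate and per-packet overhead but adds packetization delay.

T1_FRAMES_PER_SEC = 8000        # one T-1 frame every 125 microseconds
T1_PAYLOAD_BYTES = 24           # 192 payload bits per T-1 frame
HEADER_BYTES = 62               # assumed Ethernet + IP + UDP + pseudowire headers

for frames_per_packet in (1, 4, 8):
    pps = T1_FRAMES_PER_SEC / frames_per_packet
    payload = frames_per_packet * T1_PAYLOAD_BYTES
    line_rate_mbps = pps * (payload + HEADER_BYTES) * 8 / 1e6
    packing_delay_us = frames_per_packet * 125
    print(f"{frames_per_packet} frames/packet: {pps:6.0f} pps, "
          f"{line_rate_mbps:5.2f} Mb/s on the wire, "
          f"{packing_delay_us:4.0f} us packetization delay")
```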

Challenges to achieving Wireless Network Evolution

Bandwidth Transport

Cellular base station traffic has traditionally been backhauled via standard T-1 / E-1 TDM connections. The traffic typically stayed entirely within a carrier’s network and was transported via PDH links carried over SONET/SDH access networks. This worked fine in the early days of wireless networks, as voice traffic dominated the traffic mix and data rates were equivalent to analog or ISDN modems. This meant that bandwidth requirements were easily satisfied by a few T-1/E-1 lines. As standards evolved towards Ethernet connectivity and handsets capable of consuming more bandwidth became ubiquitous, bandwidth requirements have exploded and packet based interface requirements have been added.

Yet, carriers have invested billions in building their networks, and must somehow leverage them to meet current and future wireless backhaul requirements. Network evolution mandates Ethernet connectivity, but many cell sites will be served by copper for some time. This means that individual copper pairs must be aggregated into a single virtual link to meet overall bandwidth demands. It is easy to visualize a cell site being served by sixteen or even more T-1/E-1 interfaces in the near future. High density, low power & high performance physical layer interfaces will be needed to achieve acceptable performance in the high crosstalk environment of high density twisted pair cables in this scenario. Likewise, application specific mapping and framing silicon will need to be purpose built to address power, cost, and density requirements.
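The arithmetic behind that expectation is straightforward. The sketch below computes the number of bonded T-1 or E-1 pairs required to reach a few illustrative site-level targets, assuming the full line rate of each pair is usable and ignoring mapping overhead; the target figures themselves are examples, not values from this paper.

```python
# How many bonded copper pairs does a given backhaul target require?
# Simple ceiling arithmetic; mapping overhead is ignored.
import math

T1_MBPS = 1.544
E1_MBPS = 2.048

for target_mbps in (10, 28, 45):          # illustrative site demands
    t1_pairs = math.ceil(target_mbps / T1_MBPS)
    e1_pairs = math.ceil(target_mbps / E1_MBPS)
    print(f"{target_mbps:3d} Mb/s -> {t1_pairs:2d} x T-1 or {e1_pairs:2d} x E-1 pairs")
```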

Bandwidth Optimization and Security

Regardless of the amount of bandwidth provided to a cell site, economics demand maximum utilization of available capacity. As the network traffic mix shifts from voice centric to data centric, this optimization can be achieved by compressing raw data (such as XML, JavaScript, documents, etc.) before it is transported by the backhaul network. Although compression ratios are traffic dependent, achieving a 50% reduction in bandwidth usage using IPcomp14 is a reasonable expectation for this type of traffic.
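As a rough illustration of the kind of savings compression can yield on text-like payloads, the sketch below compresses a synthetic XML-style payload with Python's zlib (DEFLATE) standing in for the LZS algorithm used by IPcomp. The toy payload is highly repetitive and therefore compresses far better than real mixed traffic, so the ~50% figure above remains the more realistic planning number.

```python
# Estimate of bandwidth savings from compressing text-like traffic.
# zlib (DEFLATE) is used here as a stand-in for LZS; ratios differ by
# algorithm and payload, so treat the result only as an order-of-magnitude
# illustration.
import zlib

sample = (b"<response><user id='42'><name>Jane</name>"
          b"<status>active</status></user></response>\n") * 200

compressed = zlib.compress(sample, 6)
saving = 1 - len(compressed) / len(sample)
print(f"original {len(sample)} B, compressed {len(compressed)} B, "
      f"bandwidth saving {saving:.0%}")
```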

Shared networks pose security risks that need to be mitigated using encryption technology. The de facto standard for security in packet-based IP networks is IPsec. In fact, the 3GPP15 has made IPsec mandatory for the backhaul connection of 4G base stations. Femtocell base stations reside on customer premises and connect to the mobile operator’s network via the Internet. The 3GPP also mandates that these backhaul connections use IPsec for security16. Finally, legacy traffic that may be carried via packet networks should be protected via encryption as well.

Some of the barriers to adding compression and security to communications links have to do with performance, cost, and power budget. The algorithms used in IPcomp and IPsec are very CPU intensive. As such, performing these functions can degrade the overall performance of the system. The compressed and encrypted throughput may also be inconsistent depending on the load of the CPU performing concurrent tasks. This would translate into poor service for subscribers. The following chart illustrates the enormous amount of CPU resources taken up by performing compression and encryption in software.

14 Industry-standard compression protocol using the LZS (Lempel-Ziv Stac) algorithm
15 3rd Generation Partnership Project – a collaboration of telecom associations that specifies standards for cellular technologies
16 The technical specifications TS 33.210 and TR 33.821 specify the security requirements for wireless networks


Figure 5 CPU Loading from Software Compression and Encryption

Each successive generation of mobile technology will also require network latency to decrease. Network latency is defined here as the round-trip time for data to travel from the mobile unit through the wireless network and back. For example, the latency requirement decreased from over 600 ms in a 2G GPRS network to about 10 ms in 4G LTE networks. This allows operators to deliver an improved end-user experience for real-time and interactive applications such as online gaming, multicast, and VoIP.

[Chart: latency (ms) versus cellular technology – GPRS, EDGE Rel '99, EDGE Rel '04, WCDMA, HSPA, LTE. Source: Rysavy Research, September 2008]

Figure 6 Network Latency Requirements by Cellular Standard


Addressing the Challenges

Equipment manufacturers and network operators are left with two daunting problems:

1. Transporting Ethernet over legacy PDH networks and compressing/encrypting backhaul links while reducing network latency

2. Keeping costs and power consumption under control.

Successfully addressing these challenges requires a holistic approach to architecting backhaul equipment. Physical Layer devices must be high density, low cost, and high performance, all while minimizing power consumption. Layer 2 mapping devices must support the multiple native interfaces required by legacy 2G, current 3G, and future 4G standards. Network equipment must implement the mapping of Ethernet over SONET/SDH for compatibility with legacy carrier infrastructure. Finally, compression and security support must be added while minimizing impacts on network performance, cost, and power consumption.

Mapping of Ethernet onto T-1/E-1 Links

EoPDH requires efficient mapping of Ethernet frames onto multiple T-1/E-1 links. Depending on the bandwidth requirements of a particular cell site, four, eight or even 16 links may be aggregated into a single Ethernet channel for transport over the SONET / SDH infrastructure. In addition, the possibility of backhauling traffic from multiple operators imposes a requirement to differentiate traffic from different interfaces on the aggregated link. In this use case, some type of priority mechanism must also be provided to prevent one channel from saturating the link. Lastly, support for bandwidth upgrades should be provided to better allow adaptation to ever growing bandwidth demands.

Via its Galazar acquisition, Exar provides the CopperNode Multi-Service Framer as a solution to these requirements. Using standardized Virtual Concatenation, individual transport links can be bonded together into higher capacity VCGs17. With LCAS18, the VCG is protected against link failures. LCAS also provides a simple mechanism for increasing or decreasing the bandwidth of the link. Multiple carriers are supported with individual bonded groups or by VLAN tags and Q-in-Q, while a built in priority mechanism prevents any particular Ethernet port from saturating the virtual link. A block diagram of the CopperNode is shown below:

17 Virtual Concatenation Groups
18 Link Capacity Adjustment Scheme


Figure 7 CopperNode Block Diagram
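To make the virtual concatenation and LCAS behavior concrete, the conceptual sketch below models a VCG as a simple collection of E-1 members whose capacity grows as members are added and shrinks gracefully when a member fails. This is an illustration of the concept only, not the CopperNode's actual API or the LCAS protocol itself.

```python
# Toy model of an E-1 virtually concatenated group (VCG) managed by an
# LCAS-like mechanism: capacity is the sum of the members, and members can
# be added or lost without tearing the group down.

E1_MBPS = 2.048

class Vcg:
    """Conceptual VCG: a set of bonded E-1 members."""
    def __init__(self):
        self.members = set()

    def add(self, link_id):          # LCAS: grow bandwidth in service
        self.members.add(link_id)

    def fail(self, link_id):         # LCAS: survive a member failure
        self.members.discard(link_id)

    @property
    def capacity_mbps(self):
        return len(self.members) * E1_MBPS

vcg = Vcg()
for i in range(8):                   # bond eight E-1s
    vcg.add(i)
print(f"8-member VCG: {vcg.capacity_mbps:.1f} Mb/s")
vcg.fail(3)                          # one copper pair fails
print(f"after failure: {vcg.capacity_mbps:.1f} Mb/s (group stays up)")
```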

High Density T-1 – E-1 Physical Layer Connections

Designing high density interfaces such as T-1 / E-1 is a challenging endeavor. Exposure to the outside plant environment characteristic of cell sites imposes a requirement on the equipment to meet stringent power cross, power fail, and lightning protection specifications. Depending on geographical considerations, these specifications vary significantly, mandating a configurable solution able to meet the disparate requirements of North America, Europe, and Asia. Carriers will typically demand redundancy and failover protection as well, complicating the design and consuming valuable board space with relays. T-1/E-1 devices designed for low end router WAN interface cards fall short in this environment.

Exar has been developing high performance telecom analog circuitry for over two decades, and has considered these requirements in its latest family of high density T-1/E-1/J-1 Line Interface Units (LIUs) and Framer + LIU combo devices. Representative of the framer plus LIU family is the XRT86VX38, which supports all of the latest T-1/E-1/J-1 specifications. It features R3 technology (Relayless, Reconfigurable, Redundant) to eliminate the cost, power, and board area requirements typical of relay based designs. With its patented pad structure, the XRT86VX38 provides integrated line side redundancy and supports hot swapping. The physical interfaces are optimized with internal impedances to support the various interface standards. These high performance analog front ends are designed for high density applications, meeting performance specifications while operating in the high crosstalk environment characteristic of the wireless backhaul application. A block diagram of the XRT86VX38 is provided below.


Figure 8 XRT86VX38 8 Channel Framer/LIU Combo

PDH over SONET / SDH

Once Ethernet has been mapped into PDH links and transported via copper pairs to the SONET/SDH access network, it must then be mapped into VTs19 for transport over the network to the BSC/RNC or gateway. Essential in this application is a standards based, high density, PDH to SONET/SDH mapper to enable aggregation of T-1/E-1 links to STS-3/STM-1 or STS-1/STM-0 via standard mapping protocols.

The Exar XRT86SH328 Voyager device is well suited to this task. Voyager supports all the framing, mapping, and grooming functions required for STS-3 / STM-1 mapping applications. The device generates and terminates all SONET/SDH Regenerator Section, Multiplexer Section, and Path Overhead, including the low order Virtual Container (VC) Path Overhead. A single Voyager performs mapping of up to 28/21 T-1/E-1 links to SONET/SDH. The figure below shows the high level block diagram of the XRT86SH328.

19 Virtual Tributaries


Figure 9 Voyager Block Diagram
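The 28/21 figure follows directly from standard SONET/SDH tributary arithmetic: an STS-1/STM-0 carries 28 VT1.5 tributaries (one T-1 each) or 21 VT2 tributaries (one E-1 each), and an STS-3/STM-1 carries three times that. The short calculation below simply spells this out.

```python
# Standard SONET/SDH tributary math behind the 28/21 mapping figure.

T1_PER_STS1 = 28     # VT1.5 tributaries per STS-1 / STM-0
E1_PER_STS1 = 21     # VT2 tributaries per STS-1 / STM-0

for level, multiplier in (("STS-1 / STM-0", 1), ("STS-3 / STM-1", 3)):
    print(f"{level}: {T1_PER_STS1 * multiplier} T-1s or "
          f"{E1_PER_STS1 * multiplier} E-1s")
```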

Compression and Security

Adding compression and security requirements typically increases latency, as those functions take many CPU cycles and memory copies/transfers to process a single packet. A faster, more expensive CPU can address throughput, but may not address the latency problem inherent in memory copies. The throughput may also be inconsistent depending on the load of other concurrent processes. The problem needs to be solved by coprocessors that completely offload compression and security functions from the CPU while adding almost zero latency.

The economics of using dedicated coprocessors to perform specialized functions have been proven over time. However, as the specialized function matures, it eventually becomes integrated into the main processor. Security coprocessors have been in existence for many years and many embedded processors now have an integrated security core. However, these traditional security processing architectures still require the use of many memory copies/transfers to process a single packet – adding to latency. In security coprocessor jargon, this configuration is known as a “look-aside” architecture. The following diagram illustrates the steps needed to process an IPsec packet.


Figure 10 Look-Aside Architecture Data Flow

Network equipment that supports compression and encryption using a look-aside architecture needs to be designed with those functions in mind from the start; adding them to an existing design is thus impractical. Steps 3 and 4 in Figure 10 also use CPU cycles unnecessarily relative to a FlowThrough architecture (described below). Under heavy CPU load, those steps may be delayed – introducing additional latency.

A better way to add compression and security to wireless backhaul is to use a FlowThrough™ coprocessor. In this scenario, the CPU is relieved of all compression and encryption processing responsibility. The coprocessor is added as a “drop-in” or “bump-in-the-wire” device between the MAC and PHY devices connected to the backhaul network. The following diagram illustrates how this solution simplifies and addresses the wireless backhaul challenge. As can be seen, the multi-hop journey of a packet to and from the host CPU is eliminated. The FlowThrough coprocessor performs all of the steps necessary to compress the data and convert a clear-text IP packet into a secure IPsec packet to be transported to the backhaul network. A carefully designed FlowThrough device will also add minimal latency to the data path by using dedicated compression, crypto, and Public Key (PK) cores to process each packet.


Figure 11 FlowThrough Architecture Data Flow
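The structural difference between the two data flows can be summarized with a toy per-packet latency budget. Every cost figure below is an invented placeholder used only to show that the look-aside path accumulates several host memory and CPU touches per packet, while the flow-through path processes the packet inline in a single pass.

```python
# Toy latency-budget comparison of look-aside vs flow-through processing.
# All per-step costs are assumed placeholders, not measured values.

COST_US = {                      # assumed per-packet costs, microseconds
    "dma_to_host_memory": 3,
    "cpu_classify": 4,
    "dma_to_coprocessor": 3,
    "crypto_compress": 4,
    "dma_back_to_host": 3,
    "cpu_post_process": 4,
    "dma_to_nic": 3,
    "inline_crypto_compress": 4,  # flow-through: single pass in the data path
}

look_aside = ["dma_to_host_memory", "cpu_classify", "dma_to_coprocessor",
              "crypto_compress", "dma_back_to_host", "cpu_post_process",
              "dma_to_nic"]
flow_through = ["inline_crypto_compress"]

for name, steps in (("look-aside", look_aside), ("flow-through", flow_through)):
    total = sum(COST_US[s] for s in steps)
    print(f"{name:13}: {len(steps)} steps, ~{total} us added per packet")
```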

Hifn Technology, acquired by Exar, brings years of experience building hardware coprocessors that perform the compression and encryption functions. FlowThrough coprocessors completely offload those tasks from the host CPU, including algorithm and protocol processing. For example, the Hifn 9150 coprocessor compresses and encrypts data at up to 4 Gbps, enough to support two full-duplex Gigabit Ethernet links at line rate. Compression and encryption are done in a single pass.

The 9150 adds only 4 µs of latency and dissipates on average 3 watts of power, making it ideal for wireless backhaul applications. In addition, its small packet performance (greater than 1 Mpps) matches well with the traffic profiles of wireless networks. The 9150 interfaces to other devices via standard gigabit Ethernet interfaces20.
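A quick piece of arithmetic shows why packets-per-second capacity is quoted alongside raw throughput: at a fixed offered load, a smaller average packet size multiplies the packet rate the coprocessor must sustain. The offered load and packet sizes below are illustrative assumptions, and layer-1 overhead is ignored for simplicity.

```python
# Packet rate implied by an offered load at various average packet sizes.
# Smaller packets (typical of signalling- and VoIP-heavy wireless traffic)
# push the packets-per-second requirement up sharply.

OFFERED_LOAD_GBPS = 1.0          # assumed offered load

for avg_pkt_bytes in (1400, 512, 128):
    pps = OFFERED_LOAD_GBPS * 1e9 / (avg_pkt_bytes * 8)
    print(f"avg {avg_pkt_bytes:4d} B packets at {OFFERED_LOAD_GBPS} Gb/s "
          f"-> {pps / 1e3:7.1f} kpps")
```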

The 9150 is controlled via in-band Ethernet frames on the Host Interface. An additional RMII (100 Mbps Ethernet) interface offers an optional out-of-band control port. This port may also be used to establish an inter-chip link for multi-chip designs.

It also comes with an SDK21 that provides the necessary APIs22 and utilities to configure and manage the security and compression policies. Once the policies are set up, the 9150 takes care of all processing associated with compressing and encrypting the traffic. Termination of IP and Ethernet is completely implemented on-chip, including fragmentation and reassembly of IP packets and ARP resolution for the Ethernet interface.

Also available is an optional IKE23 stack which runs internal to the 9150, providing additional offload to the host CPU and further enabling quick, easy integration of security and compression into the equipment – yielding quicker time to market and reduced development costs.

20 GMII/TBI, RGMII/RTBI, SGMII, SerDes – industry standard interfaces to Ethernet PHY devices
21 Software development kit – a suite of software tools that enables system integration & development
22 Application programming interfaces – software layer that abstracts hardware features from software
23 Internet Key Exchange – part of the IPsec protocol suite

The figure below shows a high level block diagram of the 9150.

[Diagram: host and network GbE MACs (SGMII/SerDes), packet DMA and queue manager, dual crypto engines, public-key engine, control-path processor, TCAM, RNG, and a 32/39-bit ECC DDR2 SDRAM memory bridge, plus an RMII control MAC and RGMII/RTBI bridge]

Figure 12 Block Diagram of Hifn's FlowThrough Architecture

Many equipment manufacturers are leveraging the Advanced TCA ecosystem for reliability, time to market and cost benefits. Exar has added the Hifn Express DS4050 card family to the ATCA ecosystem to enable the addition of “drop in” security and compression for ATCA based systems. The DS4050 card is an Advanced Mezzanine Card based on Hifn’s 9150 FlowThrough security and compression processor. The DS4050 family provides solutions with either a SerDes or PCI Express data interface to the backplane. Either configuration supports Rear Transition Module connectivity to maximize customer flexibility for their system level architecture.

[Diagram: Hifn 4050S8-4SFP AMC – two Hifn 9150 FlowThrough security processors, four GbE SFP ports (SGMII/SerDes), SerDes backplane ports, Flash, DDR2 SDRAM, JTAG, MMC, clock and power]

Figure 13 DS4050 Card - SerDes Backplane Interface Configuration


Conclusions

The evolution of wireless services from plain voice to rich multimedia, and the resulting Core Network shift from a Circuit Switched to a Packet Switched architecture, is driving a shift from TDM to Ethernet connectivity for next generation base station deployments. However, carriers must continue to support the TDM interfaces of legacy 2G & 3G deployments while transitioning current 3G infrastructure to packet based interfaces as standards evolve. Multiple challenges exist in achieving this objective.

Backhaul Challenge: High Density T/E Carrier Copper Interfaces
The Exar Solution: The Exar 86VX38 and 83VX38 family of T-1/E-1 framers and LIUs provides high density, power optimized, cost effective solutions for the PDH physical layer of wireless backhaul and base station equipment.

Backhaul Challenge: Mapping Ethernet Links into PDH Interfaces
The Exar Solution: The Exar EoPDH family of protocol mappers enables transport of next generation Ethernet traffic over legacy PDH networks – a huge advantage to carriers that must utilize legacy copper infrastructure to meet next generation backhaul requirements.

Backhaul Challenge: Mapping PDH into Packet Optimized Carrier Links
The Exar Solution: The PDH over SONET capabilities of the Voyager provide high density, integrated mapping of EoPDH T/E links into VTs while supporting packet optimized SONET/SDH features such as VCGs and LCAS – enabling carriers to utilize installed SONET/SDH infrastructure to transport Ethernet links.

Backhaul Challenge: Protecting and Optimizing Backhaul Links
The Exar Solution: The Hifn Technology FlowThrough coprocessor provides mobile equipment vendors the ability to offer bandwidth optimization and protection with minimal software integration effort – all while remaining within cost, power, and latency budgets. The “drop-in” nature of the device also enables the quickest time to market for wireless backhaul vendors.

Exar is uniquely positioned to meet these challenges, providing end to end packet solutions for carrier backhaul OEMs that transport, optimize, and protect revenue generating user data throughout the backhaul network.

For more information, please contact Exar at:

Exar Corporation
48720 Kato Road
Fremont, CA 95
510-668-7000
www.exar.com

Copyright ©2009 Exar Corporation. All rights reserved. BHWP1009