
52-10-26.1

DATA COMMUNICATIONS MANAGEMENT

FAST ETHERNET: TECHNOLOGIES AND APPLICATIONS

Nathan J. Muller

INSIDE: 100BaseT, Competing Solutions, Fast Ethernet Consortium

INTRODUCTION

To prevent local area networks (LANs) from becoming the bottleneck in distributed computing environments, higher performance LANs must be devised. Higher performance networking is required to handle the growing number of users, bandwidth-intensive applications, and the demand for sustained network use. Although desktop computers have increased in performance and capabilities, the performance of the LAN to which they are attached has remained the same.

Currently, there are several options for solving the bandwidth problem. Among them is a category of technologies called Fast Ethernet. Data center managers face the task of determining when to implement higher bandwidth LANs and which solutions make the most sense for their environment. The conditions that are creating the need for higher bandwidth networking are varied and include the following:

•Today’s LANs share a fixed amount of bandwidth among all attached users. If many users simultaneously are accessing the network, average bandwidth per user declines as more users are added, resulting in unacceptably long delays.

•With more Pentium-based computers (or their non-Intel equivalents) on the network, users can work with extremely large files. With many users contending with other users for network access, moving these large files could take an excessive amount of time.

•More symmetrical multi-processor (SMP) servers also are on the network to serve the needs of more clients. These servers easily can generate network traffic exceeding the capacity of today’s 4, 10, and 16M b/s LANs.

•Applications such as document imaging, multimedia, video conferencing, computer-aided design/manufacturing (CAD/CAM), automated workflow, and object-oriented RDBMS can generate enormous amounts of network traffic.

PAYOFF IDEA

As computing becomes more distributed and accommodates multimedia applications, bandwidth becomes more in demand. Traditional Ethernet and token ring LANs have difficulty in meeting the demand for more bandwidth while providing acceptable response times. This article explains the different solutions for meeting the demand for more bandwidth and focuses on one Fast Ethernet solution that currently enjoys widespread acceptance: 100BaseT.

The requirement for sustained network use is becoming critical, especially for companies that want to provide more employees with access to private intranets for such applications as online analytical processing (OLAP) and decision support systems (DSS). Even telephone calls are being carried over LANs and, from there, over IP nets to other corporate locations. According to one survey, about 20% of the traffic over corporate intranets is voice.

100BASET FAST ETHERNET

A number of high-speed LAN technologies are available to address the need for more bandwidth and improved response times. Among them is 100BaseT, a technology designed to provide a smooth migration from 10BaseT Ethernet — the dominant 10M b/s network type in use today — to high speed 100M b/s performance.

Compatibility

100BaseT Ethernet uses the same contention-based media access control (MAC) method — carrier sense multiple access with collision detection (CSMA/CD) — that is at the heart of 10BaseT Ethernet. The 100BaseT MAC specification simply reduces the bit time — the time it takes for each bit to be transmitted — by a factor of ten, providing a 10x boost in speed. The packet format, packet length, error control, and management information in 100BaseT are identical to those in 10BaseT.

In the simplest configuration, 100BaseT is a star-wire topology with all stations connected directly to a hub’s multiport repeater. The repeater detects collisions: an input signal is repeated on all output links, and if two inputs occur at the same time, a jamming signal is sent out on all links. Stations connected to the same multiport repeater are within the same collision domain. Stations separated by a bridge are in different collision domains; the bridge runs a separate CSMA/CD algorithm for each domain.
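The repeater behavior and the tenfold reduction in bit time described above can be illustrated with a minimal sketch in Python; the port names, frame values, and function are illustrative assumptions, not part of the 802.3u specification.

```python
# Minimal sketch of a multiport repeater in a single collision domain, plus the
# 10BaseT versus 100BaseT bit times. All names and values are illustrative only.

JAM = "JAM"  # jamming signal sent on all links when two inputs collide

def repeat(inputs):
    """inputs: dict of port -> frame or None. Returns what each port transmits."""
    active = {port: frame for port, frame in inputs.items() if frame is not None}
    if len(active) > 1:                       # two simultaneous inputs: collision
        return {port: JAM for port in inputs}
    if len(active) == 1:                      # one input: repeat it on all output links
        frame = next(iter(active.values()))
        return {port: frame for port in inputs}
    return {port: None for port in inputs}    # idle

print(repeat({"p1": "frame-A", "p2": None, "p3": None}))       # frame repeated everywhere
print(repeat({"p1": "frame-A", "p2": "frame-B", "p3": None}))  # collision: JAM on all ports

# The bit time shrinks by a factor of ten when moving from 10M b/s to 100M b/s.
print(1 / 10e6, "seconds per bit at 10M b/s")    # 100 nanoseconds
print(1 / 100e6, "seconds per bit at 100M b/s")  # 10 nanoseconds
```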

EXHIBIT 1 — 10BaseT Ethernet versus 100BaseT Fast Ethernet

                              10BaseT                        100BaseT
Speed                         10M b/s                        100M b/s
IEEE Standard                 802.3                          802.3u
Media Access Control          CSMA/CD                        CSMA/CD
Topology                      Bus or star                    Star
Cable Support                 Coaxial, UTP, optical fiber    UTP and optical fiber
UTP Cable Support             Category 3, 4, 5               Category 3, 4, 5
UTP Link Distance (max)       100 meters                     100 meters
Network Diameter              500 meters                     210+ meters
Media Independent             Yes (via AUI)                  Yes (via MII)
Full Duplex Capability        Yes                            Yes

Notes: AUI, attachment unit interface; CSMA/CD, carrier-sense multiple access with collision detection; MII, media-independent interface; and UTP, unshielded twisted pair.

Because no protocol translation is required, data can pass between 10BaseT and 100BaseT stations via a hub equipped with an integral 10/100 bridge. Both types of LANs are also full-duplex capable, can be managed from the same SNMP management application, and use the same installed cabling. However, 100BaseT goes beyond 10BaseT in terms of media, offering extensions for optical fiber.

100BaseT also includes the media-independent interface (MII) specification, which is similar to the 10M b/s attachment unit interface (AUI). The MII provides a single interface that can support external transceivers for any of the 100BaseT media specifications. Exhibit 1 summarizes the characteristics of 10BaseT Ethernet and 100BaseT Fast Ethernet.

Media Choices

To ease the migration from 10BaseT to 100BaseT, Fast Ethernet can run over Category 3, 4, or 5 UTP cable, while preserving the critical 100-meter segment length between hubs and end stations. The use of fiber allows more flexibility with regard to distance. For example, the maximum distance from a 100BaseT repeater to a fiber bridge, router, or switch using fiber optic cable is 225 meters (742 feet). The maximum fiber distance between bridges, routers, or switches is 450 meters (1485 feet). The maximum fiber distance between bridges, routers, or switches — when the network is configured for full-duplex operation — is 2 kilometers (1.2 miles). By connecting together repeaters and internetworking devices, large, well-structured networks can be created easily with 100BaseT (these distance limits are illustrated in a short sketch following Exhibit 2).

The types of media used to implement 100BaseT networks are summarized as follows:

•100BaseTX: a two-pair system for data grade (Category 5) unshielded twisted-pair (UTP) and STP (shielded twisted-pair) cabling.

•100BaseT4: a four-pair system for both voice and data grade (Category 3, 4, or 5) UTP cabling.

•100BaseFX: a two-strand multi-mode fiber system.

EXHIBIT 2 — Network Configuration Integrating 10M-b/s and 100M-b/s Ethernet

Together, the 100BaseTX and 100BaseT4 media specifications cover all cable types currently in use in 10BaseT networks. In addition, all of these cabling systems can be interconnected through a hub. This helps organizations retain their existing cabling infrastructure while migrating to Fast Ethernet, as shown in Exhibit 2.
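As a rough illustration of the distance rules quoted under Media Choices, the following sketch checks a proposed link against those limits; the link-type labels are hypothetical and the figures come only from the distances cited above.

```python
# Illustrative check of the 100BaseT link-distance limits quoted above (in meters).
# Link-type labels are invented for this sketch; limits are the figures cited in the text.
MAX_LINK_METERS = {
    "utp_station_to_hub": 100,          # Category 3/4/5 UTP segment
    "fiber_repeater_to_switch": 225,    # repeater to bridge/router/switch over fiber
    "fiber_switch_to_switch": 450,      # bridge/router/switch to bridge/router/switch
    "fiber_switch_to_switch_fdx": 2000, # full-duplex fiber link between switches
}

def link_ok(link_type, meters):
    """Return True if a proposed link length is within the quoted maximum."""
    return meters <= MAX_LINK_METERS[link_type]

print(link_ok("utp_station_to_hub", 90))       # True: within the 100-meter segment length
print(link_ok("fiber_switch_to_switch", 500))  # False: exceeds the 450-meter limit
```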

Network Interface Cards

Fast Ethernet NICs are available for the client side and the server side of the network. Both have a variety of features that affect price, but the server NICs usually cost more due to dedicated processors and increased onboard RAM. Server-side NICs will cost $300 or more, while client-side NICs can be found for under $50.

Server-side NICs with onboard processors generally have better performance and lower host CPU utilization. Some feature RISC-based microprocessors that run portions of the NIC driver, leaving more CPU cycles for the host. Server NICs should have at least 16 KB of RAM; adding more RAM reduces the number of dropped packets, which improves the network’s performance. Some server NICs can be equipped with as much as 1GB of RAM. The choice depends on how busy the server is with other processes.

Fault tolerance and load balancing are two important considerations when selecting a server-side NIC. A fault-tolerant NIC comes with a failover driver that diverts traffic to another NIC when the primary becomes disabled. Some vendors have taken fault tolerance one step further, balancing the load across multiple NICs.

Server-side NICs and hub modules can be equipped to support a vendor’s proprietary features. For example, 3Com Corp.’s server-specific Fast Ethernet cards can encapsulate token ring traffic, allowing a Fast Ethernet backbone to carry token ring traffic between stations. Bay Networks offers a Fast Ethernet module for its Multimedia Switch that maps Ethernet-based QoS levels to ATM QoS so that Ethernet end stations can access voice and video already traveling over an ATM backbone.
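The failover behavior described above might look something like the following sketch; the NIC class and driver are hypothetical stand-ins and do not represent any vendor's actual driver API.

```python
# Hypothetical sketch of a failover driver that diverts traffic from a disabled
# primary NIC to a standby NIC, as described above. All names are illustrative.
class Nic:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def send(self, frame):
        if not self.healthy:
            raise IOError(f"{self.name} is disabled")
        print(f"{self.name}: sent {len(frame)}-byte frame")

class FailoverDriver:
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby

    def send(self, frame):
        try:
            self.primary.send(frame)      # normal path: use the primary NIC
        except IOError:
            self.standby.send(frame)      # primary disabled: divert to the standby NIC

driver = FailoverDriver(Nic("nic0"), Nic("nic1"))
driver.send(b"\x00" * 64)          # carried by nic0
driver.primary.healthy = False     # simulate a failed primary adapter
driver.send(b"\x00" * 64)          # diverted to nic1
```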

COMPETING SOLUTIONS

Throughout the 1980s, network bandwidth was delivered by shared low-speed LANs — typically Ethernet and token ring. Devices such as bridges and routers helped solve bandwidth problems by decreasing the number of users per LAN, a solution known as segmentation. In the 1990s, a richer set of options for solving bandwidth problems has become available. These alternatives include several varieties of Fast Ethernet, each offering 100M b/s. At an average of ten times the speed of today’s shared bandwidth LANs, these solutions can eliminate server bottlenecks, handle the needs of bandwidth-intensive applications, and meet the growing need for sustained network utilization.

100BaseVG

Another standard for 100M b/s transmission over copper wire is 100BaseVG (voice grade), originally developed by Hewlett-Packard and AT&T. It does away with the CSMA/CD media access control layer and replaces it with another technique called Demand Priority and a signaling layer that ostensibly makes the network more secure and efficient.

Advocates of 100BaseVG note that CSMA/CD originally was designed for a bus topology and had to have the collision-detection mechanism that today’s Ethernet networks provide. However, because most users have moved to 10BaseT, which uses a star topology and hub-based architecture, such collision detection is outdated. Demand Priority accommodates this reality by making the hub a switch instead of a repeater, which makes for a more efficient and secure network.

With CSMA/CD, nodes contend for access to the network. Each node listens to the network to determine whether it is idle. Upon sensing that no traffic is currently on the line, the node is free to transmit. When several nodes try to transmit at the same time, a collision results, forcing the nodes to back off and try again at staggered intervals. The more nodes that are connected to the network, the higher the probability that such collisions will occur. 100BaseVG centralizes the management and allocation of network access within the network hub.

Unlike the contention-based CSMA/CD method, Demand Priority is deterministic: the 100BaseVG hub serves as a traffic cop, providing equal access to each of the attached nodes. The hub scans each port to test for a transmission request and then grants the request based on priority. Once the hub gives the node access to the network, it is guaranteed a full 100M b/s of bandwidth. 100BaseVG includes two levels of access priority; high priority requests jump ahead of low priority requests. It is the applications and drivers that issue requests for high priority access, based on the type of data queued up to cross the LAN. Advocates of 100BaseVG claim this method reduces overhead at each node and permits greater bandwidth use than CSMA/CD.

In addition to replacing CSMA/CD with Demand Priority, 100BaseVG replaces 10BaseT’s Manchester coding with 5B/6B coding. Manchester coding requires 20 MHz of bandwidth to send information at a rate of 10M b/s. With 100BaseVG, data and signaling are sent in parallel over all four pairs at rates of 25 MHz each using 5B/6B, which is a more efficient coding scheme. This quadrature (or quartet) signaling entails the use of all four pairs of the wire used for 10BaseT to send and receive data and access signals, whereas 10BaseT uses one pair to transmit and one pair to receive. 100BaseVG runs over any grade of unshielded twisted-pair (UTP) wiring, including Category 3 and Category 5.

IBM and HP expanded the 100BaseVG specification to include support for token ring networks as well as Ethernet. The result is 100VG-AnyLAN, which allows Ethernet and token ring packet frames to share the same media. It supports Category 3 unshielded twisted-pair, Type 4, Type 5, and other cable types. However, AnyLAN does not make Ethernet and token ring networks interoperable — it can only unite them physically through the use of common wiring and a common hub. For true interoperability, a router is needed that can logically merge the two protocols into a common packet and addressing scheme.
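A minimal sketch of the Demand Priority arbitration described above, assuming a simple round-robin scan with two priority levels; the port numbers and request labels are illustrative, not taken from the 100BaseVG specification.

```python
# Illustrative Demand Priority arbitration: the hub scans its ports and serves
# all pending high-priority requests before any low-priority requests.
from collections import deque

def demand_priority_order(requests):
    """requests: dict of port -> 'high', 'low', or None. Returns the service order."""
    high = deque(port for port, prio in requests.items() if prio == "high")
    low = deque(port for port, prio in requests.items() if prio == "low")
    order = []
    while high or low:
        queue = high if high else low   # high-priority requests jump ahead
        port = queue.popleft()
        order.append(port)              # this port now gets the full 100M b/s
    return order

print(demand_priority_order({1: "low", 2: None, 3: "high", 4: "low", 5: "high"}))
# -> [3, 5, 1, 4]: both high-priority ports are served before the low-priority ones
```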

EXHIBIT 3 — ATM Quality of Service (QoS) Parameters

Category                      Delay Variation   Bandwidth   Throughput   Congestion
                              Guarantee         Guarantee   Guarantee    Feedback
Constant Bit Rate (CBR)       Yes               Yes         Yes          No
Variable Bit Rate (VBR)       Yes               Yes         Yes          No
Unspecified Bit Rate (UBR)    No                No          No           No
Available Bit Rate (ABR)      Yes               No          Yes          Yes

Applications:
Constant Bit Rate (CBR): provides a fixed virtual circuit for applications that require a steady supply of bandwidth, such as voice, video, and multimedia traffic.
Variable Bit Rate (VBR): provides enough bandwidth for bursty traffic such as transaction processing and LAN interconnection, as long as rates do not exceed a specified average.
Unspecified Bit Rate (UBR): makes use of any available bandwidth for routine communications between computers, but does not guarantee when or if data will arrive at its destination.
Available Bit Rate (ABR): makes use of available bandwidth and minimizes data loss through congestion notification. Applications include e-mail and file transfers.

Asynchronous Transfer Mode

Asynchronous transfer mode (ATM) is a protocol-independent, cell-switching technology that offers high speed and low latency for the support of data, voice, and video traffic. Over optical fiber facilities, ATM can support data transmission in the gigabit-per-second range. Most ATM networks in place today operate at 155M b/s to 622M b/s.

ATM provides for the automatic and guaranteed assignment of bandwidth to meet the specific needs of applications, making it ideally suited to supporting multimedia applications. ATM is also highly scalable, making it equally suited for interconnecting local area networks, building wide area networks, and interconnecting the two environments.

ATM serves a broad range of applications very efficiently by allowing an appropriate quality of service (QoS) to be specified for each application. Various categories have been developed to help characterize network traffic, each of which has its own QoS requirements. These categories and QoS requirements are summarized in Exhibit 3.

In the mid-1990s, IBM tried to market a 25M-b/s ATM solution that ran over twisted pair wiring. At lower speeds, ATM would allow desktop systems to have dedicated bandwidth on the network for merging voice, video, and data. However, this proprietary approach never really caught on. It provided only 9M b/s more than 16M b/s token ring and required an investment in infrastructure that would be short-lived once higher speed ATM standards issues were resolved.
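The service categories summarized in Exhibit 3 can be captured in a small data structure; this is only a sketch of the exhibit's contents, not an ATM signaling API.

```python
# Sketch encoding the ATM service categories and QoS guarantees listed in Exhibit 3.
from dataclasses import dataclass

@dataclass
class ServiceCategory:
    name: str
    delay_variation_guarantee: bool
    bandwidth_guarantee: bool
    throughput_guarantee: bool
    congestion_feedback: bool

CATEGORIES = [
    ServiceCategory("CBR", True, True, True, False),     # steady traffic: voice, video, multimedia
    ServiceCategory("VBR", True, True, True, False),     # bursty traffic up to a specified average
    ServiceCategory("UBR", False, False, False, False),  # best effort, no guarantees
    ServiceCategory("ABR", True, False, True, True),     # uses spare bandwidth, congestion feedback
]

def categories_with(**required):
    """Return the categories that offer every guarantee named in `required`."""
    return [c for c in CATEGORIES
            if all(getattr(c, field) == value for field, value in required.items())]

print(categories_with(congestion_feedback=True))   # only ABR in Exhibit 3
```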

Fiber Distributed Data Interface

At one time, FDDI was considered the technology best capable of meeting the increasing need for more bandwidth. However, most FDDI installations are backbone networks, not local networks in which each node is directly attached. A key advantage of FDDI is that it consistently averages 95% total bandwidth use under heavy loads, which bodes well for bandwidth-intensive applications. A second fiber optic ring provides fault tolerance for mission-critical applications. Optical fiber offers several advantages over copper-based twisted-pair and coaxial cabling, including:

•More bandwidth capacity
•Lower signal attenuation
•Higher data integrity
•More immunity to electromagnetic interference and radio frequency interference
•Higher security
•Greater durability
•Longer distances between repeaters

Furthermore, optical fiber has great expansion potential over the long term, as demonstrated by a rapidly developing technology called wavelength division multiplexing (WDM). Commercial products are available that provide 40G b/s. Lucent Bell Labs reportedly has demonstrated a multi-terabit system with up to 100 simultaneously operating channels over distances of 400 km. This is enough capacity to carry the entire globe’s Internet traffic over a single fiber cable.

Copper FDDI Alternatives

CDDI. The copper distributed data interface (CDDI) was the result of research by AT&T, Apple, Crescendo, and Fibronics in the early 1990s. It used UTP wiring and was intended as a more economical alternative to optical fiber. The maximum distance between a station and the concentrator was 100 meters (328 ft.). However, because CDDI eliminated the NRZI (non-return-to-zero inverted) encoding of classical FDDI, it was a proprietary solution. It also required a special management protocol.

SDDI. At approximately the same time, an IBM-led group advocated the shielded twisted pair distributed data interface (SDDI), which allows users to run the FDDI-standard, 100M-b/s transmission speed over shielded twisted-pair wiring at a distance of 100 meters, from station to hub, which is enough to reach the nearest wiring closet on most office floors. SDDI has been promoted as a way to help users make an easy, economical transition to 100M b/s at the desktop in support of bandwidth-intensive applications. Its main advantage is that it allows the interconnection of two computers without the need for a concentrator, within the 100-meter limitation. Products based on SDDI are still available from several hub vendors, but they benefit mostly IBM customers with STP wiring in place.

TP-PMD. Twisted-pair physical media dependent (TP-PMD) is part of the ANSI X3T9.5 standard that allows for 100M b/s transmission over UTP copper wiring. It replaces the proprietary approaches that had been used previously to run FDDI over copper, including CDDI and SDDI. Interconnect vendors, such as Bay Networks and Cabletron Systems, offer TP-PMD modules that allow single-attached FDDI stations to join the FDDI optical fiber ring supported by the hub chassis. The FDDI stations are connected to the hub via FDDI-standard RJ-45 jacks on the TP-PMD modules. The modules also support TP-PMD over STP cable plants.

FAST ETHERNET CONSORTIUM

The Fast Ethernet Consortium was formed in December 1993 and is one of several consortiums at the University of New Hampshire InterOperability Lab (IOL). Formed through the cooperative agreement of vendors interested in testing Fast Ethernet products, the Fast Ethernet Consortium performs tests on products and software from both an interoperability and a conformance perspective.

Each member provides a platform representing its equipment at the IOL for at least 18 months. This allows the users of the lab to perform interoperability testing with current equipment throughout the year, without having to make special legal arrangements with other players in the technology. It also provides consortium members with the ability to test against other members’ products in a neutral setting without having to incur the capital expense of setting up and operating individual vendor test facilities.

Members of the Fast Ethernet Consortium can schedule the lab for testing at any time during the course of their membership, on an as-available basis. Even when a member’s equipment is not directly under test, it is in a highly metered network in which failures and abnormal behavior can be observed. Members can become involved in special group testing programs at no additional cost. The lab is able to act as an extension of a member company’s own R&D/QA organization for the isolation and resolution of complex problems. Open and widely reviewed testing procedures for both conformance and interoperability can be incorporated into product test plans.

The intent of most of the testing programs of the Fast Ethernet Consortium, and of the lab itself, is to isolate problems in members’ equipment before it gets into consumers’ hands. Testing is performed from a quality assurance point of view, not from a marketing or promotions point of view. Therefore, the Consortium agreement limits the disclosure of specific product test results to the respective members only.

CONCLUSION

With 10M-b/s Ethernet rapidly becoming saturated by bandwidth-intensive applications and the need for sustained network use increasing, the time has come to consider higher performance networks. Unlike other high speed technologies, Ethernet has been installed for over 20 years in business, government, and educational networks. The migration to 100M b/s Ethernet is made easier by the compatibility of 10BaseT and 100BaseT technologies, making it unnecessary to alter existing applications for transport at the higher speed. This compatibility allows 10BaseT and 100BaseT segments to be combined in both shared and switched architectures, allowing network administrators to apply the right amount of bandwidth easily, precisely, and cost effectively. Fast Ethernet is managed with the same tools as 10BaseT networks, and no changes to current applications are required to run them over the higher speed 100BaseT network. On the heels of 100BaseT’s success, Ethernet is being scaled further, up to 1G b/s over twisted-pair wiring.

Nathan Muller is an independent consultant in Huntsville, Alabama who specializes in advanced technology marketing and education. In his 25 years of industry experience, he has written extensively on many aspects of computers and communications, having published 15 books and over 1,500 articles. He has held numerous technical and marketing positions with such companies as Control Data Corporation, Planning Research Corporation, Cable & Wireless Communications, ITT Telecom, and General DataComm Inc. He has an M.A. in Social and Organizational Behavior from George Washington University.