Architectural Models
This chapter describes the architectural models that can be applied to GMPLS networks. These architectures are not only useful for driving the ways in which networking equipment is deployed, but they are equally important in determining how the protocols themselves are constructed, and the responsibilities of the various protocol components.

Several distinct protocol models have been advanced and the choice between them is far from simple. To some extent, the architectures reflect the backgrounds of their proponents: GMPLS sits uncomfortably between the world of the Internet Protocol and the sphere of influence of more traditional telecommunications companies. As a result, some of the architectures are heavily influenced by the Internet, while others have their roots in SONET/SDH, ATM, and even the telephone system (POTS). The supporters of the different architectures tend to be polarized and fairly dogmatic. Even though there are many similarities between the models, the proponents will often fail to recognize the overlaps and focus on what is different, making bold and forceful statements about the inadequacy of the other approaches.

This chapter does not attempt to anoint any architecture as the best, nor does it even try to draw direct comparisons. Instead, each architecture is presented in its own right, and the reader is left to make up her own mind. The chapter first introduces the end-to-end principle that underlies the IETF's Internet architecture and then describes three different GMPLS architectural models. The peer and overlay models are simple views of the network and are natural derivatives of the end-to-end architectural model; they can be combined into the third model, the hybrid model, which has the combined flexibility of the two approaches. The architectural model specified by the International Telecommunication Union (ITU) for the Automatically Switched Optical Network (ASON) presents a different paradigm based on significant experience deploying and managing transport networks; it is presented at the end of the chapter and is followed by a discussion of the various ways to realize the architecture and the attempts to bridge the gap between the two architectures.

13.1 The Internet's End-to-End Model

The architectural principles of the Internet are described in RFC 1958, but, as that document points out, the Internet is continuously growing and evolving, so principles that seemed safe and obvious ten years ago are no longer quite as straightforward. As new technologies and ideas are developed, it is possible to conceive of new architectural frameworks within which the Internet can continue to expand. Still, it is important to note that the Internet cannot be dismantled and rebuilt into a new network: it is a live network that must continue to operate in the face of innovation, and so new architectural paradigms must be integrated into the existing concepts in order to ensure a gentle migration.

The basic premise underlying the Internet's architecture is the delivery of end-to-end connectivity for the transport of data using intelligence that, as much as possible, is placed at the edges of the network. That is, an application wishing to supply a service across the Internet looks into the network to make an intelligent decision about how to achieve the service, and then makes specific directed requests to facilitate the service.
The end-to-end principle means that information is only made available within the network on a "need-to-know" basis; the core of the network should be spared knowledge about the services that it is carrying, thus making the Internet massively more scalable. It also allows transit nodes to implement only the basic protocols associated with data delivery, and to avoid awareness of the application protocols required to realize specific services. This makes the core nodes simpler to implement and, more important, means that new services and applications can be delivered over the Internet without the need to upgrade the core network.

A secondary determination is to make the Internet as independent as possible of the underlying physical technology; that is, it must be possible to construct the Internet from a wide variety of devices and connections that support a huge range of data speeds and very different switching granularities. The protocol layering architecture that is often described goes a long way toward resolving this, and one of the key purposes of IP itself is to build up all data link layers to a common level of service for use by transport and application technologies.

In summary, the purpose of the nodes within the Internet is to deliver (and arrange for the delivery of) IP datagrams. Everything else should be done at the edges.

13.1.1 How Far Can You Stretch an Architectural Principle?

The origins of the end-to-end principle are rooted in discussions of where to place the "smarts." Where should the function of the communication system be placed? The answer was at the edges. But as the Internet evolved, grew larger, and became more complex, the question was extended to the consideration of where to store and maintain the protocol state associated with achieving end-to-end connections and services. The desire for scalability and flexibility drove this state to the edges of the network as well, and was reinforced by the growing importance of network robustness and survivability. To recover from a partial network failure there should be no reliance on state held within the network, because that state might be lost during a failure.

This model speaks loudly in favor of datagram services, because each datagram is independent and carries its own state information. However, more recent trends for traffic engineering in MPLS networks move away from datagram- or packet-based delivery and tend toward the provision of virtual circuits across the Internet. With GMPLS and the control of transport networks we are fully in the realm of logical and physical connections that are "nailed up" across the network. Connections require state: At the very least they require data plane state in the form of cross-connects.

Where, then, does this leave the end-to-end architectural principle that tries to remove intelligence and state from the core of the network? Even the IP packet network required some state to be held within the network. Not the least of this is the routing information needed for next hop forwarding of IP packets, but originally all of this information was independent of the transmitted data streams. Thus the core of the network did not need to know what applications or services it was delivering.
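The distinction between service-independent routing state and per-connection state can be made concrete with a short sketch. The code below is a rough illustration only: the class names (ForwardingTable, CrossConnect) and all field names are assumptions for the purpose of the example, not drawn from any real router or GMPLS switch. It contrasts a next-hop forwarding table, keyed purely by destination prefix, with the cross-connect entry that must be installed for each nailed-up connection.

```python
from __future__ import annotations
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

class ForwardingTable:
    """Per-destination state: shared by all traffic, knows nothing of services."""
    def __init__(self):
        self._routes: dict = {}          # prefix -> next-hop address

    def add_route(self, prefix: str, next_hop: str) -> None:
        self._routes[ip_network(prefix)] = next_hop

    def next_hop(self, destination: str) -> str | None:
        dest = ip_address(destination)
        matches = [p for p in self._routes if dest in p]
        if not matches:
            return None
        # Longest-prefix match selects the most specific route.
        return self._routes[max(matches, key=lambda p: p.prefixlen)]

@dataclass(frozen=True)
class CrossConnect:
    """Per-connection data plane state held by a GMPLS-controlled switch."""
    in_port: str
    in_label: int        # generalized label: could equally be a timeslot or wavelength
    out_port: str
    out_label: int

# One route entry serves any number of flows toward 10.1.0.0/16 ...
fib = ForwardingTable()
fib.add_route("10.1.0.0/16", "192.0.2.1")
assert fib.next_hop("10.1.2.3") == "192.0.2.1"

# ... whereas every nailed-up connection needs its own cross-connect installed.
xc = CrossConnect(in_port="port-1", in_label=17, out_port="port-4", out_label=42)
```

The point of the contrast is that the forwarding table grows with the number of reachable destinations, while cross-connect state grows with the number of connections provisioned through the node, and each cross-connect exists only because a specific service asked for it.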
Over time, however, the boundaries became fuzzy: QoS guarantees and session-based protocols were introduced and, although every effort was made to ensure that these protocols used "soft state" and were adaptive to network changes, these new protocols started to require the installation of state within the Internet. New rules were expressed stating that this state was acceptable, but must be kept to an absolute minimum. Hard state (state that is required for the proper operation of applications and that cannot be dynamically changed and reconstructed within the network) was still frowned upon and was held at the edges of the network. Thus, RSVP (a session-based protocol that requires resources associated with individual data flows to be specifically reserved along the path of the data flow) is carefully designed as a soft state protocol. In the event of a failure of part of the network, RSVP heals itself to move the state to the new path of the traffic and to automatically discard state along the old path.

Although one may describe MPLS traffic engineering as the establishment of virtual circuits through the Internet, the use of RSVP-TE as the signaling protocol ensured that the necessary state was kept as soft as possible. In particular, the failure of a link or node is automatically detected and causes the removal of control plane and forwarding state. Further, the ability to place the path computation function at the edge of the network, based on information advertised from within the network but stored at the edges, clearly fits well with the end-to-end principle.

The problem is complicated somewhat by the requirements of traffic engineering in a multi-domain environment. In this case, where is the end of the service? Where is the edge of the network? Two models are currently being considered. In the first, each domain is considered a network in its own right, the service is "regenerated" at each domain boundary, and it is reasonable (and necessary) for additional state information to be held at those points. This model fits well with the demands of the GMPLS and optical network architectures described later in this chapter. The second approach uses the Path Computation Element (PCE) discussed in Chapter 9, and attempts to keep state out of the network by making more information available to the initiator of the service, either directly or with the assistance of the PCE.

GMPLS, with its control of other switching capabilities, adds a further complication to our considerations. GMPLS is used to provision connectivity through transport networks, and these networks employ special considerations for dynamic behavior and survivability. In particular, circuits in transport networks apply different rules to the definition of robustness. For example, the failure of control plane or management plane connectivity is not usually allowed to disturb the data plane: the data is king and connectivity must be preserved at all costs. On the other hand, a protected service may be happy to retain provisioned resources even in the event of a data plane failure, so the control plane must not withdraw state even when traffic can no longer pass through the connection.
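The contrast between soft control plane state and persistent transport data plane state can also be sketched in code. The refresh-and-expire logic below mirrors the general soft-state idea (state disappears unless it is periodically refreshed) rather than the exact timer rules of RSVP, and the tear_down_data_plane flag captures the policy contrast described above: a packet-style service removes its forwarding state when the control state dies, while a transport service keeps carrying traffic. All names, the refresh period, the lifetime multiplier, and the remove_for_session helper are illustrative assumptions, not taken from the RSVP or GMPLS specifications.

```python
from __future__ import annotations
import time
from dataclasses import dataclass, field

REFRESH_PERIOD = 30.0          # seconds between refresh messages (assumed value)
LIFETIME = 3 * REFRESH_PERIOD  # state expires if unrefreshed for this long (assumed)

@dataclass
class SessionState:
    """Soft control plane state for one signaled connection."""
    session_id: str
    tear_down_data_plane: bool          # False for a transport (GMPLS-style) service
    last_refresh: float = field(default_factory=time.monotonic)

    def refresh(self) -> None:
        # A refresh message arrived; the state stays alive a little longer.
        self.last_refresh = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_refresh > LIFETIME

def sweep(sessions: dict[str, SessionState], xconnects) -> None:
    """Periodic cleanup: drop expired control state, and only touch the data
    plane when policy says the service should be torn down with it."""
    for sid, state in list(sessions.items()):
        if state.expired():
            del sessions[sid]                      # control plane state goes
            if state.tear_down_data_plane:
                xconnects.remove_for_session(sid)  # packet-style behavior (hypothetical helper)
            # else: the transport service keeps forwarding; the data is king

# Usage: a transport session whose control state may time out without the
# cross-connects being removed.
sessions = {"lsp-1": SessionState("lsp-1", tear_down_data_plane=False)}
sessions["lsp-1"].refresh()
print(sessions["lsp-1"].expired())   # False while refreshes keep arriving
```

Under these assumptions, the same expiry machinery serves both worlds; only the policy attached to each session decides whether losing control plane state is allowed to disturb the traffic.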