
INTERCONNECTS

Understanding RapidIO, PCIe and Ethernet

By Barry Wood, Tundra

While there are many ways to connect components in embedded systems, the most prominent are the high speed serial standards of Ethernet, PCI Express, and RapidIO. All of these standards leverage similar Serialiser/Deserialiser (SerDes) technology to deliver throughput and latency performance greater than what is possible with wide parallel bus technology. For example, RapidIO and PCI Express leveraged the XAUI SerDes technology developed for Ethernet. The trend towards leveraging a common SerDes technology will continue with future versions of these specifications. The implication is that raw bandwidth is not a significant differentiator for these protocols. Instead, the usefulness of each protocol is determined by how the bandwidth is used.

Protocol summaries

Most designers are familiar with basic Ethernet protocol characteristics. Ethernet is a 'best effort' means of delivering packets. The software protocols built on top of the Ethernet physical layer, such as TCP/IP, are necessary to provide reliable delivery of information, as Ethernet-based systems generally perform flow control at the network layer, not the physical layer. Typically, the bandwidth of Ethernet-based systems is over-provisioned by between 20 and 70 per cent. Ethernet is best suited for high latency inter-chassis applications, or for on-board/inter-board applications where bandwidth requirements are low.

PCI Express (PCIe), in contrast, is optimised for reliable delivery of packets within an on-board interconnect where latencies are typically in the microsecond range. The PCIe protocol exchanges Transaction Layer Packets (TLPs), such as reads and writes, and smaller quantities of link-specific information called Data Link Layer Packets (DLLPs). DLLPs are used for link management functions, including physical layer flow control. PCIe was designed to be backwards compatible with the legacy of PCI and PCI-X devices, which assumed that the processor(s) sat at the top of a hierarchy of buses. This had the advantage of leveraging PCI-related software and hardware intellectual property. As discussed later in this article, the PCI bus legacy places significant constraints on the switched PCIe protocol.

RapidIO technology has been optimised for embedded systems, particularly those which require multiple processing elements to cooperate. Like PCIe, the RapidIO protocol exchanges packets and smaller quantities of link-specific information called control symbols. RapidIO has characteristics of both PCIe and Ethernet. For example, RapidIO provides both reliable and unreliable packet delivery mechanisms. RapidIO also has many unique capabilities which make it the optimal interconnect for on-board, inter-board, and short distance (<100 m) inter-chassis applications.

Physical layer

At the physical/link layer, the protocols have different capabilities when it comes to flow control and error recovery. Ethernet flow control is primarily implemented in software at the network layer, as this is the most effective approach for large networks. Ethernet's only physical layer flow control mechanism is PAUSE, which halts transmission for a specified period of time. The limited physical layer flow control means that Ethernet networks discard packets to deal with congestion.

In contrast, the PCIe and RapidIO physical-layer flow control mechanisms ensure reliable delivery of packets. Each packet is retained by the transmitter until it is acknowledged. If a transmission error is detected, a link maintenance protocol ensures that corrupted packets are retransmitted.

PCIe ensures delivery using DLLPs, while RapidIO uses control symbols. Unlike DLLPs, RapidIO control symbols can be embedded within packets. This allows RapidIO flow control information, such as buffer occupancy levels, to be exchanged with low latency, allowing more packets to be sent sooner. Figure 1 illustrates this concept. In the leftmost panel, Device A cannot send any packets to Device B because the buffers in Device B are full, while Device B is continually sending packets to Device A. In the middle panel, one buffer in Device B becomes free. At this point, Device B must inform Device A that it can send a packet. In the PCIe panel on the bottom right, the DLLP cannot be transmitted until transmission of the current packet is complete. In the RapidIO panel on the top right, a control symbol is embedded in a packet that is being transmitted. This allows the RapidIO protocol to deliver packets with lower latency and higher throughput than the other protocols. The ability to embed control symbols within packets also makes the rest of the RapidIO flow control story much richer than that of PCIe or Ethernet, as discussed later in this article.

Figure 1: RapidIO embedded control symbols and PCIe DLLPs.
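To put rough numbers on the Figure 1 scenario, the short C sketch below estimates how long Device B must wait before its "buffer free" notification can start onto the wire. It is an illustration only, not part of the original article: the 2.5 Gbaud single-lane rate, the 256 byte in-flight packet, and the 8b/10b coding factor (10 transmitted bits per data byte) are assumed values chosen to make the arithmetic concrete, and the transmission time of the notification itself is ignored.

/*
 * Illustrative sketch: when can a "buffer free" notification begin
 * transmission on a busy link?  Assumes a PCIe-style link must finish the
 * in-flight packet before sending its flow control DLLP, while a
 * RapidIO-style link can embed a control symbol mid-packet.
 * All sizes and rates below are assumptions, not specification values.
 */
#include <stdio.h>

int main(void)
{
    const double lane_rate_gbaud    = 2.5;                    /* assumed lane rate          */
    const double ns_per_byte        = 10.0 / lane_rate_gbaud; /* 8b/10b: 10 UI per byte     */
    const double packet_bytes       = 256.0;                  /* in-flight packet (assumed) */
    const double bytes_already_sent = 32.0;                   /* progress when buffer frees */

    /* PCIe-style: the update must wait for the rest of the packet. */
    double pcie_wait_ns = (packet_bytes - bytes_already_sent) * ns_per_byte;

    /* RapidIO-style: an embedded control symbol can be inserted almost
     * immediately; the wait is roughly the time to finish the byte in flight. */
    double rapidio_wait_ns = 1.0 * ns_per_byte;

    printf("PCIe-style wait before credit update:  %6.1f ns\n", pcie_wait_ns);
    printf("RapidIO-style wait (embedded symbol):  %6.1f ns\n", rapidio_wait_ns);
    return 0;
}

The point of the arithmetic is simply that the waiting time in the PCIe-style case grows with the size of the packet already on the wire, whereas the embedded control symbol keeps it essentially constant.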
Beyond more efficient flow control, the ability to embed control symbols within packets gives RapidIO an ability that currently neither PCIe nor Ethernet can offer. Control symbols can be used to distribute events throughout a RapidIO system with low latency and low jitter (Figure 2). This enables applications such as the distribution of a common real time clock signal to multiple end-points, or of a frame signal for antenna systems. It can also be used for signalling other system events, and for debugging within a multi-processor system. As shown in Figure 2, PCIe DLLPs introduce a significant amount of latency and jitter every time the DLLP is transferred through a switch. In contrast, the RapidIO protocol allows a signal to be distributed throughout a RapidIO fabric with less than 10 Unit Intervals of jitter and 50 nanoseconds of latency per switch, regardless of packet traffic.

Figure 2: RapidIO Multicast Event Control Symbol and PCIe DLLP.

PCIe and Ethernet may choose to extend their respective specifications in future to allow events to be distributed with low latency. Introduction of a control-symbol-like concept would be a large step for Ethernet. Several initiatives are underway within the Ethernet ecosystem to improve Ethernet's abilities within storage applications, which may need a control-symbol-like concept. Ethernet is also being enhanced to incorporate simple XON/XOFF flow control.

PCIe currently does not allow DLLPs to be embedded within TLPs, as this concept is incompatible with the legacy of PCI/PCI-X bus operation. DLLPs embedded within TLPs would create periods where no data is available to be placed on the legacy bus. PCIe end-points could operate in store-and-forward mode to ensure that packets are completely received before being forwarded to the bus, but at the cost of a drastic increase in latency and lowered throughput. Given the PCIe focus on on-board interconnect for a uniprocessor system, and the continued need to maintain compatibility with legacy bus standards, it is unlikely that the PCIe community will be motivated to allow DLLPs to be embedded within TLPs.

Figure 4: RapidIO's virtual output queue backpressure mechanism.

Bandwidth options

Beyond flow control and link maintenance, the most obvious difference between Ethernet, PCIe and RapidIO at the physical/link layer is the set of bandwidth options supported. Ethernet has a long history of evolving bandwidth by ten times with each step. Ethernet currently operates at 10Mbps, 100Mbps, 1Gbps, and 10Gbps. Some proprietary parts also support a 2Gbps (2.5 Gbaud) option. Next generation Ethernet will be capable of operating at 40 and/or 100Gbps.

PCIe and RapidIO take a different approach, as on-board, inter-board and inter-chassis interconnects require power to be matched with the data flows. As a result, PCIe and RapidIO support more lane rate and lane width combinations than Ethernet. PCIe 2.0 allows lanes to operate at either 2 or 4Gbps (2.5 and 5 Gbaud), while RapidIO supports lane rates of 1, 2, 2.5, 4 and 5Gbps (1.25, 2.5, 3.125, 5, and 6.25 Gbaud). Both PCIe and RapidIO support lane width combinations from a single lane up to 16 lanes, and PCIe also supports a 32 lane port in its specification. For a given lane width, a RapidIO port can therefore supply both more and less bandwidth than PCIe, allowing system designers to tune the amount of power used to the data flows in a system.
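The lane rate and lane width figures above can be turned into per-port data bandwidths with a few lines of C. The sketch below is illustrative rather than normative: it simply applies the 8b/10b encoding factor (8 data bits carried per 10 transmitted bits), which is how the Gbaud figures quoted above map to the corresponding Gbps data rates, across a few sample lane widths.

/*
 * Sketch of the lane-rate / lane-width arithmetic described above.
 * The baud rates follow the figures quoted in the article; the sample lane
 * widths are arbitrary examples.
 */
#include <stdio.h>

/* Aggregate data bandwidth in Gbps for an 8b/10b-coded serial port. */
static double port_bandwidth_gbps(double lane_gbaud, int lanes)
{
    return lane_gbaud * lanes * 8.0 / 10.0;
}

int main(void)
{
    const double rapidio_gbaud[] = { 1.25, 2.5, 3.125, 5.0, 6.25 };
    const double pcie2_gbaud[]   = { 2.5, 5.0 };
    const int    widths[]        = { 1, 4, 16 };   /* sample widths only */

    printf("RapidIO port bandwidth (Gbps):\n");
    for (size_t r = 0; r < sizeof rapidio_gbaud / sizeof rapidio_gbaud[0]; r++)
        for (size_t w = 0; w < sizeof widths / sizeof widths[0]; w++)
            printf("  %5.3f Gbaud x %2d lanes = %6.1f\n",
                   rapidio_gbaud[r], widths[w],
                   port_bandwidth_gbps(rapidio_gbaud[r], widths[w]));

    printf("PCIe 2.0 port bandwidth (Gbps):\n");
    for (size_t r = 0; r < sizeof pcie2_gbaud / sizeof pcie2_gbaud[0]; r++)
        for (size_t w = 0; w < sizeof widths / sizeof widths[0]; w++)
            printf("  %5.3f Gbaud x %2d lanes = %6.1f\n",
                   pcie2_gbaud[r], widths[w],
                   port_bandwidth_gbps(pcie2_gbaud[r], widths[w]));
    return 0;
}

Running through the table makes the power-matching argument visible: a narrow, slow RapidIO port sits well below the smallest PCIe 2.0 configuration, while a wide, fast one exceeds it.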
This en- difference between Ethernet, ables applications such as distribu- PCIe and RapidIO at the physi- tion of a common real time clock cal/link layer are the bandwidth signal to multiple end-points, or a options supported. Ethernet has frame signal for antenna systems. a long history of evolving band- It also can be used for signalling width by ten times with each other system events, and for de- step. Ethernet currently operates bugging within a multi-proces- at 10Mbps, 100Mbps, 1Gbps, and Figure 4: RapidIO’s virtual output queue backpressure mechanism. sor system. As shown in Figure 2, 10Gbps. Some proprietary parts PCIe DLLPs introduce a significant also support a 2Gbps (2.5 Gbaud) terconnects require power to be while RapidIO supports lane rates amount of latency and jitter ev- option. Next generation Ethernet matched with the data flows. As of 1, 2, 2.5, 4 and 5Gbps (1.25, 2.5, ery time the DLLP is transferred will be capable of operating at 40 a result, PCIe and RapidIO support 3.125, 5, and 6.25 Gbaud). Both PCIe through a switch. In contrast, the and/or 100Gbps. more lane rate and lane width and RapidIO support lane width RapidIO protocol allows a signal PCIe and RapidIO take a dif- combinations than Ethernet. PCIe combinations from a single lane to be distributed throughout a ferent approach, as on-board, 2.0 allows lanes to operate at ei- up to 16 lanes. PCIe also supports RapidIO fabric with less than 10 inter-board and inter-chassis in- ther 2 or 4Gbps (2.5 and 5 Gbaud), a 32 lane port in its specification. 2 eetindia.com | EE Times-India For a given lane width, a RapidIO it is not certain if it will be adopted semantics. To send a message, messaging semantics is the en- port can supply both more and within the PCIe ecosystem. PCIe the originator only needs to capsulation of other protocols. less bandwidth than PCIe, allow- components also support NTB know the address of the recipient. RapidIO has standards for encap- ing system designers to tune the functionality, whereby one PCIe Addressing schemes are generally sulating a variety of protocols. amount of power used to the data hierarchy is granted access to a hierarchical, so that no node must The ability to encapsulate gives flows in a system. defined memory range in another know all addresses. Addresses system designers many advan- PCIe hierarchy. System reliability may change as a system evolves, tages, including future-proofing Transport layer and availability of this approach enabling software components to RapidIO backplanes. Any future, The RapidIO and Ethernet speci- are discussed further on in this be loosely coupled to each other. legacy or proprietary protocol can fications are topology agnostic. article. These attributes are necessary for be encapsulated and transported Any set of end-points can be con- Many Ethernet-based systems data plane applications. using standard RapidIO compo- nected using any fabric topology, are also functionally limited to tree nents. For example, the existing including rings, trees, meshes, structures.
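Finally, the protocol encapsulation described above can be sketched in the same hypothetical style: an arbitrary payload is copied behind a generic carrier header addressed to a fabric destination. The header layout below is invented for this illustration and does not reflect the actual RapidIO encapsulation standards.

/*
 * Illustrative encapsulation sketch: wrap an arbitrary protocol payload in a
 * generic carrier header.  The header fields and layout are invented for this
 * example and are NOT the RapidIO encapsulation formats.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_ENCAP_PAYLOAD 256

struct carrier_frame {                    /* hypothetical carrier format */
    uint16_t dest_id;                     /* fabric destination          */
    uint16_t proto;                       /* which protocol is carried   */
    uint16_t length;                      /* payload length in bytes     */
    uint8_t  payload[MAX_ENCAP_PAYLOAD];
};

/* Encapsulate: copy the foreign protocol's frame into the carrier payload. */
static int encapsulate(struct carrier_frame *out, uint16_t dest_id,
                       uint16_t proto, const void *frame, size_t len)
{
    if (len > MAX_ENCAP_PAYLOAD)
        return -1;                        /* would need segmentation */
    out->dest_id = dest_id;
    out->proto   = proto;
    out->length  = (uint16_t)len;
    memcpy(out->payload, frame, len);
    return 0;
}

int main(void)
{
    const uint8_t eth_frame[] = { 0xDE, 0xAD, 0xBE, 0xEF };  /* stand-in frame */
    struct carrier_frame cf;

    if (encapsulate(&cf, 0x0005, 0x0800, eth_frame, sizeof eth_frame) == 0)
        printf("carrying %u payload bytes of protocol 0x%04X to device %u\n",
               (unsigned)cf.length, (unsigned)cf.proto, (unsigned)cf.dest_id);
    return 0;
}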