Circuit-Switched Coherence


Natalie Enright Jerger†, Li-Shiuan Peh‡, and Mikko Lipasti†
†Electrical and Computer Engineering Department, University of Wisconsin-Madison
‡Department of Electrical Engineering, Princeton University

Abstract— Our characterization of a suite of commercial and scientific workloads on a 16-core cache-coherent chip multiprocessor (CMP) shows that overall system performance is sensitive to on-chip communication latency and can degrade by 20% or more due to long interconnect latencies. On the other hand, communication bandwidth demand is low. These results prompt us to explore circuit-switched networks. Circuit-switched networks can significantly lower the communication latency between processor cores compared to packet-switched networks, since once circuits are set up, communication latency approaches pure interconnect delay. However, if circuits are not frequently reused, the long setup time can hurt overall performance, as is demonstrated by the poor performance of traditional circuit-switched networks: all applications saw a slowdown rather than a speedup with a traditional circuit-switched network. To combat this problem, we propose hybrid circuit switching (HCS), a network design that removes the circuit setup time overhead by intermingling packet-switched flits with circuit-switched flits. Additionally, we co-design a prediction-based coherence protocol that leverages the existence of circuits to optimize pair-wise sharing between cores. The protocol allows pair-wise sharers to communicate directly with each other via circuits and drives up circuit reuse. Circuit-switched coherence provides up to 23% savings in network latency, which leads to an overall system performance improvement of up to 15%. In short, we show HCS delivering the latency benefits of circuit switching while sustaining the throughput benefits of packet switching, in a design realizable with low area and power overhead.

I. Introduction

As per-chip device counts continue to increase, many-core chip multiprocessors are becoming a reality. Shared buses and simple rings do not provide the scalability needed to meet the communication demands of these future many-core architectures, while full crossbars are impractical. To date, designers have assumed packet-switched on-chip networks as the communication fabric for many-core chips [11, 23]. While packet switching provides efficient use of link bandwidth by interleaving packets on a single link, it adds higher router latency overhead. Alternatively, circuit switching trades off poorer link utilization for much lower latency, as data need not go through routing and arbitration once circuits are set up.

For the suite of commercial and scientific workloads evaluated, the network latency of a 4x4 multi-core design can have a high impact on performance (Figure 1), while the bandwidth demands placed on the network are relatively low (workload details can be found in Table V). Figure 1 illustrates the change in overall system performance as the per-hop delay¹ is increased from 1 to 11 processor cycles. When a new packet is placed on a link, the number of concurrent packets traversing that link is measured (including the new packet)². The average is very close to 1, illustrating very low link contention given our simulation configuration (see Table IV). Wide on-chip network channels are significantly underutilized for these workloads, while overall system performance is sensitive to interconnect latency.

Fig. 1. Effect of interconnect latency (normalized runtime vs. per-hop delay for scientific and commercial workloads).

An uncontended 5-cycle per-hop router delay in a packet-switched network can lead to a 10% degradation in overall system performance. As the per-hop delay increases, either due to deeper router pipelines or network contention, overall system performance can degrade by 20% or more. Looking forward, as applications exhibit more fine-grained parallelism and more true sharing, this sensitivity to interconnect latency will become even more pronounced. This latency sensitivity, coupled with low link utilization, motivates our exploration of circuit-switched fabrics for CMPs.

Our investigations show that traditional circuit-switched networks do not perform well, as circuits are not reused sufficiently to amortize circuit setup delay. As seen in Figure 2, every application saw a slowdown when using traditional circuit-switched networks versus an optimized packet-switched interconnect for a 16 in-order core CMP (for machine configuration details see Table IV).

Fig. 2. Performance of traditional circuit switching (normalized cycle counts for Barnes, Ocean, Radiosity, Raytrace, SpecJBB, SpecWeb, TPC-H, and TPC-W).

This observation motivates a network with a hybrid router design that supports both circuit and packet switching with very fast circuit reconfiguration (setup/teardown). Our results show our hybrid circuit-switched network achieving up to a 23% reduction in network latency over an aggressive packet-switched fabric, leading to up to a 7% improvement in overall system performance due solely to the interconnect design. In Section IV, we will demonstrate that while our initial exploration of circuit switching has been predicated on the low network loads in our applications, our final design works well for high traffic loads as well, demonstrating the effectiveness of a design where packet-switched flits can steal bandwidth when circuits are unused.

As systems become more tightly coupled in multi-core architectures, co-design of system components becomes increasingly important. In particular, coupling the design of the on-chip network with the design of the coherence protocol can result in a symbiotic relationship providing superior performance. Our workloads exhibit frequent pair-wise sharing between cores. Prior work has also shown that processor sharing exhibits temporal locality and is often limited to a small subset of processors (e.g., [3, 7]). Designing a router architecture to take advantage of such sharing patterns can out-perform even a highly optimized packet-switched router.

Figure 3 illustrates the percentage of on-chip misses that can be satisfied by cores that recently shared data with the requester. The numbers on the x-axis indicate the number of sharers maintained in a most-recently-shared list and the corresponding percentage of on-chip misses that can be captured. For example, with the commercial workloads, the two most recent sharers have a cumulative 65% chance of sourcing data for the next miss. Such application behavior inspires a prediction-based coherence protocol that further drives up the reuse of circuits in our hybrid network and improves overall performance. The protocol predicts the likely sharer for each memory request so the requester can go straight to the sharer for data, via a fast-path circuit, rather than experiencing an indirection to the home directory node. Our simulations show this improves overall system performance by up to 15%.

Fig. 3. Percentage of on-chip misses satisfied by recent sharers (sharing vs. not sharing, by number of sharers maintained).

II. Hybrid Circuit-Switched Network

The key design goal of our hybrid circuit-switched network is to avoid the circuit setup delay of traditional circuit switching. As shown in Figure 4, our design consists of two separate mesh networks: the main data network and a tiny setup network.

The main data network supports two types of traffic: circuit-switched (CS) and packet-switched (PS). In the data network, there are C separate physical channels, one for each circuit. To allow for a fair comparison, each of these C channels has 1/C the bandwidth of the baseline packet-switched network in our evaluations. A baseline packet-switched network has a channel width of D data bits along with a log V virtual channel ID. Flits on the data network of our hybrid circuit-switched network are D/C wide, plus the virtual channel ID for the packet-switched flits (log V) and an additional bit to designate flit type (circuit- or packet-switched). A single setup network is shared by all C circuits.

We compare our design against a baseline packet-switched router, shown in Figures 5a and 5b. Under low loads, the router pipeline is a single cycle (Figure 5a), achieved through aggressive speculation and bypassing. If the input queue is empty and no other virtual channels are requesting the same output channel, the incoming flit can bypass the first two stages and proceed directly to switch traversal. This optimization effectively reduces the pipeline latency to a single stage, but only at very low loads (for additional detail see [16]). In the presence of contention due to higher loads, the packet-switched baseline degrades to a three-cycle pipeline (Figure 5b). To lower delay, we assume lookahead routing, thus removing the routing computation from the critical path; lookahead routing occurs in the first stage with the buffer write (BW). In the second stage, speculative virtual channel and switch allocation is performed; if speculation fails, virtual channel or switch allocation will be performed again in the next cycle. This single-cycle pipeline is based on an Intel design of a 4-stage pipeline [17], which found it not possible to fit the BW into the VA/SA stage and still maintain an aggressive clock of 16 FO4s. Each input contains 4 virtual channels with 4 packet buffers each.

¹This comprises router pipeline delay and contention.
²This measurement corresponds to channel load, a frequently-used metric for evaluating topologies [6].

This research was supported in part by the National Science Foundation under grants CCR-0133437, CCF-0429854, CCF-0702272, CNS-0509402, the MARCO Gigascale Systems Research Center, an IBM PhD Fellowship, as well as grants and equipment donations from IBM and Intel.
