A Shared-Variable-Based Synchronization Approach to Efficient Cache Coherence Simulation for Multi-Core Systems

Cheng-Yang Fu, Meng-Huan Wu, and Ren-Song Tsay
Department of Computer Science, National Tsing Hua University, HsinChu, Taiwan
{chengyangfu, mhwu, rstsay}@cs.nthu.edu.tw

978-3-9810801-7-9/DATE11/©2011 EDAA

Abstract—This paper proposes a shared-variable-based approach for fast and accurate multi-core cache coherence simulation. While the intuitive, conventional approach (synchronizing at either every cycle or every memory access) gives accurate simulation results, it performs poorly due to huge simulation overheads. We observe that, in the proposed shared-variable-based approach, timing synchronization is needed only before shared-variable accesses in order to maintain accuracy while improving efficiency. The experimental results show that our approach performs 6 to 8 times faster than the memory-access-based approach and 18 to 44 times faster than the cycle-based approach while maintaining accuracy.

Keywords: cache coherence; timing synchronization

I. INTRODUCTION

In order to maintain memory consistency in a multi-core architecture, it is necessary to employ a proper cache coherence system. For architecture designers, cache design parameters, such as cache line size and replacement policy, need to be taken into account, since system performance is highly sensitive to these parameters. Additionally, software designers have to consider the cache coherence effect while estimating the performance of parallel programs. Obviously, cache coherence simulation is crucial for both hardware designers and software designers.

A cache coherence simulation involves multiple simulators, one for each target core. To keep the simulated time of each core consistent, timing synchronization is required. A cycle-based synchronization approach synchronizes at every cycle, as shown in Figure 1(a), and the overhead of such frequent synchronization heavily degrades simulation performance. At each synchronization point, the simulation kernel switches out the executing simulator, puts it in a queue ordered by simulated time, and then switches in the ready simulator with the earliest simulated time to continue execution. Highly frequent synchronization causes a large portion of the simulation time to be spent on context switching instead of the intended functional simulation.

As far as we know, existing cache coherence simulation approaches make a tradeoff between simulation speed and accuracy. For instance, event-driven approaches [6-9] select system-state-changing actions of the simulated system as events and keep these events executed in temporal order according to the simulated time instead of at every cycle. To execute events in temporal order, timing synchronization is required before each event, as shown in Figure 1(b). While a correct execution order of events clearly leads to an accurate simulation result, in practice not every action requires synchronization. If all actions are included as events without discrimination, the synchronization overhead can be massive.

Figure 1: (a) Simulation diagram of the cycle-based approach; (b) Simulation diagram of the event-driven approach.

As an example, since the purpose of cache coherence is to maintain the consistency of memory, an intuitive synchronization approach in cache coherence simulation is to perform timing synchronization at every memory access point. Each memory operation may incur a corresponding coherence action (according to the type of memory access, the states of the caches, and the specified cache coherence protocol) to keep local caches coherent.

To illustrate the idea, Figure 2 shows how coherence actions keep local caches coherent under a write-through invalidate policy. When Core 1 issues a write operation to address @, the data of @ in memory is set to the new value and a coherence action is performed to invalidate the copy of @ in Core 2's cache; the tag of @ in Core 2's cache is therefore set to invalid. Next, when Core 2 wants to read data from address @, it will know that the local cache line is invalidated and that it must obtain the new value from the external memory.

Figure 2: A cache coherence example for a two-core system based on a write-through invalidate policy.

Therefore, if timing synchronization is done at every memory access point, the cache coherence simulation will be accurate. However, Hennessy et al. [5] report that, in general, over 30 percent of the executed instructions of a program are memory access instructions. Hence, this approach still suffers from heavy synchronization overhead.

To further reduce synchronization overhead in cache coherence simulation, we propose a shared-variable-based synchronization approach. As we know, coherence actions are applied to ensure the consistency of shared data in local caches. In parallel programming, variables are categorized into shared and local variables. Parallel programs use shared variables to communicate or interact with each other. Therefore, only shared variables may reside in multiple caches, while a local variable can reside in only one local cache. Since memory accesses to local variables cause no consistency issue, the corresponding coherence actions can be safely ignored in simulation. Based on this fact, we can synchronize only at shared-variable accesses and achieve better simulation performance while maintaining accurate simulation results. As confirmed and verified by the experimental results, the simulation results are accurate and the speed is 6 to 8 times faster than synchronizing at every memory access and 18 to 44 times faster than the cycle-based approach.

The rest of this paper is organized as follows. Section II reviews related work. Section III explains the idea of using shared variables in cache coherence simulation. Section IV describes implementation details of our simulation approach. Experimental results are presented and discussed in Section V. Finally, our conclusion and future work are summarized in Section VI.

II. RELATED WORK

In this section, we first review cycle-based and event-driven simulation approaches for multi-core simulation. In addition, we review the timing-adjustment approach, a variation of the trace-driven approach that tries to reduce synchronization overhead relative to the cycle-based and event-driven approaches by adjusting only timing but not event points.

A. Cycle-Based Simulation

Cycle-based approaches [1-4] (or lock-step simulation approaches) synchronize each single-core simulator at every cycle. All cache data accesses and coherence actions are simulated in a precise temporal order, and simulation is guaranteed to be accurate. However, as discussed previously, the issue is that heavy synchronization overhead slows down the simulation. In practice, the performance of cycle-accurate simulation is observed to be only about 1 to 3 MIPS (million instructions per second). A variation of the cycle-based approach is to synchronize every N cycles instead of every cycle [3-4]. Nevertheless, the improved simulation performance comes with a penalty of greater inaccuracy.

B. Event-Driven Simulation

Another approach is event-driven simulation [6-9], also known as the discrete event simulation approach. Instead of synchronizing at every cycle, the event-driven approach synchronizes only where events occur and makes sure they are executed in a strictly chronological order.

1) Cache coherence simulation

Among our surveyed event-driven approaches, Ruby and Timepatch [6, 9] focus on handling cache coherence simulation in multiprocessor systems.

Ruby [6] is a well-known event-driven multiprocessor memory system timing simulator. It performs synchronization at the next scheduled message-passing event in the queue instead of at every cycle. In practice, this approach may generate too many events, close to the number generated by lock-step simulation, and perform poorly when applied to cache coherence simulation.

Timepatch [9] exploits the fact that the functional and timing correctness of a simulation can be ensured by executing timing synchronization at user-level synchronization operations, such as locks. By keeping the order of user-level synchronization operations, the simulated execution can faithfully reproduce the actual execution on the target parallel machine. However, this approach cannot capture data races on unprotected shared variables and can lead to inaccurate results.

2) General multi-core simulation

The remaining event-driven approaches focus on different issues in multi-core simulation [7-8]. These approaches are either inefficient or cannot guarantee accuracy when applied to cache coherence simulation.

CATS [7] is a technique that accurately simulates bus transfers in MPSoCs by synchronizing at the boundaries of data transfers, or equivalently at granted memory accesses, on the bus. In general, frequent data transfers generate a huge number of synchronization points and hence slow down simulation performance for this approach.

The data-dependence-based synchronization approach [8] was the first elaborated shared-memory synchronization approach that could maintain

A. Execution Order

In cache coherence simulation, it is crucial to know the
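To make the write-through invalidate walkthrough concrete, the following is a minimal sketch of that policy for a small multi-core model. It is an illustration only, not the paper's simulator; the class and method names (Memory, Cache, read, write, peers) are hypothetical.

```python
class Memory:
    """Backing store shared by all cores."""
    def __init__(self):
        self.data = {}

class Cache:
    """A per-core cache under a write-through invalidate policy."""
    def __init__(self, memory, peers=None):
        self.memory = memory
        self.lines = {}          # addr -> (valid, value)
        self.peers = peers or [] # the other cores' caches

    def write(self, addr, value):
        # Write-through: update external memory immediately.
        self.memory.data[addr] = value
        self.lines[addr] = (True, value)
        # Coherence action: invalidate every other cached copy of addr.
        for peer in self.peers:
            if addr in peer.lines:
                peer.lines[addr] = (False, peer.lines[addr][1])

    def read(self, addr):
        valid, value = self.lines.get(addr, (False, None))
        if not valid:
            # Invalid or missing line: fetch the fresh value from memory.
            value = self.memory.data[addr]
            self.lines[addr] = (True, value)
        return value
```

Replaying the scenario from the figure: after Core 2 has cached @, a write to @ by Core 1 marks Core 2's line invalid, so Core 2's next read refetches the new value from external memory.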
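The queue-based timing synchronization described in the introduction (switch out the running simulator, enqueue it by simulated time, resume the one with the earliest time) can be sketched as follows. This is an illustrative toy, not the paper's kernel: each core simulator is modeled as a generator that yields its simulated time at every synchronization point.

```python
import heapq

def core(sync_times):
    """A toy core simulator that synchronizes at the given simulated times."""
    for t in sync_times:
        yield t

def run(simulators):
    trace = []                      # (core id, simulated time) in executed order
    queue = [(0, i, s) for i, s in enumerate(simulators)]
    heapq.heapify(queue)
    while queue:
        _, i, sim = heapq.heappop(queue)    # switch in the earliest simulator
        try:
            t = next(sim)                   # run until its next sync point
        except StopIteration:
            continue                        # this simulator has finished
        trace.append((i, t))
        heapq.heappush(queue, (t, i, sim))  # switch out, re-queue by time
    return trace
```

For example, run([core([1, 3]), core([2, 4])]) executes the synchronization points of the two cores in temporal order, [(0, 1), (1, 2), (0, 3), (1, 4)], at the cost of one context switch per event; this is exactly the overhead that grows massive when every action is treated as an event.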

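The efficiency argument behind the shared-variable-based approach can be made concrete with a back-of-the-envelope count of synchronization points per policy. The trace format below (a list of (address, is_shared) pairs) is hypothetical and assumes accesses have already been classified as shared or local.

```python
def sync_points(trace, policy, total_cycles=0):
    """Number of timing synchronizations each policy would perform."""
    if policy == "cycle-based":           # synchronize at every cycle
        return total_cycles
    if policy == "memory-access-based":   # synchronize at every memory access
        return len(trace)
    if policy == "shared-variable-based":
        # Only shared variables can reside in multiple caches, so only
        # their accesses can raise coherence issues and need synchronization.
        return sum(1 for _, is_shared in trace if is_shared)
    raise ValueError("unknown policy: " + policy)
```

On a trace where, say, 3 of 10 accesses touch shared variables, the shared-variable-based policy synchronizes 3 times versus 10 for the memory-access-based policy, and far fewer than the cycle-based policy, which pays once per simulated cycle.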