ADAPTEVA: MORE FLOPS, LESS WATTS
Epiphany Offers Floating-Point Accelerator for Mobile Processors
By Linley Gwennap {6/13/11-02}

Most mobile-processor vendors focus on improving MIPS per watt, a common measure of power efficiency. Adapteva, however, has taken on a different problem: increasing floating-point operations per second (flops) per watt. The tiny startup has developed and tested a unique architecture that delivers industry-leading flops per watt. Although some people think floating-point (FP) performance is needed only in supercomputers and specialized signal-processing applications, this type of powerful FP engine could soon be coming to a smartphone near you.

Adapteva offers its Epiphany multicore architecture as an intellectual-property (IP) core that scales to various performance levels. The company rates its basic 16-core design at 19Gflops while drawing just 270mW (typical) when implemented in a 28nm LP process. At 2mm², this design would only modestly increase the cost of a typical mobile application processor. Configured as a coprocessor, Epiphany could deliver impressive FP capability within the power budget of a typical mobile device.

Why would a mobile device need such capability? Voice recognition can ease text entry on handheld devices with tiny or nonexistent keyboards, but the voice capabilities of most mobile devices are adequate only for simple command-and-control functions. For general speech-to-text capability, many advanced voice-recognition algorithms depend on floating-point performance. Services such as Google Voice Search decode the voice signal on a remote server, which has plenty of FP performance, but this approach adds latency and doesn't work if the network is inaccessible. Performing high-quality voice recognition directly on a mobile processor will require a new approach.

In a recent article, we discussed the trend toward visual computing in mobile devices (see MPR 5/30/11, "Visual Computing Becomes Embedded"). Visual computing can enable gesture-based gaming, advanced user interfaces, augmented reality, and even improved health and safety. Visual processing, however, requires many more flops than voice processing. Adapteva's architecture can deliver the performance required for visual computing.

Custom Architecture With FP Focus
To optimize its CPU for power, Adapteva started with a clean slate instead of a standard instruction set. Epiphany uses a simple RISC instruction set that focuses on floating-point operations and load/store operations; it omits complex integer operations such as multiply and divide. Each 32-bit entry in the 64-entry register file can hold an integer or a single-precision floating-point value.

As Figure 1 shows, the CPU itself is a simple two-issue design capable of executing one integer operation and one FP operation per cycle. The CPU relies on Adapteva's compiler to optimally arrange the instructions rather than reordering instructions in hardware. To minimize power and area, the design has no dynamic branch prediction, although its short (six-stage) integer pipeline keeps the misprediction penalty small. As a scalar integer design, the CPU achieves an EEMBC CoreMark score of about 1.3/MHz, a little less than that of an ARM9 CPU. By comparison, a modern high-performance CPU such as Cortex-A9 can achieve 2.9/MHz.

Figure 1. Epiphany CPU core and network design. Each core contains a simple CPU and a router for the mesh network. The network can be extended to a 64×64 array, although the initial implementation contains 16 cores.

Epiphany is optimized for floating-point programs, not an integer test like CoreMark. FP loads and FP stores count as integer operations, so the CPU can execute an FP calculation while loading data for the next calculation. The instruction set supports load and store double instructions that access two consecutive 32-bit registers, taking advantage of the 64-bit path from the SRAM to the register file. Using these instructions, the CPU can load two operands per cycle.

The single-precision FPU can execute one FP multiply-accumulate (FMAC) per cycle to achieve its peak rate of two FP ops per cycle. The FPU is not fully IEEE 754-compliant but uses standard data formats and rounding modes. It is optimized for FMAC operations and has a latency of four cycles. The instruction set also supports FP addition, subtraction, and multiplication but not complicated operations such as divide and square root. Lacking support for double precision, denorms, and division, the FPU is not suited to scientific or technical computing; it is tuned for signal processing.

The CPU core contains a single direct-mapped 32KB SRAM. Software is responsible for loading program instructions and data into this SRAM. The design also eschews memory management of any kind, implementing a flat 32-bit memory space without any protection. This approach eliminates both the die area and performance overhead of a traditional TLB.

As Figure 1 shows, Epiphany uses a tile approach to arrange the cores in a mesh network. This approach is similar to Tilera's (see MPR 11/5/07, "Tilera's Cores Communicate Better"). Each core can transfer 64 bits per cycle in each of five directions: north, south, east, west, and to/from the CPU. Using the mesh, each CPU can access the SRAM of any other CPU. With this approach, the 16-core design can be viewed as having 512KB of directly accessible SRAM, albeit with variable latency. The mesh design can be easily expanded to include additional cores.

For optimal performance, instructions and data must be preloaded into the CPU's local SRAM. Each core contains a DMA engine that can be configured to autonomously prefetch data under software control. The SRAM is divided into four 64-bit-wide banks, allowing it to ideally support an instruction fetch, a load, a DMA access, and an external access (e.g., from another core) on each cycle.

Designing for Low Power
Adapteva's goal is to maximize Gflops per watt, so simply creating a scalable FP architecture was not enough. The simplified CPU design yields many power savings compared with a typical high-performance design such as ARM's Cortex-A9. Epiphany's short pipeline reduces the need for latches and bypass logic, and its simpler instruction set avoids functions such as ARM's preshift that are difficult to implement in hardware. Unlike the A9, it does not waste power reordering instructions; having no legacy code base, Adapteva can rely on the compiler for this task. The flat unprotected address space eliminates the power that a memory-management unit (MMU) would consume.

A big power savings comes from the use of SRAM instead of cache. A cache burns power on each access to search through all the tags to find a match (or not). Multicore designs typically snoop the caches to maintain cache coherency, thus using more power. A cache must also determine when to transfer data to and from main memory and then perform such transfers. In a cacheless design, software takes on the burden of managing the SRAM and programming the DMA engine to handle the transfers. Some CPU cores can be configured with tightly coupled memory (TCM) instead of cache, achieving a similar power savings. Most designers prefer cache, however, because it simplifies the software.

In Epiphany's unified-memory design, programmers must also be concerned about SRAM-bank conflicts. Avoiding conflicts between instruction fetches and data loads can be challenging. When working with long data vectors, however, the data loads and the DMA accesses tend to naturally synchronize, using different banks on each cycle.

The mesh network reduces power compared with a traditional crossbar interconnect. All signals travel from one tile to its immediate neighbor, minimizing signal length and thus the drive current. These short signals also enable the network to operate at the same high clock speed as the CPU. The tile approach also simplifies physical design, as the designer can connect each tile to the next simply by placing them beside each other.

The downside is that transactions to cores other than immediate neighbors require multiple cycles to complete. In a 16-core mesh, the average number of hops is 2.625, assuming a completely random distribution of accesses. Optimized programming can greatly reduce this figure. For a read request, the latency is twice the number of hops, since the transaction must travel from the target to the source and back again. This latency is not guaranteed, since congestion on the mesh can cause delays.

Clock distribution often consumes a sizable portion of total chip power, because large drivers are required to minimize skew. For clock distribution, Adapteva chose a simple wave-propagation scheme, as Figure 2 shows. In this approach, clock skew accumulates as the signal progresses through the cores. Because the tiled design eliminates global signals, this accumulated skew is not important. The skew between any two neighboring cores is still minimal. In addition to reducing clock power, this method simplifies routing.

Epiphany also saves power by using the now-common techniques of extensive fine-grained clock gating, shutting off the clock to unused function units and entire […]

[…] company's first customer, will sell these chips in a DSP accelerator card.

Now Available for Licensing
With its new funding, Adapteva retargeted its 16-core design to GlobalFoundries' 28nm SLP technology. Moving from a high-speed 65nm process to a low-power 28nm process will boost the clock speed from 500MHz to 600MHz at maximum power efficiency.
