Behind the Numbers: Is the Core Meltdown Just a False Alarm?

By Dave Bursky*

As time-to-market pressures continue to mount, the companies that are designing complex application-specific integrated circuits (ASICs) are licensing more and more intellectual property (IP). They hope to shorten their design cycles by using this IP, which comprises embedded CPU cores, memory blocks, MPEG encoders/decoders, USB interfaces, Ethernet ports, and more. Yet in the consumer market in particular, products tend to have short lives. As a result, designers are looking for ways to extend the life of the ASIC solutions that they’re crafting.

One approach that many companies are adopting is to embed programmable CPU cores that can be tailored to handle specific tasks. In last month’s issue of Chip Design Trends, Erach Desai’s analysis did note a decline in the use of embedded processor cores. Yet that decline may be a temporary market condition, or it could be a conclusion based on an interpretation of the data with which the semiconductor intellectual-property vendors don’t agree. “It’s very hard to really calculate the use of CPU cores,” explains Richard Wawrzyniak, Senior Analyst for ASICs at Semico Research Corp., a market research company based in Phoenix, Ariz. “There are many ways of counting the cores and what, for example, constitutes a CPU versus a dedicated processor for audio or video.”

That view is echoed by Steve Leibson, the Technology Evangelist at Tensilica Inc., a provider of CPU-core IP in Santa Clara, Calif. Tensilica provides generic CPU cores that the customer can configure, as well as a few application-optimized cores in its recently released Diamond series that are dedicated to audio and video processing. These application-optimized cores are based on the company’s Xtensa CPU core but include application-specific instructions to handle specific audio or video operations. “The question is, do these cores get counted as CPU cores or as function-specific blocks of IP?” There is no absolute answer, and thus each analyst may interpret the data differently.

The traditional view of the ASIC or system-on-a-chip (SoC) approach has changed over the last decade, explains Steve Roddy, Tensilica’s Vice President of Marketing. In 1997, the common approach included a general control processor and dedicated hardwired logic to perform audio, video, and other support functions. In 2007, however, the small features of today’s processes allow very area-efficient, high-performance CPU cores to perform many tasks that were previously done by hardwired logic (see the Figure). Each processing block can execute its own firmware, thereby allowing designers to change or enhance the functions in each block independently just by changing the firmware. This approach eliminates costly silicon respins and can even allow field upgrades.

In another example of core counting, ARM Ltd. of Cambridge, England offers its ARM11 MPCore, explains Dave Steer, Director of Segment Marketing. “This core actually includes anywhere from one to four CPUs, but is treated as a single block of IP. But how does it get counted—as a single core or as multiple cores? The debate will continue for years as to how to count such blocks of IP.” While also looking at parallel CPU approaches, designers at ARC International have developed single-instruction/multiple-data blocks of IP based on the company’s ARC750 processor core. ARC also offers several application-optimized cores targeted at media-processing and audio applications.

What ARM does see, though, is an increase in the number of instances of its processor being used on new ASIC designs. “The small size and low power of the latest cores in the Cortex family, for example, allow designers to use multiple cores—each tackling an independent task on the ASIC. By using programmable cores, designers can also ‘future-proof’ their ASIC silicon since changes to the firmware that executes on these deeply embedded CPUs can be downloaded to upgrade the functionality or fix a bug.”

ARM also has been enjoying an increase in the number of new licensees—although that rate of increase has slowed since the company already has licenses with most major companies, said Steer. The quarter-by-quarter results for the last three quarters of 2006, for example, show increases in the total number of licenses of 21, 14, and 15 for Q2, Q3, and Q4, respectively (see the Graph). These licenses are split between new licenses, derivative licenses, and upgrades of existing licenses. ARM splits these numbers even finer by looking at multi-use, per-use, and term licenses (not shown). According to Steer, about one-third are new licensees. About one-quarter of all the new licenses are from customers upgrading their existing license arrangements to use some of the new-generation cores, either to replace the older cores or to use them in conjunction with the older cores.

Another IP provider experiencing an increase in licenses, MIPS Technologies Inc., has seen its licensing revenue increase about 13% quarter over quarter with nine new license agreements and seven new customers. The company’s totals are now 117 licensed customers and almost 200 license agreements. In its fiscal second quarter of 2007, the company recorded shipments of 89 million units—up 36% year over year—with a total of 330 million units shipped in the prior four quarters. It also is seeing strong acceptance of the MIPS32 24K family cores, which now have 33 licensees.

The use of multicore architectures also has exploded in the entertainment market. Each of the latest game consoles employs a multicore solution. Microsoft’s Xbox 360 and the Nintendo Wii both employ PowerPC processor cores, while the Sony PS3 has a processor with up to nine cores (a Power processor and eight identical programmable single-instruction/multiple-data processing engines). Cell phones also are moving to multiple-CPU-core solutions. ARM already estimates that, on average, there are about 1.5 ARM processors in each cell phone. As the phones include more and more functionality, they’ll have to include more processors.

In such handheld applications, power consumption is a major concern. Chip power is directly related to operating frequency, so the lower the frequency, the lower the power consumption and thus the longer the battery life. One way to add more functionality is simply to run a processor faster so it can do more work. In battery-powered systems, however, that approach yields diminishing returns: the higher speed translates into higher power consumption and thus shorter battery life. The solution comes from the old military tactic of “divide and conquer”: use multiple “slower” CPU cores to split the compute tasks into smaller blocks. Running the CPUs slower reduces the power consumption. For example, two cores running at 100 MHz can use less power than one core running at 200 MHz—even though more logic might be needed to implement the multiple cores.
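To make the arithmetic behind that trade-off concrete, here is a minimal sketch using the classic first-order CMOS dynamic-power model, P_dyn ≈ α·C·V²·f. The capacitance, voltage, and activity values below are purely illustrative assumptions, not figures from ARM or any other vendor; the point is only that halving the clock frequency typically lets the supply voltage drop as well, so two slower cores can undercut one fast core on power.

```python
# A minimal sketch of the "two slow cores vs. one fast core" power argument,
# using the first-order CMOS dynamic-power model P_dyn = alpha * C * V^2 * f.
# All numbers below are illustrative assumptions, not vendor data.

def dynamic_power(c_eff, v_supply, freq_hz, activity=1.0):
    """Estimate dynamic switching power in watts."""
    return activity * c_eff * v_supply ** 2 * freq_hz

C_EFF = 1e-9  # assumed effective switched capacitance per core, in farads

# One core at 200 MHz usually needs a higher supply voltage to meet timing.
single_core = dynamic_power(C_EFF, v_supply=1.2, freq_hz=200e6)

# Two cores at 100 MHz can close timing at a lower supply voltage,
# even though roughly twice as much logic is switching.
dual_core = 2 * dynamic_power(C_EFF, v_supply=1.0, freq_hz=100e6)

print(f"1 x 200 MHz core : {single_core * 1e3:.0f} mW")  # ~288 mW
print(f"2 x 100 MHz cores: {dual_core * 1e3:.0f} mW")    # ~200 mW
```

Even if the voltage cannot be lowered at all, the two-core split is roughly power-neutral for the same total throughput in this simple model; the real savings come from the V² term, which is why partitioning work across slower cores pays off in battery-powered designs.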

Software for all the embedded programmable engines is another major concern, notes Wawrzyniak. New software tools are needed to allow programmers to rapidly develop code for all of the on-chip processors. In addition to the tools, such as compilers and debuggers, operating systems must be designed to handle the high levels of parallelism. ASIC design teams are increasingly becoming dominated by software engineers, observes Wawrzyniak. Already, many companies report that more than half of their development teams consist of software engineers. These engineers must develop the firmware and application software needed to execute the intended applications—everything from soft codecs for audio to image processing and much more.

In addition to the CPU cores used in SoC designs, there also is a growing market for embeddable CPUs in field-programmable-gate-array (FPGA) fabrics. Although a few FPGA vendors have integrated “hard” CPU cores into the FPGA silicon, the most popular approach is to use “soft” cores. These register-transfer-level (RTL) descriptions of the processor are integrated into the rest of the logic and then synthesized with all the other logic. The resulting configuration can be downloaded into the FPGA configuration memory as a bitstream.

FPGA vendors, such as Actel, Altera, and Xilinx, have developed their own soft-processor cores. They also have allowed third-party CPU IP to be incorporated into the logic configuration. The use of the FPGA vendors’ home-grown soft cores is hard to track, as there are no royalty fees and customers typically don’t tell the world what they’re doing inside the FPGAs. With that said, however, Altera has gathered some statistics: many of its customers are using more than one instance of its Nios II family of embedded processor cores.

Is there a slowdown in the use of embedded CPU cores? I don’t think so. Perhaps there’s a small lull as companies transition from generation to generation of the cores. Overall, however, the momentum seems to be sustained.

Figure captions: 1. ASIC designs are transitioning from the use of a single control processor (a la the 1990s) to the use of distributed processing power in today’s complex SoC designs (source: Tensilica Inc.). 2. The continuing growth in new-processor IP licenses at ARM Ltd. shows a decrease in new licenses in the last two quarters. But many customers are taking out additional licenses for the company’s next-generation cores.

Figure 1 is the diagram from page 3 of Steve Roddy’s presentation at the global press summit. Figure 2 is a graph I am creating.

*Dave Bursky is a Contributing Editor for Chip Design and Chip Design Trends. Bursky also is the Technical Editorial Manager at Maxim Integrated Products Inc. in Sunnyvale, Calif.
