March 2013 / Volume 3, Issue 3

www.electronicspecifier.com Is it safe?

Can open source software deliver greater security through complete visibility?

In this issue:
• Improving detection with future protection
Hi-Rel: • Opening up to hardware independence • Going to extremes to protect components • At the heart of the problem • Space-ready for terrestrial deployment • We’ve got it covered • Certifying to IEC 61508
Communications: • Ensuring code quality • Why Ethernet is getting everywhere • Is open source software the safest choice?
Interconnect: • When the going gets tough... • Why reliability is getting smaller
Processors & FPGAs: • Making room for multi-core MCUs


Could the mandatory adoption of open source software remove the last hiding place for security problems?

• Model-based design for MCUs
• Small cell deployment from MWC
• NFC gets renewed input from GSMA
• MCU ranges extended for greater coverage

• EU drives patent unification
• Partnership pushes pre-tested platforms
• Resistance is useful!
• NIBS

Software as a Security: can embedded software alone provide the level of security now necessary for the Internet of Things?

Electronic Specifier Design | March 2013 | 3

Contents
Is it Safe? (p10) – Can open source software deliver greater security in critical systems?
Space-ready goes terrestrial (p14) – How to achieve high reliability in embedded memory.
Certifying to IEC 61508 (p16) – Software now forms the underlying technology for functional safety.
Ensuring code quality (p22) – The automotive sector is driving debug tool development in a bid to guarantee reliability.
Ethernet is Everywhere! (p28) – Where can Ethernet go next?
Make room for the many (p32) – Solving real-time problems with multicore.
Improving detection (p36) – Delivering future-proof innovation to the most demanding customers.
Hardware independence (p40) – How OpenCL can overcome hardware demands to provide engineers with a greater degree of design freedom.
Going to extremes (p43) – How do sensitive components survive extreme environments?
The heart of the problem (p44) – How MSOs can help test and debug the latest FPGAs.
We’ve got it covered (p48) – MCU manufacturers look at both the highs and the lows.
When the going gets tough (p50) – The need for efficiency is driving connector design innovation.
Reliability gets smaller (p52) – Connectors take on extremes.

Software as a Security?

There’s a lot of activity in the IT world revolving around Software as a Service (SaaS), which is, of course, closely related to (and dependent upon) cloud-based services, a trend that has commoditised servers in favour of software. While the same model would be difficult to emulate in the embedded space, due to its inherent dependence on hardware, there is growing demand to provide greater levels of security in that hardware. Given that security risks are always changing, is there now cause to develop software-based security countermeasures that can be embedded and perpetually updated?

The security risks ‘hidden’ in embedded software are now beginning to focus the attention of the industry at large, and formed a major theme at this year’s Embedded World Exhibition & Conference. This issue takes a closer look at security and software, and how the two may be more synonymous than previously expected.

Philip Ling, Editor, Electronic Specifier Design

ElectronicSpecifier Ltd.
Ellion House, 6 Alexandra Road, Tonbridge, Kent TN9 2AA, UK

Editor: Philip Ling
Email: [email protected]
Tel: +44 (0)1622 679 221

Contributing Editors: Sally Ward-Foxton, Vanessa Knivett

Advertising: Tim Anstee
Email: [email protected]
Tel: +44 (0)1732 366 624
Fax: +44 (0)1732 366 052

Publisher: Steve Regnier
Email: [email protected]
Tel: +44 (0)1732 366 617
Fax: +44 (0)1732 366 052

Subscriptions: Subscribe to the Digital Edition of ElectronicSpecifier.
Distribution Blog: Find out what’s happening in the world of distribution.

Model-based design for MCUs
Developers using MCUs will now benefit from code generation, debug and modelling tools from Mathworks, following the collaborative efforts of ST and ARM. The result is that developers can create algorithms in MATLAB and Simulink, and then target, integrate, debug and test those models in a Processor in the Loop (PIL) simulation. The C code generated by the tool will run on ST’s STM32 Cortex-M based MCUs.
“Based on strong customer demand for MATLAB and Simulink support, ST aggressively developed the capabilities to go beyond pure Cortex-M processor support by creating additional peripheral blocks, further simplifying the PIL process,” said Michael Buffa, General Manager, … Division, STMicroelectronics. Buffa also believes that enabling DSP-standard tools to run on ST’s Cortex-M based products will allow OEMs to target a wider range of applications.
“This initiative enables developers to easily and efficiently develop and explore numerous models before generating optimised code for their Cortex processor-based projects,” said Richard York, Director of Embedded Systems Marketing, ARM.
Mathworks’ Design Automation Marketing Director, Paul Barnard, described the effort as an important first step in providing a smooth design flow for engineers using model-based design to develop with MCUs.

Increased performance
Renesas has increased the performance of its RX200 range of 32-bit MCUs with the addition of the RX210 group. It pushes the proprietary core’s maximum clock frequency up to 50MHz, compared to 32MHz for the existing RX220 family. This delivers a maximum of 78DMIPS for the RX210, as opposed to 50DMIPS for the RX220. As well as a number of timers — including motor control timers — the RX210 MCUs feature on-chip safety and reliability functions for supporting IEC 60730.

Small cell deployment
At this year’s Mobile World Congress, Maxim Integrated and Freescale jointly demonstrated a field-deployable LTE/3G picocell base station in a passively cooled enclosure. It brings together Freescale’s QorIQ Qonverge — which features two Power Architecture e500 cores and two StarCore DSP cores — with Maxim’s RF to Bits transceiver, and targets outdoor or corporate small cell deployments. The demonstration is field deployable, but will also serve as a reference design for OEMs.

Upscaling
Toshiba Electronics Europe has added the Cortex-M4 to its portfolio of ARM-based MCUs, which until now has been restricted to the Cortex-M3 core. The latest devices employ the -M4 with floating point and DSP extensions, as already adopted by IDMs targeting the industrial control sector. However, Toshiba cites other applications for the TMP440, including computer peripherals and sensor systems.

Near-field
In a bid to build momentum behind NFC-based systems worldwide, NXP is now collaborating with the Mobile World Capital Barcelona Foundation, launching the m-Knowledge Centre. It will become a global competence team of technical experts offering advanced technical training services, application and design support, project management and technological consulting in all areas related to contactless and NFC applications. With the support of the GSMA, the initiative will target the entire mobile ecosystem and provide support for anyone looking to deploy an NFC service.




Unifying patents
Industry ministers from 24 EU member states, including the UK, France and Germany, have signed an agreement pushing for a central European patent system. Once ratified, the agreement will enable the establishment of a single court system to handle the enforcement and validity of patents. It will also allow the regulations creating the Unitary Patent, adopted in December 2012 by the European Parliament, to come into force.
At least 13 member states — including the UK, France and Germany — must ratify the agreement, under the terms of which London would become the base for a new central division of the unified patent court, which would be responsible for handling disputes relating to pharmaceutical patents. The other central divisional courts will be based in Paris and Munich.
“The decision that London should host this new court shows not only the confidence in our legal sector but also the strength of the UK’s intellectual property regime,” Business Secretary Vince Cable said in a statement. “Agreement on a unified patent regime is a good result as it will mean defending a patent across Europe will now be much simpler.”

Pre-tested partnership
DSM Computer has formed a partnership with Real-Time Systems, in order to offer compact embedded systems with pre-tested, ready-to-use software packages. It will see the RTS Hypervisor being installed on selected DSM multi-core systems, allowing a real-time application to run alongside a GUI on a single platform.
The hypervisor allows distributed operating systems to run, unchanged, on individual cores of a multi-core processor. This includes standard operating systems such as Windows 8 running alongside real-time operating systems without impacting the performance of either, even if one crashes. The first DSM product with the RTS hypervisor is the Galaxy G4-QM57 control cabinet PC, which has been designed specifically for harsh environments.

App-proved
Rohde & Schwarz has introduced what it believes is the industry’s first integrated end-to-end test solution that allows operators to monitor the behaviour of smartphone applications, using deep packet inspection, to ensure they’re not misusing resources. The combination of RF and application testing in a single instrument will reveal the performance of wireless communications and mobile phone networks.

IP acquisition
Targeting the ‘Internet of Things’ amongst other markets, Cadence has announced it is to purchase privately owned IP developer, Cosmic Circuits. Cosmic’s technology and talent will improve Cadence’s position as a provider of analogue and mixed signal IP. The IC and Systems business of Cosmic Circuits will be spun off as a separate company to existing shareholders of Cosmic Circuits.

No resistance
Building on its existing range, andersDX has launched a new open-frame touch display platform, powered by TI’s dual-core OMAP processor and featuring a 4-wire resistive touch-screen. The platform is powered by a fanless SBC based on TI’s DM3730, which matches an ARM Cortex-A8 with TI’s own TMS320C64x DSP core, meaning it can tackle the most graphic-intensive applications. The DX2 supports 2D/3D graphics with OpenGL, and a software development kit helps developers create software applications with little or no knowledge of driver development, which could help manufacturers more easily add embedded computing with a colour touch interface.


Hi-Rel

Going to extremes
How do sensitive electronic devices survive the increasingly hostile environments experienced by downhole drilling equipment? Sally Ward-Foxton investigates.

With the increased need for deeper and deeper oil and gas wells, the punishing environments that drilling electronics has to survive are also getting more extreme. The recent emphasis on HPHT (high pressure, high temperature) wells has driven ambient temperatures from 150-175°C up to 200°C and beyond, while pressure in these wells can exceed 25kpsi.
“Couple the extreme temperature and pressure with the high shock and vibration encountered during a drilling operation and you can see why the downhole industry terms operations in these harsh environments ‘hostile’,” says Jeff Watson, systems applications engineer in the Industrial and Instrumentation strategic business segment at Analog Devices. Watson has previously worked as a design engineer in the downhole oil and gas instrumentation industry, and describes the environment encountered in downhole drilling of HPHT wells as one of the most severe on the planet.
Electronics used in these extreme conditions is required to help maximise well productivity; drilling operators need accurate information about conditions downhole.
“During a drilling operation, electronics and sensors are used to monitor the health of the equipment downhole,” Watson says, explaining that the application can broadly be split into measurement while drilling (MWD) and logging while drilling (LWD). MWD includes such tasks as directional drilling, in which precision geosteering instrumentation guides the drill to an exact geologic target to optimise oil production. LWD includes electronics that logs data on geologic formations around the drill. “Sensors located near the bit while drilling can give us much finer resolution information than we can obtain using surface seismic techniques,” he says.

Figure 1. During a drilling operation, electronics and sensors are used to monitor the health of the equipment downhole. The application can broadly be split into measurement while drilling (MWD) and logging while drilling (LWD).

So, how does the electronics survive these harsh temperatures and pressures to provide reliable information about downhole conditions?

Process technologies
Manufacturers have developed special process technologies to avoid the performance and reliability issues caused by high temperature environments. Most use proprietary variations of silicon on insulator (SOI) technology, in which a layer of SiO2 is used as a dielectric to prevent parasitic leakage into the substrate (see Figure 2).

Figure 2. A junction isolated process vs a silicon on insulator process. The black arrows show the paths for current leakage.

The need for reliability in these systems almost goes without saying: a failed electronics assembly on a drill string operating 10,000 feet underground can take more than a day to retrieve and replace, and the rate for operating a complex deep-water offshore rig can be more than US$500,000 (€374,000) per day.
Standard silicon devices used in this environment would not last long. Watson explained that device performance can suffer tremendously from the effects of increased parasitic substrate leakage current that is characteristic of standard junction-isolated processes. Variation in device parameters (such as VT, β and VSAT) can also degrade performance, often in unpredictable ways in complex devices.
As an example, Analog Devices’ AD8229 low noise instrumentation amplifier was designed from the ground up for operation up to 210°C. It uses the company’s proprietary bipolar SOI process and has low 1nV/√Hz input noise, 0.1µV/°C input offset voltage drift, and very high common mode rejection. “The process is modelled above 200°C, which allowed the IC designers to manage the variation of device parameters and achieve extremely high performance operation at high temperature,” says Watson.
“Remember that the performance of standard ICs is not characterised nor production tested at elevated temperature, so when using these devices beyond datasheet ratings there can be significant performance variance between production lots or after an IC process change,” he adds.
At the silicon level, electromigration can also cause problems: the movement of metal atoms in the die interconnect, due to current flow and high temperature. “The displacement of the metal atoms can either form voids, which can cause open-circuits, or ‘hillocks’, where the atoms pile up and can cause shorts to adjacent interconnects,” he says. “The IC designer must follow design rules to size interconnect appropriately for a given current and temperature. If the designer is not using rules that take into account HT operation it can lead to interconnect failure.”

Non-volatile Flash
Mont Taylor, sector manager for high temperature products and known good die (KGD) at Texas Instruments, explains that TI uses many different process technologies to support its high temperature product line, including SOI. “Extensive silicon qualification, characterisation, and packaging all allow us to specify parts to higher temperatures. We also make modifications to several die at the mask level to enable high temperature operation,” he says.
The company recently launched the SM28VLT32-HT (see Figure 3), which it says is the industry’s first non-volatile Flash memory device for high temperature operation (up to 210°C).
“The SM28VLT32-HT has an operational capacity of 4Mbyte, and eliminates the need for


costly up-screening and qualification testing of industrial-grade components for temperature ranges outside data sheet specifications,” says Taylor, adding that the device is intended for data logging at extreme temperatures and is guaranteed for at least 1,000 hours of operating life in the harsh environment of downhole drilling.

Figure 3. Parts like TI’s SM28VLT32-HT can eliminate the need for costly up-screening and qualification testing.

Of course, making sure the silicon will work at high temperatures is only half the battle. The device’s packaging and interconnects are equally important. Standard silicon devices’ plastic packaging is not up to the job, leaving two main options: ICs supplied in ceramic packaging, or ICs supplied as ‘known-good’ bare die for incorporation into specially designed multi-chip modules (MCMs).
“Supplying known good die enables customers to build multi-chip modules to ease integration,” says Taylor. “Downhole tools are very space limited and many customers want the ability to build their own multi-chip modules with the HT functions available in die form. Some HT boards are less than one inch wide, so MCMs are essential to keep electronics as small as possible.”

Multi-Chip Modules
High temperature specialist RF2M Microelectronics (part of API Technologies) has developed manufacturing processes that allow its MCMs to withstand the ultra high temperatures, as well as the shock and vibration levels, found behind the drill bit. Bob Hunt, Head of Engineering at RF2M Microelectronics, explains that the company uses high performance semiconductor technologies such as silicon on insulator, silicon on sapphire, gallium nitride and silicon carbide alongside conventional silicon, which in some cases are tested at API’s independent test house facility for suitability to the extended temperature requirements.
“Typical levels which are achieved are operating temperatures of -40 to 230°C, storage temperatures of -55 to 250°C, and temperature cycling, shock and vibration testing at the ultra high temperatures, for each axis; all these aspects are tested and verified onsite,” he says.
Hunt said that while standard plastic-packaged silicon die with soldered terminations could be used on a PCB for downhole drilling applications, this is only good up to 150-175°C. “These [solutions] are inherently lower reliability due to degradation of the PCB over time at temperature, corrosive out-gassing of the plastic package and intermetallic incompatibility,” he says.
A ‘mix and match’ range of thick-film materials, conductive and non-conductive adhesives, wire bond thicknesses and metal types, and hermetic package variants are used to achieve the levels of mechanical robustness required in these challenging environmental conditions. Thick-film based ceramic hybrid MCMs give significant benefits in reliability by using known good die, ‘clean’ adhesive materials and matched precious metal interconnects. They also offer physical size benefits compared to PCB solutions. An alternative to solder must also be used, of course.
“Wherever possible within an API hermetic thick-film hybrid multi-chip-module (MCM), soldered connections are avoided,” he says. “‘Clean’ conductive and non-conductive materials are used to make the mechanical and electrical connections, avoiding the corrosive out-gassing that affects sensitive components. Where soldering is essential, we use matched ‘high melting point’ solders to ensure robust, reliable connections at 250°C.”


Hi-Rel

‘Space-ready’ memory goes terrestrial
What are the embedded memory options when your application is high-rel? Vanessa Knivett investigates.

Demand for high-reliability and high-availability electronic systems has expanded beyond the realms of traditional military, aerospace and outer space applications. Today, as intelligence and functionality increase in applications like communications infrastructure, industrial, medical and transportation, high reliability and high availability components are required as much on terra firma as in space.
Shrinking critical dimensions are making electronic components more vulnerable to single-event upsets, a phenomenon that was formerly restricted to space electronics. Whereas a radiation-induced error might result in a satellite failing to broadcast, a single-event upset could lead to a telecom router shutting down or a car’s power system locking up.
At the same time, these terrestrial applications are typically subject to tighter cost and design time restraints than their military, aerospace or space-going counterparts, and are made in orders of magnitude greater volume. Which is why certain consumer technologies — namely memory — have had to get increasingly ‘space-ready’.

Non-volatile memory
“Embedded Flash and embedded nonvolatile memories make high performance and high reliability applications in automotive and other high-rel markets possible,” states Web-Feet Research in a report that predicts high growth for embedded Flash MCUs. With its low power requirements and some security features, embedded Flash has many qualities to recommend it, and in recent years manufacturers have made great strides in addressing weaknesses of the past, namely reliability and high cost. One of these manufacturers is STMicroelectronics, who, together with Freescale, developed the world’s first MCU certified to the ISO 26262 Functional Safety Standard, which incorporates embedded Flash. Most recently, ST has implemented 55nm eFlash technology into MCUs destined for automotive applications such as engine management and transmission, car body controllers and safety/ADAS.
Martin Duncan, ST’s Marketing Manager, Microcontrollers for Automotive, explains how high reliability requirements have impacted ST’s eFlash developments. “In the last five years, high reliability has become of big significance to us. The standard we qualify to remains AEC-Q100, but it’s not nearly enough to address high reliability needs. We are governed by functional safety requirements, which broadly means the absence of unreasonable risk caused by malfunctioning of the circuit. But the nature of risk has changed from being fault tolerant to fail-safe. Whereas in the past, high reliability was about making ‘the perfect device’, now it’s about making them as good as you can and being tolerant to failure, i.e. implementing strategies to avoid both systematic and random failures during the operation of the device.”
If the nature of risk has changed, have the causes multiplied? Not necessarily, says Duncan: “In the past, failures were usually hard; i.e. something breaks. But soft failures are many times more probable.” Soft errors first became known in the 1970s with the introduction of dynamic RAM. Alpha particle emission caused by radioactive contaminants in chip packaging led to lost data bits. Package contaminants are now under control, but other sources have since been proved to be at work. These include cosmic rays that shower energetic neutrons, which can cause soft errors in circuits, especially at altitude. Latterly, the move to smaller technology nodes has made single-event upsets more frequent. Hence the industry’s quasi-acceptance of failure, and its focus on strategies to live with it.

Rad-hard
For ST’s high reliability MCUs, this means making them radiation hardened, including the RAM and the Flash. Duncan recounts: “In the memory, we use error

correction code and an error detection circuit for the error correction! In essence, this means that the original data is compared with the data that has been corrected, which is then decoded and re-encoded so that, one step later, all three data sources can be compared to make sure we have the same result.”
ST also implements an end-to-end correction scheme, and in the device itself there are lots of replicated parts. Notes Duncan: “For critical paths, flip-flops are triplicated with binary two-out-of-three voting at each flip-flop. We also run checkers of the whole circuitry during runtime, and have onboard voltage detectors, clock detectors and a fault correction unit. If that wasn’t enough, we have to respect the ISO 26262 rules, which entail a whole procedure for how to develop a device, requiring external certification to ensure that all the functional safety concepts have been addressed in accordance with the standard.”

NVM alternatives
Automotive applications require a highly reliable non-volatile memory for firmware storage on MCUs, and until now it’s been multi-time programmable (MTP) memories that offer the most flexibility and ease of use. Historically provided by EEPROMs, this function is now served by Flash, but it still remains costly, with embedded Flash adding as much as 50% more cost to a standard logic CMOS MCU. Flash is also susceptible to tampering or reverse engineering to access the stored data or security codes. An alternative technology, for applications that do not require much re-programmability, is one-time programmable (OTP) anti-fuse NVM.
Mike Compeau, director of Sales & Marketing at Novocell Semiconductor, a supplier of OTP anti-fuse NVM, explains the difference between anti-fuse and Flash: “Flash relies on a floating gate to trap electrons that represent data. Anti-fuse is far simpler, using the gate oxide of a MOS transistor as the storage media. A ‘1’ is stored when a high voltage is applied and the ‘fuse’ is created. Once the high voltage of programming causes the ‘hard breakdown’ of the oxide layer, we have created a solid pathway of molecules through this oxide, and it really won’t go away because it’s not a trapped charge. Effectively, changes of temperature and time within typical operating ranges won’t have any effect on it.”
Novocell’s Smartbit anti-fuse NVM technology boasts higher reliability than EEPROM, Flash or competitive anti-fuse technology. Compeau explains: “Our patents allow us to perform ‘dynamic programming’, a method that senses when irreversible oxide hard breakdown has been completed and triggers a ‘done’ signal when the data has been fully programmed. Other anti-fuse technologies employ a time study, where they apply the high voltage for a set amount of time and then test it to quantify what ratio of bits, on a normal curve, have been programmed. The advantage of our Smartbit technique is that we are much more amenable to variations in oxide thickness across the wafer, and to accommodating other variables in the manufacturing process.”
As anti-fuse uses a standard CMOS transistor device, there are no extra masks needed to create it, unlike Flash or EEPROM. Meanwhile, alterations to the transistor are invisible to conventional methods used to determine the stored contents illicitly. Compeau notes another advantage: “One of the reasons why we are seeing increased interest in our products in more ‘down to earth’ applications such as medical implantables is that we aren’t having to use error correction circuitry or redundant bits within the design, which inflates the size of the chip. Instead, if you need 256 bits, we place 256 bits and that’s all that’s necessary. This is entirely due to our hard breakdown detector circuitry, which can actively monitor when breakdown has been completed.”
So what is the trade-off with OTP anti-fuse? Walt Novosel, Novocell’s President and CEO, says: “The programming is dynamic, so it could take 5 or 50µs, but the upside of that is no need for any post test.” Explaining Novocell’s focus on trimming, code storage, reprogrammability and configuration applications, he adds: “For larger size devices — 4 to 8Mbyte, for example — where we are competing more with floating gates, chip area may be a trade-off too.” Notably, anti-fuse OTP is proving a popular solution for analogue calibration and trimming of the analogue/mixed signal circuits populating the large number of sensors found in automotive electronics.
The solution may be a mix and match of memories — space-ready and standard — and reliability techniques, from both a system and on-chip perspective.

Electronic Specifier Design | March 2013 | 15

Hi-Rel

Certifying industrial systems using IEC 61508

With recent advances in automation, software is no longer a small part of electro-mechanical systems, but instead forms the underlying technology providing functional safety for many products. By Shrikant Satyanarayan

The requirement for software functional safety has become critical in industrial automation, transportation, nuclear energy generation and other markets. To ensure functional safety, many have adopted IEC 61508 as the basic standard, on which sector-specific values are built.

The IEC 61508 standard is a risk-based approach for determining the SIL (Safety Integrity Level) of safety instrumented functions. If computer system technology is to be effectively and safely exploited, it is essential that the available guidance on these safety-related aspects is adequate to make correct decisions.

Notably, implementing a process standard for software development involves much more than simply understanding the rules and knowing how to apply them. To implement a standard effectively, it is essential to integrate the standard into the entire development lifecycle, from requirements through test. This paper will demonstrate how an automated process can close the loop, assuring developers and the IEC 61508 regulators that the code — from requirements through to test — is traceable and has been verified as compliant.

IEC 61508 Basics

A number of applications use Electrical/Electronic/Programmable Electronic (E/E/PE) safety-related systems, in a variety of application sectors that involve a wide range of complex hazard and risk potentials. The required safety measures for each application depend on many factors. The generic nature of IEC 61508 makes it an ideal ‘blank canvas’ for seamless integration of these application-dependent factors.

In most situations, safety is achieved by a number of systems that rely on many technologies (including mechanical, hydraulic, pneumatic, electrical, electronic and programmable electronic). Any safety strategy must therefore consider not only all the elements within an individual system (for example, sensors, controlling devices and actuators), but also all subsystems which make up the safety-related system as a whole.

As industrial systems depend more and more on software, the primary focus in certifying software to IEC 61508 lies with IEC 61508 Part 3, which discusses software requirements, and IEC 61508 Part 7, which describes the different techniques and measures required to achieve the relevant SIL for the application.

Creating a software safety lifecycle

IEC 61508 describes a software safety lifecycle which involves the systematic development process, requirement traceability, software validation and modifications. As shown in Figure 1, the structure of the software safety lifecycle divides the software development lifecycle into defined phases and activities. Starting with requirements specification, developers must specify the requirements for safety-related software of each E/E/PE safety-related system and achieve the SIL specified for each safety function allocated to that system. IEC 61508 specifies four SIL levels, with SIL1 demanding the lowest level of safety and SIL4

the most rigorous. Each technique or measure has an associated level of recommendation used to select a safety integrity level: the higher the SIL, the more highly recommended a practice is.

The software safety validation plan details how the software design and development, hardware/software integration and any modifications required achieve standard compliance. Fundamental to IEC 61508’s validation process is bidirectional traceability, a process of linking all aspects of the software development lifecycle together. Bidirectional traceability ensures that each system requirement links to the related code, tests, verification and documentation, and that any change to any of the linked processes transfers information forward and downward through all phases of development. Specifications, and all plans for software safety validation, software modification, software design specification and software verification (including data verification), are updated as information is added or deleted.

The following sections define the software process necessary to achieve IEC 61508 compliance:

• 7.2 Software safety requirements specification
• 7.3 Validation plan for software aspects of system safety
• 7.4 Software design and development
• 7.4.3 Requirements for software architecture design
• 7.4.4 Requirements for support tools, including programming languages
• 7.4.5 Requirements for detailed design and development – software system design
• 7.4.6 Requirements for code implementation
• 7.4.7 Requirements for software module testing
• 7.4.8 Requirements for software integration testing
• 7.5 Programmable electronics integration (hardware and software)
• 7.6 Software operation and modification procedures
• 7.7 Software aspects of system safety validation
• 7.8 Software modification
• 7.9 Software verification

Figure 1: Overview of the software safety lifecycle, which includes requirement specification, software development, software integration, software modification and safety validation.

The development process starts with detailing the safety requirements, as shown in Figure 2. Software safety processes specify safety function and safety integrity requirements. The safety function requirements influence the input/output sequences that perform safety-critical operations, such as detection of faults in sensors, actuators, programmable electronics hardware and so on. The safety integrity requirements of a system are composed of diagnostics and other fail-safe mechanisms which ensure that failures of the system are detected and that the system goes into a safe-state mode if it’s unable to perform a safety function. As with safety-critical system software development, the


design is derived from safety requirements — both for the safety-critical and non-safety-critical components — to meet the required levels of safety and integrity. During design, any safety functionality overlooked at the requirements level would potentially compromise the criticality of each software module developed. To avoid this, IEC 61508 requires that traceability be established between the requirements, the software architecture and the software design specification.

Software architecture further considers selection of the tools, including languages, compilers, user interfaces and run-time interfaces, all of which contribute to the safety of the system as per the requirements. The toolset includes verification and validation tools such as static analysers, test coverage monitors and functionality testers.

Moving further into the development process, we get into the phase of software implementation. Implemented software should fulfil all the safety functionality described in the software architecture and software design specification, including complete traceability. The software should also be compatible with the programmable electronic target.

Figure 2: V-model software development process.

IEC 61508 enforces static analysis as the first step in comprehensive control. During static analysis, the source code is reviewed against programming standards like MISRA and CERT to detect latent errors and vulnerabilities, such as array bounds violations, division by zero and uninitialised pointers, which in turn can be exploited during the execution of the software. In this phase, the consistency of the source code is checked to detect the presence of any dead code or uncalled procedures. Static analysis also determines the quality of the software by measuring metrics including clarity, maintainability, testability and complexity.

Data flow analysis generates a series of analytical verifications of variable usage, procedure interaction and interrupts present in the source code. A control-flow diagram consists of a subdivision to show sequential steps, if-then-else conditions, repetition and/or case conditions. Suitably annotated geometrical figures are used to represent operations, data or equipment, and arrows are used to indicate the sequential flow from one to another. The extent of rigour in enforcing the standard depends on the safety integrity level needed for the safety-related systems. Table 1 lists the different techniques/measures for static analysis and their recommendations as per the safety integrity level.

After static analysis is complete, dynamic analysis is performed in an effort to uncover subtle defects or vulnerabilities. Dynamic analysis is the testing and evaluation of a program by executing data in real time. The objective is to find errors in a program while it is running, rather than by repeatedly examining the code offline. It is performed at unit, module and system level to achieve all


safety functionality at the required level of safety integrity.

The validation test plan is developed by designing test cases for unit, module or system levels to provide complete coverage of the software’s functionality. The test cases need to be designed for all input combinations, boundary conditions, ranged values and timing mechanisms, and checked against the expected output to validate the safety functionality of the system. The validation plan and test cases need to be traced back to the requirements to make sure the desired level of integrity and safety functionality is achieved, and that complete traceability between the requirements, the module integration (hardware/software) test specifications and the software safety validation plan is in place.

Table 1: Techniques and measures carried out as part of static analysis, and their recommendations for each SIL (HR = Highly Recommended, R = Recommended, NR = Not Recommended, — = No Recommendation).

Technique/Measure                                SIL 1  SIL 2  SIL 3  SIL 4
Boundary value analysis                          R      R      HR     HR
Checklists                                       R      R      R      R
Control flow analysis                            R      HR     HR     HR
Data flow analysis                               R      HR     HR     HR
Error guessing                                   R      R      R      R
Formal inspections, including specific criteria  R      R      HR     HR
Walk-through (software)                          R      R      R      R
Symbolic execution                               —      —      R      R
Design review                                    HR     HR     HR     HR
Static analysis of run time error behaviour      R      R      R      HR
Worst-case execution time analysis               R      R      R      R

Dynamic analysis needs to be done in two stages: functionality analysis (black box testing) and structural code analysis (white box testing). In black box testing, the test data (inputs and expected outputs) for the test cases are derived from the specified safety functional requirements. This phase offers complete requirement-based testing because, in this approach, only the functionality of the software module is exercised: the software is executed with the desired inputs and the results are subsequently checked against the expected outputs.
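To make the black-box idea concrete, the sketch below derives a test vector purely from a hypothetical requirement, "REQ-042: the commanded heater level shall be clamped to the safe range 0..100%", pairing each input with the output the requirement demands. All names, limits and the requirement itself are invented for illustration; the implementation is consulted only when the vector is executed.

```c
/* Hypothetical safety requirement REQ-042: a commanded heater level
 * shall be clamped to the safe range 0..100 percent. */
int clamp_heater_level(int requested)
{
    if (requested < 0)   return 0;
    if (requested > 100) return 100;
    return requested;
}

/* Black-box test vector: inputs and expected outputs derived purely
 * from the requirement text, without looking at the implementation. */
struct tv { int in; int expected; };

static const struct tv req042_tests[] = {
    {  -1,   0 },  /* just outside lower boundary */
    {   0,   0 },  /* lower boundary               */
    {   1,   1 },  /* just inside lower boundary   */
    {  50,  50 },  /* typical value                */
    {  99,  99 },  /* just inside upper boundary   */
    { 100, 100 },  /* upper boundary               */
    { 101, 100 },  /* just outside upper boundary  */
};

/* Execute the vector; returns the number of failing cases. */
int run_req042_tests(void)
{
    int failures = 0;
    for (unsigned i = 0; i < sizeof req042_tests / sizeof req042_tests[0]; i++)
        if (clamp_heater_level(req042_tests[i].in) != req042_tests[i].expected)
            failures++;
    return failures;
}
```

Tracing each entry of `req042_tests` back to REQ-042 is exactly the bidirectional traceability the standard asks for: the requirement justifies the test case, and the test case evidences the requirement.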
Boundary value analysis uses a functional testing technique in which extreme boundary values (maximum, minimum, just inside/outside the boundaries, typical values and error values) are used in the system to ensure no defects exist.

In white box testing, the structure of the source code is tested by having the inputs exercise all paths through the code and determining the appropriate outputs. It tests paths within a unit, paths between units during integration, and paths between subsystems during a system-level test. The design of the test cases is intended to:

• Exercise independent paths within a unit or module
• Exercise the logical decisions on both their true and false sides
• Execute loops at their boundaries and operational bounds
• Exercise the internal data structures to ensure their validity.

White box testing includes coverage metrics like entry points, statement, branch, condition and Modified Condition/Decision Coverage (MC/DC) to make sure each part of the software has been covered and tested against the requirements, to comply with the required level of integrity. These coverage metrics are outlined in the safety integrity level of the system.

During the testing phase, software is tested on the programmable electronic hardware to meet all safety functionality. After validating the software’s safety, some corrections and/or enhancements may be necessary to ensure compliance with requirements. The enhancements or corrections in the software must also have complete traceability to requirements. All modifications need to be documented, along with an impact analysis that determines which software modules and

verification activities are affected. Impact analysis determines what modules or functionalities are affected by the change, how many new test cases must be created to cover the new functionality, and whether any other system requirements are involved in testing this new change.

After modification to the source code, regression analysis is performed to make sure earlier safety functionality is not affected by the modification. Regression testing is selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.

Table 2: Techniques and measures carried out as part of dynamic analysis, and their recommendations as defined for each SIL.

Technique/Measure                                          SIL 1  SIL 2  SIL 3  SIL 4
Test case execution from boundary value analysis           R      HR     HR     HR
Test case execution from error guessing                    R      R      R      R
Test case execution from error seeding                     —      R      R      R
Test case execution from model-based test case generation  R      R      HR     HR
Performance modeling                                       R      R      R      HR
Equivalence classes and input partition testing            R      R      R      HR
Structural test coverage (entry points) 100%               HR     HR     HR     HR
Structural test coverage (statements) 100%                 R      HR     HR     HR
Structural test coverage (branches) 100%                   R      R      HR     HR
Structural test coverage (conditions, MC/DC) 100%          R      R      R      HR

The advent of IEC 61508 has brought isolated companies involved in either systems or software development for industrial systems together into the same process, just as their counterparts in industries such as aerospace and defence are. All disciplines now face the same quality assurance efforts required to achieve compliance with a demanding standard.

The need for such compliance has mandated business evolution in which processes and project plans are documented, requirements captured, implementation and verification carried out with respect to the requirements, and all artefacts fully controlled in a configuration management system.

Adopting IEC 61508 as a process for industrial systems software development requires conformance to the processes, activities and tasks defined by the standard. Fundamentally, IEC 61508 demands that requirement traceability be met at all stages of the software development process. Qualified, well-integrated tools ensure that developers can automate the process more easily and efficiently.

While moving to automated compliance involves upfront costs and potential change to current practices, companies can achieve higher quality software and compliance to IEC 61508 more easily, reducing costly manual efforts. The higher-quality, safe product avoids expensive recalls and ensures that the same development process can underpin the maintenance and upgrade process. Not only do these factors contribute to the manufacturer’s bottom line, but the company achieves significant ROI in improved credibility and reputation.

Author profile: Shrikant Satyanarayan is a Technical Consultant with LDRA, in India, specialising in the development, integration and certification of mission- and safety-critical systems in the avionics, nuclear, industrial safety and automotive domains. With a solid background in software development, software testing and real-time operating systems, Shrikant guides organisations in selecting, integrating and supporting their embedded systems from development through to certification.

Hi-Rel

Ensuring code quality in mission critical systems

The automotive sector has driven a great deal of debug tool development in recent years, as no major brand wants to risk a recall, reliability issues or a catastrophic failure. By Barry Lock

Embedded systems are now widely used in mission critical systems. The obvious applications include avionics and medical, while less obvious might be a diesel engine management system. Complex multicore engine management systems now enable highly efficient diesel engine development, enabled by the latest debug technologies available to engineers that can help ensure code quality and performance.

An essential investment

A popular Abraham Lincoln quote reads: ‘Give me six hours to chop down a tree and I will spend the first four sharpening the axe,’ and it is an excellent analogy for debug tools. The more capable the debug environment, the better the development experience for all concerned, resulting in better code, faster development and reliable in-field performance.

At the start of every project, engineers get to choose their hardware platform, their software solution, their compiler and their debug solution. Proving code meets the target specification is not easy; even something as simple as proving code has actually executed (code coverage) is rarely undertaken by development teams. The tracking of obscure or random bugs can be so difficult that when a code alteration ‘fixes’ the problem, the developer moves on without any concrete proof that the original problem was found and fixed.

Another challenge is that debugging code properly is often harder than it needs to be, because very few teams think about what debug capability will be needed when they define the original system design parameters. Parameters such as measuring best and worst case response times may be considered, but the operation of the cache and obtaining details of code victims and misses is also invaluable if the system is expected to run at its optimum level. There are many similar interlinked aspects to analysing the operation of the final system and proving it is fit for purpose.
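Recording best- and worst-case response times amounts to keeping running extremes over many observations. The sketch below is a minimal, hypothetical helper rather than any vendor's API; on a real target the elapsed figure would come from a hardware cycle counter or a trace timestamp, not the hosted clock() shown in the convenience wrapper.

```c
#include <time.h>

/* Running best/worst-case response-time record. */
typedef struct {
    double best_s;    /* shortest observed response time, seconds */
    double worst_s;   /* longest observed response time, seconds  */
    long   samples;
} resp_stats_t;

void resp_stats_init(resp_stats_t *s)
{
    s->best_s  = 1.0e9;   /* sentinel: larger than any real sample */
    s->worst_s = 0.0;
    s->samples = 0;
}

/* Fold one measured response time (in seconds) into the record. */
void resp_stats_record(resp_stats_t *s, double elapsed_s)
{
    if (elapsed_s < s->best_s)  s->best_s  = elapsed_s;
    if (elapsed_s > s->worst_s) s->worst_s = elapsed_s;
    s->samples++;
}

/* Convenience wrapper: time one call of a handler using clock().
 * On an embedded target, replace with a cycle-counter read. */
double timed_call(void (*handler)(void))
{
    clock_t t0 = clock();
    handler();
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

The observed worst case is only a lower bound on the true worst case, which is why the article stresses long observation runs and trace-based evidence rather than a handful of bench measurements.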

More important than the choice of tools, compiler or operating system is the choice of processor. In order to get non-instrumented coverage information from the system, a processor with an off-chip trace port is essential. For systems that require safety certification, a good quality trace port is required, and the bandwidth at the trace port will prove a very important consideration.

For multicore devices, the trace port must be capable of providing reliable, error-free trace streams from all cores. This means that the trace port should be running at a good percentage of the CPU clock and be wide enough to get the data off-chip fast enough to prevent overflows.

Required hardware

Off-chip trace hardware will be required to interface the processor to the development environment. It should be capable of collecting data with the core running at its final clock rate. Many solutions are available, ranging in capability and price. Some include important features such as a logic analyser and high-speed serial trace functionality. For older processors without a JTAG port, an In-Circuit Emulator (ICE) can be used.

Many reference design boards come with low cost debug solutions. These have the advantage of being easy to set up and use, but can prove to have very limited capability as code development progresses. The more powerful solutions may be more costly and may even require some training, but the longer-term benefits will quickly prove a good return on investment.

The big change that debug tool and code analysis developers are seeing is that it is no longer enough to simply test a system. A new trend is an increased requirement for engineers to document and prove code behaviour and performance. In some instances companies are requesting engineers to be certified and qualified in software writing, or mandate some form of specified code quality.
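It is worth seeing what "proving code has actually executed" means in the software-only form, because the overhead it adds is exactly what the non-instrumented, trace-port approach described above avoids. The sketch below is purely illustrative (the names are invented): each decision point sets a flag when reached, and any flag still clear after testing marks code that was never exercised.

```c
#include <stdbool.h>

/* Illustration only: software-instrumented coverage. A trace port
 * gathers the same evidence without these probes or their overhead. */
enum { COV_LIMIT_LOW, COV_LIMIT_HIGH, COV_LIMIT_PASS, COV_POINTS };

static bool covered[COV_POINTS];

#define COV_MARK(id) (covered[id] = true)

/* Function under test, with one probe per branch outcome. */
int limit(int v, int lo, int hi)
{
    if (v < lo) { COV_MARK(COV_LIMIT_LOW);  return lo; }
    if (v > hi) { COV_MARK(COV_LIMIT_HIGH); return hi; }
    COV_MARK(COV_LIMIT_PASS);
    return v;
}

/* Percentage of coverage points hit so far, 0..100. */
int coverage_percent(void)
{
    int hit = 0;
    for (int i = 0; i < COV_POINTS; i++)
        if (covered[i]) hit++;
    return hit * 100 / COV_POINTS;
}
```

Every probe costs code space and cycles and perturbs timing, which is unacceptable in many real-time systems; reconstructing the same execution record from an off-chip program-flow trace leaves the shipped binary untouched.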
With the right choice of debug tools, detailed information about code behaviour can be generated to provide the necessary proof, showing such aspects as the route taken by the code and the time taken. In this respect, program flow trace can go a long way to helping companies prove code behaviour and achieve safety specifications such as ISO 26262.

Long-term trace is the name given to streaming a trace to a hard drive to overcome the limitations on internal buffer size in most debug tools. This provides the ability to collect massive amounts of code performance information from a running embedded system


for the developer to detect and analyse the most unpredictable and transient of bugs. Demand for long-term trace debugging is being driven by market sectors such as automotive, medical and aerospace, where software-based systems are becoming increasingly complex and in need of more rigorous testing to comply with safety and performance criteria.

One example is to imagine an engine management system. This longer code coverage enables engineers to analyse the software from a cold start, then up to temperature, through acceleration and deceleration, and then to shut down. Using long-term trace technologies, engineers are now able to capture code over much longer periods that can extend to hours or even days. This has been a very important breakthrough for those developing complex or safety critical code.

High speed serial trace

‘High Speed Serial’ trace technology is a serial transmission technology with a transmission rate of over 6Gbit/s on each of up to four lines from the target core. To put it in perspective, this data rate is such that it could transmit the entire contents of a DVD in three seconds, making it ideal for collecting data when debugging and developing very high speed multicore embedded systems and systems requiring mission critical analysis.

Traditionally, the trace interface used by target processors to deliver the detailed information on the operation of their inner processes had been a parallel port. Over recent years this approach has struggled to keep up with the growing flood of information as processors have become more complex and faster, and gained multiple cores. The step to high speed serial solved these problems and had the side effect of reducing the pin count, which

is always good news. For many developers of embedded systems it would have been unthinkable to undertake a development without their valued trace information, so a lot of effort by silicon designers and tool vendors has been made to increase the data throughput of the trace interface.

In recent times, ARM has implemented high speed serial trace with its High Speed Serial Trace Port (HSSTP). This has been followed by AMCC with the Titan, Freescale with the QorIQ processors P4040 and P4080, and Marvell with the SETM3. Engineers can now source a hardware interface for serial trace; a universal pre-processor has been developed on the basis of the Aurora protocol, and only the firmware and software have to be changed to record any of the alternative protocols. This means that existing systems will need minimal reconfiguration for further variants of serial trace protocols.

An important consideration of this technology is that the large volume of trace data generated obviously requires a correspondingly large trace memory, and careful consideration should be given to the tool choice with regard to memory size and the ability to stream to the host.

Multi-core debugging

Multi-core development is an area of embedded systems development that is becoming more prevalent. It started with mobile phone development back in 2003/4, then moved into telecoms (base stations, routers) about five years ago, and today it is finding its place in the automotive sector as future emissions and fuel usage specifications demand far greater analysis and control.

With multi-core, the data is extracted through a shared trace port. There may well be four or more cores with processes running in parallel, and some interacting across core boundaries. The fact is, two cores are not simply ‘twice’ as complicated to debug as a single core; they are several times more complicated because of these interactions. Getting to the point, budget debug technology is not up to the job of multi-core development. For multi-core you will need a powerful trace tool: one with a lot of memory and one that can process data fast. In multi-core you will have a whole new level of problems to avoid, such as race conditions, uncontrolled interactions and out-of-sequence code behaviour. The toolset will need to be able to collect a large amount of accurate data to give the detail and visibility into the code needed to understand the behaviour and interactions of the processes running on the cores.

Code optimisation

In mission critical systems where fast and predictable performance is essential, cache analysis has proven to be an important development. Cache memory is on-chip memory that can be accessed very quickly by the CPU; access to cache memory can be in the order of 10-100 times faster than access to off-chip memory. This fast memory acts as temporary storage for code or data variables that are accessed on a repeated basis, thereby enabling better system performance.

However, having cache memory does not necessarily equate to better performance from a system. In order to get the best performance from a cache-based architecture, it is important to understand how such an architecture works and how to get the best from a particular cache implementation. Trace technology enables analysis tools to confirm the effectiveness of cache usage, and can have an important impact on the overall performance of the software.

Also, these days, safety critical equipment may be a wireless portable device, where battery life and predictable power usage may be important factors. With this in mind, power management is another area that is being influenced by the use of trace tools. Increasingly, engineers are relying on trace technology not only to enable the location of bugs, but also to create better code and provide clearer visibility and understanding of code behaviour. It is strongly recommended that such tools are explored and evaluated at the earliest stage of a new project.
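Cache behaviour of this kind is easy to provoke. The classic illustration below (a standard example, not from the article) sums the same matrix two ways: the row-major walk uses every word of each fetched cache line before moving on, while the column-major walk strides a full row length between accesses, so on a large matrix most accesses can miss. Both return the same answer; a trace-driven cache analysis would show the very different miss rates.

```c
#define ROWS 256
#define COLS 256

/* Row-major traversal: consecutive addresses, so each cache line
 * fetched from memory is fully consumed before it is evicted. */
long sum_row_major(int m[ROWS][COLS])
{
    long sum = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += m[r][c];
    return sum;
}

/* Column-major traversal: each access is COLS ints away from the
 * previous one, defeating spatial locality. The result is identical;
 * only the memory behaviour differs. */
long sum_col_major(int m[ROWS][COLS])
{
    long sum = 0;
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            sum += m[r][c];
    return sum;
}
```

The point for mission critical work is not the average speed-up but predictability: the column-major variant's timing depends heavily on cache state, which is exactly the kind of variability cache analysis tools are used to expose.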

Author profile: Barry Lock is the UK Manager of Lauterbach.

Hi-Rel

Is it safe?

We increasingly rely on systems where an error could cause financial disaster, organisational chaos, or in the worst case, death. Could the mandatory use of open source software improve safety and security in High Reliability applications? By Robert Dewar

Software now plays a crucial role in all complex systems: some of the start-up problems at Heathrow Terminal 5 have been attributed to computer ‘glitches’, for example, while modern commercial airliners depend on complex computer systems to operate safely.

If we go to the necessary trouble and expense, we are actually pretty good at creating near error-free software if the specification is very clear. Not one life has been lost on a commercial airliner due to a software error. That’s not a bad record. However, we do definitely have cases of people being killed by software errors: notably, a patient was killed by an excessive dose of radiation from a medical device, and a Japanese worker was killed by a berserk robot. Both of the latter cases could probably have been prevented by imposing more stringent controls on the relevant software.

Indeed, from the point of view of preventing bugs, we have pretty good technology if we care to use it. In some cases, we can use mathematical ‘formal’ methods to demonstrate that the code is error-free. Such an approach is being used for iFACTS, the new air traffic control system for the UK. So perhaps we don’t have too much to worry about, and this article may end up being little more than a plea for education, so that the techniques for generating error-free software (for example, the various safety standards used for avionics software) would be more widely adopted.

However, the world around us has changed since September 11th, 2001, and the subsequent attacks on London and Madrid. Now it is not sufficient to assure ourselves that software is free of bugs; we also have to be sure that it is free from the possibility of cyber-attacks. Any software that is critical is a potential target for attack. This includes such examples as the software used to control nuclear reactors, power distribution grids, chemical factories, air traffic control ... the list goes on and on.

Safe and secure?

It is very much harder to deal with protecting software against such attacks than making it error free. Consider, for example, the important tool of testing. No amount of testing of software can convince us it is secure against future attack modes that have yet to be devised. To think otherwise would be to take the attitude that since no one had attacked the world trade

centre for decades, it must have been safe from future attacks. So how do we guarantee the security of software?

In an episode of the American television series ‘Alias’, Marshall, the CIA super-hacker, is on a plane, clattering away on the keyboard of his laptop during take-off preparations. When Sydney tells him he has to put his laptop away, he explains that he has hacked into the flight control system to make sure the pilot has properly completed the take-off checklist. Just how do we make sure that such a scenario remains an amusing Hollywood fantasy and not a terrifying reality? In this article, we will argue that one important ingredient is to adopt the phrase from the movie Hackers, ‘No More Secrets’, and systematically eliminate the dependency on secrecy for critical systems and devices.

The disturbing fact is that the increasing use of embedded computers, controlling all sorts of devices, is moving us in the opposite direction. Traditionally, a device like a vacuum cleaner could be examined by third parties and thoroughly evaluated. Organisations like Which? in the UK devote their energies to examining such devices. They test them thoroughly, but importantly they also examine and dismantle the devices to detect engineering defects, such as unsafe wiring. If they find a device unsafe, it is rated as unacceptable and the public is protected against the dangerous device. But as soon as embedded computer systems are involved — and they are indeed appearing on even lowly devices like vacuum cleaners — we have no such transparency. Cars, for example, are now full of computers, and without access to the software details there is no way to tell if these cars are ‘Unsafe at Any Speed’.

Why is this software kept secret? Well, the easy answer is that nearly all software is kept secret as a matter of course. Rather surprisingly, in both Europe and the USA, you can keep software secret and copyright it at the same time; surprising because the fundamental idea of copyright is to protect published works. Companies naturally gravitate to maximum secrecy for their products. The arguments for protecting proprietary investment and Intellectual Property Rights seem convincing. The trouble is that the resulting secrecy all too often hides shoddy design and serious errors that render the software prone to attack.

Can we afford such secrecy? I would argue that in this day and age, the answer must be no. First of all, there is no such thing as a secret; there are only things that are known by just a few people. If the only people with access to the knowledge are a small number of people at the company producing the software, and there are some bad guys willing to spend whatever it takes to discover these secrets, do we feel safe? At a recent hackers’ convention, there was a competition to break a Windows, Mac or Linux operating system using a new technique, hitherto unknown. The Mac was the first to be successfully attacked, in under two minutes.


Freely licensed open source software

In recent years, a significant trend has been far greater production and use of FLOSS (Freely Licensed Open Source Software). Such software has two important characteristics: firstly, it is freely licensed, so anyone can copy it, modify it and redistribute it; secondly, the sources are openly available, which means the software can be thoroughly examined and any problems that are found can be openly discussed and fixed.

What we need is to establish the tradition that the use of FLOSS is at least desirable, perhaps even mandatory, for all critical software. Sure, this makes things a little easier for the bad guys, but they were willing to do whatever it takes to break the secrecy anyway. Importantly, what this does is make it possible for the worldwide community of good guys to help ensure that the software is in good shape from a security point of view.

At the very least, we can assure ourselves that the software is produced in an appropriate best-available-technology manner. If we opened up a television set and saw a huge tangle of improperly insulated wires, we would deem the manufacturing defective. The embedded software in many machines is in a much worse state than this tangle of wires, but is hidden from view.

There are two aspects involved in the use of FLOSS in connection with security-critical software. First, we gain considerably by the use of FLOSS tools in the building and construction of such software. One way that software can be subverted is, for example, to use a compiler that has been subverted in a nefarious manner. For example, suppose our compiler is set up so that it looks for a statement like:

if Password = Stored_Value then

and converts it to

if Password = Stored_Value or else Password = "Robert Dewar"

Then we are in big trouble, which we can't even detect by close examination of the application sources, since there is no trace there. Ken Thompson, co-creator of Unix and a key influence on the development of the C language, warned of exactly such subversion in his famous Turing Award lecture 'Reflections on Trusting Trust'. It is far easier to subvert proprietary software in this manner than FLOSS.

After all, early versions of Microsoft's Excel program contained a fully featured flight simulator hidden from view. If you can hide a flight simulator, you can easily hide a little security 'glitch' like the one described above. This would be far harder to do with a compiler whose sources are very widely examined.

The second aspect is to make the application code itself FLOSS, allowing the wider community to examine it. Now Which? magazine could employ experts to look at the software inside the vacuum cleaner as part of their careful evaluation, and reject devices with unsafe software.

The arguments above seem easily convincing from the point of view of the public, so what's the objection? The problem is that companies are dedicated to the idea that they must protect proprietary software. An extreme example of this is Sequoia Voting Systems, which has reportedly refused to let anyone examine the software inside its machines on the grounds that it is proprietary, with threats of lawsuits against anyone trying to carry out such examinations. Here we have a case where one company is putting its proprietary rights ahead of essential confidence in our democratic systems. The situation with voting machines is perhaps even more critical in Europe, where in some cases, e.g. for the European Parliament elections, complex voting systems are used and we totally depend on complex computer systems to implement them accurately and without possibility of subversion.

What's to be done? We do indeed have to ensure that the proprietary rights of companies are sufficiently protected that there is enough incentive to produce the innovation we desire, but this cannot be done at the expense of endangering the public through insecure software. In most cases, the real inventions are at the hardware level, where traditional protection, such as patents (which require full disclosure), operates effectively. Perhaps innovative forms of copyright protection can provide adequate protection for software, though in most cases I suspect that such protection is not really needed.

Suppose Boeing were forced to disclose the software controlling its new 787 'Dreamliner'. Would this suddenly give Airbus a huge advantage? Most likely not, as you can't just lift the 787 avionics and drop them into an Airbus A350. Yes, Airbus could probably learn useful things by studying the software, just as it learns useful things by studying the hardware and design of the Boeing planes. If everyone were forced to disclose their software, then this kind of cross-learning would actually benefit competition and innovation.

We can't just hum along on our current path here. The world is a more and more dangerous place, and the increasing use of secret software that is poorly designed and vulnerable is increasing that danger. We have to find ways of addressing this danger, and more openness is a key requirement in this endeavour.

Author profile: Robert Dewar is the Co-founder and President of AdaCore.


Processors & FPGAs

Make room for the many

Solving real-time problems with multicore microcontrollers. By Ali Dixon

Concurrency and multicore processing are familiar concepts in most aspects of digital electronics today. In microprocessors, the use of multicore (or at least 'manycore') devices in particular has become increasingly prevalent. This shift dates back to as long ago as the turn of the century, when it became evident that the techniques which, until then, had driven rapid increases in processor performance were becoming ineffective. At this time, practical clock speeds topped out in the low Gigahertz range and power consumption became a real issue. Meanwhile the use of superscalar technology — which in effect parallelises tasks at the instruction level by breaking down large instructions and executing them on different units within the execution core — also ran out of steam.

Many of these advanced techniques represented at best a mixed blessing when transferred into the world of embedded design. Superscalar instruction-level parallelism, for example, spawned devices that attempted to keep all of their execution units busy at all times, by executing instructions out of order and attempting to predict the results of branch instructions. While on one level this approach to parallelism may seem resource-efficient, the penalty is that the timing of code execution becomes increasingly unpredictable. In fact, for systems that run control loops, and for tasks like signal processing in which execution is highly data-dependent, such techniques are often worse than useless.

As a result, embedded designers — and particularly designers of systems with real-time requirements — turned to other forms of multicore processing to fulfil their needs. In many cases this has meant the use of hardware, in the form of an ASIC or FPGA. These may not at first sight seem like 'multicore' devices: but like the processor with the superscalar architecture, they effectively integrate many task engines on a single chip — albeit in the form of hardware. FPGAs are frequently used to implement multiple concurrent state machines.

Figure 1: xCORE devices are constructed from tiles, subdivided into a set of logical cores. This illustration shows the recently announced 10-core XS1-L10-128.

But these approaches themselves have drawbacks. There may be a lack of flexibility (especially in the case of the ASIC, since these devices are not programmable in the traditional sense); high non-recurring engineering (NRE) costs

translating to poor cost-effectiveness at low to medium volume; and high unit cost and power consumption in the case of FPGAs.

In practice many embedded designers resort to an even more fundamental multicore strategy: they partition their system between several chips. It is not uncommon to see a design that includes a microcontroller (MCU), a DSP and an FPGA. Taking this approach means that the processes on each device are separated, and cannot interfere with each other unless they need to interact. This increases predictability. The penalty is not only an increase in system complexity (including communication between the various system blocks) and therefore bill of materials cost: such a design also requires programming of the MCU in C, of the DSP in a separate design environment (probably using assembly code) and of the FPGA in RTL.

However, whenever an embedded system needs to respond predictably, or in real time, the need for parallelism is not far away. A common solution is to employ a single processor resource, with a real-time operating system (RTOS) to divide and conquer the task. The RTOS schedules and manages critical processes into the processor resource, but makes it 'look' to each individual process as if it has exclusive use of the processor. Again there are drawbacks: embedded designs are typically memory-constrained, and the RTOS itself takes up valuable memory footprint. And the scheduling processes can be a drain on the processing power available for the task at hand.

Another common alternative is to implement an interrupt-based system to manage task partitioning. The first issue with this approach is that interrupts are asynchronous, and hence the state of the processor at the time of the interrupt will be unknown. The programmer must be careful to restore the system to its previous state when the interrupt has been handled. Second, an incoming interrupt inherently delays whatever task it is interrupting: in a real-time system this may be unacceptable. The commonest solution is to over-engineer the system to ensure that timings are met.

The XMOS xCORE architecture seeks to combine the advantages of an MCU equipped with an RTOS with all the benefits of multicore concurrency. Each device can be viewed as a network of 32-bit logical microcontroller cores interconnected with a built-in communication network. Almost all instructions execute in a single cycle (the exceptions are the divide instructions and those which block to input data or synchronise to other tasks), so program execution is itself timing-deterministic. On-chip hardware is provided to connect I/O ports directly with logical cores, dramatically reducing latency of response to external events and eliminating the need for interrupts — the single largest source of unpredictability in embedded design. Devices include timing and scheduling hardware, implementing the functions of an RTOS on-chip.

Although the architecture itself is very different, the development model will be very familiar to any programmer. An xCORE device is programmed in a version of C with extensions to handle the multicore aspects of the design. DSP instructions are included, eliminating the need for multiple design environments. And the low-latency I/O and single-cycle execution mean that xCORE programs behave so predictably that they can actually be simulated like a hardware solution.

Time slicing

An xCORE device is constructed from one or more tiles. The tile includes processor resource, memory, interconnect, and hardware response ports connecting the processor resource with the outside world. There is also a scheduling and timing block, which manages the subdivision of the tile resources into four, five, six or eight logical cores. This is effectively achieved by apportioning each logical core a guaranteed portion of the processing resource, time-sliced in sequence.

The logical cores on a tile share a single memory, with unconstrained read and write access. Because there are no caches, execution is highly predictable; the worst-case execution time of a straight section of code is virtually the same as the best-case execution time. Data-dependency may introduce uncertainty, but that is implicit in the algorithm: the overall effect is that designs can be engineered efficiently and economically, without large margins.

This innovative approach to microcontroller design often allows developers to combine the functions of several devices into a single xCORE chip. Meridian Audio, for example, used xCORE to replace a microcontroller, DSP and an FPGA,


as well as two external devices, with a substantial bill of materials saving.

It is not only the multicore processing approach itself that brings benefits. The on-chip Hardware-Response ports allow xCORE devices to respond with extremely low latency (Figure 2). While this approach is inherently more responsive, it also scales better than traditional MCU architectures with increasing numbers of inputs.

Figure 2: xCORE devices respond 100x faster than traditional MCUs and with a consistent latency independent of the number of inputs.

The combination of responsiveness and predictable code execution means that xCORE devices can implement, in software, functions that require hardware in other device architectures. This is particularly useful when building communication interfaces; unlike a traditional MCU, an xCORE device can be software-configured with the exact combination of interfaces required for a particular application.

A good illustration of this advantage is the strength of the architecture when implementing real-time digital communications and infotainment. For example, xCORE is already a dominant force in the field of USB audio, in which data needs to be delivered predictably to enable the highest audio quality. This has brought design wins at customers including Sennheiser. The architecture also scales well to multi-channel systems; for instance, for the emerging Ethernet AVB standards, XMOS offers a complete xCORE-powered multi-channel evaluation kit that allows multiple audio talkers and listeners to be connected together quickly and easily.

xCORE systems are programmed in C with extensions for concurrency, communications and real-time I/O. XMOS provides an easy-to-use toolchain called xTIMEcomposer, based around the LLVM compiler, that translates C into xCORE machine code. Both the instruction set and language extensions have been designed to facilitate highly efficient code generation. The design environment is backed by a range of off-the-shelf xSOFTip soft peripherals that make use of the 'software-as-hardware' capabilities of the architecture. As well as high speed USB 2.0 and Ethernet interfaces, these include S/PDIF, I2S, SPI and CAN. In addition, the developer can code any arbitrary interface they require. This makes the architecture particularly suitable for protocol bridging applications; for example, XMOS demonstrated CAN connectivity over AVB at the Embedded World exhibition in February 2013.

The range of devices available has already expanded rapidly beyond the original 8-core and 16-core options. As well as system-in-package solutions with integrated USB, XMOS has recently added 6-core, 10-core and 12-core versions. Most recently, the company announced the XS1-L4-64 device, a 4-core variant believed to be the industry's lowest-cost multicore microcontroller, priced below $2 in volume.

xCORE is already proving the value of multicore processing in embedded applications. The combination of deterministic processing, parallelism and predictable I/O means that the architecture can address many applications that are simply beyond traditional microcontrollers. This, along with the ability to reduce BOM costs by integrating DSP, control and interfacing functions in a single device, means that an increasing number of designs in industrial, consumer and automotive applications will make use of this innovative approach to embedded systems design.

Author profile: Ali Dixon is the Director of Product Marketing, and Co-Founder of XMOS Semiconductor.


Processors & FPGAs

Improving detection with future protection

Delivering future-proof innovation to the most demanding customer base requires the adoption of best-in-class solutions. By Stephane Monboisset and Peter Stemer

While the general public's perception of security may not extend beyond the (increasingly sophisticated) X-ray machines employed by border officials, microelectronic technology now plays a key role in many other techniques which are used to validate the authenticity of ingredients in an ever-growing range of consumables and comestibles.

One of the most important and useful techniques available to detect the presence of unwanted substances in any number of daily items, for example, is High Performance Liquid Chromatography, or HPLC. Sometimes referred to as High Pressure LC, it uses a well-established scientific technique of separating particles held in small samples across a spectrum to determine the composite elements of a substance suspended in a liquid. It is a technique used not only in toxicology but wherever chemicals are found, such as the food industry, agriculture and environmental control.

Agilent Technologies has been a leading supplier in the field of HPLC since 1984, using a modular approach to manufacturing the large and complex machines that include not only sophisticated electronics but a range of precisely developed valves, pumps and heaters. As the enabling technologies developed, the range of applications that could make use of HPLC grew, and with that came a demand for ever-greater performance in terms of accuracy and speed of results.

As a result, customer demand drove Agilent to push the instrument designers to their limits; the modules used in the HPLC instruments fell into a product life cycle of approximately 10 years, and with each new generation the enabling electronics are expected to advance significantly. In 1995, the second generation of modules was developed, followed in 2005 by the Nucleus range. Today the family of modules under development is the Fusion range and, true to form, the engineering team will be looking to employ a platform that can provide significant scope to meet not only today's requirements but those of the next 10 years.

Future-proof

The incumbent module design employs a PowerPC processor coupled to an FPGA; for the next generation Fusion range, the platform will be a single chip based on the Zynq programmable SoC from Xilinx. The Zynq-7000 platform goes beyond the traditional FPGA format, by tightly integrating

hard IP with an advanced FPGA fabric and the world's leading embedded processor sub-system. The decision to use the Zynq SoC was made to allow as much design reuse as possible; both the firmware and the VHDL code used in the Nucleus range can be ported to the Zynq-7000 platform. This gives the engineering team a platform that will be capable of meeting the needs of HPLC instruments for at least 10 years.

Initially the intention is to use the dual-core ARM Cortex-A9 processing system to provide greater functionality in terms of data acquisition. This will allow the design team to develop new modules and, over time, enable old modules to be replaced with a single, common architecture. Agilent's design team is also aware of the indirect benefits of using a single-chip solution, such as lower cost, but it is the extensibility of the platform that is key; the available performance will ensure a long time in service for the Fusion modules, which will eventually replace all of the existing modules in its HPLC instruments.

Consolidation

It can take up to three years to fully develop a module for HPLC instrumentation, so any design reuse that can be achieved will naturally be critical. The Nucleus family of modules employed a PowerPC processor, while the Zynq-7000 platform uses the ARM Cortex-A9, but porting the software will be simplified thanks to the use of ENEA's operating system, OSE.

Moving from a single-core to a multicore platform offers numerous benefits; consolidation is one — systems that previously needed multiple processors can be redesigned to use a single multicore device. For Agilent's HPLC modules the consolidation was not in the form of multiple processors but multiple devices, specifically a processor and an FPGA. Bringing together these two elements into a single device could be achieved using an ASIC, but that wouldn't give Agilent the longevity it needs from a single platform. For this application — and for many others — the flexibility and extensibility of the Zynq-7000 platform is the only way to achieve consolidation while delivering increased performance, without sacrificing design flexibility.

Porting any software application from a single core to a multicore platform can make sense, but in order to get the best from the platform it is important to understand how the software will make use of it. ENEA's real-time operating system, OSE, is now 'multicore aware' but, more than that, ENEA has developed a Multicore Migration Platform that further eases the overall process.

Instinctively, most software is written to execute linearly; the programming language C doesn't inherently support parallelisation, so when moving to a multicore platform it is important to understand the different approaches possible. Specifically, these are symmetric multiprocessing (SMP) and asymmetric multiprocessing (AMP). In general terms, the former assumes the operating system will run on one core and the application software will be free to run on whichever core has most availability at any given time, while an AMP approach often


assumes each core will have an instantiation of the operating system running on it and that the application will be 'hard-partitioned' such that specific functions are 'tied' to run on a specific core. In the example of consolidation, where multiple processors are being replaced with a single multicore processor, AMP may be the most appropriate approach. Where a single core is being replaced with a multicore device, as in the case with Agilent's HPLC modules, then SMP may seem like the right solution.

In practice, however, the correct solution will also depend heavily on the software; where there exists a high degree of parallelisation in the code an AMP approach may be better than SMP, but for code with a large number of interdependencies the best solution could be SMP. Another consideration is the use and distribution of shared resources in the multicore platform, such as memory. This can have an effect on which approach is better suited to a given application. In many cases the best solution will be to employ a hypervisor: a software layer that provides abstraction between the operating system and the underlying hardware architecture. This allows the application software to 'believe' it is running on a single core, when in reality the hypervisor decides what runs on which core, by closely monitoring and controlling the available resources.

There is no 'turn key' solution when migrating to a multicore platform; it requires effort from the design team, particularly in order to achieve the best performance gains. This is where the ENEA Multicore Migration Manual comes into play; it provides guidance and advice on identifying and dealing with shared resources, how and where to partition code, and how to use the load balancing framework built into the multicore version of OSE.

A key requirement of Agilent's design team is that the hardware platform they develop will not only support all the different modules produced today but have the ability to grow as the demand for more features and performance in the HPLC instruments increases over the next 10 years. Supporting any platform for 10 years is challenging enough, but to also guarantee a performance roadmap using the same hardware was almost unheard of before the advent of the Zynq-7000 platform. Because it tightly integrates the industry's most widely supported and adopted multicore processing system — MPCore from ARM — with the latest 28nm programmable logic fabric from Xilinx, the Zynq-7000 platform truly is future-proof.

Author profiles: Stephane Monboisset is the Senior Manager, Processing Platforms Product Marketing for Xilinx, and Peter Stemer is the R&D Section Manager, System Control, for Agilent Life Science.


Processors & FPGAs

Opening up to hardware independence

OpenCL offers the capability to accelerate compute-intensive algorithms, completely independent of the underlying hardware. By Wolfgang Eisenbarth & Philipp Zieboll

The average amount of data required for high-definition image capture and processing applications in the health care sector is continually increasing. Furthermore, the algorithms used in image processing are becoming more complex and compute-intensive. Typically, high-performance hardware solutions — such as multi-core processors, Accelerated Processing Units (APUs), Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs) — are used in order to deal with this increased computing load.

These devices offer high computing performance, but hardware-specific code or manufacturer-dependent extensions were needed. Today, the solution is Open Computing Language (OpenCL 1.0 was released in late 2008). Complex tasks in medical image processing are now frequently carried out by standardised processor modules that support OpenCL, enabling uniform programming for various high-performance hardware architectures.

OpenCL is an open and royalty-free programming standard for general-purpose computing on heterogeneous systems. The OpenCL standard was developed by software specialists from leading industrial concerns, who then submitted a draft to the Khronos Group for standardisation. The Khronos Group, founded in January 2000, is a non-profit, member-funded consortium focused on the creation of royalty-free open standards for parallel computing, graphics and dynamic media for a wide variety of platforms and devices. AMD, Intel, NVIDIA, SGI, Google and Oracle are just a few of the over 100 members. Today, OpenCL is maintained and further developed by Khronos. The OpenCL specification is now available in versions 1.1 and 1.2 (www.khronos.org/opencl/).

Standardisation

The goal of OpenCL is to provide a standardised programming interface for efficient and portable programs (Figure 1). Users can thus get what they have long been asking for: a vendor-independent, non-proprietary solution for accelerating their applications on the basis of their selected multi-core CPU, APU and GPU cores.

Figure 1: OpenCL is an open, royalty-free standard for programming of heterogeneous systems. Source: Khronos Group

The OpenCL specification consists of the language specification as well as Application Programming Interfaces (APIs) for the platform layer and the runtime. The language specification describes the syntax and the programming interface for writing compute kernels, which can be executed on multi-core CPUs or GPUs. A compute kernel is the basic unit of executable code. The language used is based on a subset of ISO C99,

which is a popular programming language among developers.

OpenCL's platform model consists of a host, which establishes the connection to one or more OpenCL devices. Host and device are logically separated from each other, and this preserves portability. Access to routines is obtained via the platform layer API, which queries the number and the types of devices existing in the system. The developer can select and initialise the desired compute devices in order to execute the tasks. Compute contexts, as well as queues for job submission and data transfer requests, are created in this layer. The runtime API offers the possibility to queue up compute kernels for execution. It is also responsible for managing the computing and memory resources in the OpenCL system.

Compute kernels

The execution model describes the types of compute kernels. Since OpenCL is designed for multi-core CPUs and GPUs, compute kernels can be created either as data-parallel, which fits well with the architecture of GPUs, or task-parallel, which matches better the architecture of CPUs. When a kernel is submitted for execution on an OpenCL device by the host program, an index space is defined. An instance of the kernel executes for each point in this index space. Each element in the execution domain is a work-item, and OpenCL allows work-items to be grouped together into work-groups for synchronisation and communication purposes.

OpenCL defines a multi-level memory model consisting of four memory spaces: Private Memory (visible only to individual compute units of the device); Local Memory; Constant Memory; and Global Memory, which can be used by all compute units in the device. Depending on the actual memory subsystem, different memory spaces can be merged together. Figure 2 shows the memory hierarchy defined by OpenCL. The host processor is responsible for allocating and initialising the memory objects that reside in this memory space. The memory model is also based on the separation of host and device.

Figure 2: Overview of the memory hierarchy defined by OpenCL. Source: AMD

Thanks to the hardware-independence and easy portability of OpenCL, companies can reuse their significant investment in source code, hence greatly reducing the development time for today's complex image processing systems.

COM support

Further optimisation of the design cycle is possible by making use of standard PC building blocks such as a high-performance processor module. Such a Computer-On-Module (COM) can be easily mounted onto a baseboard via a standardised connector, whereby the baseboard implements the application-specific functions. Computer-On-Modules are available in a range of different versions offering scalable processor power and a choice of interfaces. This module-based technology thus provides a simple upgrade path for higher performance. Because the modules offered all meet defined standard specifications regarding form factor and connectivity, they are easily interchangeable with products from different vendors.

The MSC C6C-A7 module family supports OpenCL and is implemented using the well established COM Express form factor. With the new Type 6 pin-out, there are two significant improvements compared with the predecessor Type 2 pin-out: Type 6 can support up to three independent Digital Display Interfaces (DDIs) and also adds support for USB 3.0. This embedded platform in a compact form factor (95x95mm) is based on AMD's Embedded R-Series Accelerated Processing Units (APUs) and


features very powerful graphics and excellent parallel computing performance with low power dissipation.

Figure 3: The MSC C6C-A7 module family is based on AMD's Embedded R-Series Accelerated Processing Units (APUs) and supports OpenCL 1.1, OpenGL 4.2 and DirectX 11.

The MSC module also integrates the AMD R-460L 2.0GHz (2.8GHz Turbo) or AMD R-452L 1.6GHz (2.4GHz Turbo) quad-core processors. The thermal design power (TDP) levels are 25W and 19W, respectively. The two dual-core module versions can be populated with the AMD R-260H 2.1GHz (2.6GHz Turbo) processor or the AMD R-252F 1.7GHz (2.3GHz Turbo) processor, each featuring a 17W TDP. All processors support AMD64 technology and AMD-V virtualisation technology. The AMD Fusion Controller Hub (FCH) A75 chipset is used in combination with all CPU versions. The main memory can be expanded to 16Gbyte of DDR3-1600 dual-channel SDRAM via two SO-DIMM sockets.

The Radeon HD7000G-Series graphics engine integrated into the AMD R-Series APU, with its excellent graphics capabilities, offers support for OpenCL 1.1, OpenGL 4.2 and DirectX 11. The modules support up to four independent displays for imaging applications. HDMI, MPEG-2 decoding, H.264 and VCE (video compression engine) support is also included. Featuring DisplayPort 1.2 and HDMI interfaces (3x digital display interface) supporting resolutions up to 4096 x 2160 (DP) and 1920 x 1200 (HDMI), along with LCD and VGA interfaces, the MSC C6C-A7 modules offer comprehensive display support.

The MSC C6C-A7 COM Express module family offers six PCI Express x1 channels and a PCI Express graphics (PEG) x8 interface. In addition, all modules feature four USB 3.0 and four USB 2.0 ports, LPC, Gbit Ethernet, HD audio and four SATA interfaces at up to 300Mbyte/s. The platform can run the Microsoft Windows Embedded Standard 7 operating system, as well as Linux. The AMI-based BIOS includes UEFI support. In addition to the Computer-On-Modules, MSC offers Starter Kits and suitable carrier boards, as well as cooling solutions and memory modules.

Thanks to the powerful computing and graphics capabilities, the platform is especially suited to demanding applications where 3D graphics, high-definition videos or the control of large displays are required. Typically such applications can be found in the fields of medical technology, infotainment, digital signage and gaming.

Author profiles: Wolfgang Eisenbarth is the Vice President of Embedded Computer Technology, MSC Vertriebs GmbH. Philipp Zieboll is an FAE, Embedded Computer Technology, MSC Vertriebs GmbH.

Communications

Ethernet is Everywhere!

Over the last 40 years, Ethernet has become the quintessential network solution for an increasingly diverse array of applications. By Gary Newbold

Be it wired or wireless, enterprise or telecom networks, Ethernet is endemic. The technology transitioned from an ingenious invention by Robert Metcalfe, carrying information from one printer to another at Xerox Palo Alto Research Center (PARC) in the early 1970s, to a much more scalable and adaptable system running at Gigabit speeds for Internet and office applications in the 1990s. More recently, it has evolved from its enterprise roots to infiltrate today's high performance data centers and Metro network rings at 10GbE speeds, offering predictable performance with quality of service, improved latency and seamless access to a wide array of applications and services, powering phone calls and data servers.

However, even today, Ethernet's history is far from written. It is not just bigger and faster networks that will extend its reach in the coming decade; Ethernet keeps proving that its potential is limitless due to its flexibility.

Just as phone calls have been enhanced and expanded over the last several years with the introduction of VoIP, one can expect to see Ethernet revolutionise everything from the way we conduct board meetings, to the way we watch movies, and even how we drive our cars. The recent introduction of Audio Video Bridging (AVB) standards, backed by the IEEE, allows Ethernet to support highly sophisticated audio and video over the Ethernet network, and this is a big step. Going forward with AVB, Ethernet will find its way into millions of automobiles in the next decade, due to its lower cost, the lower weight of its cabling, its performance and its simplicity.

Evidence of this came in late 2012 when Hyundai announced plans to partner with Broadcom to wire a number of its new models with Ethernet, converging once disparate systems, including the infotainment consoles, safety systems, ABS brakes and GPS, into a single network. Streamlined Ethernet cabling will reduce the weight of vehicles, ultimately boosting fuel efficiency and helping automakers achieve improved fuel economy standards.

And it won't stop there; AVB lets Ethernet play a role in modern AV systems where the highest sound and video quality is required, be it within conference rooms, television and radio broadcast studios, concerts, stadiums and auditoriums. Using IEEE standards for synchronisation and QoS, Ethernet will enable perfectly synched audio and video, deliver pitch-perfect sound, and create a virtual boardroom that brings global networks of executives to the same table with stunning quality and at a value.

As proof that the movement is rapidly expanding, Extreme Networks of Santa Clara is the first enterprise-class vendor to ship scalable Gigabit, 10GbE and 40GbE switches supporting AVB standards. And just ask pro AV companies like Harman International, Meyer Sound, BiAmp Systems and Axon, who are today working with the AVnu Alliance on the education and promotion of AVB using Ethernet.

Ethernet powers the systems that power our world, providing speed and ease-of-use through interoperability between networking devices, reducing the need for complex network setup and offering a seamless solution for the delivery of any service.

Author profile: Gary Newbold is the Regional Director for Northern EMEA with Extreme Networks.

Processors & FPGAs

Getting to the heart of the problem

How do you test and debug a device with hundreds of thousands of internal logic cells and transceiver speeds up to 28Gbit/s? Such is the challenge facing designers employing today’s industry leading FPGAs. By Daniel Ruebusch

Debugging FPGAs is inherently challenging, due to the large numbers of internal logic nodes that are inaccessible to traditional probing. Limitations on pin count bound a designer's ability to 'brute force' a solution. Innovations such as JTAG and the internal logic analyser have helped alleviate these problems and route important signals out of the FPGA for analysis. Traditionally, logic analysers have been used to debug FPGAs; more recently, however, mixed signal oscilloscopes (MSOs) have become commonly used due to their ability to measure both analogue and digital signals. In recent years, a new class of applications has placed incredible demands on FPGA transceiver speeds, increasing the need for accurate, high bandwidth analogue measurements alongside digital analysis in FPGA debug.

From the perspective of digital debug, the biggest challenges arise from the inaccessibility of critical logic nodes and a limitation on the number of available physical pins. Innovations such as internal signal muxing, JTAG communication and internal logic analysers have helped alleviate these challenges. However, all of these techniques involve tradeoffs and require the correct test and measurement equipment.

Digital debug of FPGAs has traditionally been the domain of logic analysers. Offering from 32 to hundreds or even thousands of digital channels, synchronous and asynchronous acquisition, and complex state triggering conditions, the logic analyser is, by design, a powerful tool for analysis and debug of digital signals. However, for many applications logic analysers are not the best tool for the job. The mixed signal oscilloscope (MSO) is an extremely powerful tool for applications that require both digital and analogue measurements. Analogue channels allow designers to make critical analogue measurements on their digital devices; for instance, testing the transceiver on an FPGA. Further, if something in the logic appears incorrect, the designer has correlated analogue channels readily available for deeper investigation. MSOs offer broad analogue and digital triggering capabilities and deep memory in a familiar and easy-to-use interface.

While the MSO is not without its tradeoffs (typically being limited to a maximum channel count of 20 and only capable of asynchronous acquisition based on an internal sampling clock), it is a critical tool for designers of mixed signal systems such as an FPGA. The MSO has traditionally been found on low and medium bandwidth scopes in order to address the heart of the mixed signal market. However, new applications have demanded ever higher data rates, as evidenced by the 28Gbit/s transceiver speeds available on today's FPGAs, driving the need for high bandwidth mixed signal oscilloscopes that can handle both logic analysis and the critical signal integrity challenges of high speed serial measurements.

Figure 1: Routing signals out to physical pins to be probed by the digital channels of a mixed signal oscilloscope

Digital Debug

Given the challenges of a lack of internal visibility, different approaches have emerged in FPGA debug. The most common approaches to FPGA debug are: direct routing from logic nodes to pins, muxing out signals to pins, and internal logic analysers.

The simplest way to access internal nodes in an FPGA is to leverage the programmability of the device to route these signals out to physical pins, where they can be probed by the digital channels of a mixed signal oscilloscope. A simplified diagram of this approach is shown in Figure 1. This method, while effective, comes with significant limitations. First, in many cases designers are limited by the number of physical pins available on the FPGA package. This approach requires the designer to make a tradeoff between the number of physical pins available and the number of internal nodes available to probe for test and debug. Further, it is often difficult to predict which nodes will need to be observed while debugging the FPGA logic. This challenge becomes aggravated when directly routing nodes to pins and only 8 or 16 pins are available to dedicate to debug. If new signals need to be probed, the FPGA must be redesigned to route these signals out to the physical pins. This process of manually managing design and node-to-pin routing results in an equal (and relatively long) time investment between iterations. Inefficiencies aside, this tried and true debug technique is simple and provides both state and timing modes for thorough analysis of the probed signals.

Muxing out signals

In a variation on direct routing to pins, signals can be muxed out to physical pins on the FPGA, as shown in Figure 2. This approach offers many critical benefits; primarily, the designer is no longer so constrained by physical pins, as the number of internal nodes available to probe is many times the number of physical pins. Using the example shown in Figure 2, let's assume the designer has dedicated 16 pins to logic debug. A 16:1 mux allows the designer to route 256 internal nodes to the multiplexer and observe them all using only 16 physical pins. In most implementations, the mux selection is controlled using the JTAG interface on the FPGA. This flexibility dramatically reduces the need to redesign the FPGA to observe additional nodes and improves the time between iterations. Further, as signals are still being directly observed at the physical pins, both state and timing modes remain available.

Figure 2: Diagram of FPGA debug using digital channels of Agilent MSO 90000 X-Series. 256 logic nodes are routed out to 16 physical pins through a 16:1 mux.

In many instances, FPGA vendors provide internal logic analysers (ILAs) built into their FPGAs to aid debug. The ILA features trigger circuitry and uses the internal memory to store traces. JTAG communication between the FPGA and a PC is used to configure the ILA and read the logic signals it outputs (shown in Figure 3). The convenience of this setup is that no incremental physical pins are needed and only a PC is required for basic logic analysis. However, there are many limitations to this technique. The ILA can be a resource hog, monopolising FPGA slices and internal memory needed for the working logic. Further, only state mode is available using an ILA; timing mode, which allows designers to observe signals relative to one another and measure asynchronous events, is not supported.

That being said, ILAs are common and many vendors offer hybrid debug solutions where both an ILA and muxing can be employed for maximum flexibility.


In order to remain a truly multipurpose tool, mixed signal oscilloscopes offer JTAG decode software so that logic signals output by an ILA can be analysed directly on the scope.

Figure 3: Diagram of FPGA debug using digital channels of Agilent MSO 90000 X-Series. The internal logic analyser communicates logic information to the MSO via the JTAG interface.

Analogue measurements

There are certainly many instances where analogue measurements can be invaluable to a digital system designer. For instance, being able to see the underlying analogue signal can clarify anomalous behaviours in the logic. That being said, state of the art FPGA testing represents a class of truly mixed signal applications where the analogue and digital challenges can be equally critical to device performance. The latest FPGAs offer transceiver speeds up to 28Gbit/s. Research and development into ever faster Ethernet speeds, a critical market supported by FPGA makers, has driven the need for these bleeding edge transceiver speeds. Current work on 100Gbit Ethernet has focused on 10 lanes at 10Gbit/s implemented in CFP modules. A second generation of 100GbE is being developed employing a 4 by 25Gbit/s architecture to be integrated into CFP2 and ultimately CFP4 modules. The advantages of increasing serial data rates and reducing parallelism include a significant reduction in power dissipation and module size, enabling higher densities. In order to support the 25Gbit/s serial data lanes demanded by the standard, FPGAs have pushed transceiver speeds out to 28Gbit/s. Peering another generation into the future, 400GbE calls for 16 by 25Gbit/s.

Designing and measuring a serial data stream at 28Gbit/s is very much an analogue problem. Insertion loss, reflections, cross talk and other analogue challenges that can be safely ignored at data rates below 1Gbit/s can be catastrophic at 28Gbit/s, often resulting in completely closed eye diagrams. Real-time oscilloscopes, including MSOs, offer signal integrity software that can dramatically improve the quality of analogue measurements at these data rates. For example, a lossy channel can be de-embedded from the signal path, allowing the user to observe the signal as it appeared prior to propagation through the channel.

The old adage of needing to see at least the 3rd harmonic would demand a minimum of 42GHz of measurement bandwidth for a 28Gbit/s signal. However, this is often not the case. Measuring a 28Gbit/s PRBS7 signal at both 33 and 63GHz of acquisition bandwidth can show very little difference aside from the unavoidable high frequency noise in the 63GHz acquisition. An FFT of this signal shows the third harmonic is 30dB below the fundamental and almost entirely negligible in real world measurements. Still, only recently have MSOs expanded into analogue bandwidths high enough to capture this 28Gbit/s signal; the industry's highest analogue bandwidth MSOs have achieved 20, and most recently 33, GHz.

There are critical challenges in both digital and analogue test and debug of state-of-the-art FPGAs. Digital signals are difficult to access and physical pin count is limited. Meanwhile, transceiver speeds reaching 28Gbit/s bring analogue non-idealities to the forefront. Making measurements in this challenging environment calls for a high bandwidth mixed signal oscilloscope combining >30GHz analogue bandwidth, superior signal integrity and 16 digital channels in one integrated instrument. Until recently, such an instrument didn't exist. However, the latest class of high bandwidth MSOs is uniquely positioned to support these challenging measurements.

Author profile: Daniel Ruebusch manages strategic marketing of high performance oscilloscopes at Agilent Technologies.



Processors & FPGAs

We've got it covered

The breadth of applications now covered by 32bit devices means IDMs are extending their reach in both directions, to make sure all use-cases are covered, as Philip Ling discovers.

Ubiquitous is probably too small a word to describe ARM's reach into the processor IP market; while it remains only one solution, there's no disputing it is the most prevalent. That is, of course, due in large part to its number of licensees and the freedom they are afforded when developing solutions. But it's aided too by the breadth of solutions available, which remain, to some degree, compatible.

These days, ARM has focused its route to market around the Cortex range, which features three distinct families, denoted by -A, -R and -M. To differentiate these families, ARM maintains that the -A range is intended for Application Processors, the -R for processors targeting end-applications where Real-Time performance is a must, while the -M is for what is generically termed 'embedded', but is most often the choice for Microcontrollers.

It is the Cortex-M family that now dominates the embedded space, with almost all IDMs active in this sector offering a Cortex-M based product portfolio. The most recent large IDM to join these ranks was Infineon, which launched its first Cortex-M based product range in January 2012: the XMC4000 family, featuring the -M4 core. This January, Infineon announced the XMC1000 family, which is enabled by the -M0 core.

Infineon's target market for the XMC1000 family remains industrial control, but it claims that with this family it is breaking down the cost barrier to using 32-bit architectures, delivering it at a price comparable to the incumbent 8-bit solutions. It has achieved this largely by adopting a large-scale approach to design and manufacture, using a leading-edge 65nm embedded Flash process on 300mm wafers. This economy of scale allows the XMC1000 products (the 1100 variants) to be brought to market at just €0.25 in volume.
Infineon's low-end XMC families

The 'high-end' version, the 1300, is priced at €1.25 but adds a number of features, including a Field-Oriented Control (FOC) engine, which allows the parts to be used in more complex motor control applications. By being so aggressive with its pricing, Infineon says it overcomes the five factors that currently inhibit 32-bit devices breaking into low-end applications, namely: performance; peripherals; program memory; portfolio scalability; and price. Clearly, the most influential

is price, but it follows that IDMs like Infineon aren't able to tackle the first four without pushing price up. By taking the bold step of manufacturing in high volumes on the most cost-effective process from launch, this barrier is effectively breached. Notably, Infineon admits it isn't targeting the 'ultra low power' end of the market, where Freescale and, more recently, NXP have launched devices based on the 'slimmed down' Cortex-M0+ core, stating that it wouldn't have brought any benefits to the XMC1000 family.

High-end

While Infineon builds its portfolio by adding low-end devices, Atmel has just released details of its latest SAM family, which features a Cortex-A5 core. This is ARM's 'entry-level' Application Processor core and as such is not normally deployed in what is essentially a microcontroller family. However, the -A family is better placed to offer a migration path for code previously deployed on ARM's ARM9 and ARM11 architectures and as such may find favour in the embedded space. Although the Cortex-A5 is ARM's entry-level Application Processor core, Atmel claims it matches or even surpasses the Cortex-A8 in many areas.

Atmel's newest high-end family

The more capable core means the SAMA5D3 is able to take on more demanding operating systems not normally deployed on microcontrollers, including Android 4.0. Atmel intends to make ports of this and Linux available, via www.at91.com/android4sam and www.linux4sam.com respectively. One of the key application areas for the SAMA5D3 will, like Infineon's XMC, be the industrial sector, where faster connectivity is cited as one of the drivers for more processing performance. It's also being manufactured on a 65nm process.

Security

Both Atmel and Infineon identify security as an important feature of their respective new product lines. The SAMA5D3 features a secure boot loader and a hardware encryption engine with AES, along with 3DES and SHA support, to encrypt/decrypt data or communications. A true random number generator is also included for generating unique keys.

The XMC1000 also uses AES, and in this case Infineon is developing a method of using it to encrypt the software embedded at the factory, using a key that is limited to a certain number of MCUs. While relatively straightforward in concept, Infineon stated it is still working out the logistics of the approach.

There's still a growing number of ARM-based MCUs entering the embedded market on an almost daily basis, and for a long time the 'sweet spot' in performance was served by the Cortex-M3. More recently the focus has moved to ultra-low-power, and for this the Cortex-M0+ is emerging as the preferred choice. It's refreshing, therefore, to see two new families that appear to defy these trends, particularly with the use of a Cortex-A5 in an MCU-type family. Could this signal a new age of embedded processing and put even more pressure on the venerable 8-bit architectures? Infineon certainly seems to believe so, stating that it sees 'no future' for 8-bit and that it is focusing all of its development on 32-bit architectures.

Interconnect

When the going gets tough…

The race for radical efficiency improvements in aircraft design is driving innovation in connector design. By Stephen Webster

As airline operators continue to struggle with volatile fuel prices, aircraft manufacturers are striving for lighter and more fuel efficient planes which, in turn, are pushing engine manufacturers to focus on powerplant design and development. Initially, manufacturers looked into using lighter structural materials, but these were usually exotic alloys and proved too costly; the solution therefore was composite materials. The focus has now moved to the internal workings of the plane, much as in the automotive industry, to optimise design or replace the traditional mechanical, hydraulic or pneumatic systems with electronics.

One of the key success stories is the design of a more efficient engine for mid-range aircraft; the Lean Burn Engine, which is installed on the new A320 Neo, is said to achieve fuel savings of 15% and an additional flight distance of 500 nautical miles (950 kilometres), or the ability to carry two tonnes more payload at a given range. For the environment, fuel savings translate into some 3,600 tonnes less CO2 per aircraft per year, as well as a double-digit reduction in NOx emissions and reduced engine noise.

Through this development process the connector industry has worked closely with engine manufacturers to produce interconnect solutions that can cope with extreme conditions and temperatures of up to 250°C without signal degradation. For these environments, the material selection process is crucial and must be considered early in the design process. With experience already gained in industrial, medical and automotive environments, Molex had already met this design challenge with its EXTreme Power 150A low-profile power connectors.

Electric flight

Another route being followed is the All Electric and More Electric Aircraft (AEA/MEA) concept. The idea of replacing the traditional hydraulic and pneumatic secondary power systems (SPS) with electrical systems was first proposed back in 1945 but was not actively researched until the 1980s, when fuel prices began to soar. Since then, the AEA/MEA concept has been adopted and implemented in new aircraft designs, such as the Boeing B787 and Airbus A380.

Moving on to further weight savings, aircraft manufacturers are now turning their eyes to fibre optic technology. Until recently, fibre optics has been seen as a 'black art' in terms of mechanical and environmental stability and the cost of active components; however, as the commercial market has shown for many years, this is no longer true. Cabling systems that use rugged cables and interconnect products, and the transceiver technology behind them, are advancing apace, while costs are reducing. Away from the PCB, solid OEM acceptance has grown as the termination, test and cleaning of systems have become much more robust and repeatable processes. When comparing the copper telephone wire used in the Plain Old Telephone System (POTS) with fibre cabling, it is clear that fibre is faster than copper, offers a wider bandwidth and its distance performance with respect to losses is greater, as the

table describes. It is for this very reason that fibre technology was used in the Typhoon Eurofighter, which was developed in the 1980s, and in today's Joint Strike Fighter (JSF) Programme, as well as in many other civil aviation platforms.

Cable Type                               Max Reach at 10Gbit/s
Passive Copper                           ≈7m
Active Copper                            ≈12m
MM OM-3 Fibre                            ≈100m
MM OM-4 Fibre                            ≈150m
SM Fibre                                 ≈4km
Silicon-photonics CMOS with SM Fibre     ≈10km

Getting tougher

Ruggedisation, weight reduction, micro-miniaturisation and performance are all key elements in the Military and Aerospace industry's selection process for interconnect solutions. Complex C4ISR applications, for example, require incredible speed, enormous bandwidth, signal integrity and analytic capabilities, without adding heat to the system. Lightweight design is a priority in unmanned vehicles; here, the smallest interconnect systems can help keep the system light and nimble. With these design criteria, fibre optics is becoming the transmission medium of choice and, due to the intrinsic benefits of security and immunity to RFI/EMI, the use of such links is not just desirable but is now essential.

In parallel with these fundamental design changes, the Military/Aerospace market is gradually adopting the use of COTS (Commercial Off The Shelf) or MOTS (Modified Off The Shelf) components for cost saving purposes. Previous to this, mil-grade components were designed specifically for a particular task and were often over-engineered and extremely costly. Many connector manufacturers, such as Molex, actively work in the automotive and industrial markets, which have similar harsh environments and strict demands for longevity of supply. For one of these products to be considered for mil/aero COTS use, it is often just a matter of retesting the connector according to the relevant military standards, whether it's a round, rectangular or power connector.

COTS optical cable assemblies, such as Military 38999- and 28876-style circular connector cable assemblies and ARINC 400- and 600-style avionic optical assemblies, provide readily available, cost-effective and short lead-time solutions. Uniquely designed connectors, which can be terminated to both discrete fibre and optical ribbon cable, are ideal for use in avionics, flight-control equipment, mobile tactical field command platforms, EMI-sensitive equipment, security and other harsh-environment applications.

Flexible circuitry, backplane systems and standard connectors (FC, LC, MT, MPO, MTP, SC and ST) are all available for customised terminations to meet specific application requirements. State of the art aircraft will use more and more fibre optic technology to support the requirement for more efficient airframes in terms of weight reduction, while supporting more services, such as control, communication and infotainment, on the various platforms. Standards committees, such as the SAE, JEDEC and ARINC, are working with the leading connector manufacturers, focusing heavily on fibre technology, particularly transceiver modules, backplane and on-board infrastructure, in support of these high-speed intra-platform links.

Let's not forget the role of copper, however; with the advent of unshielded twisted pair (UTP) cabling, the margins between copper and fibre are narrowing. Connector and cabling manufacturers have marched through at least five generations, each time keeping up with the increasing bandwidth requirements of Local Area Networks (LANs). We are now seeing 10Gbit Ethernet, as opposed to the 10Mbit/s Ethernet networks we had 15 years ago. This technology is now ruggedised for the industrial environment and it is anticipated that it will also migrate into military and aerospace applications, complementing fibre optic technology.

Author profile: Stephen Webster is the European Industry Manager, Military/Aerospace, with Molex.

Interconnect

Reliability just got smaller

Connectors must ensure signal integrity even when the environment is far from benign; great forces of shock and vibration, and extremes of temperature may be experienced. By Wendy Jane Bourne

Size, weight, performance and cost: these are the key determining factors that influence the choice of interconnect system. High-reliability applications such as aircraft, aerospace, robotics, undersea exploration, down-hole drilling, medical equipment and motor sport make even greater demands. While connectors used on commercial aeroplanes may not endure the horrors and associated physical stresses that face their military counterparts, they are entrusted to carry millions of passengers, so the choice of components is no less critical; simply put, failure is not an option. As well as the vital systems that keep an aeroplane in the air and in communication with air traffic control, commercial aircraft are increasingly equipped to enhance the passenger experience through a range of sophisticated entertainment systems. Whilst not 'mission critical', if the in-flight movie system fails, travellers may well be tempted to switch to another carrier, making such systems commercially critical.

In satellites it is the cost of failure that is prohibitive. Even a modest CubeSat costs an estimated $150,000 to make and launch; more sophisticated commercial and defence satellites cost many times that. Even terrestrial applications such as oil and gas exploration cost thousands of pounds per day, so a connector failure in those high-vibration, high-temperature scenarios is simply not acceptable.

Pitch perfect
The new Gecko range of 1.25mm pitch Hi-Rel connectors from Harwin has benefited from the company's track record of manufacturing high-reliability connectors for safety-critical applications, drawing on many years of market feedback on its similarly high-reliability Datamate range. A 2mm pitch cable-to-cable, cable-to-board and board-to-board connector family, Datamate has been specified for use in harsh operating conditions many times, featuring on NASA's Robonaut 2 space robotics programme, missile guidance and roadside bomb detection systems, UAVs, motor sport telemetry systems, and dialysis and patient monitoring equipment.

Datamate's contact design features a stamped, four-fingered, gold-plated beryllium copper clip with highly stressed contact beams. The clips reside in the female half of the connector, clasping round pins tightly and ensuring the integrity of the electrical connection even under severe conditions. Additions to the Datamate family include miniature, lightweight mixed-technology versions offering a multitude of configurations for signal, power and/or coax. Datamate Mix-Tek connectors feature power contacts rated at up to 20A, making them particularly suited to commercial aircraft designs based on 'everything always on', which necessitate interconnect systems with a high current-carrying capability. Datamate's signal contacts are rated at up to 3A, and the coax contacts have a 50 Ω impedance. The range also includes EMI-shielded versions (Datamate S-Tek) and a variety of latching options for added security.

But inevitably and inexorably, industry is requiring smaller devices. Smaller connectors, yet with the same ability to withstand high shock and vibration, make up a significant part of the Hi-Rel market. UAVs are a case in point: whether for military or, increasingly, commercial applications such as agriculture and traffic monitoring, flight time is governed by the weight of every component on board. Hence Harwin's newly launched Gecko 1.25mm pitch connector family, which suits demanding applications and delivers high performance at commercial price levels.

The low-profile G125 series connectors are designed to offer high performance in a miniature package. The 1.25mm pin spacing results in a 35% space saving over other high-performance connectors such as Micro-D, and a 45% reduction in required PCB space over 2mm pitch interconnect systems. The connectors are rated to handle 2A per contact. Tested and proven in extreme conditions, the G125 family can operate over a wide temperature range (-65 to +150°C) and under extreme vibration (Z axis, 100g at 6m/s).

Addressing cost, it is fair to say that today all budgets, whether the application is military or commercial, are subject to strict scrutiny. Certainly the days of being able to over-specify a military-style (and military-priced) device for an industrial application are long gone. Gecko addresses this concern, with price-per-contact levels set at industrial rates, an order of magnitude below those of military devices.

Design considerations
The performance of miniature connectors under extreme conditions is largely dependent on the contact design. Many styles are used across industry for Hi-Rel applications, but Gecko's high performance is made possible by Harwin's patented four-finger copper alloy contact, shown in the photo, which delivers high performance and extreme reliability at the 1.25mm pitch level.

Offering up to 50 contacts per connector and available in dual-row cable-to-board and board-to-board configurations, Gecko family connectors include a wealth of features: polarisation points that prevent mis-mating, easy identification of the No 1 position for fast visual inspection, and optional latches that allow simple and fast de-latching with no special tooling. Cable connectors feature a rear potting wall, adding an extra level of strain relief. G125 connectors are manufactured to withstand high numbers of mating cycles yet feature low insertion and extraction forces. Mouldings are manufactured from RoHS-compliant, environmentally friendly materials, eliminating harmful chemicals even before they are added to restricted-substances lists. Pre-assembled cable configurations are available in a variety of layouts, male and female, single and double ended, in a range of standard lengths. Board connectors are packaged in tape-and-reel format.

All too often, engineers simply wait (or forget) about connectors until the last minute, leaving themselves little or no board space (or volume in general) for the connector. Understandably, design teams are more concerned with the more glamorous features of their projects (the processing, imaging, motion and so on), commonly neglecting the more mundane business of connecting board to board, or getting signals to and from their boxes. Yet every link in the chain is as critical as the next, and a failed connector will knock out a system just as fast as a failed sensor or processor. New breeds of connectors, such as Gecko, deliver the performance and reliability demanded by Hi-Rel applications, the miniaturisation that is increasingly required, and the cost/performance ratio expected by project teams with tight budgets. Gecko is fully supported with a range of design tools, including eye diagrams and CAD models; instruction videos are also available.
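To see where a percentage space saving like the figures quoted above comes from, the sketch below compares the approximate PCB footprint of a dual-row connector at 2mm and 1.25mm pitch. The pad margins and body widths are hypothetical placeholder values for illustration only, not Harwin datasheet dimensions, so the resulting percentage is indicative rather than a verified specification.

```python
# Illustrative footprint comparison for dual-row connectors at two contact
# pitches. All dimensions are hypothetical placeholders, not datasheet values.

def footprint_mm2(contacts: int, pitch_mm: float, body_width_mm: float) -> float:
    """Approximate PCB area of a dual-row connector: the length scales with
    (contacts per row * pitch) plus an assumed 1 mm margin at each end."""
    contacts_per_row = contacts // 2
    length_mm = contacts_per_row * pitch_mm + 2 * 1.0
    return length_mm * body_width_mm

# 50-contact connector in each system (widths are assumed, not specified).
area_2mm  = footprint_mm2(50, 2.00, 5.0)   # 2 mm pitch system
area_g125 = footprint_mm2(50, 1.25, 4.0)   # 1.25 mm pitch, narrower body

saving = 1 - area_g125 / area_2mm
print(f"2 mm pitch:    {area_2mm:.1f} mm^2")
print(f"1.25 mm pitch: {area_g125:.1f} mm^2")
print(f"space saving:  {saving:.0%}")
```

With these placeholder dimensions the saving works out to roughly the high-40s percent, in the same region as the article's quoted 45% reduction over 2mm pitch systems; the exact figure depends entirely on the real body and margin dimensions.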

Author profile: Wendy Jane Bourne is a Technical & Marketing Engineer with Harwin.
